ETD Collection
Please note: Digitised content is made available at the best possible quality, taking into consideration file size and the condition of the original item. These restrictions may sometimes affect the quality of the final published item. For queries regarding the content of the ETD collection, please contact the IR specialists by email or by telephone on 011 717 4652 / 1954.
Follow the link below for important information about Electronic Theses and Dissertations (ETD)
Library Guide about ETD
Browsing ETD Collection by Faculty "Faculty of Engineering and the Built Environment"
Now showing 1 - 20 of 70
Item
A covariance based method to describe power processing in power electronics converters (2022) Eardley, Arlo
This paper explores the use of a new topology evaluation framework to describe the internal power processing of a power electronics converter. The method is called the matrix method and leverages a covariance matrix to describe power processing patterns in a power electronics converter. Covariance measures how two signals interact with each other; the covariance between the power waveforms of the components in a converter describes how these components interact in terms of power. These are called "power interactions". These power interactions between component powers provide insight into the power processing of a topology. The covariance matrix contains all combinations of component power pairs and so aims to describe all the power interactions between components in the entire converter. The covariance matrix is interpreted as describing the power processing inside a converter, where circulating power occurs, and which components are most involved in power processing. The covariance matrices of converters can be compared in a quantitative manner, with the aim of providing a more justifiable basis for topology selection than personal bias. The matrix method is shown to be aligned with the principles of differential power theories and to be useful in comparing topologies, aiding in the topology selection process.
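As a rough illustration of the covariance idea described in the item above (not the thesis's own code), the sketch below builds a component-power covariance matrix from sampled power waveforms. The component names and waveforms are invented for the example.

import numpy as np

# Hypothetical sampled power waveforms p_k(t) for three converter components
# (values are made up purely to show the mechanics).
t = np.linspace(0.0, 1e-3, 1000)                 # 1 ms window
p_switch = 10.0 + 4.0 * np.sin(2 * np.pi * 50e3 * t)
p_inductor = 10.0 - 4.0 * np.sin(2 * np.pi * 50e3 * t)
p_capacitor = 0.5 * np.sin(2 * np.pi * 50e3 * t + np.pi / 4)

# Rows = components, columns = time samples; np.cov then returns the
# covariance of every component-power pair ("power interactions").
P = np.vstack([p_switch, p_inductor, p_capacitor])
power_cov = np.cov(P)

labels = ["switch", "inductor", "capacitor"]
for i, a in enumerate(labels):
    for j, b in enumerate(labels):
        print(f"cov({a}, {b}) = {power_cov[i, j]: .3f}")

A strongly negative entry for a pair of components in such a matrix would suggest power being handed back and forth between them, which is the kind of pattern the matrix method reads as circulating power.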
Item
A dual interchange algorithm for three-dimensional stope boundary optimisation for underground mines (2022) Nhleko, Adeodatus Sihesenkosi
Mineral Resources are extracted using surface or underground mining methods to generate maximum economic value for the mining company extracting the resources. The objective of value maximisation necessitates the development and application of optimisation algorithms that will ensure maximum value is realised. Several algorithms have been developed to solve the value optimisation problem for near-surface deposits; examples include the Lerchs-Grossmann algorithm and dynamic programming. It has been observed by some researchers that the study of open pit mine geometry optimisation has reached saturation, as there are many algorithms that can produce a 'guaranteed optimal solution'. However, the underground geometry optimisation problem remains largely unsolved due to its complexity, and thus few algorithms have been developed for it. This thesis was therefore undertaken to contribute to the few existing algorithms for underground geometry optimisation by developing a dual interchange algorithm (DIA) that is more versatile in handling variable stope boundaries and works by combining the strengths of two existing algorithms. Once an appropriate underground mining method has been selected to extract a mineral deposit, mine planners need to generate optimal layouts for development and infrastructure, stope layouts, and production schedules that incorporate equipment selection, all of which are solved as optimisation problems. These problems introduce a circular logic, which complicates deciding which part of the optimisation problem should be the starting point. In this study, the stope layout optimisation problem was selected as the starting point because, when optimising for development layout, production schedule and equipment selection, the spatial position of the stopes to be extracted is one of the constraints.
The DIA was developed by incorporating the principles of the particle swarm optimisation (PSO) algorithm and the genetic algorithm (GA). The PSO algorithm was applied to the stope layout optimisation problem to exploit its strength in solving the problem in three-dimensional (3D) space and generating feasible solutions. The GA was used to optimise the stope layout in each level, since its evolution capabilities are well suited to stope layout optimisation. Since metaheuristic-based algorithms do not guarantee true optimality, the DIA exploited the strengths of both PSO and GA to generate superior solutions in 3D space. The DIA was then coded in the Python programming language because it is simple to code in and the code executes quickly compared to other programming languages. The DIA was tested using a synthetic Platreef mineral deposit, where a resource model was used as an input and then converted to an economic block model using economic parameters. The Platreef deposit is a platinum group elements (PGE) deposit amenable to extraction using bulk (or massive) mining methods such as longhole stoping, making it an ideal candidate for stope boundary optimisation. The DIA then generated several solutions and selected the one with the maximum value as the optimum solution. The results of the algorithm were validated using the Mineable Shape Optimizer (MSO) available in the commercial Datamine software. Different scenarios were used to demonstrate the performance of the DIA in different mining situations: Scenarios A and C considered a fixed stope width, while Scenarios B and D considered a variable stope width to better reflect variable orebody boundaries or contours as encountered in actual mining practice. The DIA generated superior results compared to the MSO, with stope layout economic values for Scenarios A, B and D that were 0.3%, 3.4% and 8.3% more profitable than those generated by the MSO, while the MSO generated a solution that was 9.7% more profitable than the DIA for Scenario C. The undiscounted economic value was used as a proxy since the study is on stope boundary optimisation and does not include stope production scheduling. The DIA produced superior results because its architecture is best suited to a variable stope width of the orebody. The solutions of the MSO were generated in much shorter run times than those of the DIA; this is attributed to the fact that the MSO creates a single solution, while the DIA generates several solutions during the optimisation process. It is recommended that the DIA be adapted and applied to other mineral resource models to maximise the economic value of the respective mining projects.
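The sketch below is a generic particle swarm optimisation loop of the kind the DIA builds on, not the thesis's own code; the objective function, bounds and coefficients are invented, and the real algorithm evaluates stope shapes against an economic block model under mining constraints and then refines each level with a GA.

import numpy as np

rng = np.random.default_rng(0)

# Toy objective: "economic value" of a candidate stope position in 3D space
# (purely illustrative; a hypothetical high-value zone is hard-coded).
def block_value(x):
    target = np.array([4.0, 7.0, 2.0])
    return -np.sum((x - target) ** 2)

n_particles, n_dims, iters = 20, 3, 100
w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and learning factors

x = rng.uniform(0, 10, (n_particles, n_dims))    # particle positions
v = np.zeros_like(x)                             # particle velocities
pbest = x.copy()
pbest_val = np.array([block_value(p) for p in x])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Standard PSO update: particles are pulled towards personal and global bests.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([block_value(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best position found:", np.round(gbest, 2))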
Item
A study to investigate if stokvels can be used to finance property transactions (2022) Madziwanzira, Admire
The purpose of the study was to investigate the viability of stokvels as a financing option for the purchase of property in South Africa, thereby promoting stokvels as another vehicle to reduce the housing backlog. Previous studies have not investigated stokvels as a property finance instrument, nor have they investigated their structure and operations with a view to documenting best practice and establishing whether the concept can be considered a sustainable housing finance option in South Africa. Findings were reached through a mixed-methods approach, combining primary data obtained from interviews and questionnaires with secondary data from the available literature.
The findings of the study affirm that there is an opportunity for stokvels to participate with meaningful impact in the property development sector, be it through a home ownership model, building supplies, or a property investment/wealth creation model. This would require stokvels to formalise their operations and increase member subscriptions, and for government to develop policies that protect investors from fraud through the regulation of stokvels. As stokvels in the country continue to innovate, participation in the property development sector is desirable for members, but it would require a significant shift in how stokvels currently operate.
Item
A systems architecture for IoT connected-edge runtime configuration (2022) Said, Ebrahim Y
This thesis presents a systems architecture method that applies object, process, and state as primary attributes for modelling smart devices in an IoT application ecosystem. The method is grounded in pure systems thinking, which enables a significant focus on system purpose using a state-of-the-art modelling notation while supporting complexity reduction in systems architecture model representation. In current research, the Internet of Things (IoT) body of work mainly applies functional and domain coverage to the representation of its architecture. This abstraction aims to hide enabling technologies while simultaneously trying to handle system complexity. Increases in complexity due to the evolution of IoT edge systems have driven corresponding advances in technological needs. IoT edge systems enable the physical extensions of the internet that exists today. This cyber-physical evolution goes beyond the services that individual embedded devices expose; it underpins the IoT by creating connected edge systems that integrate large-scale digital and physical processes. The variation in heterogeneity predominantly causes the increases in complexity in these systems. A systems approach can reduce the complexity of stating a systems architecture for the IoT Connected-edge (IoT-Ce). This approach stems from the belief that low-level IoT-Ce subsystem characteristics that influence the state of the system can be abstracted to a conceptual IoT system architecture, increasing the accuracy of model dependability through increased state awareness. This thesis presents two contributions. Firstly, the study proposes a standards-based systems architecture method that outlines the systems, subsystems, and components employed at runtime in the IoT-Ce. The Object Process Methodology (OPM) ISO 19450 guides the modelling of the systems architecture. The IoT-Ce Systems Architecture for Runtime Configuration (ISARC) suggests a hierarchical structure for a multi-layered, connected, heterogeneous IoT edge system. The study of the runtime properties of systems can provide a better understanding of how to model systems that are dependable and scalable; additionally, such systems have the essential properties to find application in large-scale distributed systems architecture. The primary contribution is the suggestion that identifying well-defined system, subsystem, and component boundaries using a known approach supports complexity reduction in IoT-Ce models. The proposed approach provides the method for testing variations of configuration for the IoT-Ce. Secondly, the thesis provides a method to validate the ISARC using OPM and Design Structure Matrices (DSM).
The method for modelling configuration decisions for the IoT-Ce uses OPM as the basis for decision parameter extraction and DSM to represent the decision structure. A test of conceptual architecture variations of the ISARC uses a novel concrete control design based on instantaneous and accumulated errors. This contribution's primary benefit is to extend the systems architecture by showing that the ISARC can support ongoing runtime configuration decisions using a series of Python-simulated data sets. Finally, these data sets are compared against a historical data set that mimics a hierarchical systems architecture to argue the applicability of parameter-based architecture decisions for the IoT-Ce. The ISARC positions an approach to simplify system complexities by suggesting the function and form of the IoT-Ce. The effort to reduce complexity in design is in line with an existing body of research in model-based systems engineering using the OPM ISO 19450 standard.
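To illustrate the Design Structure Matrix idea mentioned in the item above (not the thesis's own model), the sketch below encodes dependencies between a few invented IoT connected-edge subsystems as a square binary matrix and lists what each subsystem depends on.

import numpy as np

# Hypothetical IoT connected-edge subsystems (names invented for illustration).
subsystems = ["sensor", "edge_gateway", "network", "cloud_service", "actuator"]

# dsm[i, j] = 1 means subsystem i depends on (receives something from) subsystem j.
dsm = np.zeros((len(subsystems), len(subsystems)), dtype=int)

def add_dependency(dependent: str, provider: str) -> None:
    dsm[subsystems.index(dependent), subsystems.index(provider)] = 1

add_dependency("edge_gateway", "sensor")         # gateway ingests sensor data
add_dependency("edge_gateway", "network")        # gateway needs connectivity
add_dependency("cloud_service", "edge_gateway")  # cloud consumes gateway streams
add_dependency("actuator", "edge_gateway")       # actuation commands come from the gateway

# Row sums give a crude sense of how configuration-sensitive each subsystem is:
# the more providers it depends on, the more runtime decisions can affect it.
for name, row in zip(subsystems, dsm):
    providers = [subsystems[j] for j, flag in enumerate(row) if flag]
    print(f"{name:>13} depends on: {providers}")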
Item
An eco-driving strategy for an electric bus: interior permanent magnet synchronous motor (IPMSM) drivetrain (2022) Jele, Mjozi Robson
There has been growing interest in adopting electric vehicles (EVs) over internal combustion engine (ICE) vehicles. However, there are a number of challenges to the adoption of EVs. Firstly, they suffer from low mileage coverage due to the poor specific energy of batteries. Secondly, there is a deficit in electric vehicle charging infrastructure and, thirdly, current battery technology has low charging rates (C-rate). These concerns make EVs less desirable than ICE vehicles, since gasoline and diesel have a high specific energy compared to batteries. It therefore becomes extremely important that the finite onboard energy be used optimally during a driving schedule. This research is concerned with the further development and simulation of an eco-driving strategy for an electric vehicle. The strategy makes use of the vehicle's technical parameters, the road gradient, and the electric traction machine efficiency map. A Matlab/Simulink EV model based on an existing EV of the Technical University of Munich (TUM) was developed and validated. The EV model is based on an interior permanent magnet synchronous motor (IPMSM) drivetrain and was used to develop and validate the eco-driving strategy. An error of 1.8% in energy consumption for a test drive cycle was observed between the Simulink EV model and the TUM EV. Firstly, the TUM EV eco-driving strategy is further developed and simulated with the Simulink EV model; secondly, the TUM EV is test-driven according to the Simulink EV model simulations. The TUM EV was found to have an optimal speed of 24 km/h for a zero-gradient drive route, and the error between the energy consumption rates of the Simulink EV model and the TUM EV was found to be 1.93%. The TUM EV achieves an energy saving of 31.2% when driven using the eco-driving strategy. For an acceleration from zero to 24 km/h, the as-fast-as-possible acceleration architecture ( = 0.4) is found to be 33.33% more efficient than the as-slow-as-possible mode ( = 4). The eco-driving strategy saves 8.7% over a distance of 550 m with a road gradient of 0.076 compared to conventional driving of an urban eBus. The eco-driving strategy can be applied to any electric vehicle, can be used in conjunction with a GPS navigation system, and can be offered by electric vehicle manufacturers for optimal use of finite battery energy.
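The sketch below is a textbook longitudinal vehicle energy model, not the TUM Simulink model from the item above; all vehicle parameters, efficiencies and speed profiles are assumed values chosen only to show how road gradient and speed profile feed into energy per kilometre.

import numpy as np

m, g = 12000.0, 9.81                        # assumed bus mass [kg], gravity [m/s^2]
c_rr, c_d, area, rho = 0.008, 0.6, 8.0, 1.2  # rolling resistance, drag coeff., frontal area [m^2], air density
eta_drive = 0.85                            # assumed average drivetrain/IPMSM efficiency

def battery_energy_kwh(v, grade, dt=1.0):
    """Battery energy [kWh] for a speed trace v [m/s] on a constant road grade."""
    a = np.gradient(v, dt)
    theta = np.arctan(grade)
    force = (m * a                                      # inertia
             + m * g * np.sin(theta)                    # grade resistance
             + c_rr * m * g * np.cos(theta)             # rolling resistance
             + 0.5 * rho * c_d * area * v ** 2)         # aerodynamic drag
    p_wheel = force * v
    p_batt = np.where(p_wheel > 0, p_wheel / eta_drive, 0.0)   # regeneration ignored here
    return np.trapz(p_batt, dx=dt) / 3.6e6

def report(name, v, grade=0.076):
    energy = battery_energy_kwh(v, grade)
    dist_km = np.trapz(v, dx=1.0) / 1000.0
    print(f"{name}: {energy:.2f} kWh over {dist_km:.2f} km ({1000 * energy / dist_km:.0f} Wh/km)")

t = np.arange(0.0, 120.0, 1.0)
report("eco profile (ramp to 24 km/h) ", np.clip(0.3 * t, 0.0, 24.0 / 3.6))
report("fast profile (ramp to 40 km/h)", np.clip(1.0 * t, 0.0, 40.0 / 3.6))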
Item
An efficiency comparison of quantum variational and classical satisfiability algorithms for debugging combinational logic circuits (2021) Demetriou, Peter
This research presents an extension and contribution to combinational logic circuit debugging for future use in Very Large-Scale Integration system design. It also extends the application of quantum computation to an engineering problem. Although previous work in this area has sped up approximate debugging solutions through the use of satisfiability solvers, the exact solutions become classically intractable as the number of gates in the system increases. Additionally, whilst many quantum algorithms show promise in solving classically intractable problems in theory, there have been few domain-specific implementations, particularly where hard and soft constraints exist, multiple solutions are required, and noisy intermediate-scale quantum hardware is used. Two variational quantum algorithms are applied to a reduction of the satisfiability problem into a maximum independent set problem. While the Quantum Alternating Operator Ansatz approach constrains the optimisation to feasible solutions, the resulting ansatz is too large for current devices. A custom ansatz for a Variational Quantum Eigensolver, along with a solution enumeration method, is however able to make use of current hardware for the debugging problem. A small problem set is generated and solved using the aforementioned methods. The accuracy and complexity of the algorithms are benchmarked against exact and approximate classical solvers. The quantum approach outperforms the exact solver in both temporal and spatial scaling. With a theoretical mathematical result, it is argued that the quantum approach will likely outperform the approximate solver in the same way for future, larger problems as quantum hardware grows in size and becomes more robust to noise.
Item
An investigation into the selection of an optimised maintenance strategy for conveyor systems within the Port of Richards Bay (2022) Naidoo, Laventhran
This research aimed to use conveyor failure data from the Port of Richards Bay (PORB) to: 1) review and identify the effects of the existing maintenance approach; 2) highlight failure causes and consequences; and 3) determine the most suitable maintenance strategy for conveyors in the PORB that will reduce failures, thereby reducing downtime and loss of revenue. The scope of the research was narrowed to the routes that had experienced the most failures, a narrowing supported by the Pareto principle. The failure data was put through a logical sequence of failure analysis tools to identify cause-and-effect relationships, and survey questionnaires were sent to key personnel, which formed the basis for the selection of a maintenance strategy. The research has shown that a modified version of Reliability Centred Maintenance (RCM), with dominant predictive methods, will benefit the business by applying the appropriate combination of maintenance strategies (RM, PM or PdM) and prioritising maintenance tasks based on the equipment and components that pose significant consequential risks.
Item
Analysing the contribution of stakeholder engagement in public hospital infrastructure projects (2022) Zuma, Malibongwe H W
Stakeholder engagement makes a major contribution to the implementation of public hospital infrastructure projects. This contribution is signified by the negative response public infrastructure projects receive when end-users feel they have not been properly engaged. The aim of this research, therefore, is to identify means of addressing the lack of stakeholder engagement in the delivery of public infrastructure projects. The research uses a mono-method qualitative strategy to collect empirical data from participants involved in the public infrastructure project management space. To fully comprehend the dynamics of the public infrastructure project stakeholder engagement phenomenon, the researcher conducted semi-structured interviews; data was collected through face-to-face interviews with eight (8) participants, using an interview guide. Results showed themes of key stakeholders, involvement, contribution, critical issues, standards and regulations, and the status of the project. Results showed that participants can identify the key stakeholders for their respective projects; however, there is a lack of stakeholder involvement. The findings indicate that a lack of communication contributes to stakeholder engagement not being done properly, and that a stakeholder engagement checklist and the standardisation of stakeholder engagement are central to addressing the issue. The study recommends that a stakeholder engagement checklist be developed, that stakeholder engagement be included as one of the infrastructure project deliverables at the planning stage, that government start engaging stakeholders at the pre-feasibility stage of the project, and that government impose a regulation addressing non-compliance with the responsibilities and requirements of stakeholder engagement.
Item
Analysis of torpedo ladle refractories experiencing premature failure at the spout (2022) Maistry, Nichole
In recent years, it was found that torpedo ladle refractories, specifically at the spout area, experience premature failure. This has an adverse effect on the cost and efficiency of the process, as torpedoes are removed from service at more frequent intervals. The objective of this research was to determine the factors that contribute to decreased torpedo life. Thus, refractory design, installation method and the effect of operations were examined. Subsequent investigations were conducted for deviations noticed in procedures. This included evaluation of torpedoes that were removed from service with unsatisfactory campaign tonnages, in terms of molten metal mass, shell temperature, and tap-to-tap and residence times. Furthermore, bricks from a failed torpedo were examined to determine the failure mechanisms experienced. It was found that the thermal and chemical properties of the brick are suitable for the temperature and chemistry it is exposed to, as heat losses are minimal and corrosion is controlled. One of the major problems identified was spalling, which is caused by overfilling of torpedoes and long tap-to-tap times, resulting in rapid brick removal. The next problem was stress cracking as a result of incorrect mortar application, which occurs when the expansion due to thermal cycling cannot be accommodated. Incorrect mortar application also resulted in metal and slag penetration through the joint. When metal penetration occurs, the metal remains in the area it penetrated; it expands and contracts during operation, causing further cracking in the brick.
The final problem experienced was skull formation, which is attributed to high tap-to-tap and residence times, as well as low molten metal temperatures. The brick was also subjected to XRF, XRD and SEM/EDS analysis. XRF analysis shows a decrease in the Al2O3 content and an increase in the CaO content at the hot face, while the MgO and SiO2 contents remain relatively unchanged. There is a 1.4% decrease in the alumina content and a 0.6% increase in the CaO content, which indicates that a small degree of dissolution takes place and that the refractory has good slag resistance. From the cold face to the hot face, XRD showed phase changes consistent with dissolution of alumina and iron infiltration. At the hot face, SEM/EDS analysis showed an increase in iron and other trace elements and a decrease in aluminium and oxygen, indicating slag and iron infiltration and a small degree of corrosion of the refractory.
Item
Application of artificial neural network for prediction of sulfidogenic fluidized bed reactor performance and optimization for the co-treatment of acid-mine drainage with liquid pharmaceutical waste (2022) Makhathini, Thobeka Pearl
Acid mine drainage (AMD) is one of the most troubling water pollution sources in South Africa due to its history and current mining activities. In reality, South Africa's economy is driven mainly by the mining sector's existence; hence it is crucial to abate the impact and find treatment solutions to manage AMD. On the other hand, there is a growing concern about the pharmaceutical compounds abundantly found in South African natural water bodies. Hospitals are found to be significant contributors among the diverse sources of pharmaceutical pollutants in the environment. Currently, hospital wastewater is discharged to the municipal sewer, ending up in Wastewater Treatment Plants (WWTPs) and decreasing the biodegradation of organic contaminants in the treatment plant. Subsequently, the direct discharge of the WWTP-treated effluent (with untreated pharmaceuticals) to receiving waterbodies raises concern about the persistent presence of these compounds in the environment. As such, both acidic metal- and sulfate-containing water and pharmaceutical-rich wastewater are toxic to the environment and need urgent attention. To date, co-treatment of acid mine drainage and municipal wastewater has been investigated with several advantages; however, the potential removal of pharmaceutical compounds through the biological process has not been explored. This work proposed, developed, and assessed a sulfidogenic fluidized-bed reactor system for the co-treatment of acid mine drainage and an isolated stream of hospital wastewater. A laboratory-scale sulfate-reducing fluidized bed reactor at a controlled mesophilic condition of ±28°C was used to examine the biotreatment of real acid mine drainage and hospital wastewater. In this study, no carbon source was supplemented during treatment; only the hospital wastewater (HWW) provided the dissolved organic carbon. The effluent chemical oxygen demand (COD) and sulfate concentrations were 39.5 mg/l and 42 mg/l, respectively, at a COD/SO42- ratio of 0.68 and a hydraulic retention time of 8 h. The overall COD oxidation and sulfate reduction performance achieved an average of 96% and 97%, respectively, and removal efficiencies of 44% for naproxen and 55% for ibuprofen were recorded.
Furthermore, the study evaluated the inhibition kinetics and microbial communities to better understand the diverse species and the reaction mechanisms within the system. The kinetics and microbiological diversity in the sulfidogenic fluidized-bed bioreactor (at 30°C) for co-treatment of hospital wastewater and metal-containing acidic water were examined. The alkalinity from organic oxidation raised the pH of the effluent from 2.3 to 6.1-8.2. Michaelis-Menten modelling yielded Km = 7.3 mg/l and Vmax = 0.12 mg/l/min in the batch bioreactor treatment using sulfate-reducing bacteria. For COD oxidation, the dissolved sulfide inhibition constant (Ki) was 3.6 mg/l, and the Ki value for H2S was 9 mg/l. The dominant species in the treatment process belong to the Proteobacteria group (especially Deltaproteobacteria). To further explore the co-treatment process, nanoscale zero-valent iron (nZVI) was used to enhance the treatment and was monitored through the oxidation-reduction potential (Eh) for 90 days. The removal pathways for the nZVI were co-precipitation, sorption, and reduction. The removal loads for Zn and Mn were approximately 198 mg Zn/g Fe and 207 mg Mn/g Fe, respectively, achieving a sulfate removal efficiency of 94%; organic matter (COD, BOD, DOC) and TDN were reduced significantly, but ibuprofen and naproxen achieved only 31% and 27% removal, respectively. This enriched co-treatment system exhibited a highly reducing condition in the reactor, as confirmed by Eh; hence, the nZVI was dosed only a few times during the biotreatment, demonstrating a cost-effective system. Subsequently, the biological process was run for over 210 days to collect enough data for the development of a predictive artificial neural network (ANN) model. The metal removal in the effluent was more than 99% for iron and zinc, which precipitated predominantly as FeS, FeS2 and ZnS. The alkalinity generated by COD oxidation improved the pH of the wastewater considerably when the concentration of feed sulfate was less than 3500 mg/l. At an HRT of 8 h, COD oxidation in the reactor precipitated 1345 mg Fe/l/day, 543 mg Al/l/day and 130-170 mg Zn/l/day from the acidic wastewater and increased the pH from 2.2 to 6.8, due to the formation of metal sulfide precipitates. The ANN model was successfully developed, and the predicted and actual measured output concentrations showed an R-value of 0.97. To summarise, this research study demonstrated a new application of co-treating AMD with pharmaceutical-rich wastewater using a fluidized-bed reactor (FBR). Further to that, the co-treatment was enhanced with nZVI to evaluate the heavy metals that were not adequately removed in prior experiments. Finally, the study developed a performance model which may easily be adopted for process design.
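For readers unfamiliar with the kinetic model named in the item above, the sketch below evaluates the Michaelis-Menten rate v = Vmax * S / (Km + S) using the constants reported in the abstract (Km = 7.3 mg/l, Vmax = 0.12 mg/l/min). The substrate concentrations are illustrative, not measured values from the study, and the reported sulfide inhibition constants are not modelled here.

import numpy as np

Km, Vmax = 7.3, 0.12   # mg/l and mg/l/min, as reported in the abstract

def mm_rate(substrate_mg_per_l):
    """Substrate utilisation rate v = Vmax * S / (Km + S), in mg/l/min."""
    S = np.asarray(substrate_mg_per_l, dtype=float)
    return Vmax * S / (Km + S)

for S in (2.0, 7.3, 20.0, 100.0):
    print(f"S = {S:6.1f} mg/l -> v = {mm_rate(S):.3f} mg/l/min")
# At S = Km the rate is half of Vmax, which is the usual sanity check on the fit.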
Item
Assessing the energy performance gap between 6-star and net-zero energy buildings for South Africa (2022) Analo, Andrew
Energy efficiency in buildings has been systematically coupled with the green-rating of buildings based on systems such as the Star-rating of the Green Building Council of South Africa (GBCSA). Net-zero energy buildings (NZEBs) have also been receiving increased attention as a way of addressing concerns over depleting energy resources (especially fossil fuels), increasing energy costs and greenhouse gas (GHG) emissions, which contribute to global warming and climate change.
With a focus on reducing the contribution of 6-Star green-rated buildings to GHG emissions, and thus enhancing climate change mitigation, the study applied a case-study approach based on the energy performance of the Department of Environmental Affairs (DEA) Building in Pretoria. Secondary data show that the building's status quo energy performance is 112 kWh/m2/yr. Within the temperate-interior climatic zone for Pretoria (as per the energy efficiency regulations for buildings in South Africa), psychrometric chart analysis showed that the building could achieve a higher level of thermal comfort through further optimisation of passive design interventions. Edge-tool simulation results on full optimisation of passive design and energy efficiency interventions indicate that a net-zero energy building (NZEB) of the same size could achieve an energy performance level of 45 kWh/m2/yr, revealing an energy performance gap of 67 kWh/m2/yr. This translates to 60% savings compared to the status quo 6-Star performance of 3 076 291 kWh/year. Assessment of the roof area for a solar PV system indicated that it is adequate for the energy balance towards an NZEB. Assessment of the simple payback period per intervention indicates a payback period of less than one year for tenant lighting, just over a year for tenant equipment, and three years for the PV installation. The findings indicate that the intervention costs for migration to NZEB fall within the acceptable range for South African investors (a maximum of 3 to 5 years). These findings indicate that the pursuit of NZEBs would contribute significantly to the mitigation of GHG emissions and climate change, and thus call for further exploration of pathways towards mandatory NZEBs for South Africa.
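The back-of-envelope arithmetic behind the figures quoted above is sketched below. The performance gap uses the intensities reported in the abstract; the payback example uses invented capital cost, annual saving and tariff values, since the abstract does not state them.

status_quo_kwh_per_m2 = 112      # reported status quo intensity
nzeb_kwh_per_m2 = 45             # reported simulated NZEB intensity
print("performance gap:", status_quo_kwh_per_m2 - nzeb_kwh_per_m2, "kWh/m2/yr")

def simple_payback_years(capex_rand, annual_saving_kwh, tariff_rand_per_kwh):
    """Simple payback = capital cost / annual cost saving (no discounting)."""
    return capex_rand / (annual_saving_kwh * tariff_rand_per_kwh)

# Hypothetical rooftop PV intervention (all three inputs are assumptions).
print(round(simple_payback_years(4_500_000, 900_000, 2.0), 1), "years")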
Item
Assessment of calcium sulphate dihydrate on spontaneous combustion at Khwezela Colliery (2021) Ngoepe, Thapelo Wilfred
Coal spontaneous combustion (CSC) is a major concern in the exploitation and utilisation of coal, and various methods for preventing it have been developed. This study aimed to determine the causes of spontaneous combustion of coal and to assess the effectiveness of calcium sulphate dihydrate (gypsum) on spontaneous combustion at Khwezela Colliery in Mpumalanga, South Africa. In order to exploit coal at favourable costs at Khwezela Colliery, gypsum was applied to two drill holes and one coal stockpile affected by spontaneous combustion. Temperature changes of the three drill holes and the coal stockpiles were measured daily for 21 days from 06h00 to 16h00. Data from the holes and stockpiles were represented graphically and analysed using the t-test; specifically, a two-sample t-test assuming unequal variances was used, with a significance level of 0.05. The critical value of the test statistic was 2.1 for the stockpiles and 2.0 for the holes. The absolute values of the test statistics obtained from comparing the hot holes ranged from 0.0 to 1.7, while the absolute values of the test statistics for comparisons of the same sides of the stockpiles ranged from 3.6 to 4.3. The treated stockpile produced an absolute test statistic of 2.0. The analysis revealed that gypsum is effective in managing the spontaneous combustion of stockpiles. An increase in the concentration of gypsum resulted in a decrease in the spontaneous combustion of the treated stockpile.
Similarly, an increase in the concentration of gypsum resulted in a decrease in in-hole temperature fluctuation. The use of gypsum in managing spontaneous combustion results in a decrease in operating costs and an improvement in the safety and productivity of mining operations affected by spontaneous combustion.
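The sketch below runs the statistical test named in the item above, a two-sample t-test assuming unequal variances (Welch's test), on simulated daily peak temperatures; the numbers are invented, not the Khwezela measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical daily peak temperatures (deg C) over 21 days for an untreated and
# a gypsum-treated stockpile (simulated values for illustration only).
untreated = rng.normal(loc=55.0, scale=6.0, size=21)
treated = rng.normal(loc=47.0, scale=5.0, size=21)

# Two-sample t-test assuming unequal variances (Welch's test), as in the study.
t_stat, p_value = stats.ttest_ind(untreated, treated, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# At a 0.05 significance level, |t| above the critical value (about 2.0-2.1 for
# these sample sizes) indicates a significant difference in mean temperature.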
Item
Challenges of moving into Medium Density Walk-up Residential Flats (MDWRFs): a case of Harare (2022) Manyunzu, David Chinamasa
There has been an increase in the number of developments of medium density walk-up residential flats (MDWRFs) in Harare in the last decade, and current policies are increasingly inclined towards multi-storey housing. The Zimbabwe National Human Settlements Policy of 2019 set a 40% minimum threshold of multi-storey housing in every housing project because of the benefits of this form of housing. However, multi-storey housing does not come without its challenges, particularly for low-income residents. This study investigates the challenges that residents face when transitioning into medium density walk-up residential flats, with particular focus on livelihoods and assets, habitability, and the management of common spaces and facilities. In doing so, the study fills a research gap, given the scarcity of studies on MDWRFs in Zimbabwe, and presents recommendations for the future planning and design of this form of housing. A mixed-methods research approach, combining open-ended and closed-ended questions in a single questionnaire survey, is adopted. The study explores the residential environment through the experience and evaluation of the residents, which in turn reveals the strengths and weaknesses of the current form of MDWRFs. The study found that the level of satisfaction with the current form of MDWRFs is high, and that the major weaknesses that need to be improved on are alternatives to municipal water, communal spaces in the building, maintenance of communal facilities, private outdoor spaces, local public facilities, fire safety, public security in the neighbourhood, and green areas and landscape.
Item
Coding camp Yaselalini (2022) Nkqeto, Philisani
The growing sentiment of contemporary social and economic discourse is the inevitability of the digital fourth industrial revolution. Some welcome it for the potential it brings for human evolution and progress, and some reject it for the dystopic impact it is projected to have on employability and equality. The common sentiment is, however, that the industrial revolution is inevitable. The research proposes two additional sentiments on the potential that this revolution might bring. The first is the dystopic impact that it might have in further alienating marginalised communities from participating in economic activities. The second is alternate to the first, and it engages with the immense potential that such a revolution could have in emancipating these marginalised communities. The research briefly engages with the disparities that will come about if marginalised rural communities are excluded from participating, arguing that it is the same exclusion from previous economic niches that rendered them marginalised to begin with. The research is underpinned by the second sentiment, which is the potential that presents itself should the marginalised rural communities of Lusikisiki and other rural communities be integrated into the revolution. This potential is both economic and social.
These rural communities can be upskilled in digital skills that will allow them easier access to economic participation, and they can use these skills to address the socio-economic challenges that they face. A critical and rudimentary step, however, in saturating digital skills into these rural communities is the provision of the necessary infrastructure that they currently lack. The lack of appropriate infrastructure and skills in these communities to gear themselves for the upcoming revolution is the basis on which the research makes the architectural argument of "Stagnant Typologies". In the research, stagnant typologies are expanded upon as being the critical societal typologies that are commissioned for socio-economic emancipation, but have remained stagnant and obsolete in their functions and evolution. The research focuses on the stagnant educational typologies of the rural Eastern Cape communities and their inability to meet the performative benchmark of the digital skills era. The research therefore proposes a revision of these educational typologies and a shift towards a newly evolving digital skills typology known as coding camps. These camps teach digital skills to any individual, regardless of age or qualification, and they are recorded to offer skills that are globally in shortage but in high demand. In the context of rural Lusikisiki, these camps can aid in addressing the socio-economic challenges of employability, unemployment, school dropouts and poverty. They can also respond to the current drift away from practices of culture and identity that are ingrained in these rural communities. The conclusion of the research is therefore a proposed coding camp facility in the town of Lusikisiki that will serve its surrounding communities and adjacent towns. The camp will be designed to suit its rural context and operate to serve the specific needs of that context. It will introduce a very foreign concept to these communities and potentially spark new architectural discourses about the potential that this emerging typology brings, particularly in marginalised communities.
Item
Community of independence: a communal sustainability centre in Soweto (2023) Brynard, Jeremia
South Africa is a country with a history that tells a story of struggle, oppression, revolution, and freedom. Over the centuries the people of South Africa have seen many rulers and have known many masters, but have now finally obtained freedom through democracy. The township of Soweto is no different, as its people were once oppressed and marginalised but, through collective strength and determination, rose and obtained their freedom. The people of Soweto have proven that they can empower themselves through community, making the communities of Ekhaya prime candidates for this dissertation. This dissertation questions the possibility of effecting tangible social, political, and economic change within an already established and previously oppressed urban landscape through communal empowerment and socio-economic independence. The proposed intervention targets the communities of Ekhaya through the revitalisation of the decommissioned Orlando Power Station within Soweto. The intervention results from close social, political, and environmental analysis that is the culmination of research done on themes such as community, empowerment, and independence within the context of the Ekhaya communities.
This dissertation is divided into five phases that are representative of the architectural design process: Data Acquisition, Problem Definition, Ideation, Prototyping, and Implementation. The Data Acquisition phase introduces the proposed intervention's site, surrounding context and initial observations. To reach the communities of Ekhaya, research needs to be done in order to identify the optimal location; this phase depicts the existing urban framework of Orlando East and the context surrounding the communities of Ekhaya and the chosen site. The following phase attempts to define the problems identified within the targeted area of study and initialise an approach towards understanding these problems and how they could be addressed. Once a proper investigation of the target area has been concluded, we identify areas of concern that require better understanding through further research. The Ideation phase addresses the identified problems through academic research and argumentative deliberation; here the topics identified in the previous phase are unpacked and argued to obtain a viable solution to be implemented through an architectural intervention. Following this, the Prototyping phase aims to explore the research findings physically and conceptually in order to move closer to a design solution; it shows how research can be transformed into conceptual models that later serve as a basis for design. Furthermore, the Implementation phase attempts to manifest all previous findings in a physical built form that is inclusive of community, provides empowerment, and inspires independence.
Item
Comparative analysis of linear and non-linear estimation techniques for the determination of recoverable resources in a sedimentary hosted Cu-Co type deposit (2021) Johnson, Russell Douglas
Mineral Resource estimation heavily impacts the technical and financial merits of mining feasibility studies, which are carried out prior to any material extraction. Since exploration requires significant investment, the feasibility of a project needs to be understood as early as possible in the development of a mining lifecycle. To help define the feasibility of a mining project, resource geologists estimate the Mineral Resource and the in-situ recoverable resources available for mining. Techno-economic studies are then carried out to assess the economic viability of mining and metallurgical extraction of the recoverable resource. This is achieved by geostatistically estimating the tonnage and grade of mineralisation above a given cut-off grade and at a chosen mining unit size (Isaaks and Srivastava, 1989). The research presented is a comparative case study aimed at assessing the suitability of linear and non-linear estimation techniques in the determination of recoverable resources from exploration drilling data in a sedimentary hosted copper-cobalt type deposit. In an operating mine, recoverable resources are typically determined after a grade control drilling programme, drilled on a tight grid to identify subtle variations in grade within a deposit. By comparison, exploration data is inherently broadly spaced and occurs at a much earlier stage in the mining project lifecycle. The geostatistical techniques considered for the estimation of recoverable resources are ordinary kriging, uniform conditioning, and localised uniform conditioning.
The localised estimate is contrasted against a grade control estimate, produced from ordinary kriging, to verify the success of determining the recoverable resources from exploration drilling data. The research study found that the dense drilling pattern of the grade control data provides a better understanding of the distribution of average copper grades at Tshifufia than localised uniform conditioning from exploration data. The success of uniform conditioning on exploration data, and of the subsequent localisation, is dependent on the size of the selective mining unit and on grades that have been ranked and spatially referenced according to the average ordinary kriging block estimates. This direct proportionality means that where ordinary kriging estimates are high or low, the localised uniform conditioning estimate will be proportionally high or low as well. Despite the aim of determining the recoverable resources at selective mining unit scale, localised uniform conditioning grades estimated from exploration data provide no more resolution than the ordinary kriging Mineral Resource estimate, since the underlying data inherently determines the uniform conditioning and localised uniform conditioning. Any additional resolution on the distribution of average grades at selective mining unit level, and hence on the determination of recoverable resources, is subject to the amount and spatial representation of available information during estimation. Therefore, no suitable substitute was found for grade control drilling and the resulting ordinary kriging grade control Mineral Resource estimate.
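For readers unfamiliar with the linear technique named in the item above, the sketch below solves a minimal ordinary kriging system for a single location from a handful of drillhole composites using a spherical variogram. Coordinates, grades and variogram parameters are invented; this is not the thesis workflow, and uniform conditioning and localisation are not shown.

import numpy as np

def spherical_gamma(h, nugget=0.1, sill=1.0, rng_a=300.0):
    """Spherical semivariogram with assumed nugget, sill and range."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng_a,
                 nugget + (sill - nugget) * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

samples = np.array([[100.0, 200.0], [250.0, 180.0], [180.0, 320.0], [300.0, 300.0]])
grades = np.array([1.8, 2.4, 1.1, 2.9])          # % Cu, hypothetical composites
target = np.array([200.0, 250.0])                # estimation location

d_ss = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2)
d_s0 = np.linalg.norm(samples - target, axis=1)

# Ordinary kriging system: gamma matrix bordered by the unbiasedness constraint.
n = len(samples)
A = np.ones((n + 1, n + 1))
A[:n, :n] = spherical_gamma(d_ss)
A[n, n] = 0.0
b = np.append(spherical_gamma(d_s0), 1.0)

weights = np.linalg.solve(A, b)[:n]              # kriging weights (sum to 1)
estimate = weights @ grades
print("weights:", np.round(weights, 3), f"estimate: {estimate:.2f} % Cu")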
Item
Conditions of harmony: an applied Musical Research Centre (2022) Mtshabe, Thandolwethu Kanya Zenande
The word 'harmony' is commonly used in music and is defined as "the combination of simultaneously sounded musical notes to produce a pleasing effect" according to the Oxford dictionary (1998). In architecture, harmony is used to describe complexities and differences that co-exist in relation to one another. In the research reported here, the word harmony is used as a middle position between music and architecture, allowing music to inform the design process for an architectural outcome. The design process is conducted in a series of exercises that translate music into architecture, while the theoretical writing navigates a timeline of expression, emphasising the importance of evolving. This journey of finding harmony highlights the main idea of the research report: temporal experience and spatial permanence. The new Urban Development Plan of the Buffalo City Metro Municipality, which aims to connect marginalised areas to the main economic hub, and the lack of multi-functional music spaces in East London are the leading motivations for the research report. The report proposes an extension to the corridor that starts at the selected site, creating a celebrated entrance/end by adding a permanent experience of sound.
Item
Constraint ranking through system modelling of the sublevel caving production cycle at Finsch Diamond Mine (2021) Oosthuizen, B W
Finsch Diamond Mine (FDM) is a sublevel caving (SLC) operation in the Northern Cape province of South Africa and is owned by Petra Diamonds Limited. The mine was experiencing challenges in achieving its planned production throughput of 280 000 tonnes per month. In addition, it was also facing an increase in unit operating costs and low diamond prices.
Consequently, the mine carried out two analytical and optimisation studies in 2018 and 2019, and managed to optimise some operational facets. However, the two studies fell short of optimising the production cycle of FDM and of reaching the planned production throughput. This research study was therefore undertaken to identify and rank the production cycle input parameters that are the most influential constraints prohibiting FDM from reaching its optimal production efficiency. The Theory of Constraints (TOC) was the main approach used during the study to identify, rank and ultimately make suggestions to eliminate the most influential production cycle constraints. The TOC was chosen as it has a five-step systematic approach to identifying constraints and improving the efficiency of a system. It was very important to identify the correct constraints, as this helps to focus and prioritise optimisation projects; such projects can be extremely expensive or turn out unsuccessful if focus is placed on the wrong constraints. To solve the problem, the study's objectives included the preparation of data for use in, and calibration of, a reliable simulation model of the mine. The data was then used in a software programme called Simio, where discrete-event simulation experiments on the production cycle input parameters were run and compared to a baseline simulation value. The TOC was applied by assuming each time that a parameter might be a constraint. This was followed by testing whether it was, by incrementally decreasing and increasing its utilisation or influence and measuring the production output, during the second and third steps of the evaluation. Some parameters had to be taken to the fourth step, where they were elevated by, for example, increasing the number of machines. The last step of the TOC was to repeat the process and find a new constraint, as in the first step. By exploiting, synchronising or elevating an assumed constraint, it was not only possible to determine whether an input parameter could be listed as an identified constraint, but it could also be ranked on the basis of the magnitude of its influence on the production output tonnage. Simulation was chosen because it is highly representative of reality and the most cost-effective way to experiment with a system that consists of multiple objects, actions and parameters. Furthermore, qualitative discussions and an analysis of historical data were also used in the study to identify other constraints that could not be identified through simulation. A preliminary constraint ranking was created from the simulation runs, which provided an ideal framework to evaluate parameters in more detail by adding historical data to support analyses and arguments. After presenting and discussing the results from the simulation runs, the historical data analysis and the qualitative discussions, the following constraint ranking was established (from most influential to least): (1) scheduled production time; (2) number of load, haul and dump (LHD) units in use; (3) tunnels available for loading; (4) LHD load capacity in tonnes; (5) LHD tramming speed; (6) number of production days scheduled; (7) number of drill rigs in use; (8) drilling quality; (9) charging and blasting quality; and (10) tip availability. It was apparent from the ranking that the available production time, the number and performance of LHD equipment in use, and the availability of ore are core to a successful SLC production cycle.
The ranking may seem relatively obvious; however, a more detailed discussion of each listed constraint is presented in the research project. The final constraint ranking was followed by concluding remarks and recommendations to improve the simulation model, as well as possible solutions to reduce or eliminate the ranked constraints. The main recommendations were that the approach presented in this study should be used as the precursor for any future optimisation study at FDM, and that the constraint ranking should be core to any optimisation project in order to focus and prioritise effort. Other recommendations focused on improving the simulation model by, for example, adding more qualitative logic, including precursory activities and adding the ability to change the time horizon of the model. More focus was placed on recommendations and solutions to reduce or eliminate the ranked constraints, which included the following: introduction of a 12-hour shift and a 4-day compressed workweek, with a second blasting opportunity; reduction of pre- and post-production delays with the assistance of digital technology; introduction of remote operation capabilities to increase loading and drilling time; utilising tablets and task management/scheduling software to assist in short-term interval control; maintaining the operation of nine LHDs in each shift, with swing units and tunnels always available on each production level; upgrading the quality of the roadways and introducing mobile rail-mats in each draw point; and reviewing the drilling and blasting practice to achieve better blasts. Some constraints are complex enough that major studies could be conducted solely on them in the future. The ranked constraints and recommendations remain a guideline to improve the production cycle at FDM and to assist in future optimisation studies or projects.
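The sketch below is a toy constraint-ranking exercise in the spirit of the item above: perturb each input of a simple, invented throughput formula and rank inputs by how much the monthly output changes. The real study used a calibrated Simio discrete-event model; every parameter value here is an assumption chosen only to show the bottleneck logic.

# Deterministic toy model: monthly tonnes limited by the smaller of LHD loading
# capacity and tipping capacity (a crude stand-in for a bottleneck).
baseline = {
    "shift_hours": 10.0,
    "shifts_per_day": 2.0,
    "pre_post_delays_h": 1.5,
    "lhds_in_use": 9.0,
    "lhd_payload_t": 14.0,
    "cycles_per_lhd_hour": 6.0,
    "lhd_availability": 0.80,
    "tips_available": 4.0,
    "tip_rate_tph": 180.0,
    "production_days": 28.0,
}

def monthly_tonnes(p):
    effective_hours = max(p["shift_hours"] - p["pre_post_delays_h"], 0.0) * p["shifts_per_day"]
    lhd_capacity = (p["lhds_in_use"] * p["lhd_payload_t"] * p["cycles_per_lhd_hour"]
                    * p["lhd_availability"] * effective_hours)
    tip_capacity = p["tips_available"] * p["tip_rate_tph"] * effective_hours
    return min(lhd_capacity, tip_capacity) * p["production_days"]

base = monthly_tonnes(baseline)
impact = {}
for name, value in baseline.items():
    up = dict(baseline, **{name: value * 1.10})
    down = dict(baseline, **{name: value * 0.90})
    impact[name] = (monthly_tonnes(up) - monthly_tonnes(down)) / 2.0

print(f"baseline throughput: {base:,.0f} t/month")
for name, delta in sorted(impact.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>22}: ~{abs(delta):,.0f} t per +/-10% change")

Parameters on the non-binding side of the min() show little or no impact, which is the Theory of Constraints point that only the current bottleneck is worth elevating.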
Item
Critical success factors in mining projects' post-completion phase: Modikwa Platinum Mine (2022) Khumalo, Vusimuzi
Mining plays a crucial role in modern society in providing the materials and services needed for everyday living, such as raw materials, energy, construction, machinery, and water purification. Mining requires capital-intensive investments to ensure the long-term sustainability of its operations. Significant amounts of money are invested to support mining capital projects, but there are high numbers of unsuccessful projects within the critical project management constraints (scope, time, cost, and quality). Inability to meet mining capital projects' objectives impacts investors, employees, government, and communities negatively. This manifests in financial losses to investors, loss of jobs for employees and loss of taxes to governments. It is believed that by learning from previous successful and unsuccessful projects, future mining projects could be managed and delivered more successfully. These lessons could be learnt by conducting post-project reviews, which could result in the improvement of both current and future mining projects. This research report was undertaken in order to understand the complexity of a post-completion review of a capital mining project at Modikwa Platinum Mine. According to the feasibility study, the mine was designed to mill 200 000 tonnes of ore per month, and later this figure was changed to 240 000 tonnes per month. The mine has not consistently achieved the design milling capacity, mostly due to a lack of ore from underground.
This stemmed from risks such as complex geology, lower than expected commodity prices and the global financial meltdown. These macro and micro factors contributed to the mine not achieving its feasibility outcomes. The review is conducted using production information, financial information, and results from questionnaires and interviews. The mine is located in a challenging socio-economic environment, which adds complexity to the project in terms of stakeholder management. Reviews of successful and unsuccessful mining projects globally and in South Africa, through information analysis and interviews, indicate a common thread of critical success factors and lessons to be learnt.
Item
Demand management in health care: the case for failure demand (2022) Hartmann, Dieter
Failure demand has been shown to have a material impact in many service industries, leading to increased waiting times and reduced system capability. The nature and impact of failure demand in health systems has, however, not been studied in great depth. This study proposes managing demand, and more specifically failure demand, as an alternative focus for closing the gap between capacity and demand. This is contrasted against the traditional focus on system capacity, which is raised through investment or efficiency improvements. To manage demand, the context must be understood, so a definition of the demand population for the health system is proposed, out of which a mental model is proposed that describes the demand modalities that exist in health systems. This model contains four key demand classes, namely value demand and failure demand (using Seddon's terminology), expanded by adding escalation demand and false demand. Failure demand is selected for development, and an algorithm is proposed that defines failure demand in a complex hierarchical organisation such as health care. A table is presented of common events that drive failure demand in health care. Leading out of this model, a health care setting is selected, in this case a national pharmaceutical supply chain in a developing country. The analysis was conducted by data mining order and dispatch documents and virtually recreating the operating history. For this, custom code was developed in Visual Basic for Applications, using a Sequential Pattern Mining approach. The wholesale and distribution networks were analysed, and failure demand levels of 56% and 29% respectively were found in these networks. Significant service delivery improvements are foreseen if the root causes of failure demand are addressed, which in this case are mainly procurement-policy related. The study shows that failure demand in health systems represents an opportunity to narrow the capacity-demand gap by managing demand through targeted interventions.
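As one plausible, simplified reading of failure demand in an order history (the thesis used a more elaborate Sequential Pattern Mining implementation in Visual Basic for Applications over real dispatch documents), the sketch below flags re-orders of the same item by the same facility soon after a short-filled order. All order lines, the 14-day window and the fill-rate rule are invented for illustration.

from collections import defaultdict
from datetime import date, timedelta

# Hypothetical order lines: (facility, item, order_date, fraction_of_order_filled).
orders = [
    ("clinic_a", "amoxicillin", date(2021, 3, 1), 0.4),
    ("clinic_a", "amoxicillin", date(2021, 3, 9), 1.0),   # re-order after a short fill
    ("clinic_a", "paracetamol", date(2021, 3, 1), 1.0),
    ("clinic_b", "amoxicillin", date(2021, 3, 2), 1.0),
    ("clinic_b", "amoxicillin", date(2021, 4, 20), 1.0),  # routine re-order, outside the window
]

WINDOW = timedelta(days=14)

def flag_failure_demand(order_lines):
    """Flag re-orders of the same item by the same facility soon after a short-filled order."""
    history = defaultdict(list)
    flagged = []
    for facility, item, when, fill in sorted(order_lines, key=lambda o: o[2]):
        for prev_when, prev_fill in history[(facility, item)]:
            if prev_fill < 1.0 and when - prev_when <= WINDOW:
                flagged.append((facility, item, when))
                break
        history[(facility, item)].append((when, fill))
    return flagged

flags = flag_failure_demand(orders)
for facility, item, when in flags:
    print(f"possible failure demand: {facility} re-ordered {item} on {when}")
print(f"flagged share of order lines: {len(flags) / len(orders):.0%}")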