Electronic Theses and Dissertations (Masters)
Permanent URI for this collection: https://hdl.handle.net/10539/37969
Item A Longitudinal Study on the Effect of Patches on Software System Maintainability and Code Coverage (University of the Witwatersrand, Johannesburg, 2024) Mamba, Ernest Bonginkosi; Levitt, Steve
In the rapidly evolving landscape of software development, ensuring the quality of code patches could potentially improve the overall health and longevity of a software project. The significance of assessing patch quality arises from its pivotal role in the ongoing evolution of software projects. Patches represent the incremental changes made to the code-base, shaping the trajectory of a project's development. Identifying and understanding the factors that influence patch quality could contribute to enhanced software maintainability, reduced technical debt and, ultimately, a more resilient and adaptive code-base. While previous research predominantly concentrates on analysing releases as static entities, this study extends an existing study of patch testing by incorporating an examination of quality from a maintainability point of view, thereby filling a void in patch-to-patch investigations. Over 90,000 builds spanning 201 software projects written in 17 programming languages are mined from two popular coverage services, Coveralls and Codecov. To quantify maintainability, a variant of the SIG Maintainability Model, a recognised metric designed to assess the maintainability of incremental code changes, is employed. Additionally, the Change Risk Anti-Patterns (CRAP) metric is utilised to identify and measure potential risks associated with code modifications. A moderate correlation of 0.4 was observed between maintainability and patch coverage, indicating that patches with higher coverage tend to exhibit improved maintainability. Similarly, a moderate correlation was identified between the CRAP metric and patch coverage, suggesting that higher patch coverage is associated with fewer change risk anti-patterns. In contrast, patch coverage demonstrates no correlation with overall coverage, underscoring the distinctive nature of patches. However, relying solely on patch coverage does not provide a comprehensive view of coverage patterns, so it is recommended that it be supplemented with overall system coverage. Moreover, patch maintainability also exhibits no correlation with overall coverage, again highlighting the unique nature of patches. In conclusion, the study offers valuable insights into the nuanced relationships between patch coverage, maintainability and change risk anti-patterns, contributing to a more refined understanding of software quality in the context of software evolution.
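As an illustration of the Change Risk Anti-Patterns metric named in this abstract, the sketch below applies the widely cited CRAP formulation (cyclomatic complexity combined with coverage) and a Spearman rank correlation to hypothetical per-patch data. It is a minimal sketch, not the dissertation's implementation, and the exact variant used in the study may differ.

```python
# Illustrative sketch only: the widely cited CRAP formulation and a Spearman rank
# correlation between patch coverage and CRAP, computed on hypothetical patch data.
from scipy.stats import spearmanr

def crap_score(complexity, coverage_pct):
    """CRAP = comp^2 * (1 - cov/100)^3 + comp, with cov given as a percentage."""
    return complexity ** 2 * (1 - coverage_pct / 100) ** 3 + complexity

# Hypothetical patches: (cyclomatic complexity of the changed code, patch coverage %).
patches = [(4, 90), (12, 35), (7, 60), (20, 10), (3, 100)]
crap = [crap_score(c, cov) for c, cov in patches]
coverage = [cov for _, cov in patches]

rho, p = spearmanr(coverage, crap)
print("CRAP scores:", [round(x, 1) for x in crap])
print(f"Spearman correlation between patch coverage and CRAP: {rho:.2f} (p = {p:.3f})")
```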
Item A Study of the Effect of Temperature on Cavity Partial Discharges in Polyethylene (PE) Insulation (University of the Witwatersrand, Johannesburg, 2024) Khangale, Mulovhela Kennedy; Nyamupangedengu, Cuthbert
Synthetic polymers such as polyethylene are prevalent in high-voltage insulation applications as they offer remarkable insulating and dielectric properties. Notwithstanding precautionary measures taken during manufacturing and installation, insulation systems are always susceptible to defects for various reasons, and these defects constitute a significant source of Partial Discharge (PD) activity. PD activity is a precursor to insulation degradation leading to premature failure of high-voltage equipment, and it is complex owing to its non-stationary behaviour and multi-variance dependence. Studies of partial discharge mechanisms have received significant attention over the years to improve understanding of the phenomena and, in some cases, to allow conclusions to be drawn on the parameters affecting PD mechanisms. These studies have shown that different mechanisms and parameters influence partial discharge activity. In this study, experimental and analytical modelling techniques are used to explore the behaviour of partial discharge mechanisms at varying temperatures. Experimental PD measurements were carried out in accordance with the IEC 60270 standard using a test voltage of 11 kV AC. The test temperatures studied were 15°C, 40°C, 50°C, 60°C, 70°C, 80°C and 90°C. Test specimens with a cavity diameter of 2.5 mm were assembled using three 1.5 mm thick polyethylene sheets sandwiched between two flat brass electrodes. Partial discharge parameters such as the charge magnitude, the inception voltage and the phase-resolved PD pattern (PDPRP) were measured and analysed at varying temperatures. For the analytical modelling, the streamer-like discharge concept is adopted to model the Partial Discharge Inception Voltage (PDIV), while the apparent charge magnitude is modelled using the induced charge concept introduced by Pedersen in the 1980s. A curve-fitting approach was adopted to replicate and explain the measured experimental data. Results showed that the PDIV increased linearly with temperature over the entire test range. The PD charge magnitude initially decreased with temperature from 15°C to 60°C and then increased from 60°C to 90°C. The evolution of the PDPRP with temperature was characterised by a turtle-like pattern at ambient temperature, which transitioned into a rabbit-ear pattern as the temperature increased to 90°C. The findings are interpreted using the effect of the mean free path on ionisation probability as well as the residual charge dynamics in the cavity as a function of temperature. The overall conclusion is that cavity discharge characteristics in polyethylene respond to temperature changes: the variation is monotonic for PDIV and non-monotonic for the apparent charge magnitude and the PDPRP. The implication of the findings is that in PD diagnosis, the temperature of the equipment under test must be taken into account when interpreting PD measurement results.

Item Analysing Test-Driven Development Adherence in Open-Source Projects Using Test-to-Code Traceability Links (University of the Witwatersrand, Johannesburg, 2024) Kirui, Gerald Kipruto; Levitt, Stephen
Several studies have been conducted to determine the impact of Test-Driven Development (TDD) on software quality. Many of them utilise test-to-code traceability tools and strategies to detect TDD adherence as part of their methodology. However, most test-to-code traceability tools rely on filename-based matching algorithms, which suffer from low recall; as a result, most TDD detection methods are not accurate. This study assesses the effectiveness of a statement-based matching algorithm over a filename-based one and whether it can be used to detect the TDD adherence of a software project. The filename-based and statement-based matching algorithms are implemented in Python. To evaluate these algorithms, 500 tests from sixteen Java projects (encompassing frameworks, libraries, and tools for data processing, testing, and web services) are used. These projects range in size from 5,115 lines of code (LoC) to 378,167 LoC. The evaluation characterises the performance of the algorithms through their weighted F1-scores. A mathematical function is created from first principles to relate method coverage and time deltas to the test-with-development (TWD) adherence of a project, and 100 Java projects are then used to demonstrate the utility of this function. The results show that the statement-based matching algorithm, with a weighted F1-score of 0.771 and a 95% confidence interval of [0.741, 0.786], is more accurate than the filename-based matching algorithm, with a weighted F1-score of 0.218 and a 95% confidence interval of [0.193, 0.246]. Additionally, the relative TWD adherence of eleven projects is found to be highly correlated with TDD scores from surveys (rs = 0.723, p-value = 0.012), and the method coverage of eighteen projects is found to be highly correlated with code coverage obtained from Codecov (rs = 0.744, p-value = 0.0003). The literature review reveals that no studies have explored the correlation between TDD adherence and development step size in software projects. An investigation is therefore conducted, using the statement-based test-to-code traceability tool and relative TWD adherence values, to determine whether TDD projects have smaller development steps than non-TDD projects. The study finds no significant correlation between median production method churn and relative TWD adherence in all but one of eight cases.
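To make the contrast between the two matching strategies concrete, here is a toy sketch, with hypothetical class names and link labels, of a filename-based matcher and of the weighted F1-score used to compare the algorithms. It is not the study's Python tool; a statement-based matcher would instead inspect the statements inside each test (for example, the types exercised in assertions).

```python
# Toy sketch: naive filename-based test-to-code matching plus a weighted F1-score.
# Class names and link labels are hypothetical; the study's own tool is not reproduced.
from sklearn.metrics import f1_score

def filename_match(test_class, production_classes):
    """Convention-based guess: 'FooTest' or 'TestFoo' is assumed to exercise 'Foo'."""
    base = test_class.removesuffix("Test").removeprefix("Test")
    return base if base in production_classes else None

production = ["Parser", "Lexer", "HttpClient"]
tests = ["ParserTest", "TestLexer", "RoundTripTest"]   # the last has no name-based match
print([filename_match(t, production) for t in tests])  # ['Parser', 'Lexer', None]

# Weighted F1 over hypothetical traceability labels (1 = link recovered, 0 = not).
y_true = [1, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0]
print(f"weighted F1-score: {f1_score(y_true, y_pred, average='weighted'):.3f}")
```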
Item Assessment of DC-DC Converter Selection Metrics (University of the Witwatersrand, Johannesburg, 2024) Letsoalo, Future Malekutu; Hofsajer, Ivan
The exponential growth of Internet of Things (IoT) devices, powered by diverse energy sources, poses significant challenges in power electronics. Despite advances in DC-DC converter topologies, a gap remains in the literature regarding standardized performance metrics for selecting suitable converters, making the selection process complex. This study critically assesses metrics from the seminal works of the 1960s to the contemporary state of the art and proposes a systematic approach to converter assessment. Two major categories of metrics are identified: averaging metrics and waveform-preserving metrics. Averaging metrics, grounded in Wolaver's foundational work, are effective for high-level comparisons among many converter options and establish a performance baseline. The study introduces an average modeling tool to reveal core converter characteristics for objective comparison. Waveform-preserving metrics, on the other hand, provide detailed performance insights and are suitable for a narrower set of converter options. The study further categorizes these metrics to assess converter switches and reactive components, and a new RMS metric is proposed that refines the existing processed power metric for better accuracy. By integrating both averaging and waveform-preserving metrics at the relevant design stages, this study offers a systematic framework for converter assessment that bridges the gap between high-level comparison and detailed performance evaluation, facilitating informed decision-making in converter selection.
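The proposed RMS metric itself is not reproduced here, but the quantity it builds on can be illustrated: for an idealised buck-converter switch current with assumed duty cycle and ripple, the RMS value that drives conduction stress is markedly higher than the average value an averaging metric would report.

```python
# Illustrative only: average versus RMS of an idealised buck-converter switch current.
# The duty cycle, load current and ripple are assumed values, not the proposed metric itself.
import numpy as np

fs, D, Io, dI = 100e3, 0.4, 5.0, 1.0          # switching frequency, duty cycle, load current, ripple
t = np.linspace(0, 1 / fs, 1000, endpoint=False)
on = t < D / fs                               # the switch conducts only during the on-time
i_sw = np.where(on, Io - dI / 2 + dI * t * fs / D, 0.0)

i_avg = i_sw.mean()
i_rms = np.sqrt(np.mean(i_sw ** 2))
print(f"average switch current = {i_avg:.2f} A, RMS switch current = {i_rms:.2f} A")
# Conduction loss scales with I_rms^2 * R_on, which is why RMS-based (waveform-preserving)
# metrics capture stress that a purely averaged description understates.
```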
Item Breakdown Strength Influences of Titanium Dioxide Nanoparticles on Midel Canola-Based Natural Ester oil: A Comparison Between the Anatase and Rutile Phases of Titanium Dioxide (University of the Witwatersrand, Johannesburg, 2024) Miya, Mabontsi Koba; Nixon, Ken
Natural ester oils are an alternative solution for sustainable transformer insulation. They offer good dielectric properties and, in addition, improve equipment safety and environmental sustainability. They have higher fire resistance than the widely used mineral oil and are less prone to explosions, and they are highly biodegradable and renewable. However, challenges such as inconsistent breakdown voltage at higher temperatures and higher streamer speeds hinder the wide use of natural esters. Nanotechnology has been found to improve the properties of the oil, including the breakdown voltage, and different nanoparticles have previously been studied with varying results. This dissertation presents a study of the use of two phases of TiO2 nanoparticles, namely rutile and anatase, to improve the breakdown voltage of natural ester oil at higher temperatures. The study seeks to establish the effects of the nanoparticle phases on the oil under uniform and non-uniform electric fields. Nanofluids of different loading concentrations (0.01 vol%, 0.03 vol% and 0.05 vol%) were created for each nanoparticle phase. The findings are that both phases of the nanoparticles improve the breakdown voltage under uniform fields: the anatase phase produced an improvement of 85% at ambient temperature, while the rutile phase produced an improvement of 61%. At higher temperatures, however, the rutile phase performed better, consistently outperforming the anatase phase in improving the breakdown voltage. Under non-uniform electric fields, the rutile TiO2-based nanofluid was found to be superior to the anatase-based fluid, yielding a significant 10% improvement in the average breakdown voltage and streamer acceleration voltage. An overall decrease in streamer speeds was observed with the addition of the rutile TiO2 nanoparticles. In contrast, anatase TiO2 resulted in decreased breakdown voltage and increased streamer speeds compared with both the rutile nanofluid and the pure natural ester oil. The rutile phase of TiO2 can therefore be regarded as a feasible solution for improving the breakdown voltage of natural ester oil under both uniform and non-uniform electric fields. The effects are attributed to the electron capture phenomenon and the good thermal stability of rutile TiO2: a stable composite is formed between the rutile nanoparticles and the host natural ester, and the resulting morphological structure maintains stable interfacial regions even at higher temperatures. In conclusion, rutile TiO2 nanocomposite natural ester fluid is a possible solution to the current limitations of ester oils as an alternative power transformer insulation oil.

Item Characterisation of Standard Telecommunication Fibre Cables for Cost-Effective Free Space Optical Communication (University of the Witwatersrand, Johannesburg, 2024) Iga, Fortune Kayala; Cox, Mitchell A.
In an era marked by an escalating demand for high-speed internet connectivity, optical communication plays a crucial role in meeting these needs. Free Space Optical (FSO) communication, which involves the wireless transmission of optical signals through the atmosphere, holds promise for extending existing fibre optic networks and connecting individuals beyond current coverage areas. Despite this potential, commercial FSO systems remain prohibitively expensive. A cost-effective FSO system can be achieved by utilising small form-factor pluggable (SFP) transceiver modules: these budget-friendly devices offer powerful transmit lasers and highly sensitive receiving photodiodes. To utilise these devices, optical signals are collimated out of a transmitting fibre into the atmosphere and coupled back into a receiving fibre. However, further investigation is needed to determine the optimal fibre cables for transmitting and receiving optical signals across the atmosphere so as to maximise received optical power and achieve efficient FSO communication. This study characterises the light-coupling performance of standard telecommunication fibre cables, with a focus on the optical power transmitted from, and received by, the fibre cable under atmospheric conditions. The power transmitted by the fibre cables is characterised by measuring the optical power in the fundamental Gaussian mode, which optimises transmission through the atmosphere by minimising beam divergence. Light coupling from free space is then characterised by measuring the optical power coupled into the different fibre cables under non-ideal conditions, including misalignment and atmospheric turbulence. The findings show notable correlations between the physical attributes of the fibre cables, namely refractive index profile, core size and numerical aperture, and their transmission and reception performance. The comprehensive characterisation of the standard fibre cables presented in this study provides insight into their suitability for distinct roles within a low-cost FSO system.
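The coupling behaviour described above can be illustrated with textbook Gaussian-beam relations. The sketch below uses assumed values for the wavelength and the mode-field radius of a standard single-mode fibre (not the measured cables) to estimate the far-field divergence, the uncollimated beam size after 100 m, and the overlap-integral coupling loss for a small lateral offset.

```python
# Textbook Gaussian-beam relations with assumed parameters (not the measured cables).
import numpy as np

wavelength = 1550e-9      # typical SFP telecom wavelength (assumed)
w0 = 5.2e-6               # mode-field radius of a standard single-mode fibre (assumed)

theta = wavelength / (np.pi * w0)              # far-field half-angle divergence (rad)
z_R = np.pi * w0 ** 2 / wavelength             # Rayleigh range (m)
w_100m = w0 * np.sqrt(1 + (100 / z_R) ** 2)    # beam radius after 100 m without collimation

# Overlap-integral coupling efficiency between identical Gaussian modes with a lateral
# offset d (a standard small-misalignment result): eta = exp(-(d/w0)^2).
d = 2e-6
eta = np.exp(-(d / w0) ** 2)

print(f"far-field divergence ~ {theta * 1e3:.0f} mrad")
print(f"uncollimated beam radius at 100 m ~ {w_100m:.1f} m (hence the need for collimation)")
print(f"coupling loss for a {d * 1e6:.0f} um lateral offset ~ {-10 * np.log10(eta):.2f} dB")
```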
Item Characterization of high-frequency time-domain effects arising from the transmission line substitutions of reactive components in a buck converter (University of the Witwatersrand, Johannesburg, 2024) Maree, John
The work presented in this dissertation continues a line of research suggesting that the energy storage components within a DC-DC converter may be a source of high-frequency effects in power converter circuits. Advancements in our understanding of the high-frequency operation of DC-DC converters have become increasingly rare, necessitating a new perspective, and very little work has been done within switching circuits using transmission line theory for the primary components themselves, specifically regarding the time-domain effects of these components. The research first introduces the design, simulation and experimental evidence for inductors and capacitors modelled using transmission line theory, and shows that for physically large reactive components, conventional lumped models are insufficient and transmission line modelling is required. These physically large components are then applied to a DC-DC buck converter circuit, where it is shown, both in simulation and experimentally, that the converter manifests measurable high-frequency effects on its output that are not predicted by conventional models but are adequately captured by transmission line models. The dissertation further explores time-domain quantification methods for these distributed effects and shows that the delay ratio between the transmission lines is a key parameter in determining the magnitude of the effects. This work provides strong experimental evidence for the existence of distributed effects arising from energy storage components within a DC-DC converter and indicates that this area of research is worth further investigation.
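A minimal numerical sketch, with assumed per-metre line parameters, of the central idea: an electrically short open-circuited line is indistinguishable from a lumped capacitor, but as frequency rises its input impedance departs from 1/(jωC) and eventually turns inductive past the quarter-wave resonance, behaviour a conventional lumped model cannot reproduce.

```python
# Assumed line parameters; compares the input impedance of an open-circuited transmission
# line with the ideal lumped capacitor a conventional model would use in its place.
import numpy as np

Lp, Cp, length = 250e-9, 100e-12, 0.3     # per-metre inductance, capacitance and line length (assumed)
Z0 = np.sqrt(Lp / Cp)                     # characteristic impedance (50 ohm here)
v = 1 / np.sqrt(Lp * Cp)                  # propagation velocity
C_lumped = Cp * length                    # total capacitance a lumped model would use

for f in (1e6, 50e6, 200e6, 900e6):
    beta_l = 2 * np.pi * f * length / v
    Z_line = -1j * Z0 / np.tan(beta_l)        # open-circuited line input impedance
    Z_cap = 1 / (2j * np.pi * f * C_lumped)   # ideal lumped capacitor
    print(f"{f / 1e6:6.0f} MHz: Z_line = {Z_line.imag:+9.1f}j ohm, Z_cap = {Z_cap.imag:+9.1f}j ohm")
```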
Item Comparative Study on the Accuracy of the Conventional DGA Techniques and Artificial Neural Network in Classifying Faults Inside Oil Filled Power Transformers (University of the Witwatersrand, Johannesburg, 2024) Mokgosi, Gomotsegang Millicent; Nyamupangedengu, Cuthbert; Nixon, Ken
Power transformers are expensive yet crucial for power system reliability. As the installed base ages and failure rates rise, there is growing interest in advanced methods for monitoring and diagnosing faults to mitigate risks. Power transformer failures are often due to insulation breakdown under harsh conditions such as overloading, which leads to prolonged outages, economic losses and safety hazards. Dissolved Gas Analysis (DGA) is a common diagnostic tool for detecting faults in oil-filled power transformers. However, it relies heavily on expert interpretation and can yield conflicting results, complicating decision-making. Researchers have explored Artificial Intelligence (AI) to address these challenges and improve diagnostic accuracy. This study investigates the use of Machine Learning (ML) techniques to enhance DGA-based diagnosis of power transformers. It employs an Artificial Neural Network (ANN) with feed-forward back-propagation and a Bayesian regulariser for prediction, Principal Component Analysis (PCA) for feature selection, and the Adaptive Synthetic (ADASYN) technique for data balancing. While traditional DGA methods are known for their accuracy and non-intrusiveness, they have limitations, particularly with undefined diagnostic regions. This research focuses on these limitations and demonstrates that the ANN provides more accurate predictions than conventional methods, with an average accuracy of 76.8% compared with 55% for the Dornenburg, 40% for the Duval, 38.4% for the Rogers and 31.8% for the IEC (International Electrotechnical Commission) methods. The findings show that an ANN can operate effectively and independently to improve diagnostic performance.
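As a concrete illustration of the kind of features and model involved, the sketch below computes classical Rogers/IEC-style gas ratios from hypothetical dissolved-gas concentrations and fits a generic feed-forward classifier to them. It is a stand-in only: the study's ANN uses feed-forward back-propagation with Bayesian regularisation, PCA and ADASYN, none of which are reproduced here.

```python
# Hypothetical DGA samples (ppm) and a generic feed-forward classifier sketch.
# Not the study's ANN (Bayesian regularisation, PCA and ADASYN are not reproduced).
import numpy as np
from sklearn.neural_network import MLPClassifier

def ratio_features(h2, ch4, c2h6, c2h4, c2h2):
    """Classical ratio features used by Rogers/IEC-style DGA methods."""
    eps = 1e-9
    return [ch4 / (h2 + eps), c2h2 / (c2h4 + eps), c2h4 / (c2h6 + eps)]

# Toy training data with assumed fault labels (0 = partial discharge, 1 = thermal, 2 = arcing).
X = np.array([
    ratio_features(600, 40, 10, 5, 1),       # hydrogen-dominant -> PD-like
    ratio_features(50, 200, 150, 300, 2),    # ethylene-rich -> thermal-fault-like
    ratio_features(80, 60, 20, 120, 90),     # acetylene-rich -> arcing-like
] * 10)                                       # repeated so the tiny model can fit
y = np.array([0, 1, 2] * 10)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([ratio_features(500, 30, 12, 6, 1)]))   # expected to land in class 0
```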
Item Evaluation and algorithmic adaptation of brain state control through audio entrainment (University of the Witwatersrand, Johannesburg, 2023-12) Cassim, Muhammed Rashaad; Rubin, David; Pantanowitz, Adam
This dissertation presents the design and evaluation of a system that can alter the dominant brain state of participants through audio entrainment. The research broadly aimed to identify the possible improvements of a dynamic entrainment stimulus when compared to a set entrainment stimulus. The dynamic entrainment stimulus was controlled by a Q-Learning (QL) model. The experiment sought to build on previous research by implementing existing entrainment methods in Virtual Reality and dynamically optimising the entrainment stimulus. The neurological effects of the stimuli were evaluated by analysing electroencephalogram (EEG) measurements. It was found that a set 24 Hz entrainment stimulus increased the power of Beta-band brain waves relative to a control condition. Further, contrary to existing research, the entrainment stimulus did not have a notable effect on brainwave connectivity at the entrainment frequency. The study subsequently evaluated whether the QL agent could learn to optimise the entrainment stimulus. The agent was allowed to switch between an 18 Hz and a 24 Hz entrainment stimulus and succeeded in learning an optimised policy. The QL-driven stimulus yielded results that generally exhibited the same characteristics as the set entrainment stimulus under power and connectivity analysis. Furthermore, the power analysis indicated that the QL-driven stimulus was able to affect a broader range of frequencies within the targeted band, and it resulted in higher meta-analysis metric values in some respects. These factors indicate that it had a more consistent impact on the targeted brain waves. Lastly, results from participants whose stimulus was controlled by the QL agent using optimal actions indicated that the optimised actions created a more sustained increase in Beta-band activity than any other condition, demonstrating the impact of the optimised policy that was learned.
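The agent described above can be sketched with standard tabular Q-learning. In the minimal example below, the two actions mirror the 18 Hz and 24 Hz stimuli, but the state encoding, reward signal and parameters are hypothetical stand-ins for the EEG-derived feedback used in the study.

```python
# Minimal tabular Q-learning sketch: two candidate stimuli, hypothetical states and rewards.
import random

actions = [18, 24]                      # candidate entrainment frequencies (Hz)
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate (assumed)
Q = {(s, a): 0.0 for s in range(3) for a in actions}   # three coarse brain-state bins (assumed)

def reward(state, action):
    """Stand-in for a measured change in Beta-band power; a real system would use EEG."""
    return random.gauss(1.0 if action == 24 else 0.5, 0.1)

state = 0
for _ in range(500):
    a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda x: Q[(state, x)])
    r = reward(state, a)
    next_state = random.randrange(3)    # placeholder state transition
    Q[(state, a)] += alpha * (r + gamma * max(Q[(next_state, b)] for b in actions) - Q[(state, a)])
    state = next_state

print({s: max(actions, key=lambda x: Q[(s, x)]) for s in range(3)})   # preferred stimulus per state
```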
Item Exploring the Performance of a PV-Supplied Cooking System with Hybrid Electric-Thermal Energy Storage (University of the Witwatersrand, Johannesburg, 2024) Chiloane, Learn Leago
This research explores the extent to which the addition of a thermal storage tank can reduce the electrical cooking energy in solar photovoltaic (PV) supplied cooking systems. The aim is to allow a smaller battery to store the reduced electrical cooking energy while the rest of the solar energy is stored in a low-cost thermal storage tank. This is done to reduce the cost of the storage unit and thereby address the challenge of clean-cooking access in rural African households without grid electricity. The energy performance of the system is evaluated for a small-scale solar system that powers a storage water heater, acting as the thermal storage tank, and a slow cooker as the cooking appliance. A thermal-electrical analogy and experimental tests are employed to model and validate the effectiveness of the system under different cooking conditions. Rice is chosen as the cooking medium because it is the fastest growing staple in Africa and its cooking process benefits from starting with pre-heated water. It is shown that starting with pre-heated water accelerates the readiness of rice and reduces the additional electrical energy needed to complete the cooking process, which translates into a similar reduction in battery size. The results from three cooking scenarios indicate that a hybrid cooking system incorporating a battery and a thermal storage tank can reduce the battery capacity requirements by up to 24.3% for one meal, 14.7% for two meals, and 10.6% for three meals in comparison with a purely electrical cooking system with a battery only.
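The thermal-electrical analogy mentioned above maps heat flow onto an RC circuit: the tank's heat capacity acts as the capacitor and its losses to ambient as the resistor. The sketch below integrates such a single-node model with assumed parameters to estimate the time and energy needed to pre-heat the water; it is illustrative only and does not use the study's measured values.

```python
# Single-node thermal-electrical analogy for a storage water heater (assumed parameters):
# C_th * dT/dt = P_heater - (T - T_amb) / R_th, integrated with a forward-Euler step.
m_water, c_p = 20.0, 4186.0              # water mass (kg) and specific heat (J/kg.K), assumed
C_th = m_water * c_p                     # thermal capacitance (J/K)
R_th = 0.8                               # loss resistance to ambient (K/W), assumed
T_amb, T, P_heater = 25.0, 25.0, 300.0   # ambient temp, initial tank temp (deg C), PV heater power (W)

dt, t, target, energy_in = 10.0, 0.0, 70.0, 0.0   # time step (s), elapsed time (s), pre-heat target (deg C)
while T < target:
    T += (P_heater - (T - T_amb) / R_th) / C_th * dt
    energy_in += P_heater * dt
    t += dt

print(f"time to pre-heat to {target:.0f} deg C: {t / 3600:.1f} h, "
      f"energy supplied: {energy_in / 3.6e6:.2f} kWh")
```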
Item Feasibility of region of interest selection preprocessing using a multi-photodiode fingerprint-based visible light positioning system (University of the Witwatersrand, Johannesburg, 2024) Achari, Dipika; Cheng, Ling
This research presents a novel multi-photodiode fingerprint-based Visible Light Positioning (VLP) system aimed at improving the accuracy and reducing the computational expense of indoor localization. The system leverages an advanced K-Nearest Neighbors (KNN) algorithm, enhanced by Signal Strength Clustering, alongside a region selection strategy based on frequency-modulated VLC-encoded IDs. Through extensive simulations, the system demonstrated a notable reduction in Mean Absolute Error (MAE) to approximately 2.5 meters, with a Root Mean Square Error (RMSE) of around 3.0 meters. In addition, the system exhibited robustness across varying ambient light conditions and room sizes, maintaining an accuracy rate of 95%, even in challenging environments. Analysis revealed that error rates increased in larger rooms, with average errors ranging from 1.50 meters in smaller spaces to 3.51 meters in larger environments, suggesting that while the system is effective in smaller areas, its accuracy diminishes slightly as room size expands. However, integrating frequency-domain analysis and region of interest (ROI) selection proved to be a practical approach, enhancing the overall performance of the VLP system by providing faster and more accurate indoor navigation. Future research includes exploring advanced modulation techniques, integrating supplementary sensing technologies, and fine-tuning the algorithm parameters to improve the system's accuracy and reliability, especially in more complex or dynamic environments.
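The fingerprinting step at the core of such a system can be sketched compactly: received-signal-strength vectors from several photodiodes are compared against a fingerprint database and the nearest reference positions are averaged. The database and query below are hypothetical, and the study's Signal Strength Clustering and ROI-selection enhancements are not reproduced.

```python
# Toy RSS-fingerprint KNN positioning sketch (hypothetical fingerprints and query).
import numpy as np

# Fingerprint database: (x, y) reference position -> RSS vector from four photodiodes.
fingerprints = {
    (0.0, 0.0): [0.90, 0.20, 0.15, 0.10],
    (0.0, 2.0): [0.40, 0.85, 0.20, 0.15],
    (2.0, 0.0): [0.35, 0.15, 0.80, 0.20],
    (2.0, 2.0): [0.20, 0.30, 0.35, 0.75],
}

def knn_position(rss_query, k=3):
    """Inverse-distance-weighted average of the k closest fingerprints in RSS space."""
    dists = sorted((np.linalg.norm(np.array(rss_query) - np.array(rss)), pos)
                   for pos, rss in fingerprints.items())[:k]
    weights = np.array([1.0 / (d + 1e-6) for d, _ in dists])
    coords = np.array([pos for _, pos in dists])
    estimate = np.average(coords, axis=0, weights=weights)
    return tuple(round(float(c), 2) for c in estimate)

print(knn_position([0.60, 0.55, 0.20, 0.15]))   # lands between the first two references
```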
Item Improving Iterative Soft Decision Decoding of Reed Solomon Codes Using Deep Learning (University of the Witwatersrand, Johannesburg, 2024) Nkiwane, Kimberly Ntokozo
Telecommunications in the current information age depends increasingly on the efficient transmission of data through a noisy channel, and the use of Forward Error Correction (FEC) in the development of decoding algorithms is therefore an active area of research. This dissertation focuses on exploiting deep learning and error correction techniques to improve the iterative soft decision decoding of Reed Solomon (RS) codes. The parity check matrix of RS codes is characterised by a dense structure, which directly affects the exchange of soft information during the iterative decoding process. To counter this issue, a bit-level implementation is utilised with the proposed decoding approach. Furthermore, additional techniques for adding sparsity to the parity check matrix are presented. The proposed method for adding sparsity leverages the cyclic properties of RS codes to add low-rate rows to the parity check matrix. This sparse implementation aids the exchange of soft information during the message passing stage of the proposed iterative decoding process. The implementation of deep learning techniques to improve iterative soft decision decoders is also presented. The proposed approach adapts the Neural Belief Propagation (NBP) algorithm for RS codes and utilises the sparse implementation presented in this research to improve the exchange of soft information. This in turn leads to gains in error correction performance without adding complexity, which is one of the main advantages of incorporating neural networks in the iterative decoding process. Additionally, this dissertation proposes a Graph Neural Network (GNN) implementation for iterative soft decision decoding of RS codes. The approach employs the GNN architecture to construct a fully connected graph representing a message passing algorithm based on the Tanner graph, with trainable weights assigned to the graph nodes. This implementation improves the error correction performance of the proposed iterative soft decision decoder while reducing the number of iterations required to decode the received vector.

Item Model Propagation for High-Parallelism in Data Compression (University of the Witwatersrand, Johannesburg, 2023-10) Lin, Shaw Chian; Cheng, Ling
Recent data compression research focuses on the parallelisation of existing algorithms (LZ77, BZIP2, etc.) by exploiting their inherent parallelism. Little work has been done on parallelising highly sequential algorithms, whose slow compression speeds would benefit the most from parallelism. This dissertation presents a generalised parallelisation approach that can potentially be adopted by any compression algorithm, with model sequentiality in mind. The scheme presents a novel divide-and-conquer approach to dividing the data stream into smaller data blocks for parallelisation. The scheme, branching propagation, is implemented with prediction by partial matching (PPM), an algorithm of the statistical-modelling family known for its serial nature, which is shown to suffer compression ratio increases when parallelised. A speedup of 5.2-7x is achieved at 16 threads, with at most a 6.5% increase in size relative to serial performance, while the conventional approach showed up to a 7.5x speedup with an 8.0% increase. The branching propagation approach offers better compression ratios than conventional approaches with increasing parallelism (a difference of 11% increase at 256 threads), albeit at slightly slower speeds. To quantify the speedup against the ratio penalty, an alternative metric called speedup-to-ratio increase (SRI) is used. This shows that when serial dependency is maintained, branching propagation is superior in standard configurations, offering substantial speed while minimising the compression ratio penalty relative to the speedup. However, at lower serial dependency, the conventional approach is generally preferable, with a 9-16x speedup per 1% increase in compression ratio at maximal speed compared to branching propagation's 6-13x speedup per 1%.

Item Modelling OAM Crosstalk with Neural Networks: Impact of Tip/tilt and Lateral Displacement (University of the Witwatersrand, Johannesburg, 2024) Makoni, Steven Gamuchirai; Cheng, Ling
This research addresses a critical challenge within Free Space Optical (FSO) communication systems, specifically those utilizing Mode Division Multiplexing (MDM) with Orbital Angular Momentum (OAM) modes, which suffer from a limited transmission range. Despite the potential of these systems to significantly enhance spectral efficiency and transmission capacity, their effectiveness is hindered by the limited range caused by atmospheric turbulence-induced aberrations. Atmospheric turbulence and misalignments distort the optical wavefront, degrading the orthogonal spatial modes and causing power to spread into adjacent modes, known as crosstalk in MDM systems. This research presents a simple neural network model for estimating OAM crosstalk in FSO systems, focusing specifically on atmospheric turbulence-induced aberrations. Datasets were first generated through simulation and experimentation for validation purposes. The neural network model was then developed and evaluated, and its accuracy assessed under various turbulence aberrations. The simple neural network, trained solely on tip/tilt and displacement inputs and without retraining, accurately estimated OAM spectra using approximated inputs in turbulent conditions, closely matching experimentally measured spectra. Despite the presence of turbulent aberrations, the model showed only a minimal decrease in the coefficient of determination, indicating its ability to generalise well to unseen measurements. The findings indicate that a simple neural network trained solely on tilt and displacement inputs can accurately estimate OAM crosstalk amid many turbulence aberrations for ℓ ∈ [-5, 5] as a proof of concept. This implies that simple detectors such as cameras can be used to implement or optimise digital signal processing for error detection and correction utilising knowledge of the crosstalk, offering promising avenues for improving system efficiency and quality of service in MDM systems. In summary, this research leveraged neural networks to model OAM crosstalk induced by misalignments and turbulence. The model's ability to estimate OAM crosstalk due to misalignments and atmospheric turbulence shows potential for use in real-time predictive systems. With further refinement, neural network models could indicate the evolution of OAM crosstalk in FSO communications that employ OAM multiplexing schemes in atmospheric turbulence. The demonstrated efficacy of the neural network model positions it as a valuable tool for enhancing the robustness of FSO communications employing higher-order OAM modes.
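In the spirit of the model described above, the sketch below trains a small feed-forward regressor to map normalised tip/tilt and lateral displacement to an 11-element power spectrum over ℓ ∈ [-5, 5]. The training data are synthetic placeholders generated by a toy spreading function, not the simulated or experimentally measured spectra used in the research.

```python
# Minimal sketch: a small MLP mapping (tip, tilt, dx, dy) to an 11-element OAM power
# spectrum for l in [-5, 5]. The training data are synthetic placeholders only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 4))          # normalised tip, tilt, dx, dy

def synthetic_spectrum(misalignment):
    """Toy model: larger misalignment spreads power from l = 0 into neighbouring modes."""
    spread = 0.3 + 2.0 * np.linalg.norm(misalignment)
    l = np.arange(-5, 6)
    p = np.exp(-(l / spread) ** 2)
    return p / p.sum()

Y = np.array([synthetic_spectrum(x) for x in X])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0).fit(X, Y)
print(f"R^2 on the synthetic training set: {model.score(X, Y):.3f}")
print(model.predict([[0.1, 0.0, 0.05, 0.0]]).round(3))   # estimated crosstalk spectrum
```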
Item The techno-economic impact of a high penetration of embedded generators on South African, Brazilian, Australian and Ugandan distribution networks: A comparative review (University of the Witwatersrand, Johannesburg, 2024) Rakgalakane, Motladitseba Dorcas; Jandrell, Ian
Owing to current electricity capacity shortages and rising electricity prices in South Africa, customers are opting for self-generation to mitigate the effects of load shedding and offset their electricity bills. In June 2021, the South African government removed the licensing requirement for private generation to encourage the uptake of self-generation, close capacity shortfalls and promote investment in private generation. While the increase in private generation is seen by the electricity industry as a positive step towards meeting energy supply demands, there are concerns about the impact that large numbers of embedded generation facilities will have on distributors, that is, on their networks and revenues. The aim of this study was to review the technical, economic and regulatory impact of a high number of embedded generators on distributors and their networks. The impact in South Africa is compared with the impact in Brazil, Australia and Uganda. The study identifies some of the success strategies implemented by these countries to address challenges associated with private embedded generation, and provides recommendations for South Africa. South Africa compares well with Brazil and Australia in terms of electricity access and installed generation capacity versus population; however, in terms of embedded generation, particularly from variable renewable energy sources, South Africa's penetration levels are still lower than those of Brazil and Australia, although higher than those of Uganda. The review highlights that the impact of embedded generation is largely driven by technical, economic and regulatory policy changes. The absence of a clear market structure or market direction, enabling legislation and policies, and regulatory tools (such as national rules for integration or compensation, and unbundled tariffs for some customer categories) makes it difficult to minimise the negative effects of a high penetration of embedded generation and to capitalise on potential positive effects. In Brazil and Australia, the success of renewable energy embedded generation is largely the result of clear policy and regulations, which lead and drive positive changes in their electricity industries. Recommendations are made for legislation, policy and regulation changes to support embedded generation, the creation of a clear market structure, and the publication of national guidelines for embedded generation management. In addition, tariffing mechanisms should be reviewed to ensure a fair distribution of costs.