School of Electrical & Information Engineering (ETDs)

Permanent URI for this community: https://hdl.handle.net/10539/37968

Search Results

Now showing 1 - 6 of 6
  • Item
    Comparative Study on the Accuracy of the Conventional DGA Techniques and Artificial Neural Network in Classifying Faults Inside Oil Filled Power Transformers
    (University of the Witwatersrand, Johannesburg, 2024) Mokgosi, Gomotsegang Millicent; Nyamupangedengu, Cuthbert; Nixon, Ken
    Power transformers are expensive yet crucial for power system reliability. As the installed base ages and failure rates rise, there is growing interest in advanced methods for monitoring and diagnosing faults to mitigate risks. Power transformer failures are often due to insulation breakdown under harsh conditions such as overloading, which leads to prolonged outages, economic losses and safety hazards. Dissolved Gas Analysis (DGA) is a common diagnostic tool for detecting faults in oil-filled power transformers. However, it relies heavily on expert interpretation and can yield conflicting results, complicating decision-making. Researchers have explored Artificial Intelligence (AI) to address these challenges and improve diagnostic accuracy. This study investigates the use of Machine Learning (ML) techniques to enhance DGA for diagnosing power transformers. It employs a feed-forward back-propagation Artificial Neural Network (ANN) with a Bayesian regulariser for predictions, Principal Component Analysis (PCA) for feature selection and Adaptive Synthetic sampling (ADASYN) for data balancing. While traditional DGA methods are known for their accuracy and non-intrusiveness, they have limitations, particularly with undefined diagnostic areas. This research focuses on these limitations and demonstrates that the ANN provides more accurate predictions than conventional methods, with an average accuracy of 76.8% versus 55% for Dornenburg, 40% for Duval, 38.4% for Rogers and 31.8% for the IEC (International Electrotechnical Commission) method. The findings show that the ANN can operate effectively on its own to improve diagnostic performance.
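As an illustration of the kind of pipeline the abstract above describes (not the thesis's own code), the sketch below combines ADASYN balancing, PCA feature reduction and a small feed-forward network using scikit-learn and imbalanced-learn. The gas features, file name and network size are assumptions, and scikit-learn's MLPClassifier with L2 regularisation stands in for the Bayesian-regularised back-propagation network used in the study.

```python
# Illustrative sketch: DGA fault classification with ADASYN balancing,
# PCA feature reduction and a feed-forward neural network.
# Assumes a CSV with dissolved-gas concentrations and a fault label (hypothetical).
import pandas as pd
from imblearn.over_sampling import ADASYN
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("dga_samples.csv")                     # hypothetical dataset
X = data[["H2", "CH4", "C2H6", "C2H4", "C2H2"]].values    # assumed gas features
y = data["fault_class"].values                            # e.g. PD, D1, D2, T1, T2, T3

# Balance the minority fault classes before training.
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.2, random_state=0)

# MLPClassifier with L2 regularisation (alpha) approximates, but is not,
# the Bayesian-regularised back-propagation network used in the study.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),                                  # feature reduction
    MLPClassifier(hidden_layer_sizes=(20,), alpha=1e-2, max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("Hold-out accuracy:", model.score(X_te, y_te))
```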
  • Item
    A Longitudinal Study on the Effect of Patches on Software System Maintainability and Code Coverage
    (University of the Witwatersrand, Johannesburg, 2024) Mamba, Ernest Bonginkosi; Levitt, Steve
    In the rapidly evolving landscape of software development, ensuring the quality of code patches could potentially improve the overall health and longevity of a software project. The significance of assessing patch quality arises from its pivotal role in the ongoing evolution of software projects. Patches represent the incremental changes made to the code-base, shaping the trajectory of a project’s development. Identifying and understanding the factors that influence patch quality could contribute to enhanced software maintainability, reduced technical debt and, ultimately, a more resilient and adaptive code-base. While previous research predominantly concentrates on analysing releases as static entities, this study extends an existing study of patch testing by incorporating an examination of quality from a maintainability point of view, thereby filling a void in patch-to-patch investigations. Over 90,000 builds spanning 201 software projects written in 17 programming languages are mined from two popular coverage services, Coveralls and Codecov. To quantify maintainability, a variant of the SIG Maintainability Model, a recognised metric designed to assess the maintainability of incremental code changes, is employed. Additionally, the Change Risk Anti-Patterns (CRAP) metric is used to identify and measure potential risks associated with code modifications. A moderate correlation of 0.4 was observed between maintainability and patch coverage, indicating that patches with higher coverage tend to exhibit improved maintainability. Similarly, a moderate correlation was identified between the CRAP metric and patch coverage, suggesting that higher patch coverage is associated with fewer change risk anti-patterns. In contrast, patch coverage shows no correlation with overall coverage, underscoring the distinctive nature of patches. However, relying solely on patch coverage does not give a comprehensive view of coverage patterns, so it is recommended to supplement it with overall system coverage. Moreover, patch maintainability also exhibits no correlation with overall coverage, again highlighting the unique nature of patches. In conclusion, the study offers valuable insights into the nuanced relationships between patch coverage, maintainability and change risk anti-patterns, contributing to a more refined understanding of software quality in the context of software evolution.
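For reference, the CRAP metric mentioned above has a widely cited published formulation (cyclomatic complexity squared, scaled by the cube of the uncovered fraction, plus the complexity). The dissertation may use an adapted variant, so the following is a sketch of the standard definition rather than the study's exact implementation.

```python
def crap_score(cyclomatic_complexity: float, coverage_percent: float) -> float:
    """Change Risk Anti-Patterns (CRAP) score for a single method.

    Standard published formulation: comp^2 * (1 - cov/100)^3 + comp,
    where coverage_percent is the method's test coverage in [0, 100].
    The dissertation may apply an adapted variant of this definition.
    """
    comp = cyclomatic_complexity
    cov = coverage_percent / 100.0
    return comp ** 2 * (1.0 - cov) ** 3 + comp

# A fully covered method scores low regardless of complexity;
# an uncovered complex method scores very high.
print(crap_score(10, 100))   # 10.0
print(crap_score(10, 0))     # 110.0
```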
  • Item
    Characterisation of Standard Telecommunication Fibre Cables for Cost-Effective Free Space Optical Communication
    (University of the Witwatersrand, Johannesburg, 2024) Iga, Fortune Kayala; Cox, Mitchell A.
    In an era marked by an escalating demand for high-speed internet connectivity, optical communication plays a crucial role in meeting these needs. Free Space Optical (FSO) communication, which involves the wireless transmission of optical signals through the atmosphere, holds promise for extending existing fibre optic networks and connecting individuals beyond current coverage areas. Despite the potential, commercial FSO systems remain prohibitively expensive. A cost-effective FSO system can be achieved by utilising small form-factor pluggable (SFP) transceiver modules. These budget-friendly devices offer powerful transmit lasers and highly sensitive receiving photodiodes. To utilise these devices, optical signals are collimated out of a transmitting fibre into the atmosphere and coupled back into a receiving fibre. However, further investigation is needed to determine the optimal fibre cables for transmitting and receiving optical signals across the atmosphere to maximise received optical power and achieve efficient FSO communication. This study aims to characterise the light coupling performance of standard telecommunication fibre cables, with a focus on the optical power transmitted from and received by the fibre cable under atmospheric conditions. The methodology employed for characterising the power transmitted by the fibre cables involves measuring the optical power in the fundamental Gaussian mode. This mode optimises transmission through the atmosphere by minimising beam divergence. Subsequently, light coupling from free-space is characterised by measuring the optical power coupled into the different fibre cables under non-ideal conditions, including misalignment and atmospheric turbulence. The findings of this research show notable correlations between the physical attributes of the fibre cables, namely refractive index profile, core size and numerical aperture, and their transmission and reception performances. The comprehensive characterisations of the standard fibre cables presented in this study provide insights into their suitability for distinct roles within a low-cost FSO system.
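As a rough, first-order illustration of the quantities being characterised (not the thesis's measurement procedure), the snippet below applies textbook Gaussian-beam relations: the far-field divergence half-angle of the fundamental mode and the coupling efficiency between two identical Gaussian modes under lateral offset. The wavelength and mode-field radius are assumed typical values for standard single-mode fibre.

```python
import numpy as np

# Textbook Gaussian-beam relations used as a first-order model; the fibre
# parameters below are typical assumed values, not measured data from the study.
wavelength = 1550e-9          # m, a common SFP wavelength
w0 = 5.2e-6                   # m, assumed mode-field radius of standard SMF

# Far-field divergence half-angle of the fundamental Gaussian mode.
theta = wavelength / (np.pi * w0)
print(f"divergence half-angle: {np.degrees(theta):.2f} deg")

def lateral_coupling_efficiency(offset_m: float, mode_radius_m: float) -> float:
    """Overlap of two identical Gaussian modes offset laterally by offset_m."""
    return float(np.exp(-(offset_m / mode_radius_m) ** 2))

for d in (0.0, 2e-6, 5e-6):
    print(f"offset {d * 1e6:4.1f} um -> efficiency {lateral_coupling_efficiency(d, w0):.2f}")
```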
  • Item
    Feasibility of region of interest selection preprocessing using a multi-photodiode fingerprint-based visible light positioning system
    (University of the Witwatersrand, Johannesburg, 2024) Achari, Dipika; Cheng, Ling
    This research presents a novel Multi-Photodiode Fingerprint-Based Visible Light Positioning (VLP) system aimed at improving the accuracy and reducing the computational expense of indoor localization. The system leverages an advanced K-Nearest Neighbors (KNN) algorithm, enhanced by Signal Strength Clustering, alongside a region selection strategy based on frequency-modulated Visible Light Communication (VLC) encoded IDs. Through extensive simulations, the system demonstrated a notable reduction in Mean Absolute Error (MAE) to approximately 2.5 meters, with a Root Mean Square Error (RMSE) of around 3.0 meters. In addition, the system exhibited robustness across varying ambient light conditions and room sizes, maintaining an accuracy rate of 95% even in challenging environments. Analysis revealed that error rates increased in larger rooms, with average errors ranging from 1.50 meters in smaller spaces to 3.51 meters in larger environments. This suggests that while the system is effective in smaller areas, its accuracy diminishes slightly as room size expands. However, integrating frequency domain analysis and region of interest (ROI) selection proved to be a practical approach, enhancing the overall performance of the VLP system by providing faster and more accurate indoor navigation. Future research includes exploring advanced modulation techniques, integrating supplementary sensing technologies, and fine-tuning the algorithm parameters to improve the system’s accuracy and reliability, especially in more complex or dynamic environments.
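A minimal sketch of the fingerprint-matching step that a KNN-based VLP system relies on is shown below; the grid, signal values and weighting are illustrative assumptions, and the thesis's signal-strength clustering and ROI pre-selection stages are not reproduced here.

```python
import numpy as np

# Illustrative weighted-KNN position estimate from an RSS fingerprint database.
# positions holds grid points (x, y in metres); fingerprints holds the
# received-signal-strength vector from several LED transmitters at each point.
# All values below are placeholders, not data from the study.
rng = np.random.default_rng(0)
positions = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
fingerprints = rng.random((len(positions), 4))          # 4 LED channels (assumed)

def estimate_position(rss: np.ndarray, k: int = 3) -> np.ndarray:
    """Weighted K-nearest-neighbours estimate over the fingerprint map."""
    dists = np.linalg.norm(fingerprints - rss, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)              # closer fingerprints count more
    return np.average(positions[nearest], axis=0, weights=weights)

# Query with a noisy measurement taken near grid point (2, 3).
measurement = fingerprints[2 * 5 + 3] + rng.normal(0, 0.01, 4)
print(estimate_position(measurement))
```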
  • Item
    Model Propagation for High-Parallelism in Data Compression
    (University of the Witwatersrand, Johannesburg, 2023-10) Lin, Shaw Chian; Cheng, Ling
    Recent data compression research focuses on parallelising existing algorithms (LZ77, BZIP2, etc.) by exploiting their inherent parallelism. Little work has been done on parallelising highly sequential algorithms, whose slow compression speeds would benefit the most from parallelism. This dissertation presents a generalised parallelisation approach that can potentially be adopted by any compression algorithm, with model sequentiality in mind. The scheme introduces a novel divide-and-conquer approach for dividing the data stream into smaller data blocks for parallelisation. The scheme, branching propagation, is implemented with prediction by partial matching (PPM), an algorithm from the statistical-modelling family known for its serial nature, which is shown to suffer from compression-ratio increases when parallelised. A speedup of 5.2-7x is achieved at 16 threads, with at most a 6.5% increase in size relative to serial performance, while the conventional approach showed up to a 7.5x speedup with an 8.0% increase. The branching propagation approach is shown to offer better compression ratios than conventional approaches with increasing parallelism (a difference of 11% increase at 256 threads), albeit at slightly slower speeds. To quantify the speedup against the ratio penalty, an alternative metric, the speedup-to-ratio increase (SRI), is used. This shows that when serial dependency is maintained, branching propagation is superior in standard configurations, offering substantial speedup while minimising the compression-ratio penalty relative to the speedup. However, at lower serial dependency, the conventional approach is generally preferable, with a 9-16x speedup per 1% increase in compression ratio at maximal speed, compared to branching propagation’s 6-13x speedup per 1%.
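For context, the conventional baseline the dissertation compares against splits the stream into blocks that are compressed independently, which is what the sketch below does. Here zlib stands in for the PPM coder, and the branching-propagation scheme itself (which propagates model state between blocks rather than restarting each block's model) is specific to the dissertation and not shown.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

# Conventional block-splitting parallelism: each block is compressed with an
# independent model, so cross-block context is lost and the compression ratio
# degrades as the block count grows. zlib is only a stand-in codec here.

def compress_block(block: bytes) -> bytes:
    return zlib.compress(block, level=9)

def parallel_compress(data: bytes, n_blocks: int = 16) -> list[bytes]:
    size = -(-len(data) // n_blocks)                      # ceiling division
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(compress_block, blocks))

if __name__ == "__main__":
    payload = b"the quick brown fox jumps over the lazy dog " * 10_000
    whole = len(zlib.compress(payload, level=9))
    split = sum(len(c) for c in parallel_compress(payload))
    print(f"serial: {whole} bytes, 16-block parallel: {split} bytes")
```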
  • Item
    Evaluation and algorithmic adaptation of brain state control through audio entrainment
    (University of the Witwatersrand, Johannesburg, 2023-12) Cassim, Muhammed Rashaad; Rubin, David; Pantanowitz, Adam
    This dissertation presents the design and evaluation of a system that can alter the dominant brain state of participants through audio entrainment. The ‘rch broadly aimed to identify the possible improvements of a dynamic entrainment stimulus when compared to a set entrainment stimulus. The dynamic entrainment stimulus was controlled by a Q-Learning (QL) model. The experiment sought to build on previous research by implementing existing entrainment methods in Virtual Reality and dynamically optimising the entrainment stimulus. The neurological effects of the stimuli were evaluated by analysing electroencephalogram measurements. It was found that a set 24 Hz entrainment stimulus increased the power of Beta band brain waves relative to a control condition. Further, contrary to existing research, it was found that the entrainment stimulus did not have a notable effect on brainwave connectivity at the entrainment frequency. The study subsequently evaluated if the QL agent could learn to optimise the entrainment stimulus. The agent was allowed to switch between an 18 and 24 Hz entrainment stimulus and succeeded in learning an optimised policy. The QL driven stimulus yielded results that generally exhibited the same characteristics as the set entrainment stimulus when using power and connectivity analysis methods. Furthermore, the power analysis indicated that the QL driven stimulus was able to affect a broader range of frequencies within the targeted band. The QL driven stimulus, additionally, resulted in higher meta-analysis metric values in some aspects. These factors indicate that it was able to have a more consistent impact on targeted brain waves. Lastly, results from participants whose stimulus was controlled by a QL driven stimulus using optimal actions indicated that the optimised actions created a more sustained increase in Beta band activity when compared to any other results, indicating the impact of the optimised policy learned.