Electronic Theses and Dissertations (Masters)

Permanent URI for this collection: https://hdl.handle.net/10539/38006

Search Results

Now showing 1 - 10 of 23
  • Detecting and Understanding COVID-19 Misclassifications: A Deep Learning and Explainable AI Approach
    (University of the Witwatersrand, Johannesburg, 2023-08) Mandindi, Nkcubeko Umzubongile Siphamandla; Vadapalli, Hima Bindu
    Interstitial Lung Disease (ILD) is a catch-all term for over 200 chronic lung diseases. These diseases are distinguished by inflammation of the lung tissue (pulmonary fibrosis). They are histologically heterogeneous diseases with inconsistent microscopic appearances, but their clinical manifestations are similar to those of other lung disorders. The similarities in symptoms of these diseases make differential diagnosis difficult and may lead to COVID-19 being misdiagnosed as various types of ILD. Because its turnaround time is shorter and it is more sensitive for diagnosis, imaging technology has been cited as a critical detection method in combating the prevalence of COVID-19. The aim of this research is to investigate existing deep learning architectures for the aforementioned task, as well as to incorporate evaluation modules to determine where and why misclassification occurred. In this study, three widely used deep learning architectures, ResNet-50, VGG-19, and CoroNet, were evaluated for distinguishing COVID-19 from other ILDs (bacterial pneumonia, normal (healthy), viral pneumonia, and tuberculosis). The baseline results demonstrate the effectiveness of CoroNet, with a classification accuracy of 84.02%, specificity of 89.87%, sensitivity of 70.97%, recall of 84.12%, and an F1 score of 0.84. The results further emphasize the effectiveness of transfer learning using pre-trained domain-specific architectures, which results in fewer learnable parameters. To understand misclassifications, the proposed work used Integrated Gradients (IG), an Explainable AI technique that uses saliency maps to observe pixel-level feature importance, i.e. the visually prominent features in input images that the model used to make its predictions. As a result, the proposed work envisions future research directions for improved classification through the understanding of misclassifications.
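    As context for the explainability component above: Integrated Gradients attributes a prediction to input pixels via a path integral of gradients from a baseline to the input. A minimal, framework-agnostic sketch follows; `grad_fn` is a hypothetical stand-in for the gradient of the model's class score and is not from the thesis.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Approximate Integrated Gradients attributions for one input.

    grad_fn(x) must return dF/dx for the model's target-class score F;
    it stands in for whatever deep learning framework was actually used.
    """
    # Interpolate along the straight-line path from the baseline to x.
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    # Riemann approximation of the path integral of the gradients.
    avg_grads = grads.mean(axis=0)
    # Scale by the input difference to obtain per-pixel attributions.
    return (x - baseline) * avg_grads
```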
  • Creating an adaptive collaborative playstyle-aware companion agent
    (University of the Witwatersrand, Johannesburg, 2023-09) Arendse, Lindsay John; Rosman, Benjamin
    Companion characters in video games play a unique part in enriching player experience. Companion agents support the player as an ally or sidekick and would typically help the player by providing hints or resources, or even by fighting alongside the human player. Players often adopt a certain approach or strategy, referred to as a playstyle, whilst playing video games. Players not only approach challenges in games differently, but also play games differently based on what they find rewarding. Companion agent characters thus have an important role to play in assisting the player in a way which aligns with their playstyle. Existing companion agent approaches fall short and adversely affect the collaborative experience when the companion agent is not able to assist the human player in a manner consistent with their playstyle. Furthermore, if the companion agent cannot assist in real time, player engagement levels are lowered, since the player will need to wait for the agent to compute its action, leading to a frustrating player experience. We therefore present a framework for creating companion agents that are adaptive, such that they respond in real time with actions that align with the player's playstyle. Companion agents able to do so are what we refer to as playstyle-aware. Creating a playstyle-aware adaptive agent firstly requires a mechanism for correctly classifying or identifying the player's style before attempting to assist the player with a given task. We present a method which enables real-time, in-game playstyle classification of players. We contribute a hybrid probabilistic supervised learning framework, using Bayesian inference informed by a K-Nearest Neighbours based likelihood, that is able to classify players in real time at every step within a given game level using only the latest player action or state observation. We empirically evaluate our hybrid classifier against existing work using MiniDungeons, a common benchmark game domain. We further evaluate our approach using real player data from the game Super Mario Bros. We outperform the methods from our comparative study, and our results highlight the success of our framework in identifying playstyles in a complex human player setting. The second problem we explore is that of assisting the identified playstyle with a suitable action. We formally define this as the 'Learning to Assist' problem, where, given a set of companion agent policies, we aim to determine the policy which best complements the observed playstyle. An action is complementary if it aligns with the goal of the playstyle. We extend MiniDungeons into a two-player game called Collaborative MiniDungeons, which we use to evaluate our companion agent against several comparative baselines. The results from this experiment highlight that companion agents which are able to adapt and assist different playstyles bring about, on average, a greater player experience when using a playstyle-specific reward function as a proxy for what the players find rewarding. In this way we present an approach for creating adaptive companion agents which are playstyle-aware and able to collaborate with players in real time.
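    A minimal sketch of the kind of per-step Bayesian update the abstract describes, with the likelihood estimated from the class mix of the K nearest labelled observations. All names and the Laplace smoothing are illustrative assumptions, not details from the dissertation.

```python
import numpy as np
from collections import Counter

def knn_likelihood(obs, train_obs, train_styles, k=10):
    """Estimate P(obs | style) from the class mix of the k nearest
    labelled observations; train_styles is a NumPy array of labels."""
    dists = np.linalg.norm(train_obs - obs, axis=1)
    nearest = train_styles[np.argsort(dists)[:k]]
    counts = Counter(nearest)
    styles = sorted(set(train_styles))
    # Laplace smoothing so no playstyle ever gets zero likelihood.
    return {s: (counts.get(s, 0) + 1) / (k + len(styles)) for s in styles}

def update_posterior(prior, likelihood):
    """One Bayesian filtering step per observed player action/state."""
    post = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}
```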
  • Procedural Content Generation for video game levels with human advice
    (University of the Witwatersrand, Johannesburg, 2023-07) Raal, Nicholas Oliver; James, Steven
    Video gaming is an extremely popular form of entertainment around the world, and new video game releases are constantly being showcased. One issue in the video gaming industry is that game developers require a large amount of time to develop new content. A research field that can help with this is procedural content generation (PCG), which allows an infinite number of video game levels to be generated based on the parameters provided. Many of the methods found in the literature can reliably generate content that adheres to quantifiable characteristics such as playability, solvability, and difficulty. These methods do not, however, take into account the aesthetics of a level, which is what makes a level more appealing to human players. In order to address this issue, we propose a method of incorporating high-level human advice into the PCG loop. The method uses pairwise comparisons as a way of assigning a score to a level based on its aesthetics. Using this score along with a feature vector describing each level, a support vector regression (SVR) model is trained to assign scores to unseen video game levels. The predicted score is used as an additional fitness function of a multi-objective genetic algorithm (GA) and can be optimised as a standard fitness function would be. We test the proposed method on two 2D platformer video games, Maze and Super Mario Bros (SMB), and our results show that the proposed method can successfully be used to generate levels with a bias towards the human-preferred aesthetic features, whilst still adhering to standard video game characteristics such as solvability. We further investigate incorporating multiple inputs from a human at different stages of the PCG life cycle and find that this does improve the proposed method, but further testing is still required. The findings of this research will hopefully assist in using PCG in the video game space to create levels that are more aesthetically pleasing to human players.
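    A minimal sketch, under assumed feature and score representations, of how an SVR trained on aesthetic scores could serve as the extra GA fitness term described above (using scikit-learn's SVR; the data here is random placeholder data, not the thesis's).

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical data: feature vectors describing levels, and aesthetic
# scores derived from pairwise human comparisons (as in the abstract).
level_features = np.random.rand(100, 8)   # e.g. enemy density, gap count, ...
aesthetic_scores = np.random.rand(100)    # scores recovered from comparisons

# Train the surrogate aesthetic model once, offline.
aesthetic_model = SVR(kernel="rbf").fit(level_features, aesthetic_scores)

def aesthetic_fitness(level_feature_vector):
    """Extra fitness term for the multi-objective GA: the predicted
    human-aesthetic score of an unseen level."""
    return aesthetic_model.predict(level_feature_vector.reshape(1, -1))[0]
```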
  • A Continuous Reinforcement Learning Approach to Self-Adaptive Particle Swarm Optimisation
    (University of the Witwatersrand, Johannesburg, 2023-08) Tilley, Duncan; Cleghorn, Christopher
    Particle Swarm Optimisation (PSO) is a popular black-box optimisation technique due to its simple implementation and surprising ability to perform well on various problems. Unfortunately, PSO is fairly sensitive to the choice of hyper-parameters. For this reason, many self-adaptive techniques have been proposed that attempt to both simplify hyper-parameter selection and improve the performance of PSO. Surveys, however, show that many self-adaptive techniques are still outperformed by time-varying techniques, where the values of the coefficients are simply increased or decreased over time. More recent works have shown the successful application of Reinforcement Learning (RL) to learn self-adaptive control policies for optimisers such as differential evolution, genetic algorithms, and PSO. However, many of these applications were limited to discrete state and action spaces, which severely limits the choices available to a control policy, given that the PSO coefficients are continuous variables. This dissertation therefore investigates the application of continuous RL techniques to learn a self-adaptive control policy that can make full use of the continuous nature of the PSO coefficients. The dissertation first introduces the RL framework used to learn a continuous control policy by defining the environment, action-space, state-space, and a number of possible reward functions. An effective learning environment that is able to overcome the difficulties of continuous RL is then derived through a series of experiments, culminating in a successfully learned continuous control policy. The policy is then shown to perform well on the benchmark problems used during training when compared to other self-adaptive PSO algorithms. Further testing on benchmark problems not seen during training suggests that the learned policy may not generalise well to other functions, but this is shown to be a problem in other PSO algorithms as well. Finally, the dissertation performs a number of experiments to provide insights into the behaviours learned by the continuous control policy.
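    For reference, the coefficients that such a control policy adapts enter through the standard PSO update. A minimal sketch, with the per-step continuous coefficients (w, c1, c2) assumed to be supplied by the learned policy rather than fixed:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1, c2):
    """One PSO velocity/position update. In the self-adaptive setting,
    the continuous coefficients (w, c1, c2) are chosen at each step by
    the learned RL policy instead of being fixed or time-varying."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```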
  • Self Supervised Salient Object Detection using Pseudo-labels
    (University of the Witwatersrand, Johannesburg, 2023-08) Bachan, Kidhar; Wang, Hairong
    Deep Convolutional Neural Networks have dominated salient object detection methods in recent history. A determining factor in the performance of a salient object detection network is the quality and quantity of pixel-wise annotated labels. This annotation is performed manually, making it expensive (time-consuming and tedious), while limiting the training data to the available annotated datasets. Alternatively, unsupervised models are able to learn from unlabelled datasets or datasets in the wild. In this work, an existing algorithm [Li et al. 2020] is used to refine the generated pseudo-labels before training. This research focuses on the changes made to the pseudo-label refinement algorithm and their effect on performance in unsupervised salient object detection tasks. We show that this novel approach leads to statistically negligible performance improvements and discuss the reasons why this is the case.
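    A minimal sketch of a training step consistent with the pipeline described above; `refine_fn` stands in for the refinement algorithm of Li et al. [2020], whose details the abstract does not give.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, image, pseudo_label, refine_fn):
    """One self-supervised training step: refine the noisy pseudo-label
    first, then fit the saliency network to the refined target."""
    target = refine_fn(pseudo_label, image)  # clean the label before training
    pred = model(image)                      # predicted saliency logits
    loss = F.binary_cross_entropy_with_logits(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```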
  • Evaluating Pre-training Mechanisms in Deep Learning Enabled Tuberculosis Diagnosis
    (University of the Witwatersrand, Johannesburg, 2024) Zaranyika, Zororo; Klein, Richard
    Tuberculosis (TB) is an infectious disease caused by the bacterium Mycobacterium tuberculosis. In 2021, 10.6 million people fell ill with TB, and about 1.5 million lives are lost to TB each year, even though TB is a preventable and curable disease. The latest global trends in TB deaths are shown in Figure 1.1. To ensure a higher survival rate and prevent further transmission, it is important to carry out early diagnosis. One of the critical methods of TB diagnosis and detection is the use of posterior-anterior chest radiographs (CXR). The diagnosis of tuberculosis and other chest-affecting diseases like pneumoconiosis is time-consuming and challenging and requires experts to read and interpret chest X-ray images, especially in under-resourced areas. Various attempts have been made to perform the diagnosis using deep learning methods such as Convolutional Neural Networks (CNNs) on labelled CXR images. Because CXR images maintain a consistent structure and exhibit overlapping visual appearances across different chest-affecting diseases, it is reasonable to believe that visual features learned for one disease or geographic location may transfer to a new TB classification model. This would allow us to leverage the large volumes of labelled CXR images available online, hence decreasing the data required to build a local model. This work explores to what extent such pre-training and transfer learning is useful and whether it may help decrease the data required for a locally trained classifier. In this research, we investigated various pre-training regimes using selected online datasets to understand whether the performance of such models can be generalised towards building a TB computer-aided diagnosis system, and also to inform us about the nature and size of the CXR datasets we should be collecting. Our experimental results indicated that neither supervised nor self-supervised pre-training between the CXR datasets can significantly improve the overall performance metrics of a TB classifier. We noted that pre-training on the ChestX-ray14, CheXpert, and MIMIC-CXR datasets resulted in recall values of over 70% and specificity scores of at least 90%. There was a general decline in performance in our experiments when we pre-trained on one dataset and fine-tuned on a different dataset, hence our results were lower than the baseline experiment results. We noted that ImageNet weight initialisation yields superior results over random weight initialisation in all experiment configurations. In the case of self-supervised pre-training, the model reached acceptable metrics with as few as 5% of the training labels when fine-tuned on the TBX11K dataset, although performance was slightly lower than that of the supervised pre-trained models and the baseline results. The best-performing self-supervised pre-trained model with the fewest training labels was the MoCo-ResNet-50 model pre-trained on the VinDr-CXR and PadChest datasets. These model configurations achieved a recall of 81.90% and a specificity of 81.99% with the VinDr-CXR pre-trained weights, while the PadChest weights scored a recall of 70.29% and a specificity of 70.22%. The other self-supervised pre-trained models failed to reach scores of at least 50% on either recall or specificity with the same number of labels.
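    A minimal sketch of the supervised transfer-learning regime the abstract compares: initialise from ImageNet weights and fine-tune a binary TB head. The `tb_loader` DataLoader is a hypothetical local CXR dataset, not one named in the thesis.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_tb_model():
    """Initialise from ImageNet weights (the stronger of the two
    initialisations reported above) and replace the head for TB vs. not-TB."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

def fine_tune(model, tb_loader, epochs=1, lr=1e-4):
    """Fine-tune on a local chest X-ray DataLoader (assumed) yielding
    (image batch, label batch) pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in tb_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```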
  • Regime Based Portfolio Optimization: A Look at the South African Asset Market
    (University of the Witwatersrand, Johannesburg, 2023-09) Mdluli, Nkosenhle S.; Ajoodha, Ritesh; Mulaudzi, Rudzani
    Financial markets change their properties (i.e. mean, volatility, correlation, and distribution) over time. However, traditional portfolio optimization strategies seek to create static, all-weather portfolios, oblivious to this and to current economic conditions. This produces portfolios that are unable to anticipate events with excessive skewness and kurtosis. This research investigated the difference in percentage return between portfolios that incorporate regimes and one that does not. Hidden Markov models (HMMs), binary segmentation, and PELT algorithms were used to identify regimes in 7 macro-economic features. These regimes, along with regimes identified by the SARB, were incorporated into Markowitz's mean-variance optimization technique to optimize portfolios. The base portfolio, which did not incorporate regimes, produced the lowest return of 761% during the period under consideration. Portfolios using HMM-identified regimes produced, on average, the highest returns, averaging 3211%, whilst the portfolio using SARB-identified regimes returned 1878% over the same period. This research therefore shows that incorporating regimes into portfolio optimization increases the percentage return of a portfolio. Moreover, it shows that, although HMMs on average produced the most profitable portfolio, portfolios using regimes based on data-driven techniques do not always outperform portfolios using the SARB-identified regimes.
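    A minimal sketch, under simplifying assumptions (unconstrained weights normalised to sum to one, moments estimated per regime), of the regime-conditioned mean-variance step described above:

```python
import numpy as np

def mean_variance_weights(mu, sigma, risk_aversion=1.0):
    """Closed-form unconstrained Markowitz weights, normalised to sum to 1
    (a simplified stand-in for the constrained optimiser in the thesis)."""
    raw = np.linalg.solve(risk_aversion * sigma, mu)
    return raw / raw.sum()

def regime_portfolio(returns, regimes, current_regime):
    """Regime-based variant: estimate the mean and covariance from only
    the periods labelled with the current regime (e.g. by an HMM),
    then optimise against those regime-conditional moments."""
    r = returns[regimes == current_regime]
    return mean_variance_weights(r.mean(axis=0), np.cov(r, rowvar=False))
```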
  • Overlapping multidomain paired quasilinearization methods for solving boundary layer flow problems
    (University of the Witwatersrand, Johannesburg, 2024) Nefale, Mpho Mendy; Otegbeye, Olumuyiwa; Oloniiju, Shina Daniel
    There is a constant and continuous need to refine the numerical approaches used to solve the non-linear differential equations that model real-world problems, which often do not have analytical solutions. Spectral-based techniques have proven to be among the most efficient numerical techniques for finding solutions of differential equations. Numerous spectral-based linearization techniques have been developed, such as the spectral relaxation method (SRM), the spectral local linearization method (SLLM), the spectral quasilinearization method (SQLM), and the paired quasilinearization method (PQLM), among others. Previous research suggests that the PQLM is an efficient approach for solving complex non-linear systems of ordinary (ODEs) and partial differential equations (PDEs). However, it has been observed that this method requires further enhancement when applied to problems defined over a large domain, be it temporal or spatial. This research aims to address this limitation by proposing a modified version of the PQLM, called the overlapping multi-domain paired quasilinearization method (OMD-PQLM), which enhances the accuracy and convergence speed of the original approach. The new approach entails decoupling the system into pairs of equations and partitioning the large domain into smaller overlapping sub-domains. A comparison between the OMD-PQLM and the PQLM is conducted by solving systems of ODEs and PDEs. The proposed numerical approach is evaluated based on the norms of the residual and convergence errors, computational time, and the influence of the number of grid points and sub-domains on the convergence speed of the iterative scheme and the accuracy of the solutions. The findings demonstrate that the OMD-PQLM remarkably improves the accuracy of the solution compared to the PQLM, suggesting that partitioning the problem domain into overlapping sub-domains optimizes the performance of the PQLM.
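    For readers unfamiliar with quasilinearization, the following sketch (in illustrative notation, not the thesis's) shows the Newton-type linearisation step that methods in the PQLM family apply before spectral discretisation:

```latex
% One quasilinearization step for a generic second-order non-linear ODE
% F(y, y', y'') = 0. Linearising F about the current iterate y_r gives a
% linear problem for the next iterate y_{r+1}:
\[
  a_{2,r}\, y''_{r+1} + a_{1,r}\, y'_{r+1} + a_{0,r}\, y_{r+1} = R_r ,
\]
\[
  a_{k,r} = \frac{\partial F}{\partial y^{(k)}}\bigg|_{y_r}, \qquad
  R_r = \sum_{k=0}^{2} a_{k,r}\, y^{(k)}_r - F\big(y_r, y'_r, y''_r\big),
\]
% which is then discretised spectrally, in the overlapping multi-domain
% variant on each overlapping sub-domain separately.
```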
  • A fully-decentralised general-sum approach for multi-agent reinforcement learning using minimal modelling
    (University of the Witwatersrand, Johannesburg, 2023-08) Kruger, Marcel Matthew Anthony; Rosman, Benjamin; James, Steven; Shipton, Jarrod
    Multi-agent reinforcement learning (MARL) is a prominent area of research in machine learning, extending reinforcement learning to scenarios where multiple agents concurrently learn and interact within the same environment. Most existing methods rely on centralisation during training, while others employ agent modelling. In contrast, we propose a novel method that adapts the role of entropy to assist in fully-decentralised training, without explicitly modelling other agents or using the additional information to which most centralised methods assume access. We adapt the entropy term to encourage more deterministic agents, and instead let the non-stationarity inherent in MARL serve as a mode of exploration. We empirically evaluate the performance of our method across five distinct environments, each representing unique challenges. Our assessment encompasses both cooperative and competitive cases. Our findings indicate that the approach of penalising entropy, rather than rewarding it, enables agents to perform at least as well as the prevailing standard of entropy maximisation. Moreover, our alternative approach achieves several of the original objectives of entropy regularisation in reinforcement learning, such as increased sample efficiency and potentially better final rewards. Whilst entropy has a significant role, our results in the competitive case indicate that position bias is still a considerable challenge.
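    For reference, a sketch of the standard entropy-regularised objective; the approach described above corresponds to taking the entropy coefficient negative (penalising entropy). The notation is the usual soft-RL form, assumed rather than quoted from the dissertation:

```latex
% Entropy-regularised return with temperature alpha; entropy
% maximisation uses alpha > 0, whereas the method above penalises
% entropy, i.e. alpha < 0.
\[
  J(\pi) = \mathbb{E}_{\pi}\!\left[\, \sum_{t} \gamma^{t}
    \Big( r_t + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big) \right],
  \qquad \alpha < 0 .
\]
```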
  • Generating Rich Image Descriptions from Localized Attention
    (University of the Witwatersrand, Johannesburg, 2023-08) Poulton, David; Klein, Richard
    The field of image captioning is constantly growing, with swathes of new methodologies, performance leaps, datasets, and challenges. One new challenge is the task of long-text image description. While the vast majority of research has focused on short captions for images, consisting of only short phrases or sentences, new research and the recently released Localized Narratives dataset have pushed this to rich, paragraph-length descriptions. In this work we perform additional research to grow the sub-field of long-text image description and determine the viability of our new methods. We experiment with a variety of progressively more complex LSTM- and Transformer-based approaches, utilising human-generated localised attention traces and image data to generate suitable captions, and evaluate these methods on a suite of common language evaluation metrics. We find that LSTM-based approaches are not well suited to the task and underperform Transformer-based implementations on our metric suite, while also proving substantially more demanding to train. On the other hand, we find that our Transformer-based methods are capable of generating captions with rich focus over all regions of the image and in a grammatically sound manner, with our most complex model outperforming existing approaches on our metric suite.
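    A minimal sketch of the kind of Transformer-based captioner the abstract describes, fusing localized attention traces with image features; all dimensions, the input encodings, and the concatenation-based fusion are assumptions for illustration, not the thesis's architecture:

```python
import torch
import torch.nn as nn

class TraceCaptioner(nn.Module):
    """Encode trace points and image region features together, then
    decode caption tokens autoregressively with a Transformer."""
    def __init__(self, vocab_size, d_model=512):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.trace_proj = nn.Linear(4, d_model)     # e.g. (x, y, t, region id)
        self.image_proj = nn.Linear(2048, d_model)  # e.g. CNN region features
        self.transformer = nn.Transformer(d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, trace, image_feats, caption_tokens):
        # Fuse the trace and image streams into one encoder sequence.
        src = torch.cat([self.trace_proj(trace),
                         self.image_proj(image_feats)], dim=1)
        tgt = self.token_emb(caption_tokens)
        # Causal mask so each position attends only to earlier tokens.
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        return self.out(self.transformer(src, tgt, tgt_mask=mask))
```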