ETD Collection
Please note: Digitised content is made available at the best possible quality, taking into consideration file size and the condition of the original item. These restrictions may sometimes affect the quality of the final published item. For queries regarding the content of the ETD collection, please contact the IR specialists by email or by telephone: 011 717 4652 / 1954
Follow the link below for important information about Electronic Theses and Dissertations (ETD)
Library Guide about ETD
Browsing ETD Collection by School "Computer Science and Applied Mathematics"
Now showing 1 - 20 of 29
Item
A Bayesian approach to lightning ground-strike points analysis (2022) Lesejane, Wandile
Studying cloud-to-ground lightning strokes and ground strike points provides an alternative method of lightning mapping for lightning risk assessment. Various k-means algorithms have been used to verify the ground strike points from lightning locating systems. These algorithms produce results but have the potential to be improved. This research report proposes using a Bayesian network, a model that has not been used before to verify lightning ground strike points. A Bayesian network is a probabilistic graphical model that uses Bayes' theorem to represent the conditional dependencies of variables. The networks created for this research were learned from the data using score-based structure learning with the Bayesian Information Criterion (BIC) score function. The models were evaluated using a confusion matrix and a kappa index. They produced accuracies ranging from 86% to 94%, with a kappa index of up to 0.76. The results from the Bayesian network models are within the range of the algorithms currently used to analyse lightning ground strike points, but have the advantages of not needing a predetermined distance, being easy to interpret, and being suitable for small data sets. A Bayesian network is therefore a good candidate for an alternative method of analysing lightning ground strike points.
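As a rough illustration of the score-based structure learning this entry describes, the sketch below runs a hill-climbing search with a BIC score using the pgmpy library. The library choice and the feature columns are assumptions made for illustration, not details taken from the report.

# Hypothetical sketch of score-based Bayesian network structure learning with a
# BIC score; pgmpy is an assumption, and the columns are invented stand-ins for
# discretised stroke/ground-strike-point features.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore
from pgmpy.models import BayesianNetwork

data = pd.DataFrame({
    "peak_current": [1, 0, 1, 1, 0, 1, 0, 0],
    "inter_stroke": [0, 0, 1, 1, 1, 0, 0, 1],
    "same_gsp":     [1, 0, 1, 1, 0, 1, 0, 0],
})

# Score-based structure search maximising the BIC score over candidate DAGs.
search = HillClimbSearch(data)
structure = search.estimate(scoring_method=BicScore(data))
model = BayesianNetwork(structure.edges())
model.fit(data)  # maximum-likelihood CPDs by default
print(structure.edges())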
Item
A Chow-Liu score-based structure learning approach to refining course curricula in higher education (2024) Naicker, Leantha
High failure and dropout rates have challenged higher education institutions to support students and keep them motivated throughout their undergraduate and postgraduate learning. This is beneficial not only to tertiary institutions but also to South Africa as a whole. This research adds to the field of curriculum learning by using structure learning graphical modelling to refine course curricula at a tertiary level. This will assist faculties in ensuring that students are provided sufficient knowledge in their respective fields through the co-requisite and prerequisite subjects of each programme. This problem is looked at constantly in the literature, with most solutions embedded in manual methods; in recent years, however, there has been a shift towards how artificial intelligence and machine learning can be used to produce unbiased solutions. The Chow-Liu score-based structure learning method, in conjunction with K2 scoring, is used in this research because of its ability to handle large node spaces and to reduce complexity through its tree-structured approach. The method is first validated using synthetically generated data before it is exposed to the real-world observational database. The data set used for this study was obtained from a South African university after removing all socio-economic and demographic data. The results have two noted benefits: the first is to help refine course curricula for undergraduate degrees by suggesting co-requisite and prerequisite courses to be added for various programmes, and the second is to help prescribe subject selection for postgraduate students.

Item
A dynamical trajectory-based method for sparse recovery (2022) Sejeso, Matthews Malebogo
Many applications in emerging technologies call for efficient sensing systems to acquire and process high-resolution signals. This task is impractical for the traditional sampling scheme, the Nyquist-Shannon sampling theory. Compressed sensing was developed as a new signal acquisition scheme to address this issue. The compressed sensing theory asserts that linear encoded measurements can be used to simultaneously acquire and compress a signal, and the technique requires far fewer computational resources than traditional sampling schemes. In compressed sensing, an optimization problem comprising a data fidelity term and a nonlinear sparsity-enforcing term is used to recover sparse vectors from a few linear measurements. In most applications the problems are large-scale, characterized by high-dimensional decision variables, and require real-time processing of data. Specialised optimization algorithms have been used to solve sparse recovery problems in compressed sensing; however, these algorithms tend to be too slow and computationally intensive for real-time recovery. Continuous-time dynamical systems, on the other hand, have recently gained attention as efficient solvers of optimization problems. They have the potential to yield significant speed and power improvements over their discrete counterparts. The focus of this thesis is to understand which continuous-time dynamical systems can be used to solve sparse recovery problems and to analyse their performance mathematically. It is essential to understand a dynamical system's behaviour before it can be used to solve optimization problems. First, we present a general dynamical system modelled by the subgradient of a nonsmooth objective function coupled with a sparsity-promoting activation function. Convergence analysis of this gradient-like differential inclusion is done using the recently developed nonsmooth Lojasiewicz inequality. The trajectories of the dynamical system are shown to have finite lengths and to converge globally to equilibrium points. The equilibrium points of the dynamical system correspond to the critical points of the sparse optimization problem. An estimate of the convergence rate, which depends on the Lojasiewicz exponent, is obtained. Second, a Bregman-integrated dynamical system for solving the ℓ1-minimization problem is presented. The dynamical system integrates the Bregman distance into the design, resulting in an improved convergence rate. The proposed dynamical system fits well within the differential inclusion framework presented and analysed earlier, and is thus well suited to solving the ℓ1-minimization problem. We show that the proposed dynamical system takes an efficient path towards the optimal solution and recovers the expected support set of the sparse solution. The Bregman-integrated dynamical system yields an exponential convergence rate, which significantly improves on the convergence of the previously proposed dynamical system, the Locally Competitive Algorithm. Computational results are presented to support the developed theory and the good performance of the proposed dynamical system. Several comparative experiments on sparse recovery problems demonstrate that the proposed dynamical-system approach is efficient and effective.
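For context on the Locally Competitive Algorithm baseline this entry compares against, the sketch below integrates a standard form of the LCA dynamics with a forward-Euler step in NumPy. The problem sizes, sensing matrix, and step sizes are invented for the example; this is not the thesis's code.

# Minimal sketch of the Locally Competitive Algorithm (LCA) dynamics for sparse
# recovery: tau * du/dt = b - u - (G - I) a, with sparse output a = soft(u).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 200, 5                                  # measurements, signal length, sparsity
Phi = rng.standard_normal((n, m)) / np.sqrt(n)        # illustrative sensing matrix
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true                                      # linear encoded measurements

lam, tau, dt = 0.05, 1.0, 0.05                        # threshold, time constant, Euler step
soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(m)                                       # internal state of the dynamical system
b, G = Phi.T @ y, Phi.T @ Phi
for _ in range(2000):
    a = soft(u)                                       # sparse output via soft thresholding
    u += (dt / tau) * (b - u - (G - np.eye(m)) @ a)   # Euler step of the LCA equation

print("true support:     ", sorted(np.nonzero(x_true)[0]))
print("recovered support:", sorted(np.nonzero(soft(u))[0]))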
Item
Anomaly detection using time series forecasting with deep learning (2022) Mathonsi, Thabang
Anomaly detection is increasingly researched by the academic community because of its growing importance in applications such as monitoring sensor readings in autonomous vehicles or diagnosing potential medical risks in health data. This thesis presents solutions to the anomaly detection problem with the aid of multivariate time series forecasting, uncertainty quantification, and explainable and interpretable artificial intelligence. Challenges with existing deep learning-based time series anomaly detection approaches include (i) large sets of parameters that may be computationally intensive to tune for non-parsimonious models, (ii) returning too many false positives, rendering the techniques impractical for use, (iii) requiring labelled datasets for training, which are often not available in real life, (iv) temporal dependence inherent in the data, and (v) complex dependency and cross-correlation between the covariates that may be hard to capture. An interpretable statistics and deep learning-based hybrid anomaly detection method is introduced which overcomes these challenges. By systematically building anomaly detection machinery that is firstly good at forecasting, secondly quantifies the associated uncertainty satisfactorily, and thirdly is interpretable, the presented methodology suggests that hybrid models show significant promise in terms of accuracy, efficiency, and practical usage. The extensive experimental results indicate that interpretable hybrid approaches can have a significant impact on anomaly detection as a research field, and the interpretability of such models can be useful for practitioners.

Item
Application of Bayesian modelling & computations for media mix models (2024) Akinlaja, Olatomiwa
Marketers seek to properly measure the effectiveness of different marketing channels. The purpose of this research report is to optimize a business's marketing budget through a combination of Bayesian computations and media mix models. A Bayesian framework incorporates prior knowledge within the model description, as opposed to the traditional frequentist media mix modelling approach, which fails to account for uncertainties that might occur within multiple advertisement channels. Most Media Mix Modelling (MMM) research fails to incorporate domain knowledge and to account for the uncertainties within a business's marketing channels before modelling. This research project attempts to solve the problem advertisers face when measuring the effectiveness of different marketing channels. The solution was achieved through the use of Bayesian approximation methods, such as Markov Chain Monte Carlo (MCMC), and media mix models for optimizing a marketing budget. Overall, we successfully applied Bayesian modelling to media mix models for optimizing the marketing budget to generate the most sales. This was achieved by developing a Bayesian linear regression model that considers probability distributions, as opposed to training data alone. Compared to frequentist models, our Bayesian model provides reliable estimates and confidence intervals for sales based on the funds allocated to multiple marketing channels, because it accounts for uncertainties and prior knowledge. The Bayesian media mix model was evaluated using Mean Square Error (MSE) and Mean Absolute Error (MAE) metrics to ensure the outcomes were reliable when compared to those generated by the frequentist approach.
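As a hedged illustration of the approach in the entry above, the sketch below fits a Bayesian linear regression of sales on channel spend with a plain Metropolis random-walk sampler; a simple Metropolis scheme stands in for whatever MCMC variant the report used, and the three channels, priors, and data are all synthetic.

# Toy Bayesian linear regression for a media mix model, sampled with Metropolis
# MCMC in NumPy. All data and hyperparameters are invented for the example.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 3))          # spend on 3 hypothetical channels
true_beta = np.array([1.5, 0.8, 2.0])
y = X @ true_beta + rng.normal(0, 1.0, 200)    # observed sales

def log_post(beta, sigma=1.0, prior_sd=5.0):
    # Gaussian likelihood plus a Gaussian prior on the channel coefficients.
    resid = y - X @ beta
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * np.sum(beta**2) / prior_sd**2

beta, lp, samples = np.zeros(3), log_post(np.zeros(3)), []
for _ in range(20000):                         # Metropolis random-walk updates
    prop = beta + rng.normal(0, 0.05, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    samples.append(beta)

post = np.array(samples)[5000:]                # discard burn-in
print("posterior means:", post.mean(axis=0))   # close to true_beta
print("95% intervals:\n", np.percentile(post, [2.5, 97.5], axis=0))

The posterior intervals are what distinguish this from the frequentist fit the report compares against: each channel coefficient comes with an uncertainty band rather than a single point estimate.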
Item
Augmentative topology agents for open-ended learning (2024) Nasir, Muhammad Umair
We tackle the problem of open-ended learning by improving a method that simultaneously evolves agents and increasingly challenging environments. Unlike previous open-ended approaches that optimize agents using a fixed neural network topology, we hypothesize that open-endedness and generalization can be improved by allowing agents' controllers to become more complex as they encounter more difficult environments. Our method, Augmentative Topology EPOET (ATEP), extends the Enhanced Paired Open-Ended Trailblazer (EPOET) algorithm by allowing agents to evolve their own neural network structures over time, adding complexity and capacity as necessary. Empirical results demonstrate that ATEP results in open-ended and general agents capable of solving more environments than a fixed-topology baseline. We also investigate mechanisms for transferring agents between environments and find that a species-based approach further improves the open-endedness and generalization of agents.

Item
Bitcoin in the South African market: a safe haven or hedge? (2022) Mhlanga, Fortune Nhlanhla
Crypto-currency has grown rapidly since its inception, and this development has led to increasing interest from investment portfolio managers in understanding whether crypto-currency can be used as a financial asset. One of the major objectives of these managers is to minimize losses and maximize profits on the investments they make. Loss in portfolio value is always possible; consequently, portfolio managers are always searching for new, uncorrelated asset classes that can be used to hedge their existing positions. This study seeks to understand whether Bitcoin can act as a hedge or a safe haven for stocks and bonds in South Africa. The principal regression model is used to probe weekly, monthly, and quarterly Bitcoin, stock, and bond historical data from 2012 to 2021. This study shows that Bitcoin can act as a hedge against bonds. Results from weekly and monthly historical data show that Bitcoin can act as a hedge against stocks and a safe haven for stocks and bonds. Quarterly historical data results do not support the latter findings.

Item
Brain tumor classification on magnetic resonance imaging (MRI) scans using deep learning (2022) Marumo, A.M.
A brain tumor is formed when aberrant cells develop in the brain. Early detection of brain tumors increases the patient's chances of survival. This study proposes a Convolutional Neural Network (CNN) model that automatically classifies or detects brain tumors on MRI scans without the intervention of radiologists or physicians. To make the proposed model trustworthy, integrated gradients and XRAI explanations are built and evaluated. The CNN model achieved 90% accuracy, 82% sensitivity, 95% specificity, 82% precision, a 79% Cohen's kappa statistic, a 79% Matthews correlation coefficient, and a 77% Gini coefficient. The built classifier is best explained by integrated gradients. In the medical industry, integrated gradients have not been widely used to explain deep learning models; this study demonstrates how integrated gradients can be used to interpret deep learning models in the medical domain.
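To make the classification setup in the entry above concrete, here is a minimal sketch of a small binary tumor/no-tumor CNN in Keras. The architecture, input size, and training call are invented for illustration and are not taken from the thesis.

# Hypothetical small CNN for binary MRI classification (TensorFlow/Keras assumed).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # tumor probability
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Recall(name="sensitivity"),
                       tf.keras.metrics.Precision(name="precision")])
model.summary()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)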
Item
Comparative study of machine learning techniques for loan fraud prediction (2022) Tshivhidzo, Rinae
Loan fraud is a major and growing problem through which millions are lost each year. There has been a significant amount of research on fraud prediction, but less on loan fraud prediction, possibly because of a lack of data available for analysis. This research aimed to compare machine learning algorithms for predicting fraudulent practices in loan administration, and to find the techniques that give the most accurate results. The dataset used is the Kaggle fraud detection dataset for the year 2019. Four machine learning algorithms were evaluated: random forest, extreme gradient boosting, adaptive boosting, and multilayer perceptron. The results obtained during the first attempt show that extreme gradient boosting performed best of the four models, with an area under the curve (AUC) score of 0.74. The results from the second attempt show that adaptive boosting performed best, with an AUC score of 1.00, followed by extreme gradient boosting with an AUC score of 0.94.

Item
Disentanglement using VAEs resembles distance learning and requires overlapping data (2022) Michlo, Nathan Juraj
Learning disentangled representations with variational autoencoders (VAEs) is often attributed to the regularisation component of the loss. In this work, we highlight the interaction between data and the reconstruction term of the loss as the main contributor to disentanglement in VAEs. We note that standardised benchmark datasets are constructed in a way that is conducive to learning what appear to be disentangled representations. We design an intuitive adversarial dataset that exploits this mechanism to break existing state-of-the-art disentanglement frameworks. We provide solutions in the form of a modified reconstruction loss, suggesting that VAEs are distance learners, and we also show that these loss functions can be learnt. From this idea, we introduce new scores that measure whether disentangled representations using distances have been discovered. We then solve these scores by introducing a supervised metric learning framework that encourages disentanglement. Finally, we present various considerations for disentanglement research based on the subjective nature of disentanglement itself and on the results from our work, which suggest that VAE disentanglement is largely accidental.
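For reference, the sketch below writes out the VAE objective whose reconstruction term the entry above identifies as the main driver of apparent disentanglement: a reconstruction loss plus a KL regulariser scaled by beta, as in the beta-VAE. PyTorch is an assumption, and the encoder/decoder are omitted so the loss itself stays in focus.

# Standard (beta-)VAE loss: reconstruction term + beta * KL(q(z|x) || N(0, I)).
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # Reconstruction term: the data-dependent component highlighted above.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL divergence for a diagonal Gaussian posterior vs N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Toy usage with random tensors standing in for a batch of images and latents.
x = torch.rand(8, 3, 64, 64)
mu, logvar = torch.zeros(8, 10), torch.zeros(8, 10)
print(vae_loss(x, torch.rand_like(x), mu, logvar, beta=4.0))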
Item
Dynamics generalisation in reinforcement learning through the use of adaptive policies (2024) Beukman, Michael
Reinforcement learning (RL) is a widely used method for training agents to interact with an external environment, and is commonly used in fields such as robotics. While RL has achieved success in several domains, many methods fail to generalise well to scenarios different from those encountered during training. This is a significant limitation that hinders RL's real-world applicability. In this work, we consider the problem of generalising to new transition dynamics, corresponding to cases in which the effects of the agent's actions differ; for instance, walking on a slippery vs. a rough floor. To address this problem, we introduce a neural network architecture, the Decision Adapter, which leverages contextual information to modulate the behaviour of an agent depending on the setting it is in. In particular, our method uses the context (information about the current environment, such as the floor's friction) to generate the weights of an adapter module which influences the agent's actions. This, for instance, allows an agent to act differently when walking on ice compared to gravel. We theoretically show that our approach generalises a prior network architecture and empirically demonstrate that it results in superior generalisation performance compared to previous approaches in several environments. Furthermore, we show that our method can be applied to multiple RL algorithms, making it a widely applicable approach to improving generalisation.
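The context-to-weights mechanism described above can be sketched as a small hypernetwork that maps the context vector to the parameters of a linear adapter applied to the agent's hidden features. This is a conceptual illustration with invented sizes, not the thesis's actual Decision Adapter implementation.

# Conceptual context-conditioned adapter: a hypernetwork generates the adapter
# weights from the context (e.g. floor friction). PyTorch is an assumption.
import torch
import torch.nn as nn

class ContextAdapter(nn.Module):
    def __init__(self, ctx_dim=2, feat_dim=16):
        super().__init__()
        self.feat_dim = feat_dim
        # Hypernetwork: context -> flattened weight matrix and bias of the adapter.
        self.hyper = nn.Sequential(
            nn.Linear(ctx_dim, 32), nn.ReLU(),
            nn.Linear(32, feat_dim * feat_dim + feat_dim),
        )

    def forward(self, features, context):
        params = self.hyper(context)
        W = params[: self.feat_dim ** 2].view(self.feat_dim, self.feat_dim)
        b = params[self.feat_dim ** 2:]
        return features @ W.T + b        # context-conditioned transformation

adapter = ContextAdapter()
h = torch.randn(4, 16)                   # hidden features from the policy network
ctx = torch.tensor([0.1, 0.9])           # e.g. (friction, slope), hypothetical
print(adapter(h, ctx).shape)             # torch.Size([4, 16])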
Item
Exploring latent regularisation bias in disentangled representation learning (2022) Pather, Neelan
The field of representation learning involves learning data representations (or features) that capture the structure and relationships implicit in data, rather than engineering them. Latent (hidden) variables are high-level data representations inferred indirectly from data via statistical models. Representation learning assumes data is generated via a process characterised by higher-order generative factors. When learning disentangled latent representations, the aim is for individual latent variables to be sensitive to changes in only one generative factor. Disentangled representations are interpretable and provide insight into what a model has learnt. The most popular disentangled representation learning model is Higgins et al. [2017]'s β VAE. It is an extension of Kingma and Welling [2013] and Rezende et al. [2014]'s VAE, which uses variational inference (VI), framing inference as optimisation of the Evidence Lower BOund (ELBO) objective. VAEs are also Bayesian models, with a prior belief about the latent structure before data is observed, as captured by the prior latent distribution. The framework fits a latent posterior distribution to data, regularising the resultant latent distribution by keeping it "close" to the prior. The closeness between the latent prior and posterior is enforced by penalising the ELBO with a divergence measure between these distributions, the Kullback-Leibler (KL) divergence. The latent posterior distribution is thus encouraged not to deviate too much from the latent prior distribution. Notably, the KL divergence is asymmetrical, so swapping the argument distributions around yields a different measure. Furthermore, the chosen direction of the KL divergence results in different behaviour when the divergence is minimised, which in turn encourages different latent posterior solutions. The reverse KL divergence is typically used in VI and encourages "mode-seeking" behaviour in the latent space, favouring under-dispersed solutions relative to the latent prior. Conversely, the forward KL divergence results in "mean-matching" behaviour in the latent space, favouring over-dispersed solutions relative to the latent prior. The β VAE applies a β hyperparameter factor to the reverse KL divergence in the ELBO, resulting in hyperparameter-constrained reverse KL latent regularisation. It thus exhibits "mode-seeking" behaviour, favouring under-dispersed solutions relative to the latent prior. Our study assesses the impact of the KL divergence direction on the resultant latent posterior. Since the goal of disentangled representation learning is latents that capture generative factors in an isotropic manner, we assess the impact multimodal generative spaces have on the resultant posterior solution when the KL divergence direction is varied. To facilitate this investigation, we extend Higgins et al. [2017]'s β VAE to include an additional hyperparameter-constrained forward KL latent regulariser, deriving our model, the βγ VAE. Furthermore, we construct a collection of supervised datasets, each with a different number of generative space modes, called mSprites. Finally, impacts in the study are assessed using information-theoretic disentangling metrics. When using the reverse KL for latent regularisation, we find that multimodal generative spaces distort the overall information content captured by the learnt representation. This is related to the mode-seeking behaviour of the reverse KL, as evidenced by reduced fit with increased generative space modality. This distortion may be remedied by introducing an additional constrained forward KL for latent regularisation, as done in our βγ VAE. The impact of multimodal generative spaces on disentangled representation learning, however, is less clear. Our study provides evidence that multimodal generative spaces negatively distort axis alignment between latent and generative dimensions; however, it is not clear that this necessarily hinders disentangled representation learning. Finally, we observe that while our βγ VAE can improve some metrics in disentangled representation learning, eliminating the impact of multimodal generative spaces, this is not consistent when all disentangling metrics are considered, and the results are thus less robust. Our βγ VAE is therefore suitable for representation learning, with inconsistent evidence that it may also be useful in disentangled representation learning. Our findings are summarised as follows (the related research question addressed is given in brackets):
· 6.2 Multimodal generative spaces distort the global latent fit of the β VAE, particularly at lower values of β (3.2.1)
· 6.3 Multimodal generative factors do not consistently distort disentangled representation learning, but improve axis alignment (3.2.2)
· 6.4 The βγ VAE has consistently better global latent fit than the β VAE and eliminates the negative impacts of multimodal generative spaces (3.2.3)
· 6.5 The βγ VAE does not consistently learn better disentangled representations than the β VAE, nor does it consistently eliminate the impact of multimodal generative spaces (3.2.4)
Our key contributions are mSprites, a series of supervised datasets designed to investigate the impact of multimodal generative spaces on disentangled representation learning; the βγ VAE, a model with proven merit for representation learning that uses an additional constrained forward KL for latent regularisation; and empirical evidence validating the findings mentioned above.
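The mode-seeking vs mean-matching asymmetry central to the entry above can be shown numerically: fitting a single Gaussian to a bimodal target under forward KL spreads it across both modes, while reverse KL locks it onto one. This is a toy demonstration with invented distributions, not an experiment from the thesis.

# Toy numerical illustration of the KL-direction asymmetry, using a grid
# approximation of KL and a Nelder-Mead fit of a single Gaussian q to a
# bimodal target p.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

z = np.linspace(-8, 8, 4000)
dz = z[1] - z[0]
p = 0.5 * norm.pdf(z, -2, 0.5) + 0.5 * norm.pdf(z, 2, 0.5)   # two latent modes

def kl(a, b):
    mask = (a > 1e-12) & (b > 1e-12)                 # crude guard against log(0)
    return np.sum(a[mask] * np.log(a[mask] / b[mask])) * dz

def fit(direction):
    def loss(theta):
        q = norm.pdf(z, theta[0], abs(theta[1]) + 1e-3)
        return kl(p, q) if direction == "forward" else kl(q, p)
    return minimize(loss, x0=[0.5, 1.0], method="Nelder-Mead").x

print("forward KL fit (mean, sd):", fit("forward"))  # wide q covering both modes
print("reverse KL fit (mean, sd):", fit("reverse"))  # narrow q on a single mode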
Item
Facial action unit classification using weakly supervised learning (2024) Enabor, Oseluole Tobi
Deep learning has gained popularity because of its supremacy in terms of performance when trained on large datasets. However, collecting and annotating large datasets is laborious, expensive, and time-consuming. Weakly supervised learning (WSL) has been at the forefront of exploring solutions to these limitations. WSL techniques can create accurate classifiers under different scenarios, such as limited sample datasets, inaccurate datasets with noisy labels, and datasets that do not have the desired labels. This work applies WSL to facial Action Unit (AU) recognition, a problem space that relies on subject-matter experts (i.e., certified Facial Action Coding System (FACS) coders) to annotate samples. Two WSL techniques were explored: incomplete supervision using a pseudo-labelling mechanism, where one has access to vast amounts of unlabelled data and a limited amount of labelled data, and inaccurate supervision using a Large-Loss Rejection (LLR) mechanism, where one has access to only noisy labels. The pseudo-labelling mechanism involves feeding samples with generated pseudo-labels into the training process, while the LLR mechanism prevents the model from learning noisy labels by rejecting samples that report a large loss during training. To better evaluate the limitations posed by accurate data and label availability and their impact on training models, the authors trained a baseline emotion recognition model and fine-tuned it for AU recognition using transfer learning. This process also helped assess the ability to estimate fine-grain labels (AUs) using only coarse-grain labels (facial emotions). The experimental setup included training and validating a VGG16 convolutional neural network (CNN) on the Extended Denver Intensity of Spontaneous Facial Action Database (DISFA+), with the Karolinska Directed Emotional Faces (KDEF) dataset used for cross-dataset evaluation. The pseudo-labelling approach for AU recognition had three models: the first, PL-1, reported a subset accuracy of 68% and a 0.56 weighted F1-score; PL-2a reported a subset accuracy of 89% and a 0.9 weighted F1-score; and PL-2b reported a subset accuracy of 66% and a weighted F1-score of 0.44. The LLR approach for AU recognition reported a subset accuracy of 69% and a weighted average F1-score of 0.66. The baseline AU model reported an accuracy of 97% and an F1-score of 0.98 for AU recognition, signifying the need for large datasets and transfer learning. However, with an average reported accuracy of 68.5%, WSL mechanisms provide a step in the right direction and can assist researchers in addressing data annotation challenges.

Item
Forecast based portfolio optimisation using XGBoost (2022) May, Khanya
Portfolio optimisation is a vital research field in modern finance. In recent years, a plethora of approaches have been proposed to deal with the increasingly challenging task of portfolio optimisation. This research demonstrates a new methodology that uses XGBoost regressor chains to forecast stock prices, incorporates these prices into a k-means clustering algorithm, selects the assets with the highest Sharpe ratio in each cluster, and then allocates weights to the assets using Monte Carlo simulations. Historical stock price data for the assets in the JSE Top 40 index is used. The performance of the model is evaluated over two test periods: 2019 as the non-crisis test period and 2020 as the crisis stress-test period. The optimal portfolio has the best performance in both periods, earning 94.73% returns with a Sharpe ratio of 0.1999 in 2019 and 11.02% returns with a Sharpe ratio of 0.029 in 2020.
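A hedged sketch of the forecast-cluster-select pipeline from the entry above follows. It simplifies the report's regressor chains to one XGBoost regressor per asset on lagged returns, uses synthetic prices in place of JSE data, and omits the Monte Carlo weight-allocation step; all sizes and hyperparameters are invented.

# Forecast next-period returns with XGBoost, cluster assets with k-means, and
# keep the highest-Sharpe asset per cluster as a portfolio candidate.
import numpy as np
from sklearn.cluster import KMeans
from xgboost import XGBRegressor

rng = np.random.default_rng(2)
n_assets, n_days, window = 12, 500, 20
prices = np.cumprod(1 + rng.normal(5e-4, 0.02, (n_days, n_assets)), axis=0)
returns = np.diff(prices, axis=0) / prices[:-1]

preds = []
for i in range(n_assets):
    # Predict tomorrow's return from a window of lagged returns.
    X = np.lib.stride_tricks.sliding_window_view(returns[:-1, i], window)
    y = returns[window:, i]
    model = XGBRegressor(n_estimators=100, max_depth=3).fit(X, y)
    preds.append(model.predict(returns[-window:, i][None, :])[0])

# Cluster assets on their historical return profiles, then keep the best
# annualised Sharpe ratio (mean/std of daily returns * sqrt(252)) per cluster.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(returns.T)
sharpe = returns.mean(axis=0) / returns.std(axis=0) * np.sqrt(252)
selected = [int(np.where(labels == c)[0][np.argmax(sharpe[labels == c])])
            for c in range(4)]
print("selected assets:", selected)
print("predicted next returns:", np.round(preds, 4))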
Item
Generating African inspired fashion designs (2022) Malobola, Lindiwe
Fashion has drawn a lot of attention from researchers in computer vision in recent years, with a growing number of papers and workshops dedicated to the topic. There has been rapid development in fashion-related work, ranging from retail sales forecasting [76, 79, 20] and fashion trends analysis [10, 81, 31] to fashion synthesis and recommendation [39, 8]. Fashion trends analysis often involves identifying patterns and predicting future fashion demands based on cities, seasons and runway fashion. Fashion image synthesis involves using generative models such as Generative Adversarial Networks (GANs) [27] and Variational Autoencoders (VAEs) [42] to generate new samples of fashion images. Fashion recommendation focuses on recommending clothing pieces or outfits given conditions such as users' preferences, occasion and weather.

Item
Harnessing unlabelled data for automatic aerial poacher detection: reducing annotation costs through unsupervised and self-supervised learning (2024) Ball, Samantha
The recent escalation in wildlife poaching poses a major threat to the survival of several key species, with South Africa and Zimbabwe forming the epicentre of the poaching crisis. The application of emerging technology such as Unmanned Aerial Vehicles (UAVs) and object detection provides a novel way of tackling this issue through aerial surveillance. However, despite pioneering studies into the practical use of computer vision for poacher detection, current models require detailed ground truth. Notably, the sparsity and small scale of objects in poaching detection data render the annotation process particularly expensive and time-consuming, posing a barrier to resource-constrained conservation organisations in real-world scenarios. To reduce the need for costly annotations, this study explores the use of the self-supervised DINO model and the unsupervised anomaly detection network FastFlow to provide pseudo-labels for unlabelled data. The value of these alternative techniques is evaluated on real-world poaching detection data provided by a Southern African conservation NPO. The results indicate that a YOLOv5 detection model can be trained using pseudo-labels together with only a small fraction of manually annotated ground truth for the most difficult training videos. The resulting models attain over 90% of the detection recall of a baseline model trained with the original ground truth labels, while also maintaining real-time detection speeds. This reduction in annotation cost would allow current systems to harness large unlabelled datasets with greatly reduced annotation effort and time, while still meeting the efficiency constraints associated with the UAV platform.

Item
Image captioning via multimodal embeddings (2022) Algu, Shikash
Image captioning is an ongoing problem in computer vision with the aim of generating semantically and syntactically correct captions. Vanilla image captioning models fail to capture the structural relationships between objects in images. To overcome this problem, scene graphs (knowledge graphs) that describe the relationships between objects have been added to models, improving results. Current image captioning models do not consider combining image features and scene graphs in a common latent space before generating captions. Graph convolutional neural networks have been designed to capture dependency information and are showing promising results in computer vision. This research investigated whether including scene graph and image features in a multimodal layer improves image captioning models. Results show that including scene graph features improves image captioning results on the standard evaluation metrics. Qualitative analysis shows that including scene graphs improves the structural relationships between objects in the generated captions.
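The multimodal-fusion idea in the image captioning entry above can be sketched as two projections into a shared latent space whose concatenation is fused before caption generation. The dimensions and layer choices here are invented; this is an illustrative sketch, not the thesis's model.

# Project image features and scene-graph features into a common latent space
# and fuse them into a joint embedding for a caption decoder. PyTorch assumed.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, img_dim=2048, graph_dim=512, latent_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, latent_dim)      # CNN image features
        self.graph_proj = nn.Linear(graph_dim, latent_dim)  # GCN scene-graph features
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)

    def forward(self, img_feat, graph_feat):
        z = torch.cat([torch.tanh(self.img_proj(img_feat)),
                       torch.tanh(self.graph_proj(graph_feat))], dim=-1)
        return self.fuse(z)   # joint embedding fed to the caption decoder

fusion = MultimodalFusion()
joint = fusion(torch.randn(4, 2048), torch.randn(4, 512))
print(joint.shape)            # torch.Size([4, 256])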
Item
Improving central value functions for cooperative multi-agent reinforcement learning (2022) Singh, Siddarth
Central value functions (CVFs) are methods which use a shared centralised critic to decompose the global shared reward in cooperative settings into individual local rewards. CVFs are an effective method for value decomposition in multi-agent reinforcement learning problems. However, many state-of-the-art methods rely on an easily defined ground-truth state to perform credit assignment, and they perform poorly in certain environments with high numbers of redundant agents. We propose a method called the Relevance Decomposition Network (RDN) that makes use of layerwise relevance propagation (LRP) as an alternative form of credit assignment, one that can better perform value decomposition with large numbers of redundant agents when compared to existing methods like QMIX and the Value Decomposition Network (VDN). Another limitation in the MARL space is that it has generally favoured Q-learning-based algorithms. This can be attributed to the belief that, owing to the poor sample efficiency of on-policy learning, such algorithms are ineffective in the large action and state spaces of the multi-agent setting. We make use of a small set of improvements that can be generalised to most on-policy actor-critic algorithms, accommodating a small amount of off-policy data to improve sample efficiency and increase training stability. We implemented our improved agent variants and tested them in a variety of environments, including the StarCraft Multi-Agent Challenge (SMAC). Our proposed method was able to greatly improve the performance of a basic naive multi-agent advantage actor-critic algorithm, with faster convergence to high-performing policies and reduced variance in expected performance at all stages of training.

Item
Learning factored skill representations for improved transfer (2022) Cockcroft, Matthew
The ability to reuse skills gained from previously solved tasks is essential to building agents that can solve more complex, unseen tasks. Typically, skills are specific to the initial task for which they were learned, and it remains a challenge to determine which features of two tasks need to be similar for a skill to apply to both. Current approaches have shown that learning generalised skill representations allows skills to be successfully transferred and reused to accelerate learning across multiple similar new tasks. However, these approaches require large amounts of domain knowledge and handcrafting to learn the correct representations. We propose a novel framework for autonomously learning factored skill representations, which consist only of the variables relevant to executing each particular skill. We show that our learned factored skills significantly outperform traditional unfactored skills and match the performance of other methods, without requiring the prior expert knowledge that those methods do. We also demonstrate our framework's applicability to real-world settings by showing its ability to scale to a realistic simulated kitchen environment.
Item
Lie group analysis of Prandtl's two-dimensional laminar boundary layer equation: analytical and numerical solutions for scaling and non-scaling symmetries (2024) Boloka, Moloko
The Lie point symmetries of Prandtl's two-dimensional boundary layer equation, expressed in terms of the stream function, are derived. The general form of the invariant solutions and boundary conditions, which include slip, suction and blowing at the boundary, is obtained. The analytical solutions for boundary layer flow in convergent and divergent channels, generated by Lie point symmetries which are not scaling symmetries, are investigated. When an ordinary differential equation and its associated boundary conditions are invariant under a scaling transformation, the boundary value problem for the ordinary differential equation can be transformed into an initial value problem, which is then solved. This is known as the non-iterative transformation method. The Blasius equation is invariant under a scaling transformation, while the Falkner-Skan equation is not. The Blasius and Falkner-Skan equations are ordinary differential equations derived from Prandtl's boundary layer partial differential equation for the stream function, and they describe boundary layer flow over a flat plate and a wedge respectively. In the case of the Falkner-Skan equation, which is not invariant under a scaling transformation, a modified boundary value problem is derived which is invariant under an extended scaling group. The modified problem is then transformed into an initial value problem.
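As a concrete illustration of the non-iterative transformation method named in the entry above, the sketch below applies it to the Blasius equation f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1. The equation is invariant under the scaling f(eta) = lam * g(lam * eta), so a single auxiliary initial value problem with g''(0) = 1 suffices: if g'(inf) = sigma, then lam = sigma**(-1/2) rescales the solution to satisfy the far-field condition, giving f''(0) = lam**3. SciPy is assumed; the integration interval stands in for eta -> infinity.

# Non-iterative transformation method for the Blasius boundary value problem.
import numpy as np
from scipy.integrate import solve_ivp

def blasius(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

# Auxiliary IVP: g(0) = 0, g'(0) = 0, g''(0) = 1 on a "large enough" interval.
sol = solve_ivp(blasius, (0.0, 10.0), [0.0, 0.0, 1.0], rtol=1e-10)
sigma = sol.y[1, -1]          # g'(inf)
lam = sigma ** (-0.5)         # scaling factor enforcing f'(inf) = 1
fpp0 = lam ** 3               # f''(0) = lam^3 * g''(0)

# The classical wall shear value f''(0) ~ 0.33206 is recovered without iteration.
print(f"sigma = {sigma:.5f}, f''(0) = {fpp0:.5f}")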