School of Computer Science and Applied Mathematics (ETDs)
Permanent URI for this communityhttps://hdl.handle.net/10539/38004
Item: Towards Lifelong Reinforcement Learning through Temporal Logics and Zero-Shot Composition (2024-10). Tasse, Geraud Nangue; Rosman, Benjamin; James, Steven.

This thesis addresses the fundamental challenge of creating agents capable of solving a wide range of tasks in their environments, akin to human capabilities. For such agents to be truly useful and capable of assisting humans in our day-to-day lives, we identify three key abilities that general-purpose agents should have: Flexibility, Instructability, and Reliability (FIRe). Flexibility refers to the ability of agents to adapt to various tasks with minimal learning; instructability involves the capacity for agents to understand and execute task specifications provided by humans in a comprehensible manner; and reliability entails agents’ ability to solve tasks safely and effectively, with theoretical guarantees on their behavior. To build such agents, reinforcement learning (RL) is the framework of choice, given that it is the only one that models the agent-environment interaction. It is also particularly promising since it has shown remarkable success in recent years in various domains, including gaming, scientific research, and robotic control. However, prevailing RL methods often fall short of the FIRe desiderata. They typically exhibit poor sample efficiency, demanding millions of environment interactions to learn optimal behaviors. Task specification relies heavily on hand-designed reward functions, posing challenges for non-experts in defining tasks. Moreover, these methods tend to specialize in single tasks, lacking guarantees on the broader adaptability and behavior robustness desired for lifelong agents that need to solve multiple tasks. Clearly, the regular RL framework is not enough, and it does not capture important aspects of what makes humans so general, such as the use of language to specify and understand tasks.
To address these shortcomings, we propose a principled framework for the logical composition of arbitrary tasks in an environment, and introduce a novel knowledge representation called World Value Functions (WVFs) that enables agents to solve arbitrary tasks specified using language. The use of logical composition is inspired by the fact that all formal languages are built upon the rules of propositional logic. Hence, if we want agents that understand tasks specified in any formal language, we must define what it means to apply the usual logical operators (conjunction, disjunction, and negation) over tasks. The introduction of WVFs is inspired by the fact that humans seem to always seek general knowledge about how to achieve a variety of goals in their environment, irrespective of the specific task they are learning. Our main contributions include: (i) Instructable agents: We formalize the logical composition of arbitrary tasks in potentially stochastic environments, and ensure that task compositions lead to rewards that minimise undesired behaviors. (ii) Flexible agents: We introduce WVFs as a new objective for RL agents, enabling them to solve a variety of tasks in their environment. Additionally, we demonstrate zero-shot skill composition and lifelong sample efficiency. (iii) Reliable agents: We develop methods for agents to understand and execute both natural and formal language instructions, ensuring correctness and safety in task execution, particularly in real-world scenarios.
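In published work on Boolean task algebras by these authors, zero-shot logical composition reduces (under assumptions such as shared dynamics and binary goal rewards) to element-wise operations over learned value functions: conjunction corresponds to a minimum, disjunction to a maximum, and negation to reflection between the value functions of the maximum and minimum tasks. The following is a minimal illustrative sketch of that idea, not the thesis's implementation; the task names, states, and values are hypothetical.

```python
# Sketch of zero-shot logical composition over (world) value functions.
# Assumptions: tasks share dynamics and differ only in binary goal rewards,
# so composed value functions can be formed element-wise without learning.

def q_and(q1, q2):
    """Conjunction: value of achieving both tasks (element-wise min)."""
    return {s: min(q1[s], q2[s]) for s in q1}

def q_or(q1, q2):
    """Disjunction: value of achieving either task (element-wise max)."""
    return {s: max(q1[s], q2[s]) for s in q1}

def q_not(q, q_max, q_min):
    """Negation, reflected between the maximum task's and minimum task's
    value functions: (Q_max + Q_min) - Q."""
    return {s: (q_max[s] + q_min[s]) - q[s] for s in q}

# Two hypothetical learned tasks over four abstract states:
q_blue = {"s1": 1.0, "s2": 0.0, "s3": 1.0, "s4": 0.0}    # "reach a blue object"
q_square = {"s1": 1.0, "s2": 1.0, "s3": 0.0, "s4": 0.0}  # "reach a square"

# Zero-shot: solve "blue AND square" with no further learning.
q_blue_square = q_and(q_blue, q_square)
```

Once the two base tasks are learned, every Boolean expression over them (e.g. "blue OR square", "blue AND NOT square") is available immediately, which is the source of the exponential task coverage the composition framework provides.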
By addressing these challenges, our framework represents a significant step towards achieving the FIRe desiderata in AI agents, thereby enhancing their utility and safety in a lifelong learning setting like the real world.

Item: Regularized Deep Neural Network for Post-Authorship Attribution (University of the Witwatersrand, Johannesburg, 2024). Modupe, Abiodun; Celik, Turgay; Marivate, Vukosi.

Post-authorship attribution is the computational process of determining the legitimate author of an online text snippet, such as an email, blog, forum post, or chat log, by employing stylometric features. The process consists of analysing various linguistic and writing patterns, such as vocabulary, sentence structure, punctuation usage, and even the use of specific words or phrases. By comparing these features to a known set of writing samples from potential authors, investigators can make educated hypotheses about the true authorship of a text snippet. Post-authorship attribution also has applications in fields like forensic linguistics and cybersecurity, where determining the source of a text can be crucial for investigations or for identifying potential threats. In verification procedures that proactively uncover misogynistic, misandrist, xenophobic, and abusive posts on the internet or on social networks, finding a text representation that adequately captures an author’s distinctive writing is, from a computational linguistics perspective, known as stylometric analysis. Moreover, most posts on social media are rife with ambiguous terminology that can compromise the precision of previously proposed authorship attribution models. Many of the extracted stylistic elements, such as idioms, onomatopoeia, homophones, phonemes, synonyms, acronyms, anaphora, and polysemy, are fundamentally difficult for most existing natural language processing (NLP) systems to interpret.
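Stylometric features of the kind described, vocabulary richness, sentence length, and punctuation usage, can be illustrated with simple counts. This is a toy sketch under my own assumptions, not the thesis's feature pipeline; the function name and feature choices are hypothetical, and real stylometric systems use far richer lexical, syntactic, and character-level features.

```python
import re
from collections import Counter

def stylometric_features(text):
    """Toy stylometric profile: vocabulary richness (type-token ratio),
    average sentence length in words, and punctuation density.
    Illustrative only; not a production authorship-attribution feature set."""
    # Split into sentences on terminal punctuation, dropping empty chunks.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Lowercased word tokens (letters and apostrophes only).
    words = re.findall(r"[A-Za-z']+", text.lower())
    # Frequency of common punctuation marks.
    punct = Counter(c for c in text if c in ",;:!?.")
    return {
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "punct_per_char": sum(punct.values()) / max(len(text), 1),
    }

sample = "I came; I saw. I conquered!"
profile = stylometric_features(sample)
```

Comparing such profiles across candidate authors is the basic mechanism behind the "educated hypotheses" the abstract describes, although, as it notes, ambiguous elements like idioms and polysemy are invisible to counts this shallow.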
These complexities make it difficult to correctly identify the true author of a given text; further advances in NLP systems are therefore needed to handle such linguistic elements and improve the accuracy of authorship attribution models. In this thesis, we introduce a regularised deep neural network (RDNN) model to address the challenges of post-authorship attribution. The proposed method combines a convolutional neural network, a bidirectional long short-term memory encoder, and a distributed highway network. The convolutional network generates lexical stylometric features, which are fed into the bidirectional encoder to produce a syntactic feature vector representation. This feature vector is passed through the distributed highway network for regularisation, reducing the network’s generalisation error, and the regularised vector is then given to a bidirectional decoder to learn the author’s writing style. The classification layer consists of a fully connected network with a softmax function for prediction. The RDNN method outperformed existing state-of-the-art methods in terms of accuracy, precision, and recall on the majority of the benchmark datasets. These results highlight the potential of the proposed method to significantly improve classification performance across domains. An interactive system for visualising the method’s performance would further enhance its usability, helping to quantify the contribution of an author’s writing characteristics in both online text snippets and literary documents. This is useful when processing the evidence needed to support claims or draw conclusions about an author’s writing style or intent during pre-trial investigation by law enforcement.
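The highway network used for regularisation in the pipeline above mixes a transformed representation with its unchanged input through a learned gate, y = t * h + (1 - t) * x. Below is a minimal pure-Python sketch of a single highway layer with per-dimension (diagonal) weights for readability; the weights and inputs are made up, and this is an illustration of the general highway mechanism, not the thesis's RDNN.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def highway_layer(x, w_h, b_h, w_t, b_t):
    """One highway unit per dimension (diagonal weights for clarity):
      h = tanh(w_h * x + b_h)    -- candidate transform
      t = sigmoid(w_t * x + b_t) -- transform gate in (0, 1)
      y = t * h + (1 - t) * x    -- gated mix of transform and carry
    A strongly negative gate bias b_t keeps the carry path open,
    which is what eases the optimisation of deep stacks."""
    out = []
    for xi, whi, bhi, wti, bti in zip(x, w_h, b_h, w_t, b_t):
        h = math.tanh(whi * xi + bhi)
        t = sigmoid(wti * xi + bti)
        out.append(t * h + (1.0 - t) * xi)
    return out

# With the gate biased strongly negative, the layer passes its input
# through almost unchanged (approximates the identity):
x = [0.5, -1.2, 3.0]
y = highway_layer(x, w_h=[1.0] * 3, b_h=[0.0] * 3,
                  w_t=[0.0] * 3, b_t=[-20.0] * 3)
```

Because gradients can flow through the carry path untouched, stacking such layers between the encoder and decoder regularises training in the sense the abstract describes: the network is free to skip transformations it does not need.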
Incorporating this method into the pre-trial stage strengthens the credibility and validity of findings presented in court, and it has the potential to advance the field of authorship attribution and enhance the accuracy of forensic investigations. It also supports a fair and just legal process for all parties by providing concrete evidence to support or challenge claims. We are aware of the limitations of the proposed methods and recognise the need for additional research to overcome these constraints and to improve the overall reliability and applicability of post-authorship attribution of online text snippets and literary documents for forensic investigations. Although the proposed methods reveal notable differences in writing style, such as how influential writers, ordinary users, and suspected authors use language, and although the features extracted from the texts show promise for identifying authorship patterns and aiding forensic analyses, much work remains to validate these methodologies as dependable authorship attribution procedures. Further research is needed to determine the extent to which external factors, such as the context in which a text was written or the author’s emotional state, may affect the identified authorship patterns. In particular, the dataset used in this thesis does not include a diverse range of authors and writing styles, such as impostors attempting to impersonate another author, which limits the generalizability of the conclusions and weakens the credibility of forensic analysis; establishing a comprehensive dataset covering diverse authors and styles would improve both generalizability and the reliability of forensic analyses.
Further studies could broaden the proposed strategy to detect and distinguish impostors’ writing styles from those of authentic authors in both online and literary documents. Multiple criminals may collaborate to perpetrate a crime, so the proposed methods could be extended to detect the presence of multiple impostors, or the contribution of each writer’s style, based on the person they are attempting to mimic. The likelihood of several offenders working together complicates the investigation and requires advanced procedures for identifying their individual contributions, as well as for separating authentic from manufactured impostor content within the text. This is especially difficult on social media, where fake accounts and anonymous profiles obscure the true identity of those involved; the evidence can come from a variety of sources, including text, WhatsApp messages, chat images, videos, and so on, and can contribute to the spread of misinformation and manipulation. Promoting a hybrid approach that goes beyond text as evidence could help address some of these concerns; for example, integrating audio and visual data may provide a more complete picture of the scenario. Such an approach does, however, compound the data-availability limitations noted above and may require more storage and analytical resources, even as it enables a more accurate and nuanced analysis of the situation.

Item: Using Machine Learning to Estimate the Photometric Redshift of Galaxies (University of the Witwatersrand, Johannesburg, 2023-08). Salim, Shayaan; Bau, Hairong; Komin, Nukri.

Machine learning has emerged as a crucial tool in cosmology and astrophysics, leading to extensive research in this area.
This study uses machine learning models to estimate the redshift of galaxies, with a primary focus on obtaining accurate results from photometric data. Five machine learning algorithms, namely XGBoost, Random Forests, K-nearest neighbors, Artificial Neural Networks, and Polynomial Regression, are employed to estimate redshifts, trained on photometric data derived from the Sloan Digital Sky Survey (SDSS) Data Release 17 database. Various input parameters from the SDSS database are explored to achieve the most accurate redshift values. The research includes a comparative analysis, using different evaluation metrics and statistical tests to determine the best-performing algorithm. The results indicate that XGBoost achieves the highest accuracy, with an R2 value of 0.94, a Root Mean Square Error (RMSE) of 0.03, and a Mean Absolute Percentage Error (MAPE) of 12.04% when trained on the optimal feature subset. In comparison, the base model achieved an R2 of 0.84, an RMSE of 0.05, and a MAPE of 20.89%. The study contributes to the existing literature by utilizing photometric data during model training and by comparing high-performing algorithms from the literature.
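The three evaluation metrics reported above (R2, RMSE, and MAPE) have standard definitions that can be computed directly. The sketch below shows those definitions in plain Python; the example redshift values are made up for illustration and are not the thesis's data.

```python
import math

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse)

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent.
    Undefined when any true value is zero."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical spectroscopic (true) vs. photometric (predicted) redshifts:
z_true = [0.10, 0.25, 0.40, 0.55]
z_pred = [0.12, 0.24, 0.43, 0.50]
```

One caveat worth noting when comparing such scores: because MAPE divides by the true value, it penalises errors on low-redshift galaxies much more heavily than the same absolute error at high redshift, which is why photometric-redshift studies typically report it alongside R2 and RMSE rather than alone.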