ETD Collection

Permanent URI for this collection: https://wiredspace.wits.ac.za/handle/10539/104


Please note: Digitised content is made available at the best possible quality, taking into consideration file size and the condition of the original item. These constraints may sometimes affect the quality of the final published item. For queries regarding the content of the ETD collection, please contact the IR specialists by email or by telephone: 011 717 4652 / 1954.


    Dynamics generalisation in reinforcement learning through the use of adaptive policies
    (2024) Beukman, Michael
Reinforcement learning (RL) is a widely used method for training agents to interact with an external environment, and is commonly used in fields such as robotics. While RL has achieved success in several domains, many methods fail to generalise well to scenarios different from those encountered during training. This is a significant limitation that hinders RL’s real-world applicability. In this work, we consider the problem of generalising to new transition dynamics, corresponding to cases in which the effects of the agent’s actions differ; for instance, walking on a slippery vs. rough floor. To address this problem, we introduce a neural network architecture, the Decision Adapter, which leverages contextual information to modulate the behaviour of an agent, depending on the setting it is in. In particular, our method uses the context – information about the current environment, such as the floor’s friction – to generate the weights of an adapter module which influences the agent’s actions. This, for instance, allows an agent to act differently when walking on ice compared to gravel. We theoretically show that our approach generalises a prior network architecture and empirically demonstrate that it results in superior generalisation performance compared to previous approaches in several environments. Furthermore, we show that our method can be applied to multiple RL algorithms, making it a widely applicable approach to improve generalisation.
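The core idea described in the abstract – a context vector generating the weights of an adapter module that modulates the agent's features – can be sketched as a small hypernetwork. This is an illustrative sketch only, not the thesis's actual Decision Adapter code; all dimensions, names, and the residual design are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

ctx_dim, feat_dim = 2, 4  # hypothetical context and hidden-feature sizes

# Hypernetwork parameters: map a context vector to the weights and bias
# of an adapter layer. In the abstract's example, the context could
# encode the floor's friction.
W_hyper = rng.normal(scale=0.1, size=(ctx_dim, feat_dim * feat_dim))
b_hyper = rng.normal(scale=0.1, size=(ctx_dim, feat_dim))

def adapter(features: np.ndarray, context: np.ndarray) -> np.ndarray:
    """Modulate the agent's features using weights generated from context."""
    # Generate adapter weights and bias from the context.
    W_adapt = (context @ W_hyper).reshape(feat_dim, feat_dim)
    b_adapt = context @ b_hyper
    # Residual connection: the base features pass through unchanged,
    # and the context-generated adapter adds a modulation on top.
    return features + features @ W_adapt + b_adapt

features = rng.normal(size=feat_dim)      # hidden features from the base policy
ice = np.array([0.1, 1.0])                # low-friction context (hypothetical)
gravel = np.array([0.9, 1.0])             # high-friction context (hypothetical)

out_ice = adapter(features, ice)
out_gravel = adapter(features, gravel)

# Different contexts produce different adapted features, so a downstream
# policy head can act differently on ice vs. gravel.
assert not np.allclose(out_ice, out_gravel)
```

Because the adapter weights are a function of the context rather than fixed parameters, a single agent can change its behaviour across dynamics settings without retraining, which is the generalisation mechanism the abstract describes.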