Optimizing Genetic Programming Agents with TPG and Memory Structures
Abstract
This thesis explores the design of temporal memory for Tangled Program Graphs (TPGs), a team-based Genetic Programming (GP) framework for Reinforcement Learning (RL). We specifically focus on challenging partially-observable settings in which agents rely on memory to handle temporal dependencies.
First, we examine how global indexed scalar memory can be initialized to better store and retrieve observations, helping agents build internal models of the environment. Experiments on simple classic control tasks show that resetting memory at the start of each new interaction sequence can prevent interference from weaker agents and improve performance on tasks with shorter-term dependencies.
Next, we tackle partially-observable continuous control tasks with large state and action spaces. Here we propose team-specific shared memory, in which each group of programs maintains its own memory, isolated from interference by other teams. In addition, we extend TPG’s scalar memory with vector and matrix structures initialized from evolved constants: numerical values that evolve across generations and give agents inherited knowledge. These enhancements enable stronger coordination and more robust behaviour in high-dimensional tasks.
Overall, our findings underscore the vital role of indexed memory in TPGs when an agent lacks full state information. By exploring different ways to store and share data among programs, this work highlights the importance of sharing information both among team members during an agent’s lifetime and across generations through evolved constants.