HEPPO: Hardware-Efficient Proximal Policy Optimization. A Universal Pipelined Architecture for Generalized Advantage Estimation

Abstract

This thesis presents HEPPO (Hardware-Efficient Proximal Policy Optimization), a framework designed to address the computational and memory challenges of implementing advanced reinforcement learning algorithms on resource-constrained hardware platforms. By introducing dynamic standardization for rewards and an 8-bit quantization strategy, HEPPO reduces memory requirements by up to 75% while improving training stability and performance, achieving up to a 67% increase in cumulative rewards. A novel, highly parallelized architecture for Generalized Advantage Estimation (GAE) accelerates this critical phase, processing 19.2 billion elements per second with 64 processing elements and contributing to a 22% to 37% reduction in PPO training time across different environments. Adopting the proposed on-chip memory layout further reduces GAE data-transfer latency, raising the reduction in PPO training time to as much as 48% in certain environments. Integrating the entire PPO pipeline on a single System-on-Chip (SoC) enhances system performance by reducing communication overhead and leveraging custom hardware acceleration. Experimental evaluations demonstrate that HEPPO effectively bridges the gap between sophisticated reinforcement learning algorithms and practical hardware implementations, enabling efficient deployment in embedded systems and real-time applications.
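The three techniques the abstract names (reward standardization, 8-bit quantization of stored trajectories, and the backward GAE recurrence) can be illustrated in software. The following is a minimal NumPy sketch, not the thesis's hardware design: it assumes a standard affine scale/zero-point scheme for the 8-bit quantization and the textbook GAE recurrence; the function names and the gamma/lam defaults are illustrative assumptions, not taken from the thesis.

import numpy as np

def standardize_rewards(rewards, eps=1e-8):
    # Scale rewards to zero mean and unit variance; a simple stand-in
    # for the "dynamic standardization" the abstract describes.
    rewards = np.asarray(rewards, dtype=np.float32)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def quantize_u8(x):
    # Affine 8-bit quantization: store float32 data as uint8 plus a
    # per-array scale and offset (roughly a 75% memory saving).
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_u8(q, scale, lo):
    return q.astype(np.float32) * scale + lo

def gae(rewards, values, gamma=0.99, lam=0.95):
    # Backward GAE recurrence:
    #   delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    #   A_t     = delta_t + gamma * lam * A_{t+1}
    # This sequential chain is the step HEPPO pipelines in hardware.
    T = len(rewards)
    adv = np.zeros(T, dtype=np.float32)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

# Example: standardize and quantize a trajectory, then compute GAE on the
# dequantized rewards (values carries one bootstrap entry, so length T+1).
r = standardize_rewards(np.random.randn(128))
v = np.random.randn(129).astype(np.float32)
qr, s, z = quantize_u8(r)
advantages = gae(dequantize_u8(qr, s, z), v)

The sequential loop above shows only the arithmetic; the abstract reports that the hardware version evaluates this recurrence with 64 processing elements at 19.2 billion elements per second.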
