POWER ALLOCATIONS FOR DELAY-CONSTRAINED TASK OFFLOADING IN CELL-FREE WIRELESS NETWORKS

Abstract

This thesis addresses the uplink resource allocation problem in a cell-free (CF) environment, where mobile devices (MDs) periodically offload application data for processing to a shared edge server (ES) co-located with the central unit (CU). These applications require timely processing at the ES, but uncertain, time-varying wireless transmission conditions and the limited battery energy of the MDs make meeting task deadlines difficult. To tackle this challenge, we propose a deep reinforcement learning framework based on the Deep Deterministic Policy Gradient (DDPG) algorithm, enhanced with Transformer encoders in both the actor and critic networks. The resulting Transformer-based DDPG (T-DDPG) framework captures spatial-temporal dependencies in dynamic wireless conditions to make more informed decisions. Simulations are conducted under varying numbers of MDs and task sizes. The proposed T-DDPG consistently outperforms conventional DDPG, achieving lower task completion time and lower energy consumption while improving the rate of task completion within deadlines. These results highlight the effectiveness of spatial-temporal policy learning for real-time uplink scheduling in CF edge computing systems.
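
As an illustration of the architecture described in the abstract, the sketch below shows one way Transformer encoders could be embedded in the DDPG actor and critic for per-MD uplink power allocation. It is a minimal, hedged example rather than the thesis implementation: the class names (TransformerActor, TransformerCritic), the feature dimensions, the layer sizes, and the power budget P_MAX are illustrative assumptions, and PyTorch is assumed as the framework.

```python
# Minimal sketch (not the thesis implementation) of Transformer-encoder
# actor/critic networks for DDPG-style per-MD uplink power allocation.
# Dimensions, layer counts, and P_MAX are illustrative assumptions.
import torch
import torch.nn as nn

P_MAX = 1.0  # assumed per-MD transmit power budget (watts)

class TransformerActor(nn.Module):
    """Maps per-MD state features (e.g. channel gain, task size, deadline)
    to a transmit power in [0, P_MAX] for each MD."""
    def __init__(self, feat_dim: int, d_model: int = 64, nhead: int = 4, layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead,
                                               dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # state: [batch, num_MDs, feat_dim] -> powers: [batch, num_MDs]
        h = self.encoder(self.embed(state))
        return P_MAX * torch.sigmoid(self.head(h)).squeeze(-1)

class TransformerCritic(nn.Module):
    """Scores a (state, power-allocation) pair with a scalar Q-value."""
    def __init__(self, feat_dim: int, d_model: int = 64, nhead: int = 4, layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(feat_dim + 1, d_model)  # append each MD's power to its features
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead,
                                               dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.q_head = nn.Linear(d_model, 1)

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        x = torch.cat([state, action.unsqueeze(-1)], dim=-1)  # [batch, num_MDs, feat_dim + 1]
        h = self.encoder(self.embed(x)).mean(dim=1)           # pool over MDs
        return self.q_head(h)                                  # [batch, 1]

# Usage with assumed sizes: 8 MDs, 5 state features per MD.
actor, critic = TransformerActor(feat_dim=5), TransformerCritic(feat_dim=5)
s = torch.randn(32, 8, 5)
p = actor(s)        # per-MD transmit powers
q = critic(s, p)    # critic value for the chosen allocation
```

In this sketch the self-attention across MDs is what lets the networks model spatial dependencies among devices; temporal dependencies could be handled by stacking recent observations into the per-MD feature vector, though the thesis may organize the input differently.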
