Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/30877
Title: STUDY ON THE AGE OF INFORMATION IN DIGITAL TWINS
Authors: Aghaei, Amirhossein
Advisor: Zhao, Dongmei
Department: Electrical and Computer Engineering
Publication Date: 2024
Abstract: Digital Twins (DTs) have emerged as significant tools for real-time monitoring and control across various industries, offering dynamic digital replicas of physical systems (PSs). However, maintaining the freshness of information in DTs is challenged by communication delays, uncertain network conditions, and limited computational resources. These challenges can increase the Age of Information (AoI), reducing the effectiveness of DTs in time-sensitive applications where timely and accurate data are critical. This thesis addresses the optimization of PS-DT synchronization and of the DT's response time to applications, aiming to minimize AoI while efficiently allocating communication and computational resources.

First, we investigate the optimal DT response time when applications request real-time information. Considering uncertainties in wireless communication channels and the unpredictability of future AoI, we formulate the problem as a Markov Decision Process (MDP) with delayed rewards. To solve it, we employ reinforcement learning, specifically Long Short-Term Memory (LSTM) networks combined with Dueling Double Deep Q-Networks (DDDQN). This approach enables the DT to decide whether to respond immediately to an application request or to wait for fresher data from the PS, balancing response timeliness against information freshness.

Second, we extend the optimization to multiple PS-DT pairs operating under shared and constrained communication and computational resources. We model the system as a multi-agent environment in which each PS-DT pair aims to keep the AoI at the DT below a predefined threshold while minimizing power consumption during data transmission. The problem is formulated as a stochastic optimization task and addressed using a two-stage MDP framework. In the first stage, agents optimize transmission power while accounting for channel interference in an orthogonal frequency-division multiple access (OFDMA) scheme. In the second stage, they request computational resources from an edge server (ES) with limited processing capacity. We use the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm within a centralized training and decentralized execution (CTDE) framework to solve the MDP efficiently.

Simulation results demonstrate that the proposed methods reduce the AoI at both the DTs and the applications, improve resource utilization, and outperform existing algorithms in managing the trade-off between AoI and power consumption. The findings contribute to the efficient design and operation of DT systems in time-sensitive applications, ensuring timely updates and responses while optimizing resource allocation in constrained environments.
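For context, the Age of Information that the abstract refers to is conventionally defined as the time elapsed since the generation of the most recently received update; the thesis may use a discrete-time or threshold variant, but the standard formulation is

\[
\Delta(t) \;=\; t - U(t),
\]

where \(U(t)\) is the generation timestamp of the newest PS update received by the DT up to time \(t\). The AoI grows linearly between receptions and drops whenever a fresher update arrives.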
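As a rough illustration of the decision architecture the abstract names (LSTM combined with a dueling Q-network), a minimal sketch is given below. It is not the thesis implementation: the observation features, layer sizes, and the two-action space (respond immediately vs. wait for a fresher PS update) are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class DuelingLSTMQNet(nn.Module):
    """Dueling Q-network over an LSTM-encoded observation history.

    Illustrative sketch only: feature set, dimensions, and the
    two-action space are assumptions, not the thesis code.
    """

    def __init__(self, obs_dim: int = 3, hidden: int = 64, n_actions: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.advantage = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim), e.g. recent AoI samples,
        # a channel-state indicator, and time since the last PS update.
        h, _ = self.lstm(obs_seq)
        h_last = h[:, -1, :]                       # summary of the history
        v = self.value(h_last)                     # state value V(s)
        a = self.advantage(h_last)                 # advantages A(s, a)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)

if __name__ == "__main__":
    net = DuelingLSTMQNet()
    # One 10-step observation history; action 0 = respond immediately,
    # action 1 = wait for fresher data from the physical system.
    q_values = net(torch.randn(1, 10, 3))
    print(q_values, q_values.argmax(dim=1).item())
```

In a Double DQN setup, a network like this would serve as the online network that selects the greedy action, while a separate target network evaluates it; the LSTM front end lets the agent condition on the recent AoI and channel history rather than a single snapshot.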
URI: http://hdl.handle.net/11375/30877
Appears in Collections: Open Access Dissertations and Theses
Files in This Item:
File | Description | Size | Format
---|---|---|---
Aghaei_Amirhossein_2024_12_MaSC.pdf | | 2.19 MB | Adobe PDF
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.