Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/32534
Title: ADAPTIVE TUNING OF MODEL PREDICTIVE CONTROL PARAMETERS VIA REINFORCEMENT LEARNING
Authors: Zhang, Susu
Advisor: Mhaskar, Prashant
Department: Chemical Engineering
Publication Date: 2025
Abstract: This thesis presents a reinforcement learning (RL)-assisted model predictive control (MPC) framework for multivariable chemical processes subject to external time-varying disturbances. MPC is widely used in industry for its ability to predict future behaviour and enforce operating constraints, but its performance depends on its tuning parameters, here the prediction and control horizons, which are usually selected offline and held fixed during operation. Even offset-free formulations keep the chosen horizons fixed, which can degrade plant performance. The thesis presents a control framework in which a Deep Q-Network (DQN) agent updates the MPC prediction and control horizons in real time. The resulting improvement in set-point tracking is demonstrated by comparison against an industrial MPC with fixed horizons. A variation of the formulation is also introduced in which the reward function explicitly penalizes changes in the agent's actions, yielding more stable agent behaviour in closed-loop operation.
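To make the mechanism described in the abstract concrete, here is a minimal sketch of the idea in Python. It is not the thesis implementation: the toy first-order plant, the candidate horizon pairs, the network size, the penalty weight, and the omission of a DQN target network are all illustrative assumptions, and the `mpc_step` stub merely mimics the qualitative effect of horizon length rather than solving a real MPC problem.

```python
# Illustrative sketch only (not the thesis code): a DQN picks the MPC
# horizon pair (N_p, N_c) each step, and the reward penalizes both
# tracking error and changes in the agent's action.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

ACTIONS = [(10, 2), (20, 4), (40, 8)]   # assumed candidate (N_p, N_c) pairs
N_OBS = 3                               # [error, setpoint, last action index]

class QNet(nn.Module):
    """Small MLP mapping observations to one Q-value per horizon pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_OBS, 64), nn.ReLU(),
                                 nn.Linear(64, len(ACTIONS)))
    def forward(self, x):
        return self.net(x)

def mpc_step(y, setpoint, horizons):
    """Placeholder for one closed-loop MPC step on a toy first-order plant.
    Longer horizons are crudely modeled as smoother but slower tracking."""
    n_p, _ = horizons
    gain = 1.0 / (1.0 + 0.05 * n_p)      # illustrative effect only
    return y + gain * (setpoint - y) + np.random.normal(0.0, 0.02)

def reward(error, a, a_prev, switch_penalty=0.5):
    """Tracking cost plus the action-change penalty of the reward variant."""
    r = -error ** 2
    if a_prev is not None and a != a_prev:
        r -= switch_penalty              # discourages horizon chattering
    return r

q = QNet()
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buf = deque(maxlen=5000)                 # simple replay buffer
gamma, eps = 0.99, 0.1

y, a_prev = 0.0, None
obs = torch.tensor([0.0, 1.0, -1.0], dtype=torch.float32)
for t in range(2000):
    setpoint = 1.0 if (t // 200) % 2 == 0 else 0.5   # time-varying target
    # Epsilon-greedy selection over the discrete horizon pairs.
    if random.random() < eps:
        a = random.randrange(len(ACTIONS))
    else:
        with torch.no_grad():
            a = int(q(obs).argmax())
    y = mpc_step(y, setpoint, ACTIONS[a])
    err = float(setpoint - y)
    r = reward(err, a, a_prev)
    obs_next = torch.tensor([err, setpoint, float(a)], dtype=torch.float32)
    buf.append((obs, a, r, obs_next))
    obs, a_prev = obs_next, a
    # One-step DQN update on a sampled minibatch (no target network,
    # for brevity).
    if len(buf) >= 64:
        batch = random.sample(buf, 64)
        s = torch.stack([b[0] for b in batch])
        acts = torch.tensor([b[1] for b in batch])
        rews = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s2 = torch.stack([b[3] for b in batch])
        q_sa = q(s).gather(1, acts.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rews + gamma * q(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        opt.zero_grad(); loss.backward(); opt.step()
```

The `switch_penalty` term corresponds to the reward variant described in the abstract: subtracting a fixed cost whenever the selected horizon pair changes discourages the agent from chattering between tunings in closed loop.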
URI: http://hdl.handle.net/11375/32534
Appears in Collections: Open Access Dissertations and Theses
Files in This Item:
File | Description | Size | Format
---|---|---|---
Zhang_Susu_2025Oct_MASc.pdf | | 2.75 MB | Adobe PDF