[Oral Presentation] Off-policy reinforcement learning for input-constrained optimal control of dual-rate industrial processes
No.: 28
Submission No.: 251
Updated: 2024-05-20 09:56:41
Presentation start: May 30, 2024, 15:40 (Asia/Shanghai)
Presentation length: 20 min
Session: [S4] Intelligent Equipment Technology » [S4-2] Afternoon of May 30th-2
Abstract
Real industrial systems are rarely free of unmodeled dynamics, and industrial processes typically operate on multiple time scales, both of which complicate operational optimization. To address these difficulties, a composite compensated controller is designed that solves the input-constrained optimal operational control (OOC) problem on dual time scales by integrating reinforcement learning (RL) techniques with singular perturbation (SP) theory. Within this control framework, a self-learning compensatory control method is proposed that drives the operational indices of a dual time-scale industrial system with uncertain dynamics to their desired values. Finally, the effectiveness of the method is verified on an industrial mixed separation thickening process (MSTP) example.
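To make the control idea concrete, the following is a minimal illustrative sketch of the kind of optimal feedback design the abstract refers to: policy iteration for a discrete-time linear-quadratic regulator with a saturated (input-constrained) control law. The plant matrices `A`, `B`, the costs `Q`, `R`, and the bound `u_max` are hypothetical placeholders, not the paper's MSTP dynamics, and this sketch is model-based for clarity, whereas the paper's off-policy RL method learns the corresponding quantities from measured data without a full model.

```python
import numpy as np

# Hypothetical 2-state plant (illustrative only, not the paper's MSTP model)
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # input cost weight
u_max = 1.0            # input constraint bound (assumed)

def policy_evaluation(K, iters=500):
    """Cost matrix P of the policy u = -Kx, via Lyapunov fixed-point iteration."""
    Ak = A - B @ K
    P = np.zeros_like(A)
    for _ in range(iters):
        P = Q + K.T @ R @ K + Ak.T @ P @ Ak
    return P

# Policy iteration: alternate evaluation and greedy improvement
K = np.zeros((1, 2))   # initial gain; A above is stable, so u = 0 is admissible
for _ in range(30):
    P = policy_evaluation(K)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def control(x):
    """Feedback saturated to respect the input constraint."""
    return np.clip(-K @ x, -u_max, u_max)
```

In the off-policy RL setting of the paper, the policy-evaluation step would instead be performed from input-state data collected under a behavior policy, so the model matrices never appear explicitly.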
Keywords
Reinforcement Learning, Dual Time Scales, Optimal Operational Control, Singular Perturbation Theory