F. A. Yaghmaie and D. J. Braun, "Reinforcement Learning for a Class of Continuous-time Input Constrained Optimal Control Problems," Automatica, vol. 99, pp. 221-227, 2019.
This paper develops a reinforcement learning framework for solving a class of continuous-time optimal control problems with input constraints. By relaxing the usual requirement that the value function be differentiable, the authors extend reinforcement learning to problems where only continuity of the value function can be guaranteed. They also generalize the form of the cost function, broadening the applicability of the method. The result is a partially model-free framework that requires only an initial stabilizing policy and guarantees closed-loop stability. Effectiveness is demonstrated through simulation studies.
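To make the input-constraint handling concrete, here is a minimal sketch of a common construction in constrained optimal control: passing the feedback through a tanh saturation so the control always respects its bound. This is an illustrative toy, not the paper's algorithm; the scalar dynamics, the quadratic value guess V(x) = p*x^2, and all numerical values are assumptions chosen for the example.

```python
import math

# Hypothetical scalar system: x_dot = a*x + b*u, with |u| <= u_max.
# A standard way to enforce the input constraint is to saturate the
# feedback through tanh, so the policy can never exceed the bound:
#   u(x) = -u_max * tanh((b / (2*u_max)) * dV/dx)
# Here we simply posit a quadratic value guess V(x) = p*x^2 (so
# dV/dx = 2*p*x); p is an assumed constant, not computed by any
# learning procedure.

a, b = 0.5, 1.0    # open-loop unstable dynamics (a > 0)
u_max = 1.0        # input constraint
p = 2.0            # assumed value-function coefficient

def policy(x):
    # Saturated feedback: |u| <= u_max holds for every x.
    return -u_max * math.tanh((b / (2.0 * u_max)) * 2.0 * p * x)

def simulate(x0, dt=0.01, steps=2000):
    # Forward-Euler rollout of the closed-loop system.
    x = x0
    for _ in range(steps):
        x += dt * (a * x + b * policy(x))
    return x

x_final = simulate(1.0)  # state driven near the origin
```

Even though the open loop is unstable, the saturated policy stabilizes the origin here, illustrating why input-constrained problems need policies that are bounded by construction rather than clipped after the fact.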
Why it matters: Many real-world control problems involve input constraints (e.g., actuator saturation) that violate the smoothness assumptions underlying standard reinforcement learning formulations. This work extends the reach of RL-based control to such practical systems, enabling stable and efficient solutions in robotics and other complex domains.