
This paper presents a one-shot learning approach with performance and robustness guarantees for linear quadratic regulator (LQR) control of stochastic linear systems. Although data-based LQR control has been widely studied, existing results either suffer from data hungriness, due to the inherently iterative nature of the optimization formulation (e.g., value learning or policy-gradient reinforcement learning algorithms), or lack robustness guarantees when using one-shot, non-iterative algorithms. To avoid data hungriness while ensuring robustness guarantees, an adaptive dynamic programming formulation of the LQR is presented that relies on solving a Bellman inequality. The control gain and the value function are learned directly using a control-oriented approach that characterizes the closed-loop system in terms of data and a decision variable from which the control is obtained. This closed-loop characterization is noise-dependent. The effect of the closed-loop noise on the Bellman inequality is accounted for so that both robust stability and suboptimal performance are guaranteed even though the measurement noise is ignored. To establish robust stability, it is shown that this system characterization leads to a closed-loop system with multiplicative and additive noise, enabling the application of distributionally robust control techniques. The analysis of the suboptimality gap reveals that robustness can be achieved without regularization or parameter tuning. Simulation results on the active car suspension problem demonstrate the superiority of the proposed method over existing methods in terms of robustness and performance gap.
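For context, the following is a minimal sketch of the standard infinite-horizon stochastic LQR problem and the policy-evaluation Bellman inequality that formulations of this kind typically build on; the paper's data-driven version is stated in terms of measured data and a decision variable, so the model matrices below are illustrative rather than the paper's exact formulation:

\[
x_{k+1} = A x_k + B u_k + w_k, \qquad
J(K) = \lim_{T\to\infty} \frac{1}{T}\,\mathbb{E}\sum_{k=0}^{T-1}\bigl(x_k^\top Q x_k + u_k^\top R u_k\bigr),
\]

where, for a linear policy \(u_k = K x_k\) and a quadratic value function \(V(x) = x^\top P x\) with \(P \succeq 0\), the Bellman inequality

\[
(A + BK)^\top P (A + BK) - P + Q + K^\top R K \preceq 0
\]

certifies that \(K\) is stabilizing and yields the upper bound \(J(K) \le \operatorname{tr}(P W)\), with \(W\) the covariance of the additive noise \(w_k\).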
Keywords: direct data-driven control, linear quadratic regulator, suboptimality gap, robustness, dynamic programming in optimal control and differential games, linear-quadratic optimal control problems, optimal stochastic control, sensitivity (robustness), Systems and Control (eess.SY), Optimization and Control (math.OC)
