Research Highlights

Tube-based planning and control under uncertainties

Motion planning and tracking control with guaranteed safety for robotic and autonomous systems with nonlinear, underactuated, and uncertain dynamics remains a challenging problem. In this project, we developed tracking controllers with transient performance guarantees for general nonlinear systems subject to model uncertainties. These controllers can be conveniently incorporated into a feedback motion planning framework to plan trajectories with safety guarantees. Our methods are based on control contraction metrics (CCMs), recently proposed for synthesizing nonlinear controllers via convex optimization; a brief sketch of the underlying conditions follows the list below. Our specific contributions are as follows.

  • For nonlinear systems subject to bounded disturbances, we proposed robust CCMs that minimize the effect of disturbances on the deviation between nominal and actual trajectories, while yielding certified tubes that quantify the deviations of both states and inputs.

  • For nonlinear systems subject to matched (time- and state-dependent) uncertainties, we proposed a disturbance estimation-based CCM controller that ensures exponential convergence of the actual state trajectory to its nominal counterpart despite the uncertainties.
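
As a rough illustration of the machinery behind these results (a hedged sketch using notation borrowed from the standard CCM literature; the symbols \(W\), \(\lambda\), \(B_\perp\), and \(\bar w\) below are assumptions not defined elsewhere on this page), consider dynamics \(\dot{x} = f(x) + B(x)u + w\). A dual metric \(W(x) = M(x)^{-1}\) defines a CCM with rate \(\lambda > 0\) if, roughly,

\[
B_\perp^\top\!\left(-\partial_f W + \tfrac{\partial f}{\partial x} W + W \tfrac{\partial f}{\partial x}^{\top} + 2\lambda W\right)\! B_\perp \prec 0,
\qquad
B_\perp^\top\!\left(\partial_{b_j} W - \tfrac{\partial b_j}{\partial x} W - W \tfrac{\partial b_j}{\partial x}^{\top}\right)\! B_\perp = 0,
\]

where \(B_\perp(x)\) annihilates \(B(x)\) and \(b_j\) is the \(j\)-th column of \(B\). Contraction at rate \(\lambda\) under the resulting feedback then yields, for disturbances bounded by \(\bar w\), a tube of the indicative form

\[
\|x(t) - x^\star(t)\| \;\lesssim\; \sqrt{\tfrac{\overline{\lambda}(M)}{\underline{\lambda}(M)}}\left(e^{-\lambda t}\,\|x(0) - x^\star(0)\| + \tfrac{\bar w}{\lambda}\right),
\]

which is the kind of certified tube referred to in the first bullet; the precise constants and the corresponding input tubes are given in the references below.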

References

  • P. Zhao, A. Lakshmanan, K. Ackerman, A. Gahlawat, M. Pavone, and N. Hovakimyan. Tube-certified trajectory tracking for nonlinear systems with robust control contraction metrics. IEEE Robotics and Automation Letters, 7(2): 5528-5535, 2022.

  • P. Zhao, Z. Guo, and N. Hovakimyan. Robust nonlinear tracking control with exponential convergence using contraction metrics and disturbance estimation. Sensors, 22(13): 4743, 2022.

Safe and robust learning-enabled control via active uncertainty compensation

In this project, we explore the integration of machine learning (ML) and control theory for safe and robust learning-enabled control. In particular, we are interested in leveraging adaptive or disturbance observer-based control to actively compensate for uncertainties that may be induced by environmental changes (in a model-free setting) or by the inaccuracy of learned dynamics (in a model-based setting). Compared to robust approaches without uncertainty compensation, this uncertainty-compensation-based approach enables faster reaction to sudden changes in the dynamics and less conservative control performance in the presence of a poorly learned dynamics model.
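
As a minimal, hedged sketch of how such compensation can be organized (the predictor gain a_s, the sampling period dt, the low-pass coefficient alpha, and the function names below are illustrative assumptions, not the exact architecture of the papers listed under this project):

    import numpy as np

    # Sketch of disturbance-observer-based compensation on top of a nominal
    # (possibly learned) model; illustrative only, see the references for the
    # actual L1 / disturbance-observer architectures and their guarantees.

    def predictor_step(x, x_hat, u, sigma_hat, f_nom, B, a_s, dt):
        """Propagate a state predictor that runs the nominal model plus the
        current uncertainty estimate; a_s < 0 stabilizes the prediction error."""
        x_tilde = x_hat - x
        x_hat_dot = f_nom(x) + B @ (u + sigma_hat) + a_s * x_tilde
        return x_hat + dt * x_hat_dot

    def update_estimate(x, x_hat, B, a_s, dt):
        """Piecewise-constant estimate of the lumped (matched) uncertainty,
        chosen so the predictor error is roughly cancelled over one interval."""
        x_tilde = x_hat - x
        phi = (np.exp(a_s * dt) - 1.0) / a_s
        sigma_total = -np.exp(a_s * dt) / phi * x_tilde
        return np.linalg.pinv(B) @ sigma_total    # project onto the input channel

    def compensated_control(u_nominal, sigma_hat, sigma_filt, alpha):
        """Subtract a low-pass-filtered version of the estimate from the
        nominal control input."""
        sigma_filt = alpha * sigma_hat + (1.0 - alpha) * sigma_filt
        return u_nominal - sigma_filt, sigma_filt

In a model-based setting, f_nom would be the learned dynamics; in a model-free setting it can be the reference dynamics around which the baseline (e.g., RL) policy was designed. The low-pass filtering trades compensation bandwidth against robustness, which is the key tuning knob in the \(\mathcal L_1\) architecture.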

References

  • Y. Cheng*, P. Zhao*, F. Wang, D. J. Block, and N. Hovakimyan. Improving the robustness of reinforcement learning policies with \(\mathcal L_1\) adaptive control. IEEE Robotics and Automation Letters, 7(3): 6574-6581, 2022.

  • Y. Cheng*, P. Zhao*, M. Gandhi, B. Li, E. Theodorou and N. Hovakimyan. Robustifying reinforcement learning policies with \(\mathcal L_1\) adaptive control. ICRA Workshop on Safe Robot Control with Learned Motion and Environment Models, 2021.

  • P. Zhao, Z. Guo, Y. Cheng, A. Gahlawat, H. Kang and N. Hovakimyan. Guaranteed nonlinear tracking in the presence of DNN-learned dynamics with contraction metrics and disturbance estimation. arXiv preprint arXiv:2112.08222, 2022.

  • A. Gahlawat, P. Zhao, A. Patterson, N. Hovakimyan and E. A. Theodorou. \(\mathcal L_1\)-GP: \(\mathcal L_1\) adaptive control with Bayesian learning. 2nd Conference on Learning for Dynamics and Control, 2020.

Adaptive geometric control of quadrotors

Quadrotors have been widely used in various applications such as environmental monitoring, structure inspection, emergency response, and package delivery. Precise control of quadrotors is very challenging in the presence of model uncertainties and disturbances from various sources, such as aerodynamic drag, unknown payloads, and wind gusts. In this project, we proposed an adaptive geometric controller, obtained by \(\mathcal L_1\) adaptive augmentation of a baseline geometric controller, which provides guaranteed tracking performance in the presence of a broad class of uncertainties. We experimentally validated the controller on a custom-built quadrotor platform.

References

  • Z. Wu, S. Cheng, P. Zhao, et al. \(\mathcal L_1\)Quad: \(\mathcal L_1\) adaptive augmentation of geometric control for agile quadrotors with performance guarantees. Submitted, 2023. arXiv:2302.07208.

  • Z. Wu, S. Cheng, K. A. Ackerman, A. Gahlawat, A. Lakshmanan, P. Zhao, and N. Hovakimyan. \(\mathcal L_1\) adaptive augmentation for geometric tracking control of quadrotors. IEEE International Conference on Robotics and Automation, pp. 1329-1336, 2022.

Adaptive control of aerospace systems

Motivated by the need to adjust the desired dynamics in the control of aerospace systems, we developed adaptive controllers that allow the desired dynamics to be systematically scheduled or switched, e.g., according to the operating conditions. The developed controllers build on the \(\mathcal L_1\) adaptive control architecture, which has been tested on many safety-critical systems, including piloted and unmanned aircraft, and use linear parameter-varying (LPV) and switched systems to characterize the desired dynamics. In particular, for LPV systems subject to unmatched uncertainties, we proposed a novel approach based on peak-to-peak gain minimization and analysis from robust control to mitigate the effect of the unmatched uncertainties while allowing the controllers to provide transient performance guarantees. Additionally, the adaptive controller with switched desired dynamics was used to facilitate safe real-time learning of aerial vehicle dynamics and was tested on a UAV.

References

  • S. Snyder, P. Zhao and N. Hovakimyan. \(\mathcal L_1\) adaptive control with switched reference models: Application to Learn-to-Fly. Journal of Guidance, Control, and Dynamics, 45(12): 2229-2242, 2022.

  • P. Zhao, S. Snyder, N. Hovakimyan and C. Cao. Robust adaptive control of linear parameter-varying systems with unmatched uncertainties. Submitted, 2021. arXiv:2010.04600.

  • S. Snyder, P. Zhao and N. Hovakimyan. \(\mathcal L_1\) adaptive control for switching reference systems: Application to flight control. IFAC-PapersOnLine, 52(16): 718-723, 2019.

Intelligent agricultural management with reinforcement learning

The world's agricultural system faces the significant challenge of bridging the gap between the amount of food produced today and the amount needed to feed a projected population of 9.6 billion by 2050, while reducing environmental impacts. In this project, we envisioned an intelligent agricultural management system based on deep reinforcement learning (RL) and crop simulations (e.g., using DSSAT), which can improve crop yields while reducing the use of resources (e.g., fertilizers and irrigation water) compared with expert-suggested management practices. We also leveraged imitation learning (IL) to train practically implementable management policies that use only state information that is easily accessible in the real world.
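
As a hedged, schematic sketch of this setup (the environment class CropSimEnv, the reward weights, and the feature dimensions below are illustrative placeholders, not the interface of DSSAT or of any existing simulator wrapper):

    import numpy as np

    class CropSimEnv:
        """Placeholder gym-style wrapper around a crop simulator: one step is one
        simulated day, the action is a (fertilizer, irrigation) decision, and the
        reward trades off yield against resource use."""
        def reset(self):
            return np.zeros(8)                   # e.g., weather/soil/plant features
        def step(self, action):
            n_applied, water_applied = action
            next_state = np.random.randn(8)      # stand-in for simulator outputs
            yield_gain = np.random.rand()        # stand-in for simulated growth
            reward = yield_gain - 0.1 * n_applied - 0.05 * water_applied
            done = np.random.rand() < 0.01       # end of the growing season
            return next_state, reward, done

    def rollout(env, policy, horizon=200):
        """Collect one episode; `policy` maps the current state to an action."""
        state, total_reward = env.reset(), 0.0
        for _ in range(horizon):
            state, reward, done = env.step(policy(state))
            total_reward += reward
            if done:
                break
        return total_reward

An RL agent is trained against such a simulator to maximize the episode return; the imitation-learning step then distills the resulting policy into one that consumes only the subset of features that is easy to measure in the field.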

[Figure: Agricultural management via RL, imitation learning, and crop simulations]

References

  • J. Wu*, R. Tao*, P. Zhao*, N. Martin and N. Hovakimyan. Optimizing nitrogen management with deep reinforcement learning and crop simulations. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 1712-1720, 2022.

  • R. Tao, P. Zhao, J. Wu, et al. Optimizing crop management with reinforcement learning and imitation learning (extended abstract). Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23), 2023.