UAV ATTITUDE AND ALTITUDE STABILITY CONTROL ALGORITHM UNDER EXTREME WEATHER CONDITIONS
Volume 2, Issue 1, Pp 61-65, 2025
DOI: https://doi.org/10.61784/mjet3025
Author(s)
HanJun Zhang
Affiliation(s)
Changzhou Beijiao High School, Changzhou 213000, Jiangsu, China.
Corresponding Author
HanJun Zhang
ABSTRACT
Extreme weather conditions pose significant challenges to the attitude and altitude control stability of uncrewed aerial vehicles (UAVs). Traditional control methods often suffer from response lag and reduced accuracy in strong-disturbance environments. This paper proposes a hybrid control framework that integrates a disturbance observer with a deep reinforcement learning policy to improve the autonomous control capability of UAVs under complex meteorological disturbances. The method models the disturbance trend in real time with an extended state observer and uses a policy network to dynamically adjust the control output according to the disturbance estimate, realizing closed-loop optimization from perception to decision-making. In simulation experiments, the proposed method shows excellent control performance under several typical disturbance conditions, including crosswind, gusts, downdrafts, and their combinations. Compared with traditional PID, LQR, and MPC controllers, it significantly improves trajectory stability and control accuracy while reducing energy consumption. The results show that this study provides a practical and feasible new approach to robust UAV control in extreme meteorological environments.
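The extended state observer (ESO) mentioned in the abstract treats the unknown wind disturbance as an additional state and estimates it online, so the controller can compensate before tracking error accumulates. The sketch below illustrates this idea for a second-order axis model; the gains follow the common bandwidth parameterization (β₁ = 3ω, β₂ = 3ω², β₃ = ω³), and all numerical values are illustrative assumptions, not the paper's actual parameters.

```python
def eso_step(z, y, u, h=1e-3, b0=1.0, w=20.0):
    """One Euler step of a second-order extended state observer.

    z  = [position estimate, velocity estimate, total-disturbance estimate]
    y  = measured position, u = control input
    h  = step size, b0 = input gain, w = observer bandwidth (rad/s)
    """
    z1, z2, z3 = z
    e = z1 - y                           # output estimation error
    z1 += h * (z2 - 3 * w * e)
    z2 += h * (z3 + b0 * u - 3 * w**2 * e)
    z3 += h * (-w**3 * e)                # integrates error into disturbance state
    return [z1, z2, z3]


def simulate(d=2.0, steps=2000, h=1e-3):
    """Double integrator hit by a constant disturbance d, with u held at 0."""
    x1 = x2 = 0.0
    z = [0.0, 0.0, 0.0]
    for _ in range(steps):
        x1 += h * x2
        x2 += h * d                      # plant: acceleration = disturbance
        z = eso_step(z, x1, 0.0, h=h)
    return z


z = simulate()
print(round(z[2], 2))                    # disturbance estimate converges toward d
```

In the hybrid framework described above, the disturbance estimate z₃ would be fed to the policy network alongside the vehicle state, rather than cancelled directly as in classical active disturbance rejection control.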
KEYWORDS
UAV control; Extreme weather; Disturbance observer; Deep reinforcement learning; Hybrid control
CITE THIS PAPER
HanJun Zhang. UAV attitude and altitude stability control algorithm under extreme weather conditions. Multidisciplinary Journal of Engineering and Technology. 2025, 2(1): 61-65. DOI: https://doi.org/10.61784/mjet3025.