
Towards trajectory following and vision-based collision avoidance for Micro Aerial Vehicles with Deep Reinforcement Learning

Brilli R.; Dionigi A.; Legittimo M.; Crocetti F.; Leomanni M.; Fravolini M. L.; Costante G.
2026

Abstract

Enabling Micro Aerial Vehicles to autonomously follow a reference position trajectory while effectively avoiding collisions is a task of paramount importance in many aerial applications. Most approaches to this problem employ modular navigation architectures, which suffer from inherent drawbacks, including the dependence on costly sensing equipment and the reliance on detailed information about the environment and about the shape and distribution of obstacles. On the other hand, most modern data-driven approaches exhibit numerous limitations and primarily address simplified versions of the task, often entailing tight constraints on the mobility of the drone. In this paper, we employ the Deep Reinforcement Learning framework to train a control policy that addresses the trajectory-following and collision-avoidance problems in a unified manner using visual data. Unlike most existing methods, our model requires only pose measurements and depth maps captured by a front-looking camera. Moreover, it operates without assumptions about the shape, size and placement of obstacles and permits the drone to freely navigate the three-dimensional space. Through comprehensive evaluations in photo-realistic simulated environments and in a mixed-reality setting, both involving a variety of static obstacles, we validate the effectiveness and generalization capabilities of our strategy against different state-of-the-art baselines.
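The abstract describes a single policy that handles trajectory following and collision avoidance jointly from pose measurements and a front-looking depth map. The paper's actual reward formulation is not reproduced in this record; the following is a minimal illustrative sketch of how such a unified objective could be composed, where the function names, weights and the `safe_dist` threshold are all assumptions for illustration, not the authors' design.

```python
import math

def tracking_error(pose, ref_pose):
    """Euclidean distance between the drone position and the reference point."""
    return math.dist(pose, ref_pose)

def collision_penalty(depth_map, safe_dist=1.0, weight=5.0):
    """Illustrative proximity penalty from a front-looking depth map:
    grows linearly as the nearest depth reading drops below safe_dist,
    and is zero when all obstacles are farther than safe_dist."""
    d_min = min(min(row) for row in depth_map)
    return weight * max(0.0, safe_dist - d_min)

def unified_reward(pose, ref_pose, depth_map):
    """Hypothetical unified reward: reward accurate tracking (negative
    tracking error) while penalizing proximity to obstacles."""
    return -tracking_error(pose, ref_pose) - collision_penalty(depth_map)

# On the reference and far from obstacles: zero penalty, zero error.
print(unified_reward((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), [[2.0, 3.0]]))  # 0.0
```

A single scalar reward of this kind lets one policy trade off both objectives during training, in contrast with modular pipelines that solve tracking and avoidance in separate stages.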
Files associated with this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/1612334
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 1