
Autonomous Single-Image Drone Exploration with Deep Reinforcement Learning and Mixed Reality

Devo A.; Costante G.
2022

Abstract

Autonomous exploration is a longstanding goal of the robotics community, and aerial drone navigation has proven especially challenging: stringent requirements on cost, weight, maneuverability, and power consumption make it hard to employ existing exploration approaches or adapt them to different types of environments. End-to-end Deep Reinforcement Learning (DRL) techniques based on convolutional network approximators, which offer constant-time computation, predefined memory usage, and strong visual perception capabilities, are a very promising alternative to current state-of-the-art solutions that rely on metric environment reconstruction. In this work, we address the autonomous exploration problem for aerial robots equipped with a monocular camera using DRL. Specifically, we propose a novel asymmetric actor-critic model for drone exploration that efficiently leverages ground-truth information provided by the simulator to speed up learning and improve final exploration performance. Furthermore, to reduce the sim-to-real gap for exploration, we present a novel mixed-reality framework that allows an easier, smoother, and safer transition from simulation to the real world. Both aspects further exploit the potential of simulation engines and reduce the risk of deploying algorithms on a physical platform with no intermediate step between simulation and the real world, a practice well known to raise safety concerns, especially with aerial vehicles. Experimental results with a drone exploring multiple environments show the effectiveness of the proposed approach.
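For illustration, the core idea of an asymmetric actor-critic is that the critic may consume privileged simulator information that the actor never sees, so the deployed policy still needs only the onboard monocular image. The following is a minimal sketch in a PyTorch style; the network sizes, the 84x84 input resolution, the discrete action count, and the contents of the privileged state are illustrative assumptions, not the authors' architecture.

```python
# Minimal asymmetric actor-critic sketch (illustrative assumptions, not the
# paper's exact model). The actor sees only the monocular RGB image, as on the
# real drone; the critic additionally receives privileged ground-truth state
# that only the simulator can provide (e.g. pose or an occupancy summary).
import torch
import torch.nn as nn


def conv_encoder(out_dim: int = 256) -> nn.Sequential:
    """Small CNN mapping an 84x84 RGB frame to a feature vector."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, out_dim), nn.ReLU(),
    )


class AsymmetricActorCritic(nn.Module):
    def __init__(self, n_actions: int, privileged_dim: int):
        super().__init__()
        # Actor branch: image-only policy, deployable on the physical platform.
        self.actor_encoder = conv_encoder()
        self.policy_head = nn.Linear(256, n_actions)
        # Critic branch: image features concatenated with privileged state.
        self.critic_encoder = conv_encoder()
        self.value_head = nn.Sequential(
            nn.Linear(256 + privileged_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, image: torch.Tensor, privileged: torch.Tensor):
        logits = self.policy_head(self.actor_encoder(image))
        value = self.value_head(
            torch.cat([self.critic_encoder(image), privileged], dim=-1)
        )
        return logits, value


# Usage: the critic and its privileged input are needed only during training;
# at deployment the actor alone maps camera frames to drone commands.
model = AsymmetricActorCritic(n_actions=4, privileged_dim=16)
logits, value = model(torch.zeros(1, 3, 84, 84), torch.zeros(1, 16))
```

Because the privileged input feeds only the value estimate, the variance of the policy-gradient updates is reduced during training while the resulting policy remains deployable from a single onboard camera.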
Files associated with this product:
No files are associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/1536014
Citations
  • PMC: ND
  • Scopus: 10
  • ISI: 9