MA-VIED: A Multisensor Automotive Visual Inertial Event Dataset

Mollica G.; Felicioni S.; Legittimo M.; Costante G.; Valigi P.
2024

Abstract

Visual Inertial Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) have attracted increasing interest in both the consumer and racing automotive sectors in recent decades. With the introduction of novel neuromorphic vision sensors, it is now possible to accurately localize a vehicle even under complex environmental conditions, leading to an improved and safer driving experience. In this paper, we propose MA-VIED, a large-scale driving dataset that collects race-track-like loops, maneuvers, and standard driving scenarios in a rich multisensor package. MA-VIED provides highly accurate IMU data, standard and event camera streams, and RTK position data from a dual-antenna GPS setup, hardware-synchronized with all camera and IMU streams. In addition, we collect accurate wheel odometry and other data from the vehicle's CAN bus. The dataset contains 13 sequences recorded in urban, suburban, and racetrack-like environments under varying lighting conditions and driving dynamics. We provide ground-truth RTK data for algorithm evaluation, as well as calibration sequences for both the IMU and the cameras. We then present three tests demonstrating the suitability of MA-VIED for monocular VIO applications, using state-of-the-art VIO algorithms and an EKF-based sensor fusion solution. The experimental results show that MA-VIED can support the development and prototyping of novel automotive-oriented frame-based and event-based monocular VIO algorithms.

Use this identifier to cite or link to this item: https://hdl.handle.net/11391/1568803