Comparison of DSO and ORB-SLAM3 in Low-Light Environments With Auxiliary Lighting and Deep Learning Based Image Enhancing
Crocetti F.; Brilli R.; Dionigi A.; Fravolini M. L.; Costante G.; Valigi P.
2025
Abstract
In the evolving landscape of robotic navigation, the demand for solutions capable of operating in challenging scenarios, such as low-light environments, is increasing. This study investigates the performance of two state-of-the-art (SOTA) visual simultaneous localization and mapping (V-SLAM) algorithms, direct sparse odometry (DSO) and ORB-SLAM3, in their monocular implementations, in dark indoor scenarios where the only light source is an auxiliary lighting system installed on the robot. A modified Pioneer 3-DX robot, equipped with a monocular camera, LED bars, and a lux meter, is used to collect a novel data set, “LUCID—Lighting Up Campus Indoor Spaces Data Set,” in real-world, low-light indoor environments. The data set includes image sequences enhanced using a generative adversarial network (GAN) to simulate varying levels of image enhancement. Through comprehensive experiments, we assess the performance of the V-SLAM algorithms, considering the critical balance between maintaining adequate auxiliary illumination and applying image enhancement. This study provides insights into the optimization of robotic navigation in low-light conditions, paving the way for more robust and reliable autonomous navigation systems.


