A novel vision-based weakly supervised framework for autonomous yield estimation in agricultural applications

Bellocchio E.;Crocetti F.;Costante G.;Fravolini M. L.;Valigi P.
2022

Abstract

Autonomous systems have been established as a ground-breaking technology in agriculture, particularly for resource optimization and labor savings. However, even solutions limited to monitoring activities, such as yield estimation, rely on costly robotic platforms equipped with a series of range devices (e.g., LIDAR and GPS-RTK). Recently, vision-based strategies have gained considerable attention as a less expensive and more efficient alternative, capable of matching or even surpassing approaches that benefit from range sensors. Nonetheless, they rely on deep learning methodologies, which require burdensome labeling procedures for training. To address these shortcomings, we present a novel approach that performs yield estimation using only a monocular camera and a limited amount of supervision information. It detects, localizes, and maps fruits and tree canopies to estimate the total yield of a specific crop. To keep the image labeling effort to a minimum, we propose a weakly supervised paradigm that only requires a simple binary label encoding the presence or absence of fruits in the training images. Our approach makes no assumptions about the underlying platform: images can be collected either with a hand-held camera or with an autonomous robot. Therefore, we considerably reduce the deployment time, energy consumption, and cost of the overall yield estimation system. At the same time, performance remains comparable to both vision-based fully supervised baselines (which require costly labeling operations) and classical systems that rely on more expensive and power-demanding sensors.
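
The weak-supervision idea summarized above can be illustrated with a minimal sketch: a network that produces a spatial "fruitness" map is trained using only an image-level binary label (fruit present / absent), obtained here by globally pooling the map into a single score. This is not the authors' implementation; the PresenceNet architecture, the max-pooling choice, and the dummy data below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class PresenceNet(nn.Module):
    """Tiny CNN predicting a spatial activation map; global pooling turns it
    into one presence logit, so only a binary label is needed (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 1, 1),                    # 1-channel "fruitness" map
        )

    def forward(self, x):
        act_map = self.features(x)                  # (B, 1, H/4, W/4)
        score = torch.amax(act_map, dim=(2, 3))     # image-level presence logit
        return score.squeeze(1), act_map

model = PresenceNet()
criterion = nn.BCEWithLogitsLoss()                  # binary presence/absence label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a placeholder batch.
images = torch.randn(4, 3, 128, 128)                # dummy RGB crops
labels = torch.tensor([1., 0., 1., 1.])             # 1 = fruit visible, 0 = no fruit
optimizer.zero_grad()
logits, _ = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

The activation map that emerges as a by-product of this binary training is what would then support fruit localization and counting, in the spirit of the detection-and-mapping pipeline the abstract describes.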
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/1501148
Citations
  • Scopus 18