Integration of an Artificial Intelligence Model into a Smartphone Flutter Application to Solve a Live Image Classification Problem
Tasso, Sergio; Gervasi, Osvaldo; Perri, Damiano
2024
Abstract
The object of study is a live image classification problem in the mobile environment and the implementation of mechanisms to integrate an image classification model, leveraging TensorFlow Lite, into a smartphone application built with Google's Flutter framework and written in Dart. The functionality has been implemented within a cultural outreach application and allows users, during an art visit, to frame an artwork with the camera of their device and obtain its name on the screen in real time. Each captured frame is sent as input to the classifier, which, after performing the classification, returns the associated label (the name of the artwork), which is then displayed on the screen. After a brief introduction to the preliminary knowledge necessary for a complete understanding of the object of study (integration of the camera into the app and conversion of the classification model into TensorFlow Lite format), the tools used to implement the functionality are described, centred on the main plugin employed, namely tflite_flutter. Finally, to ensure that the feature performs well on real devices, the concept of an Isolate in the Flutter framework is presented, together with the mechanisms for creating a new Isolate in which to perform the classification and for communicating between Isolates, so that the output returned by the image classifier can be displayed in the application's user interface.
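The Isolate-based pipeline the abstract describes can be sketched in Dart as follows. This is a minimal illustration, not the paper's implementation: the asset name `assets/artworks.tflite`, the label count, and the message layout are hypothetical, while `Interpreter.fromAsset`, `Interpreter.run`, and the `dart:isolate` port primitives are the real tflite_flutter and Dart APIs.

```dart
import 'dart:isolate';

import 'package:tflite_flutter/tflite_flutter.dart';

const int numLabels = 10; // hypothetical number of artwork classes

// Entry point for the background Isolate: loads the TFLite model once,
// then classifies every preprocessed frame received over its port and
// sends the raw scores back to the main Isolate.
Future<void> classifierEntry(SendPort mainSendPort) async {
  // Hypothetical asset path; the model is bundled with the app.
  final interpreter = await Interpreter.fromAsset('assets/artworks.tflite');

  // Hand the main Isolate a SendPort it can use to submit frames.
  final framePort = ReceivePort();
  mainSendPort.send(framePort.sendPort);

  await for (final message in framePort) {
    final input = message[0];               // preprocessed frame tensor
    final replyTo = message[1] as SendPort; // where to send the result

    final output = List.filled(numLabels, 0.0).reshape([1, numLabels]);
    interpreter.run(input, output);

    // The main Isolate maps the score vector to the artwork's name
    // and renders it in the user interface.
    replyTo.send(output);
  }
}

// On the main Isolate, the background classifier is started with:
//   final handshake = ReceivePort();
//   await Isolate.spawn(classifierEntry, handshake.sendPort);
//   final framePort = await handshake.first as SendPort;
// after which each camera frame is posted to framePort together with
// a SendPort on which the scores come back.
```

Running inference in a separate Isolate keeps the per-frame classification off the main Isolate's event loop, so the camera preview and UI stay responsive; ports are the only way two Dart Isolates can exchange messages, since they share no memory.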