Identification of the size distribution of SEM particles by conventional texture descriptors and deep features from pre-trained convolutional networks
Francesco Bianconi; Mario Luca Fravolini; Cinzia Buratti; Giulia Pascoletti; Elisabetta M. Zanetti
2026
Abstract
Analysing the particle size distribution (PSD) of nanomaterials is essential for many scientific and industrial applications, as the PSD provides important information about the spread, average size, and range of particle dimensions within a sample; all of these properties have a substantial impact on the characteristics and behaviour of the material. In this work we investigated image-based PSD recognition of nanoparticles from synthetic SEM images. For this task we considered an image classification pipeline based on five conventional (hand-crafted) texture descriptors and five pre-trained convolutional neural networks (CNNs). Fusion strategies for combining the best-performing feature sets were also investigated. We found that both the conventional and the CNN-based features achieved good performance (best accuracy 87.6% and 88.2%, respectively), whereas the optimal performance (90.2%) was obtained by combining the two best-performing CNN-based feature sets. Altogether, the results are promising and demonstrate the feasibility of the method.
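The fusion strategy mentioned in the abstract (combining two feature sets extracted from pre-trained CNNs) is commonly realised as feature-level concatenation after per-vector normalisation. The sketch below illustrates that idea on synthetic stand-in data; the random feature matrices, the four PSD classes, and the 1-NN classifier are all illustrative assumptions, not the paper's actual networks, fusion rule, or classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for features extracted from two pre-trained CNNs;
# in the paper these would come from the networks' intermediate layers.
n_train, n_test, d1, d2 = 40, 10, 64, 32
X1_train = rng.normal(size=(n_train, d1))
X2_train = rng.normal(size=(n_train, d2))
y_train = rng.integers(0, 4, size=n_train)   # four hypothetical PSD classes

# Test images simulated as slightly perturbed copies of known samples
X1_test = X1_train[:n_test] + 0.05 * rng.normal(size=(n_test, d1))
X2_test = X2_train[:n_test] + 0.05 * rng.normal(size=(n_test, d2))
y_test = y_train[:n_test]

def l2_normalise(X):
    """Scale each feature vector to unit length so neither set dominates."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def fuse(Xa, Xb):
    """Feature-level fusion: normalise each set, then concatenate."""
    return np.hstack([l2_normalise(Xa), l2_normalise(Xb)])

F_train = fuse(X1_train, X2_train)   # shape (40, 96)
F_test = fuse(X1_test, X2_test)

# Simple 1-nearest-neighbour classifier on the fused features
dists = np.linalg.norm(F_test[:, None, :] - F_train[None, :, :], axis=2)
y_pred = y_train[np.argmin(dists, axis=1)]
accuracy = float(np.mean(y_pred == y_test))
```

Normalising each set before concatenation is the key design choice: without it, the feature set with the larger dynamic range would dominate the distance computation and the fusion would degenerate to a single descriptor.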


