
Hand-crafted vs. deep CNN features to distinguish benign from malignant lesions in breast ultrasound images

Muhammad Usama Khan; Francesco Bianconi; 2025

Abstract

Breast ultrasound imaging (BUS) plays a major role in the diagnosis of breast cancer. Consequently, computerised analysis of BUS images is being intensively investigated as a means to assist physicians in clinical decision making. The traditional approach relies on manually designed (hand-crafted) morphology and texture features, although Deep Learning (DL) with convolutional neural networks (CNNs) has received much interest in recent years. Advances in AI research have demonstrated the effectiveness of CNN models for classification tasks; however, this approach requires large datasets, extensive computational resources and solid knowledge of CNN architecture design. Deep feature extraction from pre-trained, state-of-the-art CNNs is an easy-to-implement alternative that exploits the information already learnt on large datasets (e.g., ImageNet) while saving substantial computational time and resources. In this work we compare the effectiveness of deep features from three pre-trained CNNs with that of five types of hand-crafted features for differentiating benign from malignant lesions in BUS images. The analysis was carried out on 252 lesions (154 benign, 98 malignant) from the recently released BrEaST dataset. The effects of image preprocessing (resizing and denoising) and of the area from which the features are computed (region of interest, ROI, or bounding box around the ROI) are also investigated. The results show that the deep features outperformed the hand-crafted ones in terms of accuracy (best accuracy 81.4% vs. 78.6%), demonstrating the feasibility of the approach.
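To make the deep-feature approach described in the abstract concrete, the sketch below shows how pooled features can be extracted from an ImageNet pre-trained CNN with PyTorch/torchvision. The choice of backbone (ResNet-50), the input size and the normalisation statistics are illustrative assumptions only and do not reflect the exact networks or preprocessing used in the paper.

```python
# Minimal sketch of deep feature extraction from a pre-trained CNN.
# Backbone, input size and preprocessing below are illustrative assumptions,
# not the exact pipeline used in the paper.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load an ImageNet pre-trained backbone and drop its classification head,
# so the network returns the pooled feature vector instead of class scores.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # output: 2048-D feature vector
backbone.eval()

# BUS images are greyscale, so the single channel is replicated to the
# three channels the network expects, then resized and normalised.
preprocess = T.Compose([
    T.Grayscale(num_output_channels=3),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(image_path: str) -> torch.Tensor:
    """Return the deep feature vector of one lesion crop (ROI or bounding box)."""
    img = Image.open(image_path)
    x = preprocess(img).unsqueeze(0)      # shape: (1, 3, 224, 224)
    with torch.no_grad():
        return backbone(x).squeeze(0)     # shape: (2048,)

# Hypothetical usage:
# feats = deep_features("lesion_001_bbox.png")
```

The resulting feature vectors would then feed a conventional classifier (e.g., a support vector machine) trained to separate benign from malignant lesions; the classifier and validation scheme are not specified here and would follow the paper's experimental setup.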

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/1601814