Tell Me More: Automating Emojis Classification for Better Accessibility and Emotional Context Recognition

Valentina Franzoni
Supervision
2022

Abstract

Users of web and chat social networks typically use emojis (e.g., smilies, memes, hearts) in their textual interactions to convey the emotions underlying the context of the communication, aiming for better interpretability, especially for short polysemous phrases. Semantic-based context recognition tools, employed in any chat or social network, can directly comprehend text-based emoticons (i.e., emojis created from combinations of symbols and characters) and translate them into audio information (e.g., via text-to-speech readers for individuals with vision impairment). Image-based emojis, on the other hand, require image-recognition algorithms for a comprehensive understanding of the semantic context. This study explores and compares different classification methods for pictograms, applied to emojis collected from Internet sources. Each emoji is labeled according to Ekman's basic model of six emotional states. The first step extracts emoji features through convolutional neural networks, which are then used to train conventional supervised machine-learning classifiers for comparison. The second experimental step broadens the comparison to deep learning networks. The results reveal that both the conventional and the deep learning classification approaches accomplish the goal effectively, with deep transfer learning exhibiting highly satisfactory performance, as expected.
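
The pipeline of the first experimental step, a pretrained convolutional network used as a fixed feature extractor whose outputs train a conventional supervised classifier, can be illustrated with a minimal sketch. The choice of VGG16 as the backbone and an SVM as the classifier are assumptions for illustration, not the paper's documented setup:

import torch
from torchvision import models, transforms
from sklearn.svm import SVC
from PIL import Image

# Hypothetical sketch: a pretrained CNN as a fixed feature extractor.
# VGG16 is an assumed backbone (newer torchvision prefers weights=...).
cnn = models.vgg16(pretrained=True)
cnn.classifier = cnn.classifier[:-1]  # drop the last layer, keep 4096-d features
cnn.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths):
    # Returns one CNN feature vector per emoji image.
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(cnn(img).squeeze(0).numpy())
    return feats

# Labels follow Ekman's six basic emotional states, as in the abstract.
EKMAN = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
# train_paths and y_train (indices into EKMAN) are assumed to exist:
# clf = SVC(kernel="rbf").fit(extract_features(train_paths), y_train)

Any conventional supervised classifier (k-NN, random forest, etc.) can be swapped in at the last line; the study compares several such options against the deep approaches.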
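
The second experimental step, deep transfer learning, typically replaces the classification head of a pretrained network and fine-tunes it on the target data. Again a minimal sketch under assumed choices (ResNet-18 backbone, frozen feature layers, Adam optimizer); the networks and hyperparameters actually used in the study may differ:

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # Ekman's six emotional states

# Hypothetical transfer-learning setup: freeze the pretrained backbone
# and train only a new classification head on the emoji dataset.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_epoch(loader):
    # loader is assumed to yield (image_batch, label_batch) pairs.
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()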
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/1561953
Citations
  • PMC: ND
  • Scopus: 7
  • Web of Science: 5