
Searching by Hearing: Neutralising Visuocentric Barriers to Blind People’s Access to Spatial Information Representation

Borsci, Simone; Federici, Stefano
2011

Abstract

Many studies show that spatial representations of information can be conveyed by sonification methods in a functionally equivalent way, supporting the hypothesis that spatial representation is processed by an amodal system. The aim of this study is to analyse, from the User eXperience (UX) perspective, users' interaction while surfing WhatsOnWeb, a sonified Web search clustering engine that displays information through visuospatial output, applying sophisticated graph visualisation algorithms to semantically clustered data. WhatsOnWeb provides a complete representation of the information within a single browseable page, overcoming the efficiency limitations of the top-down, flat, linear representation that the most common search engines use to present the indexed dataset. A heuristic evaluation and a usability evaluation with end-users – carried out with the Partial Concurrent Thinking Aloud protocol – were conducted. The results show no significant differences between the interactions of blind and sighted subjects: users' ability to perform spatial exploration tasks guided by visual or acoustic cues appears to be both qualitatively and quantitatively equivalent. These results confirm that sonification methods are an effective solution to counteract the visuocentric barriers that currently exclude blind people from accessing information technologies.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/178637