Understanding Query Formulation Strategies to Mitigate Bias in Web Search
MELLATDOUST, PARINAZ
2023/2024
Abstract
The ubiquity of search engines has transformed how individuals access and consume information, but the algorithms powering these tools can introduce biases that affect the user experience. This study examines user needs in handling search bias, exploring why people might introduce bias into their queries and the strategies they use for complex, bias-prone searches. In the pilot study, conducted online via Google Meet in 2024 with 10 volunteer participants aged 18 to 44, each participant was randomly sent a sentence drawn from a set of common myths and asked to verify it with the Google search engine within 3 attempts. The goal was to understand user behaviors, strategies, and the crucial aspects considered in query formulation and result assessment, which can then be translated into features for a system that supports users during online search. The trustworthiness of information sources emerged as a critical factor, with 40% of participants indicating that they only trusted results from "valid references." The most common query formats were "sentence" and "question," and participants were often satisfied with the Google Featured Snippet, rarely exploring further results. Interestingly, participants' confidence in the answers increased the more they explored the information space, although they tended to review fewer results over successive attempts, digging deeper only when searching for specific characteristics such as keywords and sources. The findings show that while users are able to formulate queries that bring them close to the desired results, it remains challenging for them to target specific characteristics in the search results, and they must manually sift through the answers to find what they are seeking. This problem persists despite users' attempts to refine their queries, because they are unable to effectively translate the sought-after characteristics into the query itself. Through a human-centered approach, this work suggests novel functionalities to address these situations.
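As an illustration of the gap the abstract describes, the sketch below shows one way desired result characteristics (trusted sources and must-have keywords) could be mapped onto standard Google query syntax, namely exact-phrase quoting and the site: operator. This is a minimal, hypothetical helper written for illustration only; the function name, parameters, and example domains are assumptions and do not represent the functionalities proposed in the thesis.

```python
# Hypothetical sketch: translate result characteristics that users struggle to
# express (trusted sources, required keywords) into standard Google query
# syntax (quoted phrases, the site: operator). Illustrative only.

def augment_query(claim, required_keywords=None, trusted_domains=None):
    """Return one query per trusted domain, plus an unrestricted fallback."""
    required_keywords = required_keywords or []
    trusted_domains = trusted_domains or []

    # Quote multi-word keywords so they are matched as exact phrases.
    keyword_part = " ".join(
        f'"{kw}"' if " " in kw else kw for kw in required_keywords
    )
    base = " ".join(part for part in (claim, keyword_part) if part)

    # Restrict each query to one trusted domain via site:, keep a plain query too.
    queries = [f"{base} site:{domain}" for domain in trusted_domains]
    queries.append(base)
    return queries


if __name__ == "__main__":
    for q in augment_query(
        claim="cracking knuckles causes arthritis",
        required_keywords=["clinical study"],
        trusted_domains=["nih.gov", "who.int"],  # example domains, not from the study
    ):
        print(q)
```

Under these assumptions, a single verification task would fan out into a few source-restricted queries, which is one plausible way to relieve users of manually sifting results for trustworthy references.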
File: 2024_7_Mellatdoust.pdf (996.03 kB, Adobe PDF), openly accessible online.
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/223846