Automatic speech analysis for early detection of functional cognitive impairment in elderly population
CAIELLI, MATTEO
2018/2019
Abstract
The main objective of this study is to show that automatic voice analysis of non-linguistic content during spontaneous speech can provide useful information for cognitive assessment and can serve as an additional objective tool for diagnostic support. Voice recordings from 153 Italian subjects (age >65 years; group 1: 47 subjects with MMSE > 26; group 2: 46 subjects with 20 ≤ MMSE ≤ 26; group 3: 45 subjects with 10 < MMSE < 20) were collected. Voice samples were processed with custom MATLAB-based software to derive a broad set of known acoustic features (namely, percentage of unvoiced signal; mean fundamental frequency; mean, median, and 15th and 85th percentiles of the durations of voiced and unvoiced segments; shimmer; percentage of voice breaks; standard deviation of the third formant; speech rate and articulation rate; phonation percentage; and mean duration of syllables and pauses). Linear mixed model analyses were performed to select the features able to significantly distinguish among the three groups, and between pairs of groups in the pairwise comparisons. The selected features, alone or together with age and years of education, were used to build a learning-based classifier for both the multiclass task and the three binary classification tasks. Leave-one-out cross-validation was used for testing, and classifier accuracy was computed. In the multiclass case the best performance was an accuracy of 67%. In the comparison between Groups 1 and 2 the best result was an accuracy of 81%; in the comparison between Groups 2 and 3 an accuracy of 68% was achieved; lastly, in the comparison between Groups 1 and 3 an accuracy of 94% was reached. The setting of greatest interest was the binary comparison between Groups 1 and 2, since it represents the transition from healthy subjects to subjects with MCI. In this case, an overall accuracy of 75% was found when the significant voice features were used alone.
The accuracy increased to 81% when age and years of education were included. These performances are lower than the 86% accuracy reported in a recent study; however, that classification was based on several tasks, including more cognitively demanding ones. This result acts as a proof of concept that acoustic features, derived for the first time solely from an ecological continuous-speech task, can discriminate among different levels of cognitive decline. This study lays the basis for the development of a mobile application performing automatic voice analysis on the fly during phone conversations. Such an app could serve as a tool for daily, transparent, and non-intrusive monitoring of the cognitive function of elderly people, with the ultimate aim of favoring an early diagnosis of cognitive decline.
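The evaluation protocol summarized above (a classifier trained on selected acoustic features plus age and education, tested with leave-one-out cross-validation) can be sketched as follows. This is only an illustrative Python/scikit-learn version under stated assumptions: the thesis used custom MATLAB software, its classifier type is not specified in the abstract (logistic regression is assumed here), and the data below are synthetic placeholders, not the study's data.

```python
# Illustrative sketch of leave-one-out cross-validation for the binary
# Group 1 vs Group 2 comparison. Synthetic data; the feature columns and
# the logistic-regression classifier are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 93  # Group 1 (47 subjects) + Group 2 (46 subjects)
# Hypothetical columns: % unvoiced signal, mean F0, speech rate, age, education
X = rng.normal(size=(n, 5))
y = np.array([0] * 47 + [1] * 46)  # 0: MMSE > 26, 1: 20 <= MMSE <= 26

# One model fit per subject; each held-out subject is classified once.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.2f}")
```

With leave-one-out, each of the 93 subjects is held out exactly once, so the reported accuracy is simply the fraction of subjects classified correctly.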
File | Description | Size | Format
---|---|---|---
TM_MatteoCaielli_2019.pdf | Thesis text (openly accessible online) | 13.43 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/147916