A Novel Computer Vision-based Control Strategy for a Hip Exosuit to Perform Assistance Modulation on Various Terrains
MOSSINI, MIRKO
2021/2022
Abstract
Over the last few years, Artificial Intelligence has grown exponentially and become a reliable and accurate tool for a seemingly limitless range of applications. Just as vision plays a fundamental role in how humans adapt their walking to external features, similar advantages can be obtained with the help of Computer Vision when designing a control strategy for a lower-limb exoskeleton. In this work, a Deep Learning model able to recognise the environment in the proximity of the subject is implemented within a hip exosuit controller. It distinguishes among three different walking terrains (Incline Stairs, Level Ground, Decline Stairs) through the use of an RGB camera and adapts the assistance accordingly. The system was tested with seven healthy participants walking along an overground path comprising staircases and level ground. Subjects performed the task with the exosuit disabled (Exo Off), with a constant assistance profile (Vision Off), and with assistance modulation (Vision On). Our results showed that the controller was able to promptly classify the path in front of the user in real time with an overall accuracy of 93.0 ± 1.1%, with values above 85% for each class, and to modulate the assistance accordingly. Evaluation of the effects on the user showed that Vision On outperformed the other two conditions: we obtained significantly higher metabolic savings than Exo Off, with a peak of ≈ −20% when climbing up the staircase and ≈ −16% over the whole path, and than Vision Off when ascending or descending stairs. Such advancements may represent a step forward for the exploitation of lightweight walking assistive technologies in real-life scenarios.
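The abstract describes a perception-to-assistance loop: each RGB frame is classified into one of three terrain classes, and the exosuit assistance is modulated according to the detected terrain. A minimal Python sketch of that loop is shown below; the class names, the `modulate_assistance` helper, and the gain values are illustrative assumptions for exposition, not values or code from the thesis.

```python
# Hypothetical sketch of a vision-based assistance-modulation step:
# a terrain class (as produced by a frame classifier) selects a gain
# that scales a nominal hip-assistance torque profile.
from enum import Enum


class Terrain(Enum):
    LEVEL_GROUND = 0
    INCLINE_STAIRS = 1
    DECLINE_STAIRS = 2


# Illustrative terrain-specific gains (fractions of the nominal profile);
# the actual modulation strategy in the thesis may differ.
ASSISTANCE_GAIN = {
    Terrain.LEVEL_GROUND: 0.5,
    Terrain.INCLINE_STAIRS: 1.0,
    Terrain.DECLINE_STAIRS: 0.25,
}


def modulate_assistance(terrain: Terrain, nominal_torque: float) -> float:
    """Scale the nominal assistance torque by the terrain-specific gain."""
    return ASSISTANCE_GAIN[terrain] * nominal_torque


# Example: stair ascent keeps the full nominal assistance.
print(modulate_assistance(Terrain.INCLINE_STAIRS, 10.0))  # -> 10.0
```

In a real controller this selection would run on every classified frame, typically with temporal smoothing (e.g. a majority vote over recent predictions) so that a single misclassified frame does not cause an abrupt assistance change.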
File | Description | Size | Format
---|---|---|---
Mossini_Mirko_MSc_Article.pdf (Open Access from 25/11/2023) | Thesis | 19.2 MB | Adobe PDF
Mossini_Mirko_Executive_Summary.pdf (Open Access from 07/12/2023) | Executive Summary | 1.61 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/197462