Mixed Reality and deep learning approaches in neurosurgery for craniotomy procedure
Imperiali, Martina
2021/2022
Abstract
Meningiomas account for 37.6% of primary central nervous system tumors. Their surgical removal requires a craniotomy, an opening of the skull that also involves a skin incision and opening of the dura mater. In neurosurgery, an imaging technique is necessary to complete a tumor resection: typically, an MRI scan is displayed on a neuronavigation system that supports both planning and intraoperative guidance. Although neuronavigation has been the gold standard since the early 2000s, it presents several limitations, including bulkiness, high cost, and the need for the neurosurgeon to shift their gaze from the surgical field to screens that display only 2D images. In addition, image-guided surgery can be used only after the patient's head has been fixed in a three-pin skull clamp, which can lead to suboptimal positioning both for lesion targeting and for surgeon ergonomics. To overcome these issues, a Mixed Reality (MR) application for automatic craniotomy was developed on the Microsoft HoloLens2, leveraging its light weight, freehand operation, and ability to project 3D objects into the physical world. An experimental campaign was conducted to assess its feasibility: a shorter path between the craniotomy surface and the tumor points was obtained for 9 of the 10 neurosurgeons involved, and a mean percentage change of 13% between the center of the MR-planned craniotomy and the one performed with the standard neuronavigation system supports the possible substitution. To achieve a fully automatic procedure, a convolutional neural network (CNN) was trained to segment meningiomas from T1-weighted MRI scans. A Dice coefficient of almost 90% and a mean Hausdorff distance of 9.66 mm were obtained. Thus, the proposed MR- and deep learning (DL)-based workflow can support craniotomy procedures, replacing the neuronavigation system and manual brain tumor segmentation.
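For reference, the two segmentation metrics cited above (Dice coefficient and Hausdorff distance) can be computed as in the minimal sketch below. This is only an illustrative example using NumPy and SciPy on hypothetical binary masks with an assumed voxel spacing; it is not the evaluation code used in the thesis.

```python
# Illustrative sketch (not the thesis code): Dice coefficient and symmetric
# Hausdorff distance between a predicted and a ground-truth binary mask.
# Mask shapes and voxel spacing below are assumptions for the example.
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2*|P intersect T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


def hausdorff_distance(pred: np.ndarray, truth: np.ndarray,
                       spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance (in mm) between the two mask surfaces."""
    # Convert voxel indices to physical coordinates using the assumed spacing.
    p = np.argwhere(pred.astype(bool)) * np.asarray(spacing)
    t = np.argwhere(truth.astype(bool)) * np.asarray(spacing)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])


if __name__ == "__main__":
    # Toy 3D masks standing in for a CNN prediction and a manual annotation.
    truth = np.zeros((32, 32, 32), dtype=bool)
    truth[10:20, 10:20, 10:20] = True
    pred = truth.copy()
    pred[18:22, 10:20, 10:20] = True  # simulate a slight over-segmentation
    print(f"Dice: {dice_coefficient(pred, truth):.3f}")
    print(f"Hausdorff: {hausdorff_distance(pred, truth):.2f} mm")
```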
| File | Size | Format | Access |
|---|---|---|---|
| 2023_05_Imperiali.pdf | 2.51 MB | Adobe PDF | openly accessible on the internet |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/206932