Medical images such as CT scans are generally grayscale: well suited to specific diagnostic tasks, but less intuitive to interpret than realistic colored representations. Colored medical images could benefit education and visualization, yet existing approaches do not produce satisfactory results. This thesis presents a deep learning method that realistically colorizes medical images, with potential applications in medical education, surgical planning, patient communication, and 3D visualization. The main obstacle is the lack of paired training data: because of the physics of CT acquisition, no real colored counterparts of CT scans exist to serve as ground truth, so we address this by devising a pseudo-CT training approach. Since GAN architectures are widely used for image-to-image translation, we adopted a GAN-based solution for this task. After colorizing each slice and segmenting the organs, we reconstruct the RGB volumes while preserving all the original DICOM metadata, so the resulting files remain compatible with existing medical software. As a practical application of this colorization system, we developed a texturization workflow that transfers the volumetric color information onto 3D mesh surfaces: a point cloud is extracted from the outer surface of the colored volume, aligned to a mesh of the same organ, and processed with MeshLab and Blender to produce high-quality textures while optimizing the mesh geometry. By combining these components, this work provides a methodology for generating realistic colored medical volumes without specialized imaging equipment, together with a case study that extends the colorization to the creation of textured meshes.
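The texturization step summarized above transfers color from the colored volume's outer point cloud onto the vertices of an aligned mesh. A minimal nearest-neighbour sketch of that transfer is shown below; the function name `transfer_colors` is illustrative, not from the thesis, and the actual workflow uses MeshLab's attribute-transfer filters and a KD-tree rather than this brute-force search:

```python
import numpy as np

def transfer_colors(points, colors, mesh_vertices):
    """Assign to each mesh vertex the color of the nearest point in the
    colored point cloud (brute-force nearest neighbour; a real pipeline
    would use a KD-tree such as scipy.spatial.cKDTree)."""
    # (V, P) matrix of squared distances between vertices and cloud points
    d2 = ((mesh_vertices[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    # Index of the closest cloud point for each vertex
    return colors[d2.argmin(axis=1)]

# Toy example: two colored points, two vertices near them
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
colors = np.array([[255, 0, 0], [0, 255, 0]])
verts = np.array([[0.1, 0.0, 0.0], [0.9, 0.0, 0.0]])
vertex_colors = transfer_colors(points, colors, verts)
```

The quadratic memory cost of the distance matrix is why, at the scale of a full organ surface, a spatial index is preferred; the sketch only conveys the mapping itself.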
Deep learning-based colorization of CT scans for medical support applications
BOIMAH, DIVINE ELIKPLIM
2024/2025
| File | Description | Size | Format |
|---|---|---|---|
| 2025_12_Boimah.pdf | Thesis text (not accessible) | 20.22 MB | Adobe PDF |
| 2025_12_Boimah_Executive Summary.pdf | Executive Summary (openly accessible online) | 1.5 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/246185