Explainable AI for colorectal cancer detection using vision transformers
SALAH ABDELFATAH ABDELSALAM MOHAMED
2024/2025
Abstract
Colorectal cancer is one of the most common and lethal malignancies worldwide, characterized by the uncontrolled growth of cells in the colon or rectum. Early detection through medical imaging is critical for improving patient outcomes, as timely diagnosis enables effective treatment and reduces mortality rates. Recent advances in deep learning have demonstrated significant potential in medical image analysis, yet the lack of explainability in many models poses challenges for clinical adoption. This work explores the application of Vision Transformer (ViT) models for colorectal cancer detection and places a strong emphasis on explainability, employing techniques that highlight the salient regions of an image contributing to the model's predictions, thereby providing transparency and fostering trust among medical practitioners. The ViT-base-224 model was trained on two benchmark datasets, Kvasir and Colorectal Histology-MNIST, achieving accuracies of 92.49% and 97.18%, respectively. To assess model explainability, two methods were employed: Attention Rollout and Grad-CAM. The quality of explanations was evaluated using the average drop and faithfulness metrics. Results showed that Attention Rollout generated more faithful and confidence-preserving saliency maps on the Kvasir dataset (average drop = 0.29, faithfulness = 0.42) than on Histology-MNIST (average drop = 0.50, faithfulness = 0.28), indicating better visual localization of relevant features in endoscopy images. A comparison with Grad-CAM confirmed the superior alignment and interpretability of attention-based explanations for transformer architectures. Overall, the findings demonstrate the effectiveness of Vision Transformers in both classification performance and explainability for medical imaging, while also emphasizing the impact of dataset characteristics and model training paradigms on interpretability quality.
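The abstract does not include the training setup, so the following is only a minimal fine-tuning sketch under stated assumptions: the publicly available `google/vit-base-patch16-224` checkpoint from Hugging Face Transformers, 8 output classes (the class count of the public Kvasir v2 and Kather colorectal-histology datasets), and generic hyperparameters. The thesis code may differ in all of these choices.

```python
import torch
from transformers import ViTForImageClassification

NUM_CLASSES = 8   # assumption: both public datasets have 8 classes
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed checkpoint: ImageNet-pretrained ViT-Base/16 at 224x224 resolution.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=NUM_CLASSES,
    ignore_mismatched_sizes=True,   # replace the 1000-class ImageNet head
).to(DEVICE)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_one_epoch(loader):
    """One pass over a DataLoader yielding (pixel_values, labels) batches."""
    model.train()
    for pixel_values, labels in loader:          # pixel_values: (B, 3, 224, 224)
        outputs = model(pixel_values=pixel_values.to(DEVICE),
                        labels=labels.to(DEVICE))
        outputs.loss.backward()                  # cross-entropy computed by the model
        optimizer.step()
        optimizer.zero_grad()
```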
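Attention Rollout (Abnar and Zuidema, 2020) aggregates the attention matrices of all transformer layers: attention heads are fused (here by averaging), the identity matrix is added to account for residual connections, rows are re-normalized, and the per-layer matrices are multiplied together; the [CLS] row of the result gives a per-patch saliency map. The sketch below assumes the Hugging Face ViT interface (`output_attentions=True`); the thesis may use a different head-fusion or filtering strategy.

```python
import torch

@torch.no_grad()
def attention_rollout(model, pixel_values):
    """Per-patch saliency for a single image (batch size 1)."""
    out = model(pixel_values=pixel_values, output_attentions=True)
    rollout = None
    for attn in out.attentions:                          # (1, heads, tokens, tokens) per layer
        a = attn.mean(dim=1)[0]                          # fuse heads -> (tokens, tokens)
        a = a + torch.eye(a.size(-1), device=a.device)   # add identity for the residual path
        a = a / a.sum(dim=-1, keepdim=True)              # re-normalize rows
        rollout = a if rollout is None else a @ rollout  # propagate across layers
    return rollout[0, 1:]                                # [CLS] row, patch columns only
```

For ViT-Base/16 at 224x224 this returns 196 values, which can be reshaped to a 14x14 grid and upsampled to the image size to produce the overlay saliency maps compared against Grad-CAM.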
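The abstract reports average drop and faithfulness scores without their formulas. A commonly used definition of average drop (Chattopadhay et al., Grad-CAM++) and one correlation-based notion of faithfulness are given below as assumptions; the exact definitions used in the thesis may differ.

$$
\mathrm{AverageDrop} = \frac{1}{N}\sum_{i=1}^{N}\frac{\max\!\left(0,\; y_i^{c} - o_i^{c}\right)}{y_i^{c}}
$$

where $y_i^{c}$ is the model's confidence for the predicted class $c$ on the original image $i$ and $o_i^{c}$ is the confidence when the input is masked by the saliency map (lower is better).

$$
\mathrm{Faithfulness} = \operatorname{corr}_{k}\!\left(s_k,\; y^{c} - y^{c}_{\setminus k}\right)
$$

where $s_k$ is the saliency assigned to patch $k$ and $y^{c}_{\setminus k}$ is the confidence after occluding that patch (higher is better).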
| File | Size | Format |
|---|---|---|
| 2025_07_Salah_Executive Summary_02.pdf (accessible online only to authorized users) | 1.4 MB | Adobe PDF |
| 2025_07_Salah_Thesis_01.pdf (accessible online only to authorized users) | 12.96 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/240721