

Novel approaches for context-awareness and workflow analysis in surgery

NAKAWALA, HIRENKUMAR CHANDRAKANT

Abstract

Computer-assisted surgery is increasingly becoming part of the operating room (OR). In the future, ORs are expected to include various technologies such as context-awareness and automated surgical workflow analysis with robust sensors, robot-assisted surgical systems, image-guided surgery, virtual and augmented reality systems, and better data analysis and visualization systems. With the inclusion of new technologies, e.g. robot-assisted surgery, and changes in conventional surgical approaches, ORs are becoming densely equipped with sensors that produce multi-modal information. Despite the discernible advancement of OR technologies, surgeons' decision-making capability has not been considerably improved. Moreover, the surgeon's cognitive load increases when analyzing such additional surgical information, which outpaces human analysis capabilities and leads to preventable medical errors. For advanced surgeries, surgical simulators are being used for training, replacing conventional training methods such as lecturing, apprenticeship and video demonstrations. However, current training regimes and simulation-based training systems do not automatically adapt to the training context and surgical workflow. Several aspects can be tackled to potentially overcome these issues, including robust software architectures (to allow easy integration of sensor data and surgical information), recognition of contextual information (to guide and train surgeons on surgical workflows and to filter out unwanted information), and robust surgical workflow analysis (to improve system performance by building and analyzing surgical workflows automatically for context-awareness). Context-aware systems are expected to analyze intraoperative data and understand the surgical situation automatically, e.g. by recognizing surgical steps.
Despite being well researched, context recognition and automatic workflow analysis in surgery have not yet achieved the desired results. Against this background, the overall goal of this thesis is to develop a knowledge-driven context-aware system that reliably recognizes contextual information, assists surgical training and provides decision support. The contributions of this PhD work are: (1) A novel modular knowledge-driven context-aware system framework [H1] was developed, which exploits procedural knowledge and computer vision-based methods to recognize surgical instruments and contextual information. In H1, the framework was implemented using procedural knowledge (in the form of an ontology), 3D image processing and semantic queries, which were used to obtain contextual information on the workflow. The framework was able to retrieve contextual information about the current surgical activity, along with information on the need for, or presence of, a surgical instrument. To show its applicability, the Thoracentesis procedure was chosen as a paradigmatic scenario. Furthermore, based on the framework components, an intelligent training system for Thoracentesis was developed, in which formal knowledge about the procedure workflow and the entities for object recognition is specified in the form of production rules, which are used to recognize the surgical context, i.e. surgical instruments, and to automatically interpret the surgical workflow. Surgical training with the context-aware system showed results similar to mentor-based training. Knowledge-based context recognition and situation interpretation were achieved in pre-operative and intra-operative training scenarios in a laboratory-based setup. (2) A new pipeline [H2] integrating deep learning with knowledge representation techniques, called the "Deep-Onto" network, was developed to recognize surgical workflow.
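The production-rule mechanism described for H1 can be illustrated with a minimal sketch: rules fire when their instrument conditions are satisfied, yielding the current surgical context. The steps, instruments and rules below are illustrative placeholders, not the thesis's actual Thoracentesis knowledge base.

```python
# Minimal sketch of production-rule context recognition. All rule contents
# here are assumed placeholders, not the thesis's actual knowledge base.

def recognise_context(detected_instruments, rules):
    """Fire every rule whose instrument conditions are all satisfied."""
    conclusions = []
    for conditions, conclusion in rules:
        if conditions.issubset(detected_instruments):
            conclusions.append(conclusion)
    return conclusions

# Hypothetical rules: IF these instruments are visible THEN this step is active.
RULES = [
    ({"antiseptic_swab"}, "step: skin disinfection"),
    ({"syringe", "needle"}, "step: local anaesthesia"),
    ({"catheter", "drainage_bag"}, "step: fluid drainage"),
]

print(recognise_context({"syringe", "needle"}, RULES))
# -> ['step: local anaesthesia']
```

In the actual system, the left-hand sides would be populated by the vision pipeline's instrument detections and the right-hand sides by workflow entities drawn from the ontology.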
H2 exploits the "Deep-Onto" network, which takes advantage of the formal knowledge and semantic relations encoded in the ontology to recognize surgical steps and other low-level surgical activities. Deep learning models were used to recognize the step in progress and the consecutive step. Once the surgical step is recognized, the information is forwarded to the ontology to find other low-level surgical entities. We also developed a video annotation dataset on the Robot-Assisted Partial Nephrectomy (RAPN) procedure, which was used in this work. The network achieved an approximate accuracy of 77.8±3.9%, with 67.3±29.3% recall, 83.4±21.2% precision and a 68.8±20.5% F1 score. (3) A new data mining approach [H3] using inductive logic programming (ILP) was used to analyze the video annotation data, to construct a surgical process model and to analyze contextual information. In H3, we proposed a data mining approach using ILP to automatically derive production rules, in first-order logic, from intraoperative video data. These rules were constructed either to represent the surgical process model or to understand the surgical context by analyzing semantic relations between surgical entities. Data mining was performed on the Thoracentesis and RAPN video annotation datasets. We found that ILP could learn procedure-specific workflow rules, analyze contextual information, and could be a strong technique for context-awareness in surgery. The overall results presented in this thesis demonstrate that a knowledge-driven context-aware system, comprising knowledge representation, computer vision, machine learning and logic programming techniques, has the potential to be a critical part of solving the problem of context-awareness and automated surgical workflow analysis.
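The idea of mining first-order workflow rules from annotation data can be conveyed with a much-simplified sketch: counting ordered step pairs across annotated cases and emitting `follows/2` facts that hold with sufficient support. This is only in the spirit of the thesis's ILP approach; the step names and support threshold are assumed placeholders.

```python
# Simplified sketch of mining workflow-ordering rules from annotated
# sequences, far weaker than a real ILP system but analogous in spirit.
# The case data below are assumed placeholders.
from collections import Counter

def mine_follows_rules(sequences, min_support=2):
    """Return 'follows(B, A)' rules for step pairs seen in >= min_support cases."""
    pair_counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive step pairs
            pair_counts[(a, b)] += 1
    return [f"follows({b}, {a})" for (a, b), n in pair_counts.items()
            if n >= min_support]

cases = [
    ["disinfection", "anaesthesia", "drainage"],
    ["disinfection", "anaesthesia", "drainage"],
    ["disinfection", "drainage"],    # an atypical case, below support
]
print(mine_follows_rules(cases))
# -> ['follows(anaesthesia, disinfection)', 'follows(drainage, anaesthesia)']
```

A genuine ILP learner would instead search a hypothesis space of first-order clauses over background knowledge, but the output shape — procedure-specific ordering rules induced from examples — is the same.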
The progress made in this thesis shows that combining these technologies helps to extract the contextual information underlying surgical workflows, overcoming potential barriers to context-awareness in surgery, which could be helpful for surgical assistance. Looking ahead, this work is a small step within the broader framework of context-aware surgical systems and autonomous and semi-autonomous robotic systems that support the surgeon during surgical activities.
ALIVERTI, ANDREA
GUAZZONI, CHIARA
16 May 2018
Doctoral thesis
Attached files

Novel approaches for context-awareness and workflow analysis in surgery.pdf
Description: PhD Thesis - Hirenkumar Chandrakant Nakawala
Size: 16.44 MB
Format: Adobe PDF
Access: authorized users only from 27/04/2021

Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10589/140373