Minimally invasive surgery, in which cameras are used to observe the internal anatomy, is the preferred approach for many surgical procedures. Other surgical disciplines likewise rely on microscopic images. As a result, endoscopic and microscopic image processing, as well as surgical vision, are evolving into techniques needed to facilitate computer-assisted interventions (CAI). Algorithms that have been reported for such images include 3D surface reconstruction, salient feature motion tracking, instrument detection, and activity recognition.
Analyzing the surgical workflow is a prerequisite for many applications in computer-assisted surgery (CAS), such as context-aware visualization of navigation information, predicting the tool the surgeon is most likely to need next, or estimating the remaining duration of surgery. Since laparoscopic surgeries are performed using an endoscopic camera, a video stream is always available during surgery, making it the obvious choice as input sensor data for workflow analysis. Furthermore, integrated operating rooms are becoming more prevalent in hospitals, making it possible to access data streams from surgical devices such as cameras, thermoflators, and lights during surgeries.
This project focuses on the online workflow analysis of laparoscopic surgeries. The main goal is to segment surgeries into surgical phases based on the video. The specific objectives are:
• Designing and developing deep architectures for surgical tool detection and for segmenting colorectal surgeries into surgical phases based on the video input (public dataset)
• Applying the developed techniques to prostatectomy (in-house dataset)
• Detecting deviations from typical workflow patterns during surgery
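Online phase segmentation is often framed as per-frame classification followed by causal temporal smoothing, so that a few noisy frame-level predictions do not fragment a phase. The sketch below illustrates only that smoothing step, using a majority vote over a sliding window of past predictions; the phase names and the `smooth_phases_online` function are hypothetical illustrations, not part of the project's actual pipeline.

```python
from collections import Counter, deque

# Hypothetical phase labels for illustration only.
SURGICAL_PHASES = ["preparation", "dissection", "clipping", "resection", "closure"]

def smooth_phases_online(frame_predictions, window=5):
    """Causal majority-vote smoothing of per-frame phase predictions.

    Suitable for online use: each output depends only on the current
    and the previous `window - 1` predictions, never on future frames.
    """
    history = deque(maxlen=window)  # keeps only the last `window` predictions
    smoothed = []
    for pred in frame_predictions:
        history.append(pred)
        # most_common(1) returns the label with the highest count in the window
        smoothed.append(Counter(history).most_common(1)[0][0])
    return smoothed

# A single spurious "dissection" frame is suppressed by its neighbors.
raw = ["preparation", "preparation", "dissection", "preparation",
       "preparation", "dissection", "dissection", "dissection"]
print(smooth_phases_online(raw, window=3))
# → ['preparation', 'preparation', 'preparation', 'preparation',
#    'preparation', 'preparation', 'dissection', 'dissection']
```

Because the window is causal, the smoothed label lags phase transitions by a few frames, a common trade-off in online (as opposed to offline) workflow analysis.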