Minimally invasive surgery, in which cameras are used to observe the internal anatomy, is the preferred approach for many surgical procedures. Furthermore, other surgical disciplines rely on microscopic images. As a result, endoscopic and microscopic image processing, as well as surgical vision, are evolving as techniques needed to facilitate computer-assisted interventions (CAI). Algorithms that have been reported for such images include 3D surface reconstruction, salient feature motion tracking, instrument detection and activity recognition.
Analyzing the surgical workflow is a prerequisite for many applications in computer-assisted surgery (CAS), such as context-aware visualization of navigation information, predicting the instrument the surgeon will most probably require next, or estimating the remaining duration of surgery. Since laparoscopic surgeries are performed using an endoscopic camera, a video stream is always available during surgery, making it the obvious choice of input sensor data for workflow analysis. Furthermore, integrated operating rooms are becoming more prevalent in hospitals, making it possible to access data streams from surgical devices such as cameras, insufflators and lights during surgeries.
This project focuses on the online workflow analysis of laparoscopic surgeries. The main goal is to segment surgeries into surgical phases based on the video.

Project goals
• Designing and developing deep architectures for surgical tool detection and for segmentation of colorectal surgeries into surgical phases based on the video input (public dataset)
• Applying the developed technique to prostatectomy (in-house dataset)
• Detecting deviations from normal workflow patterns during surgery
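To illustrate the "online" constraint of this project, frame-wise phase predictions from a video model can be stabilized with a causal post-processing step that looks only at the current and past frames. The sketch below is a minimal, hypothetical example: the phase names and the majority-vote smoothing are illustrative assumptions, not the project's actual architecture.

```python
from collections import Counter, deque

# Hypothetical phase labels for a laparoscopic procedure (illustrative only).
PHASES = ["preparation", "dissection", "resection", "closure"]

def smooth_phases_online(frame_predictions, window=5):
    """Causally smooth per-frame phase predictions with a sliding
    majority vote. At each frame only the current and previous
    predictions are available, as required for online analysis."""
    history = deque(maxlen=window)
    smoothed = []
    for pred in frame_predictions:
        history.append(pred)
        # Majority vote over the causal window suppresses isolated
        # misclassified frames without looking into the future.
        label, _ = Counter(history).most_common(1)[0]
        smoothed.append(label)
    return smoothed
```

A single mislabeled frame inside a stable phase is voted away, which is the kind of temporal consistency a real phase-segmentation pipeline also has to enforce.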
Multimodal imaging is increasingly being used within healthcare for diagnosis, treatment planning, treatment guidance, biopsy, surgical navigation and monitoring of disease progression.
Multimodality imaging takes advantage of the strengths of different imaging modalities to provide a more complete picture of the anatomy under investigation. The goal of this study is to develop a method for real-time registration of MRI and ultrasound images.
MRI is used widely for both diagnostic and therapeutic planning applications because of its multi-planar imaging capability, high signal-to-noise ratio, and sensitivity to subtle changes in soft tissue morphology and function. Ultrasound imaging, on the other hand, has important advantages including high temporal resolution, high sensitivity to acoustic scatterers such as calcifications and gas bubbles, excellent visualization and measurement of blood flow, low cost, and portability. The strengths of these modalities are complementary, and the two are combined regularly (though separately) in clinical practice. The benefits of combining these modalities through image registration have been shown for intra-operative surgical applications and breast/prostate biopsy guidance.
Image registration is the process of transforming images from different modalities into the same reference frame, in order to obtain as much comprehensive information about the underlying structure as possible. While MRI is typically a pre-operative imaging technique, ultrasound can easily be performed live during surgery.

Project goals
The aim of this study is to design and develop a deep learning-based method for the registration of multimodal images (MRI and ultrasound).
1st phase: Using the built-in multi-modality image fusion feature of ultrasound machines on phantoms
2nd phase: Error estimation in multimodal registration application using CNN
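As a simplified illustration of what registration computes, the sketch below aligns corresponding 2D fiducial points (for example, markers visible in both an MRI slice and an ultrasound plane) with a closed-form least-squares rigid transform. This is a toy point-based example under assumed exact correspondences, not the deep learning method this project targets.

```python
import math

def rigid_register_2d(moving, fixed):
    """Closed-form least-squares rigid (rotation + translation)
    alignment of corresponding 2D point sets. Returns (theta, tx, ty)
    such that fixed ~= R(theta) @ moving + t."""
    n = len(moving)
    mcx = sum(p[0] for p in moving) / n
    mcy = sum(p[1] for p in moving) / n
    fcx = sum(q[0] for q in fixed) / n
    fcy = sum(q[1] for q in fixed) / n
    # Accumulate cross-covariance terms of the centred point sets;
    # the optimal rotation angle follows from their atan2.
    sxx = sxy = 0.0
    for (px, py), (qx, qy) in zip(moving, fixed):
        ax, ay = px - mcx, py - mcy
        bx, by = qx - fcx, qy - fcy
        sxx += ax * bx + ay * by
        sxy += ax * by - ay * bx
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated moving centroid onto the fixed one.
    tx = fcx - (c * mcx - s * mcy)
    ty = fcy - (s * mcx + c * mcy)
    return theta, tx, ty

def apply_transform(theta, tx, ty, point):
    """Apply the recovered rigid transform to a single 2D point."""
    c, s = math.cos(theta), math.sin(theta)
    x, y = point
    return (c * x - s * y + tx, s * x + c * y + ty)
```

Real MRI-ultrasound registration is far harder: correspondences are unknown, the transform is usually non-rigid, and the intensity characteristics of the two modalities differ, which is precisely why a learned approach is pursued here.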
The only potentially curative option for patients with colorectal liver metastases (CRLM) or hepatocellular carcinoma (HCC) is surgical resection. However, 80-85% of these patients are not eligible for liver surgery because of extensive intrahepatic metastatic lesions or the presence of extrahepatic disease. Neoadjuvant chemotherapy (NAC) is increasingly applied with the aim of downsizing tumors in patients with initially unresectable disease to attain a resectable situation.
Accurate imaging of the liver following neoadjuvant chemotherapy is crucial for optimal selection of patients eligible for surgical resection and preparation of a surgical plan. MRI is the most appropriate imaging modality for preoperative assessment of patients with CRLM or HCC.
However, NAC may impair lesion detection and lead to underestimation of lesion size. As a result, patients whose tumors were considered resectable on preoperative imaging may turn out to have unresectable tumors during surgery. Alternatively, underestimation may lead to insufficient resection, resulting in positive margins and re-excisions.
The incidence of recurrence after liver resection is very high: in different series, between 43% and 65% of patients had recurrences within 2 years of removal of the first tumor, and up to 85% within 5 years. Without any form of treatment, most patients with recurrent cancer die within one year.
Following surgical treatment, doctors frequently use MRI to check for residual tumors and to assess the risk that the cancer will come back (recur), in order to decide whether the patient should be offered additional treatments (called adjuvant therapy) or a repeat hepatectomy.

Project goals
The aim of this study is to design and develop a deep learning-based algorithm to predict five-year liver cancer recurrence from a series of liver MRI exams. Patients undergo serial liver MRI exams: a pre-treatment baseline MRI, follow-up MRI exams during the course of therapy or surgery, and a final MRI after completing the therapy protocol.
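To make the serial-exam setup concrete, the sketch below derives two toy longitudinal features from the largest-lesion diameter measured at each exam and maps them to a recurrence probability with a logistic function. Everything here is a hypothetical placeholder: the feature choice and the weights are invented for illustration, whereas the project would learn a deep model from the full MRI volumes.

```python
import math

def longitudinal_features(diameters_mm):
    """Toy longitudinal features from the largest-lesion diameter at
    each serial MRI exam (baseline, follow-ups, final): the final
    size and the relative change from baseline."""
    baseline, final = diameters_mm[0], diameters_mm[-1]
    rel_change = (final - baseline) / baseline
    return final, rel_change

def recurrence_risk(diameters_mm, w_final=0.05, w_change=2.0, bias=-3.0):
    """Logistic model mapping the features to a five-year recurrence
    probability. The weights are hypothetical placeholders, not
    values fitted to any patient data."""
    final, rel_change = longitudinal_features(diameters_mm)
    z = w_final * final + w_change * rel_change + bias
    return 1.0 / (1.0 + math.exp(-z))
```

The point of the example is the data layout: one scalar trajectory per patient, ordered from baseline to final exam, with the temporal change itself carrying predictive signal.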
Surgery forms the mainstay of treatment for solid tumors. However, in up to 30% of cases, surgery is inadequate, either because tumor tissue is erroneously left behind or because the resection is too extensive and compromises vital structures such as nerves. In cancer surgery, surgeons therefore operate at a delicate balance between achieving radical tumor resection and preventing morbidity from overly extensive resection. Within this context, there is a long-standing but still unmet need for a precise surgical tool that informs the surgeon about the tissue type at the tip of the instrument and can thereby guide the surgical procedure.
To tackle these shortcomings, we propose an innovative approach to image-guided surgery that allows real-time intra-operative tissue recognition. To this end, we will combine the unique characteristics of ultrasound imaging (US) with the excellent tissue-sensing characteristics of diffuse reflectance spectroscopy (DRS). Both techniques have proven track records in the field of cancer diagnosis, but both have critical limitations. DRS has excellent performance with respect to tissue diagnosis, distinguishing cancer from healthy tissue; however, because it is a point measurement that samples small tissue volumes close to the measurement probe, its depth sensitivity is limited, and so is its ability to look deeper into the surgical resection plane. Ultrasound, on the other hand, has more than sufficient sampling depth and resolution, but cannot identify cancer directly from the imaged tissue architecture. Our approach is to strategically combine these two techniques, using the best of both worlds within one smart device.

Project goals
• Ultrasound raw data analysis and development of new algorithms for beamforming, image reconstruction, layer segmentation and elastography
• Designing and developing a multimodal machine learning/deep learning technique for discriminating cancer from healthy tissue, using the diffuse reflectance spectrum, US data (raw or processed) and elasticity data as input features
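Delay-and-sum is the textbook baseline that the beamforming goal above would start from. The sketch below focuses raw per-element RF data at a single image point, assuming an idealized plane-wave transmit at t = 0 along the depth axis; the geometry, sampling rate and speed of sound are illustrative assumptions, and real beamformers add apodization, interpolation and many other refinements.

```python
import math

def delay_and_sum(channel_data, element_x, x, z, c=1540.0, fs=40e6):
    """Minimal delay-and-sum beamformer for one image point (x, z).
    channel_data[i][k] is sample k of the RF trace of element i,
    element_x[i] is that element's lateral position in metres,
    c is the assumed speed of sound (m/s), fs the sampling rate (Hz).
    Assumes a plane wave transmitted at t = 0 along the z axis."""
    total = 0.0
    for i, samples in enumerate(channel_data):
        # Two-way travel time: plane-wave transmit down to depth z,
        # then the return path from (x, z) back to element i.
        rx = math.hypot(x - element_x[i], z)
        t = (z + rx) / c
        k = int(round(t * fs))  # nearest-sample delay, no interpolation
        if 0 <= k < len(samples):
            total += samples[k]
    return total
```

Looping this function over a grid of (x, z) points yields a focused image; echoes from a true scatterer add coherently at its location and incoherently elsewhere, which is what the test below checks with a synthetic point target.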