Multimodal imaging is increasingly used in healthcare for diagnosis, treatment planning and guidance, biopsy, surgical navigation, and monitoring of disease progression.
Multimodal imaging takes advantage of the strengths of different imaging modalities to provide a more complete picture of the anatomy under investigation. The goal of this study is to develop a real-time MRI-ultrasound image registration method.
MRI is used widely for both diagnostic and therapeutic planning applications because of its multi-planar imaging capability, high signal-to-noise ratio, and sensitivity to subtle changes in soft-tissue morphology and function. Ultrasound imaging, on the other hand, has important advantages including high temporal resolution, high sensitivity to acoustic scatterers such as calcifications and gas bubbles, excellent visualization and measurement of blood flow, low cost, and portability. The strengths of these modalities are complementary, and the two are combined regularly (though separately) in clinical practice. The benefits of combining them through image registration have been shown for intra-operative surgical applications and for breast and prostate biopsy guidance.
Image registration is the process of transforming images from different modalities into a common reference frame so that their complementary information about the underlying anatomy can be combined. While MRI is typically a pre-operative imaging technique, ultrasound can easily be performed live during surgery.
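To make the idea of registration concrete, the following is a minimal, self-contained sketch (not part of the proposed system): a "moving" image is a shifted copy of a "fixed" image, and the shift is recovered by exhaustively searching for the offset that maximizes normalized cross-correlation. Real MRI-ultrasound registration uses far richer transforms and multimodal similarity metrics; the image, the shift, and the search range here are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))                       # toy "fixed" image
true_shift = (5, -3)                               # hypothetical misalignment
# np.roll wraps around at the borders; acceptable for this toy example.
moving = np.roll(fixed, true_shift, axis=(0, 1))   # toy "moving" image

# Exhaustive search over integer shifts: a stand-in for the iterative
# optimization used by real registration algorithms.
best_shift, best_score = None, -np.inf
for dy in range(-8, 9):
    for dx in range(-8, 9):
        candidate = np.roll(moving, (-dy, -dx), axis=(0, 1))
        score = ncc(fixed, candidate)
        if score > best_score:
            best_score, best_shift = score, (dy, dx)

print(best_shift)  # recovers (5, -3)
```

Normalized cross-correlation works here because both images share the same intensity mapping; across modalities (MRI vs. ultrasound), metrics such as mutual information are preferred.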
The aim of this study is to design and develop a deep learning-based method for registering multimodal (MRI and ultrasound) images.
1st phase: Evaluate the built-in multimodality image fusion feature of ultrasound machines on phantoms
2nd phase: Estimate the error of the multimodal registration application using a CNN
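Because MRI and ultrasound intensities are related nonlinearly, multimodal registration quality is commonly quantified with mutual information (MI) rather than direct intensity differences. The sketch below (an illustrative assumption, not the project's actual error-estimation pipeline) computes histogram-based MI and shows that it ranks an aligned image pair above a misaligned one even when the two "modalities" have completely different intensity mappings.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)               # marginal of image A
    py = pxy.sum(axis=0)               # marginal of image B
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(1)
mri = rng.random((64, 64))                  # toy "MRI" image
# Simulate a second modality: a nonlinear remapping of the same anatomy.
us_aligned = np.exp(-3 * mri)               # aligned, different intensity mapping
us_shifted = np.roll(us_aligned, 7, axis=0) # misregistered copy

# MI is higher for the aligned pair despite the nonlinear intensity relation.
print(mutual_information(mri, us_aligned) > mutual_information(mri, us_shifted))
```

In a learning-based error-estimation setting, a metric like this (or a landmark-based target registration error) can serve as the supervisory signal a CNN is trained to predict.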