Robotic surgery has driven major progress in minimally invasive surgery, owing to its advantages in dexterity, precision, and 3D visualization.
Meanwhile, integrated augmented reality (AR) systems are becoming increasingly important in robotic surgery, as they allow information from multiple modalities to be incorporated into the surgical procedure in real time. Scene depth estimation is an essential component of, and a prerequisite for, such AR integration. This project aims to design a benchtop stereo-vision system, acquire a dataset with ground truth, apply deep learning networks to the system, and evaluate the reconstruction results.
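As background for the role of stereo depth estimation, the sketch below illustrates the standard pinhole-stereo relation that underlies disparity-based reconstruction: for a rectified, calibrated rig, depth Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. This is a generic illustration, not code from the project; the focal length and baseline values are hypothetical placeholders.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth (metres)
    using Z = f * B / d for a rectified stereo pair."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0                # zero disparity corresponds to a point at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical benchtop values: f = 800 px, B = 5 cm
depth = disparity_to_depth([[40.0, 0.0], [20.0, 80.0]],
                           focal_px=800.0, baseline_m=0.05)
# depth[0, 0] = 800 * 0.05 / 40 = 1.0 m
```

Deep stereo networks typically predict the disparity map d; the conversion to metric depth then depends only on the rig's calibration (f and B), which is why accurate calibration and ground-truth acquisition matter for evaluating reconstruction.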