In recent years, with the development of new tools for Minimally Invasive Robotic Surgery (MIRS), the implementation of navigation algorithms has emerged as an interesting challenge [1]. Some novel developments have proposed navigation methods based on image information, specifically using Deep Learning-based methods [2].
As a further step toward determining the feasibility of applying these image-based methods to lumen segmentation, and of later integrating them with a robotic endoscope for autonomous navigation, a platform on which to test these devices is needed.