In cancer surgery, reliable intraoperative visualization remains a technological challenge. Recently, a novel tethered laparoscopic gamma detector was introduced to localize tracer activity and thereby help identify lymph nodes.
However, neither the location of the probe (‘SENSEI®’) nor the tissue surface it points to is clearly indicated in the laparoscopic view. To better track the probe's sensing area, a miniaturized camera and a structured-light source were integrated into the probe. The aim of this study is therefore to propose a fast method for image registration between the laparoscopic view and the attached miniaturized camera, and, in turn, to locate the probe's sensing area in the laparoscopic image. We designed a structure connecting the hardware: the camera, the structured light, and the probe. A self-supervised convolutional network (AMIRNet) was designed to learn discriminative image features and perform the registration directly. The structured light was then used to determine the probe's sensing area in the laparoscopic image.
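Although this excerpt does not specify AMIRNet's architecture, the sketch below illustrates the general idea of regressing a registration transform directly from the two views with a small convolutional network. The class name `TinyRegNet`, the layer sizes, and the homography parameterization are illustrative assumptions only, not the authors' design.

```python
import torch
import torch.nn as nn

class TinyRegNet(nn.Module):
    """Hypothetical stand-in for AMIRNet: encodes both views and regresses
    the 8 free parameters of a homography mapping camera -> laparoscope."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # makes the head input-size agnostic
        )
        self.head = nn.Linear(2 * 32 * 4 * 4, 8)

    def forward(self, cam_img, lap_img):
        feats = torch.cat([self.encoder(cam_img).flatten(1),
                           self.encoder(lap_img).flatten(1)], dim=1)
        delta = self.head(feats)  # offsets from the identity homography
        identity = torch.tensor([1., 0., 0., 0., 1., 0., 0., 0.],
                                device=delta.device)
        h8 = delta + identity
        ones = torch.ones_like(h8[:, :1])  # fix H[2,2] = 1
        return torch.cat([h8, ones], dim=1).view(-1, 3, 3)

# Usage: a self-supervised objective could warp cam by H (e.g. with
# kornia.geometry.warp_perspective) and compare it photometrically to lap.
net = TinyRegNet()
cam = torch.rand(1, 3, 128, 128)  # miniaturized-camera frame
lap = torch.rand(1, 3, 256, 256)  # laparoscopic frame
H = net(cam, lap)                 # (1, 3, 3) homography estimate
```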
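The structured-light step is likewise not detailed here; the following sketch shows one plausible pipeline under stated assumptions: the projected pattern appears as a single bright spot in the miniaturized-camera image, its centroid is found by Otsu thresholding, and the point is mapped into the laparoscopic view through the estimated homography. The function name and these detection choices are hypothetical.

```python
import cv2
import numpy as np

def sensing_area_in_laparoscope(cam_img_bgr, H):
    """Locate the structured-light spot in the miniaturized-camera image
    and map its centre through homography H into laparoscope pixels."""
    gray = cv2.cvtColor(cam_img_bgr, cv2.COLOR_BGR2GRAY)
    # Keep only the brightest pixels (assumed to be the projected spot).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # no spot detected in this frame
    centre = np.array([[[m["m10"] / m["m00"], m["m01"] / m["m00"]]]],
                      dtype=np.float32)
    # Project the spot centre into the laparoscopic image plane.
    return cv2.perspectiveTransform(centre, H.astype(np.float32))[0, 0]
```

In practice this would run per frame, with the homography refreshed by the registration network so the displayed sensing area follows the probe as it moves.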