This manuscript describes an image-based scheme for the automatic segmentation and measurement of thrombi. Biologists inject drugs that induce thrombosis in mice and use a confocal laser scanning microscope (CLSM) to observe changes in blood vessels and understand the mechanism of thrombosis. However, segmenting the thrombus region in CLSM images is difficult because the thrombus closely resembles the background. Therefore, computer vision-based methods are used to analyze thrombosis and assist biologists. A previous method used the difference between a preset reference frame (fixed frame) and the current frame to locate the thrombus region. However, it did not account for the fact that a thrombus always grows inside a blood vessel, resulting in mis-segmented thrombus regions. We therefore exploit the anatomical structure of the mouse to increase the accuracy of thrombus segmentation. The difference between the current frame and a reference frame is used to segment candidate thrombus regions. The blood vessel, a representative anatomical structure in the CLSM image, is found using Otsu-based thresholding and is used to remove false-positive thrombus regions. From the remaining thrombus region we calculate the size, the centroid coordinates, and the growth rate of the thrombus. We created ground truth of the thrombus regions to validate the proposed method. Experimental results showed that the Dice score of the proposed method was 0.76 ± 0.13.
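A minimal sketch of the vessel-masking step described above, assuming grayscale CLSM frames loaded as NumPy arrays; the function names are hypothetical, and the assumption that vessels appear brighter than the background is ours, not the paper's:

```python
import cv2
import numpy as np

def segment_thrombus(current, reference):
    """Sketch: frame difference plus an Otsu-based vessel mask."""
    # Candidate thrombus regions: intensity difference between frames
    diff = cv2.absdiff(current, reference)
    _, candidates = cv2.threshold(diff, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Vessel mask via Otsu thresholding (assumes vessels appear bright)
    _, vessel = cv2.threshold(current, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Keep only candidates inside the vessel; the rest are false positives
    return cv2.bitwise_and(candidates, vessel)

def dice(pred, gt):
    """Dice = 2|A ∩ B| / (|A| + |B|) on binary masks."""
    p, g = pred > 0, gt > 0
    return 2.0 * np.logical_and(p, g).sum() / (p.sum() + g.sum())
```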
This article describes a bronchial nomenclature method using real bronchoscopic (RB) images and a pre-built knowledge base of branches. The bronchus has a complex tree-like structure, which increases the difficulty of bronchoscopy. Therefore, bronchoscopic navigation systems are used to assist physicians during examination. Conventional navigation systems use preoperative CT images and real bronchoscopic images to obtain the camera pose, and their accuracy is affected by organ deformation. We propose a bronchial nomenclature method that estimates branch names for bronchoscopic navigation. The method consists of a bronchus knowledge base construction module, a camera motion estimation module, an anatomical structure tracking module, and a branch name estimation module. The knowledge base construction module encodes the relationships among branches. The anatomical structure tracking module tracks the bronchial orifices (BOs) extracted in each RB frame. The camera motion estimation module estimates the camera motion between two frames. The branch name estimation module uses the pre-built bronchus knowledge base and the BO tracking results to determine the name of each branch. Experimental results showed that branch names can be estimated using only RB images and the pre-built knowledge base of branches.
KEYWORDS: Cameras, In vivo imaging, Navigation systems, Video, Computed tomography, Motion estimation, Image segmentation, 3D metrology, 3D image processing, Medicine
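As a rough illustration of how a pre-built branch knowledge base and BO tracking results could yield branch names, here is a hedged sketch; the dictionary structure, the branch labels shown, and the assumption that child branches are ordered consistently with the tracked orifices in the RB image are all illustrative, not details from the paper:

```python
# Hypothetical bronchus knowledge base: each branch maps to its child branches.
# Assumption: children are ordered consistently with how the corresponding
# orifices are tracked in the RB image.
BRONCHUS_KB = {
    "Trachea": ["RMB", "LMB"],      # right / left main bronchus
    "RMB": ["RUL", "BronInt"],      # right upper lobe, bronchus intermedius
    "LMB": ["LUL", "LLL"],          # left upper / lower lobe
}

def estimate_branch_name(orifice_choices, root="Trachea"):
    """Walk the knowledge base, following the BO the camera entered at each
    bifurcation (indices taken from the BO tracking results)."""
    name = root
    for choice in orifice_choices:
        name = BRONCHUS_KB[name][choice]
    return name

print(estimate_branch_name([0, 1]))  # Trachea -> RMB -> BronInt
```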
This paper describes a branching level estimation method based on tracking the bronchial orifice structure in branches. Since the bronchus has a tree-like structure with many branches, physicians benefit from knowing where in the branches the bronchoscope is located. Estimating the branching level is therefore the core task of coarse tracking-based navigated bronchoscopy. A previous method used changes in the number of bronchial orifices (BOs) and the camera's moving direction to estimate the branching level, but it cannot capture changes in individual BO regions. Therefore, we extract BO regions using a virtual depth image produced by deep learning and track these regions across real bronchoscope images. The branching level is estimated from the BO tracking results. Experimental results showed an average branching level estimation accuracy of 92.1%.
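A hedged sketch of how a branching level might be derived from BO tracking results; the enter-ratio heuristic and its value are our assumptions for illustration, not details from the paper:

```python
def estimate_branching_level(bo_tracks, enter_ratio=0.6):
    """Increase the branching level whenever a tracked BO grows to fill most
    of the view, i.e., the bronchoscope passes through that orifice.
    bo_tracks: one sequence of BO-area-to-frame-area ratios per tracked BO.
    enter_ratio is a hypothetical tuning parameter."""
    level = 0
    for track in bo_tracks:
        if max(track) >= enter_ratio:   # this orifice was entered
            level += 1
    return level
```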
This paper describes a bronchial orifice (BO) segmentation method for real bronchoscopic video frames using depth images. The BO is one of the anatomical landmarks of the bronchus and is critical in clinical applications such as bronchial scene description and navigation path generation. Previous work used the image appearance and gradation of the real bronchoscopic image to segment the orifice region, which performed poorly in complex scenes containing bubbles or illumination changes. To obtain better BO segmentation even in such scenes, we propose a segmentation method based on the distance between the bronchoscope camera and the bronchus lumen, represented as a depth image. Since depth images are unavailable due to device limitations, we use an image-to-image domain translation network, the cycle generative adversarial network (CycleGAN), to estimate depth images from real bronchoscopic images. BO regions are defined as regions whose distance exceeds a distance threshold, which we determine from the projection profiles of the depth images. Experimental results showed that the proposed method can find BO regions in real bronchoscopic videos in real time. We manually labeled BO regions as ground truth to evaluate the proposed method; its average Dice score was 77.0%.
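A simple stand-in for the depth-thresholding step, assuming a depth image given as a 2D NumPy array of camera-to-lumen distances; the paper's exact projection-profile analysis is not reproduced here, so the threshold rule below is only an assumption:

```python
import numpy as np

def bo_mask_from_depth(depth):
    """BO regions = pixels farther than a threshold chosen from the depth
    image's projection profiles (simplified stand-in)."""
    row_profile = depth.mean(axis=1)   # mean depth per image row
    col_profile = depth.mean(axis=0)   # mean depth per image column
    # Assumption: place the threshold between the profiles' far peak and
    # the overall mean depth
    far_peak = max(row_profile.max(), col_profile.max())
    threshold = 0.5 * (far_peak + depth.mean())
    return depth > threshold           # True where a bronchial orifice is
```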
We present an improved patient-specific bronchoscopic navigation scheme that uses visual SLAM for bronchoscope tracking. A bronchoscopic navigation system assists physicians during bronchoscopic examination. Conventional navigation systems obtain the camera pose of the bronchoscope from the image similarity between real bronchoscopic (RB) and virtual bronchoscopic (VB) images, or from pose information provided by an additional sensor. We propose to use visual SLAM for bronchoscope tracking. The tracking procedure of visual SLAM is adapted to the bronchoscopic scene by considering the inter-frame displacement to filter the 2D-3D matches used for pose optimization. The tracking result is registered to CT images to find the relationship between the RB and CT coordinate systems. Virtual bronchoscopic views corresponding to the real bronchoscopic views are generated using the registration result and the camera pose. Experimental results showed that the proposed method tracks more frames with higher average accuracy than the previous method, and the generated virtual bronchoscopic views are highly similar to the real bronchoscopic views.
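A minimal sketch of the displacement-based match filtering mentioned above; the pixel bound and the function name are our assumptions:

```python
import numpy as np

def filter_matches_by_displacement(kp_curr, kp_prev, max_disp=30.0):
    """Reject 2D-3D matches whose 2D keypoint jumps too far between
    consecutive frames: the bronchoscope moves little frame-to-frame,
    so large displacements are likely mismatches.
    kp_curr, kp_prev: (N, 2) keypoint coordinates of the same N matches.
    max_disp (pixels) is a hypothetical tuning value."""
    disp = np.linalg.norm(kp_curr - kp_prev, axis=1)
    # Boolean mask over the matches kept for pose optimization
    return disp <= max_disp
```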
In this paper, we describe automated hand-eye calibration for a laparoscope-holding robot in robot-assisted surgery. In minimally invasive surgery, a laparoscope-holding robot can provide more stable laparoscope images than a human assistant. We study a laparoscope-holding robot controlled by anatomical structure information during laparoscopic surgery. To operate such an image-guided robot, a vision system for the robot is required. We compute the position and orientation relationship between the laparoscope camera and the tool center point (TCP) of the robot arm. We use Tsai's method for hand-eye calibration to estimate the homogeneous transformation matrix between the TCP and the laparoscope camera. A laparoscope is attached to an industrial robot arm, the arm is moved to different positions, and calibration-board images are captured at each position. Hand-eye calibration is then performed using the recorded TCP poses and calibration-board images, yielding the homogeneous transformation matrix between the laparoscope camera coordinate system and the robot TCP coordinate system. Experimental results showed that the proposed method can compute this transformation matrix.
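The core computation maps onto OpenCV's `calibrateHandEye`, which implements Tsai's formulation; the sketch below assumes TCP poses recorded from the robot controller and board poses recovered with `cv2.solvePnP` from the captured images, with an illustrative wrapper name:

```python
import cv2
import numpy as np

def hand_eye_tsai(R_tcp2base, t_tcp2base, R_board2cam, t_board2cam):
    """Estimate the camera-to-TCP homogeneous transform with Tsai's method.
    Inputs are lists of 3x3 rotations and 3x1 translations, one pair per
    robot position: TCP poses from the robot controller, board poses from
    cv2.solvePnP on the calibration-board images."""
    R, t = cv2.calibrateHandEye(R_tcp2base, t_tcp2base,
                                R_board2cam, t_board2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)                 # 4x4 homogeneous camera-to-TCP transform
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    return T
```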
We present a new scheme for bronchoscopic navigation that exploits visual SLAM for bronchoscope tracking. A bronchoscopic navigation system guides physicians by providing 3D information about the bronchoscope during bronchoscopic examination. Existing systems mainly use CT-video registration or sensors for bronchoscope tracking. CT-video tracking estimates the bronchoscope pose by registering real bronchoscopic images to virtual images generated from computed tomography (CT) images, which is time-consuming. Sensor-based tracking calculates the bronchoscope pose from sensor measurements, which are easily disturbed by examination tools. We improve bronchoscope tracking by using visual simultaneous localization and mapping (VSLAM), which overcomes both shortcomings. VSLAM estimates the camera pose and reconstructs the structure surrounding the camera (the map). We use adjacent frames to increase the number of points available for tracking and apply VSLAM to bronchoscope tracking. Tracking performance was evaluated on phantom and in-vivo videos. Reconstruction performance was evaluated by the root mean square (RMS) error, calculated between the aligned reconstructed points and the bronchus segmented from pre-operative CT volumes. Experimental results showed that the number of successfully tracked frames increased by more than 700 compared with the original ORB-SLAM across six cases. The average RMS error in the phantom case between the bronchus reconstructed by SLAM and the segmented bronchus shape was 2.55 mm.
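A short sketch of the RMS evaluation described above, assuming the reconstructed map points have already been aligned to the CT coordinate system; the names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_to_bronchus(recon_points, bronchus_points):
    """RMS of nearest-neighbor distances from the aligned SLAM-reconstructed
    map points to the bronchus surface points segmented from the CT volume.
    Both inputs are (N, 3) arrays in the same coordinate system."""
    tree = cKDTree(bronchus_points)      # surface points from segmented CT
    dists, _ = tree.query(recon_points)  # nearest surface distance per point
    return float(np.sqrt(np.mean(dists ** 2)))
```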