The assessment of lymph nodes in CT examinations of cancer patients is essential for cancer staging, with direct impact on therapeutic decisions. Automated detection and segmentation of lymph nodes is challenging, especially due to the significant variability in size, shape, and location coupled with weak and variable image contrast. In this paper, we propose a joint detection and segmentation approach using a fully convolutional neural network based on 3D foveal patches. To enable network training, 89 publicly available CT data sets were carefully re-annotated, yielding an extensive set of 4351 voxel-wise segmentations of thoracic lymph nodes. Based on these annotations, the 3D network was trained to perform per-voxel classification. For enlarged, potentially malignant lymph nodes, a detection rate of 79% with 8.0 false-positive detections per volume was obtained. An average Dice coefficient of 0.44 was achieved.
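As a small illustration of how a per-voxel classification output of this kind can be turned into discrete detections, the sketch below thresholds a probability map and extracts connected components; the threshold and minimum component size are illustrative assumptions and not values taken from the paper.

```python
# Minimal sketch: per-voxel lymph node probabilities -> discrete detections.
# Threshold and min_voxels are illustrative assumptions.
import numpy as np
from scipy import ndimage

def detections_from_probability(prob_map, threshold=0.5, min_voxels=10):
    """Return centroids of connected components in the thresholded probability map."""
    mask = prob_map >= threshold
    labels, num = ndimage.label(mask)          # label connected foreground regions
    centroids = []
    for idx in range(1, num + 1):
        component = labels == idx
        if component.sum() < min_voxels:       # discard tiny components as likely noise
            continue
        centroids.append(ndimage.center_of_mass(component))
    return centroids
```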
Radiological assessment of the spine is performed regularly in the context of orthopedics, neurology, oncology, and trauma management. Due to the extent and curved geometry of the spinal column, reading is time-consuming and requires substantial user interaction to navigate through the data during inspection. In this paper, a spine-geometry-guided viewing approach is proposed that facilitates reading by reducing the degrees of freedom to be manipulated during inspection of the data. The method uses the spine centerline as a representation of the spine geometry. We assume that the renderings most useful for reading are those that can be locally defined by a rotation and translation relative to the spine centerline. The resulting renderings locally preserve the relation to the spine and lead to curved planar reformats that can be adjusted using a small set of parameters to minimize user interaction. The spine centerline is extracted by an automated image-to-image approach based on a foveal fully convolutional neural network (FFCN). The network consists of three parallel convolutional pathways working on different levels of resolution and different fields of view. The outputs of the parallel pathways are combined by a subsequent feature integration pathway to yield the final centerline probability map, which is converted into a set of spine centerline points. The network has been trained separately on two data set types, one comprising a mixture of T1- and T2-weighted spine MR images and one comprising CT image data. We achieve an average centerline position error of 1.7 mm for MR and 0.9 mm for CT, and a Dice coefficient of 0.84 for MR and 0.95 for CT. Based on the centerline thus obtained, viewing and multi-planar reformatting are easily facilitated.
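To make the described three-pathway design concrete, the following is a hypothetical PyTorch sketch of a foveal fully convolutional network with parallel pathways at different resolutions and a subsequent feature integration pathway; channel counts, layer depths, and downsampling factors are assumptions and do not reproduce the published configuration.

```python
# Hypothetical sketch of a three-pathway foveal network; not the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FovealPathway(nn.Module):
    """One convolutional pathway operating at a given resolution / field of view."""
    def __init__(self, in_ch=1, out_ch=16, downsample=1):
        super().__init__()
        self.downsample = downsample
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        size = x.shape[2:]
        if self.downsample > 1:
            # Coarser resolution enlarges the effective field of view.
            x = F.avg_pool3d(x, self.downsample)
        y = self.conv(x)
        if self.downsample > 1:
            y = F.interpolate(y, size=size, mode="trilinear", align_corners=False)
        return y

class FovealFCN(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        # Three parallel pathways at different resolutions / fields of view.
        self.paths = nn.ModuleList(
            [FovealPathway(in_ch, downsample=d) for d in (1, 2, 4)]
        )
        # Feature integration pathway producing the per-voxel class logits.
        self.integrate = nn.Sequential(
            nn.Conv3d(3 * 16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, num_classes, kernel_size=1),
        )

    def forward(self, x):
        features = torch.cat([p(x) for p in self.paths], dim=1)
        return self.integrate(features)  # softmax over channels gives the probability map
```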
Most fully automatic segmentation approaches target a single anatomical structure in a specific combination of image modalities and are often difficult to extend to other modalities, protocols, or segmentation tasks. More recently, deep learning-based approaches promise to be readily adaptable to new applications as long as a suitable training set is available, although most deep learning architectures are still tuned towards a specific application and data domain. In this paper, we propose a novel fully convolutional neural network architecture for image segmentation and show that the same architecture with the same learning parameters can be used to train models for 20 different organs on two different protocols, while still achieving segmentation accuracy that is on par with the state of the art. In addition, the architecture was designed to minimize the amount of GPU memory required for processing large images, which facilitates application to full-resolution whole-body CT scans. We have evaluated our method on the publicly available data set of the VISCERAL multi-organ segmentation challenge and compared its performance with that of the challenge participants and of two recently proposed deep learning-based approaches. In this cross-comparison, our method achieved the highest Dice similarity coefficients for 17 out of 20 organs on the contrast-enhanced CT scans and for 10 out of 20 organs on the non-contrast CT scans.
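Since accuracy here and in the other abstracts is reported as Dice similarity coefficients, a minimal reference implementation of that metric for a single binary organ mask is sketched below; the handling of two empty masks is a common convention, not something stated in the paper.

```python
# Sketch of the Dice similarity coefficient for two binary segmentation masks.
import numpy as np

def dice_coefficient(prediction, reference):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for boolean masks of equal shape."""
    prediction = np.asarray(prediction, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    denominator = prediction.sum() + reference.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(prediction, reference).sum() / denominator
```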
Automated and fast multi-label segmentation of medical images is challenging and clinically important. This paper builds upon a supervised machine learning framework that uses training data sets with dense organ annotations and vantage point trees to classify voxels in unseen images based on the similarity of binary feature vectors extracted from the data. Without explicit model knowledge, the algorithm is applicable to different modalities and organs and achieves high accuracy. The method is successfully tested on 70 abdominal CT and 42 pelvic MR images. With respect to ground truth, an average Dice overlap score of 0.76 is achieved for the CT segmentation of liver, spleen, and kidneys. The mean score for the MR delineation of bladder, bones, prostate, and rectum is 0.65. Additionally, we benchmark several variations of the main components of the method and reduce the computation time by up to 47% without significant loss of accuracy. For a nearest-neighbor method, the segmentation results are surprisingly accurate, robust, and both data and time efficient.
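The sketch below illustrates the kind of vantage-point-tree nearest-neighbor lookup described above, using Hamming distance on binary feature vectors; the feature extraction, data layout, and organ labels are illustrative assumptions rather than the paper's implementation.

```python
# Minimal vantage point tree over (binary feature vector, label) pairs.
import random

def hamming(a, b):
    # a, b: equal-length tuples of 0/1 bits
    return sum(x != y for x, y in zip(a, b))

class VPTree:
    def __init__(self, points):
        self.vantage, self.radius, self.inside, self.outside = None, 0, None, None
        points = list(points)
        if not points:
            return
        self.vantage = points.pop(random.randrange(len(points)))  # random vantage point
        if not points:
            return
        distances = [hamming(self.vantage[0], p[0]) for p in points]
        self.radius = sorted(distances)[len(distances) // 2]       # median split radius
        inside = [p for p, d in zip(points, distances) if d <= self.radius]
        outside = [p for p, d in zip(points, distances) if d > self.radius]
        self.inside = VPTree(inside) if inside else None
        self.outside = VPTree(outside) if outside else None

    def nearest(self, query, best=None):
        """Return ((features, label), distance) of the closest stored point."""
        if self.vantage is None:
            return best
        d = hamming(query, self.vantage[0])
        if best is None or d < best[1]:
            best = (self.vantage, d)
        # Search the more promising child first; prune the other one when it
        # cannot contain anything closer than the current best (triangle inequality).
        near, far = ((self.inside, self.outside) if d <= self.radius
                     else (self.outside, self.inside))
        if near is not None:
            best = near.nearest(query, best)
        if far is not None and abs(d - self.radius) < best[1]:
            best = far.nearest(query, best)
        return best

# Usage: a query voxel receives the organ label of its nearest training voxel.
training = [((0, 1, 1, 0), "liver"), ((1, 1, 0, 0), "spleen"), ((0, 0, 1, 1), "kidney")]
tree = VPTree(training)
(features, label), distance = tree.nearest((0, 1, 1, 1))
print(label, distance)
```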
Ultrasound is increasingly becoming a 3D modality. Mechanical and matrix array transducers are able to deliver 3D images with good spatial and temporal resolution. 3D imaging facilitates the application of automated image analysis to enhance workflows and has the potential to make ultrasound a less operator-dependent modality. However, the greater complexity of 3D images and the fact that examination standards are defined on 2D images pose barriers to the use of 3D in daily clinical practice. In this paper, we address a part of the canonical fetal screening program, namely the localization of the abdominal cross-sectional plane and the corresponding measurement of the abdominal circumference in this plane. For this purpose, a fully automated pipeline has been designed, starting with random-forest-based anatomical landmark detection. A feature-trained shape model of the fetal torso, including inner organs and with the abdominal cross-sectional plane encoded into the model, is then transformed into the patient space using the landmark localizations. In a free-form deformation step, the model is individualized to the image, using a torso probability map generated by a convolutional neural network as an additional feature image. After adaptation, the abdominal plane and the abdominal torso contour in that plane are obtained directly. This allows the measurement of the abdominal circumference as well as the rendering of the plane for visual assessment. The method has been trained on 126 and evaluated on 42 abdominal 3D US datasets. On the evaluation set, an average plane offset error of 5.8 mm and an average relative circumference error of 4.9% were achieved.
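As a small illustration of the final measurement step, the sketch below computes the abdominal circumference as the perimeter of the closed torso contour in the detected plane; plane detection and contour extraction themselves are assumed to be given, and the point ordering is an assumption.

```python
# Sketch: circumference of a closed contour given as ordered 3D points in mm.
import numpy as np

def abdominal_circumference(contour_points_mm):
    """Perimeter of a closed polygon defined by an (N, 3) array of points in mm."""
    points = np.asarray(contour_points_mm, dtype=float)
    closed = np.vstack([points, points[:1]])        # append first point to close the loop
    segments = np.diff(closed, axis=0)              # edge vectors between consecutive points
    return np.linalg.norm(segments, axis=1).sum()   # sum of edge lengths
```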