KEYWORDS: Liver, Image segmentation, 3D modeling, Spleen, Computed tomography, Data modeling, Image processing algorithms and systems, Medical imaging, Convolution, 3D image processing
Detecting and evaluating the shape of the liver from abdominal computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning, such as radiation therapy. However, liver segmentation still poses many challenges, including ambiguous boundaries, heterogeneous appearance, and highly varied liver shapes. To address these difficulties, we developed an automatic liver segmentation model based on a 3D U-Net. First, several preprocessing steps were performed to improve the performance of our protocol. An approximate liver map was then generated by calculating the gradient of the CT images, and regions with a high probability of belonging to the liver were selected as the training set to keep the data balanced. Next, a deep U-Net was trained on the processed data. Finally, post-processing methods, including k-means clustering and morphological algorithms, were applied. Our protocol achieved a high structural similarity index (SSIM), Dice score coefficient, and peak signal-to-noise ratio (PSNR) for liver segmentation, demonstrating the potential clinical applicability of the proposed approach.
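The gradient-based candidate map described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gradient threshold and the liver-like HU window used here are illustrative assumptions.

```python
import numpy as np

def approximate_liver_map(ct_slice, grad_thresh=30.0, hu_window=(0.0, 200.0)):
    """Sketch of a gradient-based approximate liver map.

    Voxels whose local gradient magnitude is low (smooth parenchyma)
    and whose intensity falls inside a liver-like HU window are kept
    as liver candidates. Threshold values are illustrative assumptions,
    not the parameters used in the paper.
    """
    gy, gx = np.gradient(ct_slice.astype(float))
    grad_mag = np.hypot(gx, gy)
    smooth = grad_mag < grad_thresh          # low-gradient (homogeneous) voxels
    in_window = (ct_slice >= hu_window[0]) & (ct_slice <= hu_window[1])
    return smooth & in_window

# Toy example: a flat 60 HU region embedded in an air-like background.
slice_ = np.full((64, 64), -500.0)
slice_[16:48, 16:48] = 60.0
mask = approximate_liver_map(slice_)
```

In practice such a map would be computed per slice and used to restrict the training patches to liver-likely regions, which is the data-balancing step the abstract mentions.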
In this paper, a new computer-aided diagnosis system is proposed to automatically diagnose liver cirrhosis from four-phase CT images: the non-contrast, arterial, delay, and portal venous phases. It is designed to discriminate mild from severe cirrhosis using an automatic liver segmentation method and a machine-learning classifier. First, the gradient-inverse map of the CT images is calculated to derive relative-smoothness features in local areas. We then compare the centroid and area of each binary-labeled group across slices to automatically extract the volume of interest (VOI) of the liver. In the classification step, first-order and texture features are calculated to describe the intensity representation of the liver parenchyma, additional parameters quantify the intensity distribution within the VOI, and structural features are derived from the shape of the VOI. Finally, trained support vector machine (SVM) and neural network (NN) classifiers are applied to assign subjects to clinical stages of liver cirrhosis.
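First-order intensity features of the kind described above can be computed over the VOI as sketched below. The feature list (mean, standard deviation, skewness, kurtosis, histogram entropy) is a common choice and an assumption here; the abstract does not enumerate the paper's exact feature set.

```python
import numpy as np

def first_order_features(voi):
    """Illustrative first-order intensity features of a liver VOI."""
    v = voi.astype(float).ravel()
    mu, sigma = v.mean(), v.std()
    skew = np.mean(((v - mu) / sigma) ** 3) if sigma > 0 else 0.0
    kurt = np.mean(((v - mu) / sigma) ** 4) - 3.0 if sigma > 0 else 0.0
    # Shannon entropy of the normalized intensity histogram.
    hist, _ = np.histogram(v, bins=32, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    entropy = -np.sum(p * np.log2(p))
    return {"mean": mu, "std": sigma, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy}

# Toy VOI: Gaussian intensities mimicking liver parenchyma around 100 HU.
rng = np.random.default_rng(0)
feats = first_order_features(rng.normal(100.0, 15.0, size=(20, 20, 20)))
```

A feature vector like this, concatenated with the texture and shape descriptors, would then feed the SVM/NN classifiers mentioned in the abstract.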
Lung cancer is a leading cause of death worldwide, and about 85% of lung cancers are non-small cell lung cancer (NSCLC). Staging the lymph nodes of NSCLC patients is extremely important because different stages require different treatments. 18F-2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography / computed tomography (PET/CT) is the gold standard for lymph node metastasis staging in NSCLC, but the accuracy of lymph node staging on FDG-PET/CT still needs improvement. In addition to traditional FDG-PET/CT image parameters such as the standardized uptake value (SUV), many other parameters are available from FDG-PET/CT images, for example, the lymphatic drainage pathway. Texture analysis, which distinguishes subtle differences, can also help define lymph node staging. To achieve better accuracy in lymph node metastasis diagnosis for NSCLC patients on FDG-PET/CT, this research developed a computer-aided diagnosis (CAD) system to improve diagnostic efficiency, which achieved 88.056% accuracy.
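The texture analysis mentioned above is commonly based on gray-level co-occurrence matrices (GLCMs). The sketch below shows a GLCM and one derived feature (contrast) as a generic example; the abstract does not specify which texture features the CAD system actually uses, so the quantization level and offset here are assumptions.

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for a single pixel offset."""
    # Quantize intensities to `levels` gray levels.
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                              # neighbors at the offset
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)      # count co-occurring pairs
    return m / m.sum()                            # normalize to probabilities

# Toy image: a smooth horizontal ramp, so co-occurring levels differ by 1.
img = np.tile(np.arange(8), (8, 1))
m = glcm(img)
contrast = sum(m[i, j] * (i - j) ** 2 for i in range(8) for j in range(8))
```

Features such as contrast, energy, and homogeneity computed from the GLCM of a lymph node region could then serve as inputs to a staging classifier.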
In this study, a new computer-aided system was proposed to automatically reconstruct the spine model. Bi-planar EOS X-ray imaging was adopted as the scanning technology; it simultaneously captures bi-planar X-ray images by slot-scanning the whole body at ultra-low radiation doses. High-quality, high-contrast anteroposterior (AP) and lateral (LAT) X-ray images are acquired during scanning, and these two radiographs enable a precise three-dimensional reconstruction of the vertebrae, pelvis, and other parts of the skeletal system. To overcome the time-consuming spine reconstruction of the EOS system, a generative adversarial network (GAN), which consists of a generator and a discriminator and is trained with an unsupervised learning approach, was applied to reconstruct the entire spine model. GAN models have already been adopted for transforming 2D images into 3D scenes. Our approach therefore represents a potential alternative for EOS reconstruction while maintaining clinically acceptable diagnostic accuracy.
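The generator/discriminator pairing can be sketched structurally as below. This is a toy forward-pass sketch only, not the network used in the study: the single linear layers, the feature dimension, and the choice of 17 vertebral landmark points are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class Generator:
    """Maps features from the two EOS radiographs (AP + LAT) to 3D
    landmark coordinates. A single linear layer stands in for the
    real network; all dimensions here are toy assumptions."""
    def __init__(self, in_dim=32, n_landmarks=17):
        self.n = n_landmarks
        self.w = rng.normal(0.0, 0.1, (in_dim, n_landmarks * 3))

    def __call__(self, x):
        return (x @ self.w).reshape(-1, self.n, 3)

class Discriminator:
    """Scores a reconstructed spine model as real vs. generated."""
    def __init__(self, n_landmarks=17):
        self.w = rng.normal(0.0, 0.1, (n_landmarks * 3, 1))

    def __call__(self, model3d):
        logits = model3d.reshape(model3d.shape[0], -1) @ self.w
        return 1.0 / (1.0 + np.exp(-logits))  # probability "real"

g, d = Generator(), Discriminator()
features = rng.normal(size=(4, 32))  # batch of 4 bi-planar feature vectors
spine = g(features)                  # (4, 17, 3): 3D landmarks per subject
score = d(spine)
```

During adversarial training the discriminator's score would drive the generator toward spine models indistinguishable from reference reconstructions; that training loop is omitted here.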
Current screening mammography results in a high recall rate, and distinguishing BI-RADS 3 from BI-RADS 4 is a challenge for radiologists. To support radiologists' diagnoses, recent research on CAD systems has shown that deep learning methods can significantly improve lesion detection, segmentation, and classification. However, there is not enough evidence that deep learning models can reduce the high recall rate, because few studies report performance on BI-RADS 3 and BI-RADS 4 cases, and few extend current models to combine craniocaudal (CC) and mediolateral oblique (MLO) images in a single prediction. We therefore proposed convolutional neural networks to classify breast cancer. Our model can predict images at four input sizes, and we extended it to consider CC and MLO images in a single prediction. To validate our models, we split the data by patient rather than by image. Our training set comprised 4255 images, and the test set contained 355 images proven by biopsy and callback. Human experts achieved an overall accuracy of 65.3%, while our model achieved a better accuracy of 79.6%. On BI-RADS 3 and 4 cases, human experts achieved an accuracy of 54.1%, whereas our model maintained a high accuracy of 75.7%. When CC and MLO images were combined in a single prediction, we achieved an AUC of 0.86.
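Combining the CC and MLO views into a single prediction can be done with late fusion, as sketched below. The abstract does not specify the paper's fusion mechanism, so the probability-averaging scheme here is an illustrative assumption.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_views(logits_cc, logits_mlo):
    """Late fusion of per-view CNN outputs: average the class
    probabilities from the CC and MLO views into one prediction.
    An illustrative assumption, not the paper's actual mechanism."""
    return 0.5 * (softmax(logits_cc) + softmax(logits_mlo))

# Toy per-view CNN logits for two classes (benign, malignant):
# the CC view leans benign, the MLO view leans malignant.
p = fuse_views(np.array([[2.0, 0.5]]), np.array([[0.2, 1.5]]))
pred = int(p.argmax())
```

Fusing after the per-view softmax lets each view's network be trained independently while the combined probability drives the single per-case decision the abstract describes.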