KEYWORDS: Ultrasonography, High dynamic range imaging, Prostate cancer, Prostate, Image segmentation, 3D modeling, 3D image processing, Silver, Binary data
A deep-learning model based on the U-Net architecture was developed to segment multiple needles in 3D transrectal ultrasound (TRUS) images. Attention gates were adopted in our model to improve prediction of the small needle points, and the spatial continuity of needles was encoded into the model with total variation (TV) regularization. The combined network was trained on 3D TRUS patches with a deep supervision strategy, using binary needle annotations derived from simulation CTs as ground truth. The trained network was then used to localize and segment needles in the TRUS images of new patients during high-dose-rate (HDR) prostate brachytherapy. Needle shaft and tip errors against the CT-based ground truth were used to evaluate our method and competing methods. Our method detected 96% of the 339 needles from 23 HDR prostate brachytherapy patients, with a shaft error of 0.29±0.24 mm and a tip error of 0.442±0.831 mm. For shaft localization, 96% of needles were localized with less than 0.8 mm error (the needle diameter is 1.67 mm); for tip localization, 75% of needles had 0 mm error and 21% had 2 mm error (the TRUS slice thickness is 2 mm). No significant difference in tip localization was observed between our results and the ground truth (p = 0.83). Compared with U-Net and a deeply supervised attention U-Net, the proposed method delivers a significant improvement in both shaft and tip errors. To the best of our knowledge, this is the first attempt at multi-needle localization in prostate brachytherapy. The 3D rendering of the needles could help clinicians evaluate needle placements, and it paves the way for real-time dose assessment tools that can further improve the quality and outcome of prostate HDR brachytherapy.
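As an illustration of the TV-regularized training objective described above, the following PyTorch sketch combines a soft Dice term with an anisotropic 3D total-variation term; the function names and the weight lambda_tv are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: Dice + total-variation (TV) regularized loss for 3D needle
# segmentation, assuming a PyTorch model that outputs per-voxel probabilities.
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss; pred and target are (B, 1, D, H, W) probability maps."""
    inter = (pred * target).sum(dim=(2, 3, 4))
    union = pred.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def tv_loss(pred):
    """Anisotropic 3D total variation: penalizes jumps between neighbouring
    voxels, encouraging spatially continuous (needle-like) predictions."""
    dz = (pred[:, :, 1:, :, :] - pred[:, :, :-1, :, :]).abs().mean()
    dy = (pred[:, :, :, 1:, :] - pred[:, :, :, :-1, :]).abs().mean()
    dx = (pred[:, :, :, :, 1:] - pred[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx

def segmentation_loss(pred, target, lambda_tv=0.1):
    """Combined objective: Dice term for overlap, TV term for needle continuity."""
    return dice_loss(pred, target) + lambda_tv * tv_loss(pred)
```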
We propose an approach based on a weakly supervised method for MR-TRUS image registration. Inspired by the viscous fluid physical model, we made the first attempt at combining a convolutional neural network (CNN) with a long short-term memory (LSTM) network to perform deep learning-based dense deformation field prediction. By integrating a convolutional long short-term memory (ConvLSTM) network with the weakly supervised approach, we achieved accurate results in terms of Dice similarity coefficient (DSC) and target registration error (TRE) without using conventional intensity-based image similarity measures. Thirty-six sets of patient data were used in the study. Experimental results showed that our proposed ConvLSTM network produced a mean TRE of 2.85±1.72 mm and a mean DSC of 0.89.
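The ConvLSTM building block named above can be sketched as follows: a single cell that keeps a convolutional hidden state while stepping through a sequence of 2D feature maps. This is a minimal PyTorch illustration of the mechanism, not the paper's exact registration network.

```python
# Minimal ConvLSTM cell sketch (assumed PyTorch); gate layout follows the
# standard ConvLSTM formulation, not necessarily the authors' configuration.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        # One convolution produces the input, forget, output, and candidate gates.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)
        self.hidden_ch = hidden_ch

    def forward(self, x, state=None):
        b, _, h, w = x.shape
        if state is None:
            zeros = x.new_zeros(b, self.hidden_ch, h, w)
            state = (zeros, zeros)
        h_prev, c_prev = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h_prev], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c_prev + i * torch.tanh(g)   # update the convolutional cell state
        h_new = o * torch.tanh(c)            # hidden state carries inter-slice context
        return h_new, (h_new, c)
```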
KEYWORDS: Associative arrays, 3D image processing, Prostate, Ultrasonography, Cancer, Visualization, 3D acquisition, Detection and tracking algorithms, Reconstruction algorithms, Prostate cancer
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step in treatment planning for US-guided brachytherapy. However, most current studies concentrate on single-needle detection using only a small number of images containing a needle, ignoring the much larger database of US images without needles. In this paper, we propose a multi-needle detection workflow that treats the images without needles as auxiliary data. Specifically, we train position-specific dictionaries on 3D overlapping patches of the auxiliary images, using an enhanced sparse dictionary learning method that integrates the spatial continuity of 3D US, dubbed order-graph regularized dictionary learning (ORDL). Using the learned dictionaries, target images are reconstructed to obtain residual pixels, which are then clustered in every slice to determine the centers. From the obtained centers, regions of interest (ROIs) are constructed by seeking cylinders. Finally, we detect the needle within each ROI using the random sample consensus (RANSAC) algorithm and locate each tip by finding the sharp intensity drop along the detected axis. Extensive experiments were conducted on a prostate dataset of 70/21 patients without/with needles. Visualization and quantitative results show the effectiveness of the proposed workflow: our approach correctly detects 95% of needles with a tip location error of 1.01 mm on the prostate dataset. This technique could provide accurate needle detection for US-guided high-dose-rate prostate brachytherapy and facilitate the clinical workflow.
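The per-ROI steps at the end of this workflow (RANSAC line fitting and tip localization by intensity drop) can be sketched in NumPy as below; the iteration count, inlier tolerance, and drop threshold are hypothetical values rather than those used in the study.

```python
# Hedged sketch of per-ROI needle-axis fitting and tip finding.
import numpy as np

def ransac_line_3d(points, n_iter=500, inlier_tol=1.0, rng=None):
    """Fit a 3D line (anchor p0, unit direction d) to an (N, 3) array of
    candidate needle voxels with a simple RANSAC loop."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = b - a
        if np.linalg.norm(d) < 1e-6:
            continue
        d = d / np.linalg.norm(d)
        # Perpendicular distance of every point to the candidate line.
        dist = np.linalg.norm(np.cross(points - a, d), axis=1)
        inliers = int((dist < inlier_tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, d)
    return best_model

def find_tip(volume, p0, direction, step=1.0, drop_ratio=0.5):
    """Walk along the fitted axis (p0, direction as NumPy arrays); report the
    first point where intensity falls below drop_ratio of the running mean,
    a simple stand-in for the 'sharp intensity drop' criterion."""
    profile, pos = [], p0.astype(float)
    while True:
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, volume.shape)):
            break
        profile.append(volume[idx])
        if len(profile) > 5 and profile[-1] < drop_ratio * np.mean(profile[:-1]):
            return pos
        pos = pos + step * direction
    return pos
```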
In this study, we propose a new deep learning (DL) framework, a combination of fully convolutional and recurrent neural networks, and integrate it with a weakly supervised method for 3D MRI-transrectal ultrasound (TRUS) image registration. MR and TRUS images are often highly anisotropic in their dimensions; for instance, in 3D US images the voxel size in depth is often 5-10 times larger than within each image slice. This high anisotropy makes a common 3D isotropic kernel generalize poorly, resulting in unsatisfactory registration accuracy. The key idea of the paper is to explicitly leverage 3D image anisotropy by exploiting the intra-slice context with a fully convolutional network (FCN) and the inter-slice context with a recurrent neural network (RNN). After the 3D hierarchical features in MRI and TRUS have been extracted, we generate the dense deformation field by aligning corresponding prostate labels for individual image pairs. Experimental results showed that our proposed FCN-RNN network produces a mean target registration error (TRE) of 2.77±1.40 mm and a mean Dice similarity coefficient (DSC) of 0.9.
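A minimal sketch of the weakly supervised objective described above, assuming a PyTorch implementation: the predicted dense deformation field warps the moving prostate label with grid_sample, and a Dice loss against the fixed TRUS label supervises training. The helper names and the displacement convention (x, y, z channel order, voxel units) are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def warp_with_field(moving, field):
    """moving: (B, 1, D, H, W) label; field: (B, 3, D, H, W) voxel displacements,
    channels assumed in (x, y, z) order."""
    b, _, d, h, w = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates (x, y, z order).
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xx, yy, zz), dim=-1).to(moving)
    grid = grid.unsqueeze(0).expand(b, -1, -1, -1, -1)
    # Convert voxel displacements to normalized offsets and add them to the grid.
    disp = field.permute(0, 2, 3, 4, 1)
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1),
                          2.0 / max(d - 1, 1)]).to(moving)
    return F.grid_sample(moving, grid + disp * scale, align_corners=True)

def label_dice_loss(warped, fixed, eps=1e-6):
    """Weak supervision: Dice between the warped MR label and the fixed TRUS label."""
    inter = (warped * fixed).sum()
    return 1.0 - (2.0 * inter + eps) / (warped.sum() + fixed.sum() + eps)
```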