A fast, automatic method for segmenting the liver from contrast-enhanced CT (CTCE) datasets, using machine learning and min-cuts on a sparse graph, is proposed. The method first localizes the liver by estimating its centroid with a machine-learnt model whose features capture global contextual information. N individual rapid segmentations are then carried out by running a min-cut on a sparse 3D rectilinear graph placed at the estimated liver centroid with fractional offsets. Edges of the graph are assigned a cost that is a function of a conditional probability, predicted by a second machine-learnt model that encodes relative location along with local context; the costs represent the likelihood of an edge crossing the liver boundary. Finally, a 3D ensemble of the N low-resolution, high-variance sparse segmentations yields a single high-resolution, low-variance semantic segmentation. The proposed method is tested on three publicly available challenge databases (SLIVER07, 3Dircadb1 and Anatomy3) with M-fold cross-validation. On the most popular database, SLIVER07, consisting of 20 datasets, we obtained a mean Dice score of 0.961 with 4-fold cross-validation and an average run time of 6.22 s on commodity hardware (Intel 3.6 GHz dual core, no GPU). On a combined database of 60 datasets from all three, we obtained a mean Dice score of 0.934 with 6-fold cross-validation.
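The ensembling step lends itself to a compact illustration. The sketch below is not the authors' implementation; the function name, the nearest-neighbour upsampling, and the stride handling are assumptions. It fuses N low-resolution binary masks, each produced by a min-cut on a sparse grid placed at a fractional voxel offset, into one high-resolution majority-vote segmentation.

```python
import numpy as np

def ensemble_sparse_segmentations(masks, offsets, hi_shape, stride, thresh=0.5):
    """Fuse N low-res binary masks, each from a sparse graph placed at a
    fractional offset, into one high-resolution segmentation.

    masks   : list of N low-res boolean arrays (one per graph placement)
    offsets : list of N (dz, dy, dx) voxel offsets of each sparse grid
    hi_shape: shape of the full-resolution volume
    stride  : spacing (in voxels) between sparse graph nodes
    """
    votes = np.zeros(hi_shape, dtype=np.float32)
    hits = np.zeros(hi_shape, dtype=np.float32)
    for mask, (dz, dy, dx) in zip(masks, offsets):
        # Nearest-neighbour upsample: each sparse node labels the
        # stride-sized block of high-res voxels around it.
        up = np.kron(mask.astype(np.float32), np.ones((stride,) * 3))
        zs, ys, xs = (slice(d, d + s) for d, s in zip((dz, dy, dx), up.shape))
        region = votes[zs, ys, xs]  # view; clipped at volume boundaries
        region += up[:region.shape[0], :region.shape[1], :region.shape[2]]
        hits[zs, ys, xs] += 1
    with np.errstate(invalid="ignore"):
        prob = np.where(hits > 0, votes / hits, 0.0)
    return prob >= thresh  # majority vote -> low-variance segmentation
```

Averaging across the fractional offsets is what converts many noisy, coarse cuts into a single smooth boundary at full resolution.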
CT and MR perfusion-weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillation-limited singular value decomposition (oSVD) or frequency-domain deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI data. In this work, three faster methods are proposed. The first is a direct, model-based, crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time, and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration-time curves (CTC). The second is a fast, accurate deconvolution method we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique, based on Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
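As a rough illustration of frequency-domain deconvolution for perfusion, the sketch below recovers the scaled residue function and derives the usual flow quantities. It uses a generic Wiener/Tikhonov-style filter, not the paper's AFF or ASSF filters; the function name and the `reg` parameter are assumptions.

```python
import numpy as np

def fourier_deconvolve(ctc, aif, dt, reg=0.1):
    """Regularized frequency-domain deconvolution of a tissue
    concentration-time curve (ctc) by an arterial input function (aif).

    Returns k(t) = CBF * R(t); blood flow is max(k), blood volume is
    trapz(ctc)/trapz(aif), and mean transit time is CBV / CBF.
    """
    n = 2 * len(ctc)                      # zero-pad to reduce wrap-around
    C = np.fft.rfft(ctc, n)
    A = np.fft.rfft(aif, n)
    # Wiener-like filter: suppress frequencies where |A| is small.
    eps = reg * np.max(np.abs(A))
    K = C * np.conj(A) / (np.abs(A) ** 2 + eps ** 2)
    k = np.fft.irfft(K, n)[:len(ctc)] / dt
    cbf = k.max()
    cbv = np.trapz(ctc, dx=dt) / np.trapz(aif, dx=dt)
    mtt = cbv / cbf if cbf > 0 else np.nan
    return k, cbf, cbv, mtt
```

The analytical filters proposed in the paper replace the ad hoc `reg` choice with closed-form spectral filtering, which is where the speed and accuracy gains over oSVD and FDD come from.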
To address the error introduced by computed tomography (CT) scanners when assessing volume and unidimensional measurements of solid tumors, we scanned a precision-manufactured pocket phantom simultaneously with patients enrolled in a lung cancer clinical trial. Dedicated software quantified bias and random error in the X, Y, and Z dimensions of a Teflon sphere, and also quantified Response Evaluation Criteria in Solid Tumors (RECIST) measurements and volume measurements using both constant and adaptive thresholding. We found that underestimation bias was essentially the same for the X, Y, and Z dimensions using constant thresholding, with similar values for adaptive thresholding. The random error of these length measurements, as measured by the standard deviation and coefficient of variation, was 0.10 mm (CV 0.65), 0.11 mm (CV 0.71), and 0.59 mm (CV 3.75) for constant thresholding and 0.08 mm (CV 0.51), 0.09 mm (CV 0.56), and 0.58 mm (CV 3.68) for adaptive thresholding, respectively. For random error, however, Z lengths had at least a fivefold higher standard deviation and coefficient of variation than X and Y. Observed Z-dimension error was especially high for some 8- and 16-slice CT models. Error in CT image formation, particularly for models with few detector rows, may be large enough to be misinterpreted as representing either treatment response or disease progression.
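The reported statistics follow from standard definitions; the minimal sketch below (function and argument names are hypothetical) computes bias, standard deviation, and coefficient of variation from repeated measurements of one phantom dimension.

```python
import numpy as np

def bias_and_random_error(measured, truth):
    """Bias and random error of repeated length measurements (mm).

    measured: array of repeated measurements of one phantom dimension
    truth   : known manufactured length of the Teflon sphere
    """
    measured = np.asarray(measured, dtype=float)
    bias = measured.mean() - truth          # negative => underestimation
    sd = measured.std(ddof=1)               # random error (mm)
    cv = 100.0 * sd / measured.mean()       # coefficient of variation (%)
    return bias, sd, cv
```

If the parenthesized CVs are read as percentages, the reported X values (SD 0.10 mm, CV 0.65) imply a mean measured length of roughly 0.10 / 0.0065 ≈ 15 mm, consistent with a small precision sphere.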
Accurate needle placement is a common need in the medical environment. While the use of small-diameter needles for clinical applications such as biopsy, anesthesia, and cholangiography is preferred over the use of larger-diameter needles, precise placement can often be challenging, particularly for needles with a bevel tip. This is due to deflection of the needle shaft caused by the asymmetry of the needle tip. Factors such as the needle shaft material, bevel design, and properties of the penetrated tissue determine the nature and extent to which a needle bends. In recent years, several models have been developed to characterize needle bending, providing a method of determining the trajectory of the needle through tissue. This paper explores the use of a nonholonomic model to characterize needle bending while providing the added capabilities of path planning, obstacle avoidance, and path correction for lung biopsy procedures. We used a ballistic gel phantom and a robotic needle placement device to experimentally assess the accuracy of simulated needle paths based on the nonholonomic model. Two sets of experiments were conducted: one for a single bend profile of the needle, and a second set for double bending of the needle. The tests yielded an average error between the simulated path and the actual path of 0.8 mm for the single-bend profile and 0.9 mm for the double-bend profile over a 110 mm insertion distance. The maximum error was 7.4 mm and 6.9 mm for the single- and double-bend profile tests, respectively. The nonholonomic model is therefore shown to provide a reasonable prediction of needle bending.
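Nonholonomic bevel-tip models are commonly formulated as a unicycle/bicycle kinematic constraint: the tip travels along an arc of roughly constant curvature, and rotating the shaft 180 degrees flips the bend direction. The planar sketch below is a minimal illustration under that assumption; the curvature value and step size are illustrative, not fitted to the paper's gel experiments.

```python
import numpy as np

def simulate_needle_path(kappa, steps, flip_at=None, ds=1.0):
    """Planar nonholonomic (unicycle) model of a bevel-tip needle.

    The tip advances along its heading and turns at a constant curvature
    kappa (1/mm) set by tip asymmetry and tissue properties. Rotating the
    needle 180 deg about its shaft at insertion step `flip_at` flips the
    bend direction, producing a double-bend path.
    """
    x, z, theta = 0.0, 0.0, 0.0          # lateral position, depth, heading
    sign = 1.0
    path = [(x, z)]
    for i in range(steps):
        if flip_at is not None and i == flip_at:
            sign = -sign                  # bevel rotated: curvature flips
        theta += sign * kappa * ds        # nonholonomic turning constraint
        x += ds * np.sin(theta)
        z += ds * np.cos(theta)
        path.append((x, z))
    return np.array(path)

# Single-bend vs double-bend profiles over a 110 mm insertion distance
single = simulate_needle_path(kappa=0.002, steps=110)
double = simulate_needle_path(kappa=0.002, steps=110, flip_at=55)
```

Path planning and correction then amount to choosing insertion depth and shaft rotations so the simulated arc reaches the target while skirting obstacles.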
Endorectal MRI provides detailed images of the prostate anatomy and is useful for radiation treatment planning. The endorectal probe (which is often removed during radiotherapy) introduces a large prostate deformation, thereby posing a challenge for treatment planning. The probe-in MRI must be deformably registered to the planning MRI prior to radiation treatment. The goal of this paper is to evaluate a deformable registration workflow and quantify its accuracy and suitability for radiation treatment planning. We use three metrics to evaluate the accuracy of the prostate/tumor segmentations from the registered volume against the gold-standard prostate/tumor segmentations: (a) Dice similarity coefficient, (b) Hausdorff distance, and (c) mean surface distance. These metrics quantify the acceptability of the registration within the prescribed treatment margin.
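For concreteness, all three metrics can be computed directly from binary masks and surface point sets; a minimal sketch, with hypothetical helper names, follows.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_metrics(pts_a, pts_b):
    """Hausdorff and mean surface distance between two surface point
    sets (N x 3 arrays of mm coordinates from the segmentations)."""
    h = max(directed_hausdorff(pts_a, pts_b)[0],
            directed_hausdorff(pts_b, pts_a)[0])
    # Mean of closest-point distances, symmetrized over both surfaces.
    d_ab = np.array([np.min(np.linalg.norm(pts_b - p, axis=1)) for p in pts_a])
    d_ba = np.array([np.min(np.linalg.norm(pts_a - p, axis=1)) for p in pts_b])
    msd = 0.5 * (d_ab.mean() + d_ba.mean())
    return h, msd
```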
We evaluate and adapt existing methods, both manual and automated, to accurately track, visualize, and quantify the deformations in the prostate geometry between the endorectal MRI and the treatment planning image. An important aspect of this work is the integration of interactive guidance into the registration process: the approach provides users with the option of performing interactive manual alignment followed by deformable registration.
We are investigating the feasibility of a computer-aided detection (CAD) system to assist radiologists in diagnosing coronary artery disease in ECG-gated cardiac multi-detector CT scans that contain calcified plaque. Coronary artery stenosis analysis is challenging when calcified plaque or the iodinated blood pool hides viable lumen. The research described herein provides an improved presentation to the radiologist by removing obscuring calcified plaque and blood pool. The algorithm derives a Gaussian estimate of the point spread function (PSF) of the scanner, which is responsible for plaque blooming, by fitting measured CTA image profiles. An initial estimate of the extent of calcified plaque is obtained from the image evidence using a simple threshold. The Gaussian PSF estimate is then convolved with the initial plaque estimate to obtain an estimate of the extent of the blooming artifact, and this plaque-blooming image is subtracted from the CT image to obtain an image largely free of obscuring plaque. In a separate step, the obscuring blood pool is suppressed using morphological operations and adaptive region growing. After processing by our algorithm, we are able to project the segmented plaque-free lumen to form synthetic angiograms free from obstruction. We can also analyze the coronary arteries with vessel tracking and centerline extraction to produce cross-sectional images for measuring lumen stenosis. As an additional aid to radiologists, we also produce plots of calcified plaque and lumen cross-sectional area along selected blood vessels. The method was validated using digital phantoms and actual patient data, including, in one case, validation against the results of a catheter angiogram.
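The deblooming pipeline (threshold, convolve with the PSF estimate, subtract) can be sketched compactly. The snippet below is an illustration only: the HU thresholds and the single isotropic sigma are assumptions, whereas the paper fits the PSF to measured CTA image profiles.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_plaque_blooming(ct, sigma, plaque_hu=700.0, blood_hu=300.0):
    """Subtract an estimate of calcified-plaque blooming from a CTA image.

    sigma     : Gaussian PSF width (voxels), fit from measured edge profiles
    plaque_hu : threshold (HU) for the initial calcified-plaque estimate
    blood_hu  : approximate intensity of the enhanced lumen
    """
    # Step 1: initial plaque extent from a simple threshold (excess over
    # the blood-pool level, so subtraction leaves lumen intensity intact).
    plaque = np.where(ct >= plaque_hu, ct - blood_hu, 0.0)
    # Step 2: convolve the plaque estimate with the PSF -> blooming image.
    blooming = gaussian_filter(plaque, sigma)
    # Step 3: subtract the blooming image to reveal the obscured lumen.
    return ct - blooming
```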
There is growing interest in computer-aided diagnosis applications, including automatic detection of lung nodules from multislice computed tomography (CT). However, the increase in the number and size of CT datasets raises the cost of data storage and transmission, becoming an obstacle to routine clinical use and hindering widespread adoption of computerized applications. We investigated the effects of 3D lossy region-based JPEG2000 compression on the results of an automatic lung nodule detection system. Because the algorithm detects the lungs within the datasets, we used this lung segmentation to define a region of interest (ROI) where the compression should be of higher fidelity. We tested four methods of 3D compression: (1) default compression of the whole image; (2) default compression of the segmented lungs with all non-lung regions masked out; (3) ROI-based compression as specified in the JPEG2000 standard; and (4) compression where voxels in the ROI are weighted to receive emphasis in the encoding. We tested seven compression ratios per method: 1, 4, 6, 8, 10, 20, and 30 to 1. We then evaluated our experimental CAD algorithm on 10 patients with 67 documented nodules initially identified on the decompressed data. Sensitivities and false-positive rates were compared across the compression methods and ratios. We found that region-based compression generally performs better than default compression. Sensitivity with default compression decreased from 85% at no compression to 61% at 30:1 compression, a drop of 24 percentage points, whereas the masked compression method lost only 13.5% sensitivity at maximum compression. At compression levels up to 10:1, all three region-based compression methods had decreases in sensitivity of 7.5% or less. Detection of small nodules (< 4 mm in diameter) was more affected by compression than detection of large nodules, and sensitivity to calcified nodules was less affected by compression than sensitivity to non-calcified nodules.
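Method 2 (masked compression) is straightforward to sketch. The snippet below zeroes non-lung voxels and encodes each slice as JPEG2000 via Pillow at a fixed target ratio; it is a slice-wise, 8-bit simplification of the study's 3D compression, and the function name and windowing are assumptions.

```python
import numpy as np
from PIL import Image

def compress_masked_slices(volume, lung_mask, out_prefix, ratio=10):
    """Masked JPEG2000 compression, sketched slice by slice.

    volume   : 3D CT array, here already windowed/scaled to uint8
    lung_mask: 3D boolean array from the CAD lung segmentation
    ratio    : target compression ratio, e.g. 10 -> 10:1
    """
    masked = np.where(lung_mask, volume, 0).astype(np.uint8)
    for i, sl in enumerate(masked):
        Image.fromarray(sl).save(
            f"{out_prefix}_{i:04d}.jp2",
            quality_mode="rates",        # interpret layers as ratios
            quality_layers=[ratio],
            irreversible=True,           # lossy 9/7 wavelet
        )
```

Zeroing the background makes it nearly free to encode, so nearly all of the bit budget goes to the lungs, which is why sensitivity degrades more slowly than with default whole-image compression.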
One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. Volume data is usually massive and is compressed to make effective use of available network bandwidth. In our scenario, these compressed datasets are stored on a central data server and are transferred progressively to one or more clients over a network. In this paper, we study schemes that enable progressive delivery of medical volume data for visualization using JPEG2000. We then present a scheme for progressive encoding based on scene content, enabling a progression driven by tissues or regions of interest in 3D medical imagery. The resulting compressed file is organized such that the tissues of interest appear in earlier segments of the bitstream. Hence, a compliant decoder that stops receiving data at a given instant is still able to render the tissue of interest at a better visual quality.
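The bitstream reorganization can be caricatured as a priority sort over per-tissue compressed contributions; the toy sketch below (all names hypothetical) conveys only the ordering idea, not the actual JPEG2000 layer mechanics.

```python
def order_stream_segments(segments, priority):
    """Order compressed-stream segments so tissue-of-interest data comes
    first. `segments` is a list of (tissue_label, bytes) contributions
    (e.g., per code-block); `priority` maps labels to rank, lower = earlier.
    """
    return [data for _, data in
            sorted(segments, key=lambda s: priority.get(s[0], 1_000_000))]

# Hypothetical usage: liver packets stream before background packets.
stream = order_stream_segments(
    [("background", b"..."), ("liver", b"..."), ("bone", b"...")],
    priority={"liver": 0, "bone": 1, "background": 2},
)
```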
We present a scheme for compressed-domain interactive rendering of large volume datasets over distributed environments. The scheme exploits the distortion scalability and multi-resolution properties offered by JPEG2000 to provide a unified framework for interactive rendering over low-bandwidth networks. The interactive client is given scalability in resolution and position, along with progressive improvement in quality. The server exploits the spatial locality offered by the DWT and packet indexing information to transmit, insofar as possible, only the compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI. Contextual background information can also be made available, with quality fading away from the VOI. The scheme is ideally suited to client-server setups with low-bandwidth constraints, where the server maintains the compressed volume data to be browsed by a client with low processing power and/or memory. Rendering can be performed once the client judges that the desired quality threshold has been attained. We investigate the effects of code-block size on compression ratio, PSNR, decoding time, and data transmission to arrive at an optimal code-block size for typical VOI decoding scenarios.
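The DWT's spatial locality makes the server-side lookup simple in principle: a VOI at full resolution maps to a small set of code-blocks at each decomposition level. The sketch below is a simplified illustration (it ignores the wavelet filter support, which in practice dilates the required region); the names and the default code-block size are assumptions.

```python
def codeblocks_for_voi(voi, level, cb_size=64):
    """Identify which code-blocks intersect a client's volume of interest.

    voi    : ((x0, x1), (y0, y1)) full-resolution sample bounds per slice
    level  : DWT decomposition level (coordinates shrink by 2**level)
    cb_size: code-block width/height in samples; a key tuning parameter,
             whose effect on ratio, PSNR and decode time is studied here
    """
    ranges = []
    for lo, hi in voi:
        lo_r = lo >> level                         # floor at this level
        hi_r = (hi + (1 << level) - 1) >> level    # ceil at this level
        ranges.append(range(lo_r // cb_size, hi_r // cb_size + 1))
    x_blocks, y_blocks = ranges
    return [(bx, by) for bx in x_blocks for by in y_blocks]
```

Only these blocks (plus a filter-support margin) need to be streamed to refine the VOI, which is what keeps interactive browsing feasible on low-bandwidth links.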