KEYWORDS: Image segmentation, Image processing algorithms and systems, Lung, Medical imaging, Computed tomography, Magnetic resonance imaging, 3D image processing, Image processing, 3D metrology, Brain
Radiologists are required to read thousands of patient images every day, and any tool that improves their workflow and helps them make efficient, accurate measurements is of great value. Such an interactive tool must be intuitive to use, and we have found that users are accustomed to clicking on the contour of the object to be segmented and expect the final segmentation to pass through these points. The tool must also be fast enough to provide real-time interactive feedback. To meet these needs, we present a segmentation workflow that enables fast, intuitive interactive segmentation of 2D and 3D objects. Given simple user clicks on the contour of an object in one 2D view, the algorithm generates foreground and background seeds and computes foreground and background distributions that are used to segment the object in 2D. It then propagates the information to the two orthogonal planes in a 3D volume and segments all three 2D views. The segmentation is automatically updated as the user adds points along the contour, with the algorithm re-run on the full set of points. Based on the segmented objects in these three views, the algorithm then computes a 3D segmentation of the object. This process requires only limited user interaction to segment complex shapes and significantly improves the user's workflow.
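As a rough illustration of the seed-generation step, the sketch below rasterizes the clicked contour into a polygon, takes foreground and background seeds from bands inside and outside it, and labels each pixel by comparing foreground and background intensity histograms. The function name, band width, and histogram classifier are our own illustrative choices; the abstract does not specify the underlying 2D segmentation core.

```python
import numpy as np
from scipy import ndimage
from skimage.draw import polygon

def segment_from_clicks(image, clicks, band=5, bins=64):
    """Sketch: 2-D segmentation from ordered user clicks on a contour.

    clicks: (N, 2) array of (row, col) boundary points, assumed to
    enclose the object. Illustrative stand-in for the paper's method.
    """
    # Rasterize the clicked contour as a filled polygon.
    clicks = np.asarray(clicks)
    rr, cc = polygon(clicks[:, 0], clicks[:, 1], shape=image.shape)
    hull = np.zeros(image.shape, dtype=bool)
    hull[rr, cc] = True

    # Seeds: shrink/grow the polygon so the uncertain boundary band is excluded.
    fg = ndimage.binary_erosion(hull, iterations=band)
    bg = ~ndimage.binary_dilation(hull, iterations=band)

    # Foreground/background intensity distributions from the seed regions.
    rng = (float(image.min()), float(image.max()))
    h_fg, edges = np.histogram(image[fg], bins=bins, range=rng, density=True)
    h_bg, _ = np.histogram(image[bg], bins=bins, range=rng, density=True)

    # Label each pixel by which distribution explains its intensity better.
    idx = np.clip(np.digitize(image, edges) - 1, 0, bins - 1)
    return h_fg[idx] > h_bg[idx]
```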
KEYWORDS: Wavelets, Image segmentation, Convolution, Stationary wavelet transform, Image processing algorithms and systems, Confocal microscopy, 3D image processing, 3D acquisition, Image processing, Chemical elements
Wavelet approaches have proven effective in many segmentation applications and in particular in the segmentation of cells, which are blob-like in shape. We build upon an established wavelet segmentation algorithm and demonstrate how to overcome some of its limitations based on the theoretical derivation of the compounding process of iterative convolutions. We demonstrate that the wavelet decomposition can be computed for any desired level directly without iterative decompositions that require additional computation and memory. This is especially important when dealing with large 3D volumes that consume significant amounts of memory and require intense computation. Our approach is generalized to automatically handle both 2D and 3D and also implicitly handles the anisotropic pixel size inherent in such datasets. Our results demonstrate a 28X improvement in speed and 8X improvement in memory efficiency for standard size 3D confocal image volumes without adversely affecting the accuracy.
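A minimal sketch of the direct-level idea, assuming the standard à trous (stationary wavelet transform) construction with the B3-spline kernel: the small upsampled kernels for levels 0 through k-1 are convolved together once into a single compounded kernel, so each level's coefficients need only one separable convolution of the image rather than k iterative full-image passes, and no intermediate levels need to be stored. Function names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1]) / 16.0   # B3-spline kernel of the a-trous SWT

def compound_kernel(level):
    """Single 1-D kernel equivalent to `level` iterative a-trous passes.

    At pass j the base kernel is upsampled by inserting 2**j - 1 zeros
    between taps; convolving these small kernels together once yields
    the compounded kernel for the desired level directly."""
    k = np.array([1.0])
    for j in range(level):
        up = np.zeros((len(B3) - 1) * 2**j + 1)
        up[::2**j] = B3
        k = np.convolve(k, up)
    return k

def smooth(img, level):
    """One separable convolution per axis with the compounded kernel;
    the same code handles 2-D images and 3-D volumes."""
    k = compound_kernel(level)
    for ax in range(img.ndim):
        img = convolve1d(img, k, axis=ax, mode='reflect')
    return img

def swt_detail(image, level):
    """Detail (wavelet) coefficients w_level = c_{level-1} - c_level."""
    return smooth(image, level - 1) - smooth(image, level)
```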
Fluorescence in situ hybridization (FISH) dot counting is the process of enumerating chromosomal abnormalities
in interphase cell nuclei. This process is widely used in many areas of biomedical research and diagnosis. We
present a generic and fully automatic algorithm for cell-level counting of FISH dots in 2-D fluorescent images.
Our proposed algorithm starts by segmenting cell nuclei in DAPI-stained images using a 2-D wavelet-based
segmentation algorithm. Nuclei segmentation is followed by FISH dot detection and counting, which consists
of three main steps. First, image pre-processing, in which median and top-hat filters remove image
noise, subtract the background, and enhance the contrast of the FISH dots. Second, FISH dot detection using
a multi-level h-minima transform that accounts for varying image contrast. Third, FISH dot
counting, in which clustered FISH dots are separated using a local-maxima-based method followed by
FISH dot size filtering to account for large connected components of tightly clustered dots.
To quantitatively assess the performance of our proposed FISH dot counting algorithm, automatic counting
results were compared to manual counts of 880 cells selected from 19 invasive ductal breast carcinoma samples
exhibiting varying degrees of Human Epidermal Growth Factor Receptor 2 (HER2) expression. Cell-level dot
counting accuracy was assessed using two metrics: cell classification agreement and dot-counting match. Our
automatic results gave an overall cell-by-cell classification agreement of 88% and an overall dot-count match of 81%.
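A hedged sketch of such a per-nucleus counting pipeline, built from standard scipy/scikit-image operations; the filter sizes, h levels, combination rule across levels, and size constraints are illustrative stand-ins for the paper's tuned values.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import disk, white_tophat, h_maxima

def count_fish_dots(dot_channel, nucleus_mask,
                    h_levels=(0.1, 0.2, 0.4), min_size=2, max_size=50):
    """Count FISH dots inside one segmented nucleus.

    Pipeline: median filter -> top-hat background subtraction ->
    multi-level h-extrema detection -> size filtering. All parameter
    values are illustrative, not the paper's.
    """
    img = ndimage.median_filter(dot_channel.astype(float), size=3)
    img = white_tophat(img, footprint=disk(5))  # remove slowly varying background
    img = img / (img.max() + 1e-9)

    # Multi-level h-maxima (h-minima on the inverted image is equivalent);
    # here the levels are simply OR-ed together as a permissive combination.
    peaks = np.zeros_like(img, dtype=bool)
    for h in h_levels:
        peaks |= h_maxima(img, h).astype(bool)
    peaks &= nucleus_mask

    # Size filtering: drop specks; split very large components that
    # likely contain several tightly clustered dots.
    labels, n = ndimage.label(peaks)
    count = 0
    for s in ndimage.sum(peaks, labels, index=np.arange(1, n + 1)):
        if s < min_size:
            continue
        count += int(round(s / max_size)) if s > max_size else 1
    return count
```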
We present a new model-based framework for coupled segmentation and de-noising of medical images. The
segmentation and de-noising steps are coupled through a discrete formulation of the total variation de-noising
problem in a restricted setting such that each pixel in the image has its de-noised intensity level selected from a
drastically reduced set of intensities. By creating such a reduced set of intensity levels, in which each intensity
level represent the intensity across a region to be segmented, the intensity value for each de-noised pixel will be
forced to assume a value in this limited set; by associating all pixels with the same de-noised value as a single
region, image segmentation is naturally achieved. We derive two formulations corresponding to two noise models:
additive white Gaussian and multiplicative Rayleigh. We furthermore show that the proposed framework enables
globally optimal foreground/background segmentation of images with Rayleigh-distributed noise.
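In one plausible reconstruction of the restricted-label formulation (the paper's exact parameterization may differ), the coupled problem minimizes a labeled total variation energy:

```latex
% Restricted-label discrete TV energy: each denoised value u_p is drawn
% from a small candidate set {c_1, ..., c_K}; N is the pixel-neighbor graph.
\min_{u_p \in \{c_1, \dots, c_K\}}
  \sum_{p} D(u_p, f_p) + \lambda \sum_{(p,q) \in \mathcal{N}} | u_p - u_q |

% Data terms: quadratic for additive white Gaussian noise; negative
% log-likelihood of p(f \mid u) = (f / u^2) \exp(-f^2 / 2u^2), with
% constants dropped, for the multiplicative Rayleigh model.
D_{\mathrm{Gauss}}(u, f) = (u - f)^2,
\qquad
D_{\mathrm{Rayleigh}}(u, f) = 2 \log u + \frac{f^2}{2 u^2}
```

Because the labels come from a small finite set, minimizing this energy simultaneously de-noises (each pixel snaps to a level) and segments (pixels sharing a level form a region).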
Acquisition of a clinically acceptable scan plane is a pre-requisite for ultrasonic measurement of anatomical
features from B-mode images. In obstetric ultrasound, measurement of gestational age predictors, such as
biparietal diameter and head circumference, is performed at the level of the thalami and cavum septi pellucidi.
In an accurate scan plane, the head can be modeled as an ellipse, the thalami look like a butterfly, the cavum
appears as an empty box, and the falx is a straight line along the major axis of a symmetric ellipse inclined either
parallel to or at small angles to the probe surface. Arriving at the correct probe placement on the mother's belly
to obtain an accurate scan plane is a considerable challenge, especially for a new user of ultrasound. In
this work, we present a novel automated learning-based algorithm to identify an acceptable fetal head scan plane.
We divide the problem into cranium detection and template matching to capture the composite "butterfly"
structure present inside the head, mimicking the visual cues used by an expert. The algorithm uses state-of-
the-art Active Appearance Model techniques from the image processing and computer vision literature and
ties them to the presence or absence of the inclusions within the head to automatically compute a score representing
the goodness of a scan plane. This automated technique can be potentially used to train and aid new users of
ultrasound.
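The sketch below illustrates the overall scoring idea with off-the-shelf OpenCV building blocks: an ellipse fit to the cranium outline plus normalized cross-correlation against a "butterfly" template, combined into a single plane-goodness score. It is a much simpler stand-in for the paper's Active Appearance Model pipeline; all weights and thresholds are illustrative. Assumes OpenCV 4 and a uint8 grayscale frame.

```python
import cv2
import numpy as np

def scan_plane_score(image, butterfly_template, w_cranium=0.5, w_butterfly=0.5):
    """Heuristic goodness score for a candidate fetal head scan plane."""
    # Cranium detection: Otsu threshold, then the largest bright contour.
    _, bw = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    head = max(contours, key=cv2.contourArea)
    if len(head) < 5:                      # cv2.fitEllipse needs >= 5 points
        return 0.0

    # Ellipse model: score by overlap between the contour region and its
    # best-fit ellipse (1.0 = perfectly elliptical cranium).
    ellipse = cv2.fitEllipse(head)
    ell_mask = np.zeros_like(image)
    cv2.ellipse(ell_mask, ellipse, 255, -1)
    head_mask = np.zeros_like(image)
    cv2.drawContours(head_mask, [head], -1, 255, -1)
    inter = np.logical_and(head_mask > 0, ell_mask > 0).sum()
    union = np.logical_or(head_mask > 0, ell_mask > 0).sum()
    cranium_score = inter / max(union, 1)

    # 'Butterfly' template match (whole frame for simplicity; the search
    # would normally be restricted to the head interior).
    res = cv2.matchTemplate(image, butterfly_template, cv2.TM_CCOEFF_NORMED)
    butterfly_score = max(float(res.max()), 0.0)

    return w_cranium * cranium_score + w_butterfly * butterfly_score
```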
A large variety of image analysis tasks require the segmentation of various regions in an image. For example,
segmentation is required to generate accurate models of brain pathology that are important components of
modern diagnosis and therapy. While the manual delineation of such structures gives accurate information,
the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the
speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to
a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The
evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than
developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using
similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing
the parameter settings for individual cases and across a collection of datasets using the Design of Experiments
framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms.
We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that
of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized
parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation
tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm
with statistical significance, although they perform reasonably well considering their simplicity.
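As a concrete example of the kind of metric-driven parameter optimization described above, the sketch below scores candidate parameter settings by Dice overlap with a manual segmentation; the exhaustive grid shown is a simplification of a proper Design of Experiments study, and `segment` stands for any caller-supplied segmentation routine (e.g., an ITK region grower with hypothetical parameters).

```python
import numpy as np
from itertools import product

def dice(auto_mask, manual_mask):
    """Dice similarity coefficient between two binary segmentations."""
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def grid_optimize(segment, image, manual_mask, param_grid):
    """Sweep a parameter grid, keeping the setting with the best Dice
    score against the manual segmentation."""
    best_params, best_score = None, -1.0
    names = sorted(param_grid)
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = dice(segment(image, **params), manual_mask)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The same per-case Dice scores, collected across all patients and algorithms, feed the statistical comparison step.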
Phase shift analysis sensors are popular in inspection and metrology applications. The sensor's captured image contains the region of interest of an object overlaid with projected fringes. These fringes bend according to the surface topography, and 3D data are then calculated using phase shift analysis. The image profile perpendicular to the fringes is assumed to be sinusoidal. A particular version of phase shift analysis is the image spatial phase stepping approach, which requires only a single image for analysis but is sensitive to noise. When noise, such as surface texture, appears in the image, the sinusoidal behavior is partially lost, causing an inaccurate or noisy measurement. In this study, three digital de-noising filters are evaluated. The intent is to retrieve a smoother sine-like image profile while precisely retaining fringe boundary locations. Four different edge types are used as test objects. "Six Sigma" statistical analysis tools are used to implement screening, optimization, and validation. The most effective of the evaluated enhancement algorithms are (1) line shifting followed by horizontal Gabor filtering and vertical Gaussian filtering for chamfer edge measurement and (2) edge-orientation detection followed by a 2-D Gabor filter for round edges. These algorithms significantly improve gauge repeatability.
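A hedged sketch of the kind of direction-aware filtering evaluated here: a Gabor filter tuned to the known fringe frequency band-passes the sinusoidal carrier, and a 1-D Gaussian applied along the fringe direction smooths texture noise without blurring fringe boundaries. Parameter names and values are illustrative, not the study's optimized settings, and the line-shifting step is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.filters import gabor

def denoise_fringes(image, fringe_freq, theta=0.0, sigma=2.0, fringe_axis=0):
    """Recover a smoother sinusoidal fringe profile from a noisy image.

    Step 1: a Gabor filter tuned to the carrier frequency rejects
    surface-texture noise at other frequencies while keeping the
    projected sinusoid. Step 2: a 1-D Gaussian along the fringe
    direction (fringe_axis) smooths residual noise without blurring
    the fringe boundaries perpendicular to it.
    """
    real, _ = gabor(image, frequency=fringe_freq, theta=theta)
    return gaussian_filter1d(real, sigma=sigma, axis=fringe_axis)
```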
PET images that have been reconstructed with unregularized algorithms are commonly smoothed with linear Gaussian filters to control noise. Since these filters are spatially invariant, they degrade feature contrast in the image, compromising lesion detectability. Edge-preserving smoothing filters can differentially preserve edges and features while smoothing noise. These filters assume spatially uniform noise models. However, the noise in PET images is spatially variant, approximately following a Poisson behavior. Therefore, different regions of a PET image need smoothing by different amounts. In this work, we introduce an adaptive filter, based on anisotropic diffusion, designed specifically to overcome this problem. In this algorithm, the diffusion is varied according to a local estimate of the noise using either the local median or the grayscale image opening to weight the conductance parameter. The algorithm is thus tailored to the task of smoothing PET images, or any image with Poisson-like noise characteristics, by adapting itself to varying noise while preserving significant features in the image. This filter was compared with Gaussian smoothing and a representative anisotropic diffusion method using three quantitative task-relevant metrics calculated on simulated PET images with lesions in the lung and liver. The contrast gain and noise ratio metrics were used to measure the ability to do accurate quantitation; the Channelized Hotelling Observer lesion detectability index was used to quantify lesion detectability. The adaptive filter improved the signal-to-noise ratio by more than 45% and lesion detectability by more than 55% over the Gaussian filter while producing "natural" looking images and consistent image quality across different anatomical regions.
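A minimal sketch of the adaptive idea, assuming a Perona-Malik explicit scheme in which the conductance parameter is scaled per pixel by a local median estimate of the signal (one of the two weightings described; the grayscale-opening variant would substitute a morphological opening for the median). All constants are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def adaptive_diffusion(img, n_iter=10, kappa0=10.0, step=0.2, med_size=3):
    """Anisotropic diffusion with a noise-adaptive conductance.

    For Poisson-like noise the local variance scales with intensity,
    so brighter (noisier) regions are given a larger conductance and
    smooth more, while edges still suppress diffusion.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Per-pixel conductance weighted by a local median signal estimate.
        kappa = kappa0 * np.sqrt(np.maximum(median_filter(u, size=med_size), 1e-6))
        # Differences toward the four in-plane neighbors.
        grads = [np.roll(u, 1, ax) - u for ax in (0, 1)] + \
                [np.roll(u, -1, ax) - u for ax in (0, 1)]
        # Exponential Perona-Malik conductance, explicit update.
        u += step * sum(g * np.exp(-(g / kappa) ** 2) for g in grads)
    return u
```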
Prostate cancer is diagnosed by histopathology interpretation of hematoxylin and eosin (H and E)-stained tissue sections. Gland and nuclei distributions vary with the disease grade, and the morphological features change as the cancer advances and the epithelial regions grow into the stroma. An efficient pathology slide image analysis method was developed using a tissue microarray with known disease stages. Digital 24-bit RGB images were acquired for each tissue element on the slide with both 10X and 40X objectives. Initial segmentation at low magnification was accomplished using prior spectral characteristics from a training tissue set composed of four tissue clusters, namely glands, epithelia, stroma, and nuclei. The segmentation method was automated by using the training RGB values as an initial guess and iterating the averaging process 10 times to find the four cluster centers. Labels were assigned to the nearest cluster center in red-blue spectral feature space. An automatic threshold algorithm separated the glands from the tissue. A visual pseudo-color representation of 60 segmented tissue microarray images was generated in which white, pink, red, and blue represent glands, epithelia, stroma, and nuclei, respectively. The higher-magnification images provided refined nuclei morphology. The nuclei were detected with an RGB color-space principal component analysis that resulted in a grayscale image. Shape metrics such as compactness, elongation, and minimum and maximum diameters were calculated based on the eigenvalues of the best-fitting ellipses to the nuclei.
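The low-magnification clustering step maps naturally onto k-means with a fixed training-derived initialization and a small iteration budget, as sketched below; the initial (red, blue) cluster centers shown are invented placeholders for the training-set values.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative training means in (red, blue) feature space for the four
# tissue classes; the paper derives these from a labeled training set.
INIT_CENTERS = np.array([[230.0, 235.0],   # glands (near white)
                         [200.0, 180.0],   # epithelia (pink)
                         [180.0, 120.0],   # stroma (red)
                         [ 90.0, 160.0]])  # nuclei (blue)

def segment_tissue(rgb):
    """Cluster pixels into glands/epithelia/stroma/nuclei by iterating
    cluster means from training-derived initial values (10 averaging
    iterations, red-blue features, as in the abstract)."""
    feats = rgb[..., [0, 2]].reshape(-1, 2).astype(float)  # red & blue channels
    km = KMeans(n_clusters=4, init=INIT_CENTERS, n_init=1, max_iter=10)
    labels = km.fit_predict(feats)
    return labels.reshape(rgb.shape[:2])
```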
Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. To assist physicians in performing biopsies at sites of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. That procedure performs the registration only at manually selected sites, does not draw upon the motion information inherent in the bronchoscopic video, and is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" at which to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.
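As an illustration of the video side of such a tracker, the sketch below estimates apparent inter-frame motion with sparse Lucas-Kanade optical flow and rejects low-texture (likely corrupted) frames; the paper's method recovers full 3D bronchoscope motion, which this 2D sketch does not attempt. Thresholds are illustrative; assumes OpenCV 4.

```python
import cv2
import numpy as np

def track_frame_motion(prev_frame, curr_frame, max_corners=200):
    """Estimate apparent inter-frame motion in bronchoscopic video.

    Uses sparse Lucas-Kanade optical flow on Shi-Tomasi corners and
    rejects low-texture frames (e.g. specular flooding or severe blur)
    as corrupted. Returns the median 2-D displacement, or None if the
    frame should be skipped.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Corrupted-frame check: very low Laplacian variance means little texture.
    if cv2.Laplacian(curr_gray, cv2.CV_64F).var() < 10.0:
        return None

    pts = cv2.goodFeaturesToTrack(prev_gray, max_corners, 0.01, 5)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return None
    # Median displacement is a robust summary of the camera's 2-D motion.
    return np.median((nxt - pts)[good], axis=0).ravel()
```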