Definition of objects in medical images requires a multiscale approach because important structure appears across a wide range of scales. Object boundaries, when they are required, must be inferred from the multiscale structure of the image and a priori knowledge. For many object-based tasks, explicit identification of boundaries is not necessary. Instead, it is possible to base object measures on medial axes and their radius functions obtained using statistical methods. A medial approach makes the easy decisions about the membership of pixels in the object first. The difficult decisions about the boundaries are made using a fuzzy measure of "objectness" that can account for edge uncertainty, partial volume effects, and a priori information. Objectness diffuses outward from the medial axis, and non-objectness diffuses inward from the medial axes of surrounding regions. Their competition in boundary regions defines objectness even in the absence of an edge. The area of an object is the integral of objectness across space. Statistical pattern recognition methods (supervised and unsupervised classification; linear projections) are used to identify medial axes in a feature space defined by multiscale Gaussian filters. The pattern describing a pixel is formed from the responses at that location and nearby locations to the filters. Approximations to derivatives of Gaussians are linear subspaces of this feature space.
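A minimal sketch of the feature-space construction described above, assuming per-pixel features are simply the responses of isotropic Gaussian filters at a handful of scales and that the "linear projections" are taken along principal directions; the scale values and function names are illustrative only.

```python
# Hedged sketch: build a per-pixel feature vector from multiscale Gaussian
# filter responses, then project it onto leading principal directions as a
# stand-in for the paper's "linear projections".  Scales are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_features(image, scales=(1.0, 2.0, 4.0, 8.0)):
    """Stack Gaussian responses at several scales into one feature vector per pixel."""
    responses = [gaussian_filter(image.astype(float), sigma=s) for s in scales]
    return np.stack([r.ravel() for r in responses], axis=1)   # (n_pixels, n_scales)

def linear_projection(features, n_components=2):
    """Project features onto their leading principal directions."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```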
The purpose of automatic segmentation is to extract interesting regions and contours from a digital image. Today a very large number of segmentation algorithms are available, whose efficiency is usually domain-dependent, i.e., they operate to different degrees of accuracy according to the parameters used, which are tuned to specific application domains. A method for result evaluation and error detection in automatic segmentation is proposed. A mathematical and a physical description of possible errors are presented, and an algorithm for error detection is implemented. Three types of segmentation errors are analyzed: undersegmentation errors, oversegmentation errors, and boundary errors. An undersegmentation error occurs when pixels belonging to different semantic objects are grouped into a single region. Such errors are the most dangerous because they can invalidate the whole segmentation process. The oversegmentation error, on the contrary, occurs when a single semantic object is subdivided by segmentation into several regions. Small oversegmentation errors may be acceptable in many applications (especially in the medical field), as they can easily be rectified by merging object parts. A boundary error consists of a discrepancy between the boundaries of a semantic object and those of the segmented one. In real images, all these errors may often be encountered at the same time. The system implemented permits one to detect each type of error, at the pixel level, by referring to a manually segmented image obtained by an expert. It produces a report on segmentation results, for both a whole image and single regions.
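As a hedged illustration of the pixel-level comparison, the sketch below counts how the regions of an automatic label map overlap the regions of a manually segmented reference; an automatic region that substantially overlaps more than one reference object is flagged as under-segmented. The overlap threshold and label conventions are assumptions, not the paper's definitions.

```python
# Hedged sketch of pixel-level comparison against a manual reference map:
# an automatic region shared by several reference objects suggests
# under-segmentation (over-segmentation is the symmetric check).
import numpy as np

def overlap_table(auto_labels, ref_labels):
    """Count overlapping pixels for every (auto region, reference region) pair."""
    pairs, counts = np.unique(
        np.stack([auto_labels.ravel(), ref_labels.ravel()]), axis=1, return_counts=True)
    table = {}
    for (a, r), c in zip(pairs.T, counts):
        table.setdefault(a, {})[r] = c
    return table

def undersegmented_regions(auto_labels, ref_labels, min_fraction=0.05):
    """Automatic regions whose pixels are shared by more than one reference object."""
    bad = []
    for a, refs in overlap_table(auto_labels, ref_labels).items():
        total = sum(refs.values())
        significant = [r for r, c in refs.items() if c / total >= min_fraction]
        if len(significant) > 1:
            bad.append(a)
    return bad
```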
An automatic method for identification of the center point of the left ventricle of the myocardium during systole is described for 2-dimensional short-axis echocardiographic images. This method, based on the use of large matched filters, identifies a single fixed center point during systole by locating three features: the epicardial boundary along the posterior wall, the epicardial boundary along the anterior wall, and the endocardial boundary along the anterior wall. Thus, it provides a first step toward the long-term goal of automatic recognition of the endocardial and epicardial boundaries. An index associated with the filter used to approximate the epicardial boundary along the posterior wall provides an indication of the quality of the image and a reliability measurement of the estimate. When tested on 207 image sequences, 18 images were identified by this index (applied to the end-diastolic frame) as unsuitable for processing. In the remaining 189 image sequences, 16 of the automatically defined center points were judged poor when compared with estimates made on the end-diastolic frame by an independent expert observer.
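A minimal sketch of the matched-filter step, assuming the feature templates are correlated against the frame and the strongest response is taken as the feature location; the templates themselves, and the quality index built from the posterior-wall filter, are not reproduced here.

```python
# Hedged sketch of the matched-filter idea: correlate a boundary template
# against the echocardiographic frame and take the strongest response as the
# feature location.  The template is an assumed input.
import numpy as np
from scipy.signal import fftconvolve

def matched_filter_peak(frame, template):
    """Return (row, col) of the peak correlation and the peak value."""
    frame = frame - frame.mean()
    template = template - template.mean()
    response = fftconvolve(frame, template[::-1, ::-1], mode="same")
    idx = np.unravel_index(np.argmax(response), response.shape)
    return idx, response[idx]
```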
Our approach to segmenting images involves object definition via multiscale methods. Typically, at small scales an object consists of many details. At larger scales these details are less noticeable, and other more global features are prominent. In either case we want to characterize those scales at which features of interest occur. The dominant trend in object definition has been to describe object boundaries via edge detection processes [1]. For binary objects, an alternate approach is to describe object shape with medial axes [2] or with skeletons using the methods of mathematical morphology [3]. Extensions of these ideas to objects described by gray scale images can be found in the literature, for example [4,5]. Such results are based on geometric properties of the graph of the intensity function. We may also apply filters to the intensity function which measure medialness of points. At a given point, the filter should have a strong response when the point is approximately midway between two boundaries. The response of the filter should be proportional to distance from the point to the boundary. We analyze a particular filter which has these properties, called the normalized Laplacian of Gaussian filter. Section 2 gives motivation and a mathematical description of the filter. The filter is described in a continuous variable setting. For discrete images viewed continuously as piecewise constant functions, an exact formula for the response of the filter is given. Section 3 gives the algorithm for construction of medial axes. In section 4 we discuss the numerical implementation of the filter. The appendix contains C code for an implementation of the filter using fast convolutions.
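A hedged sketch of a scale-normalized Laplacian-of-Gaussian medialness measure in the spirit of Section 2: for a bright object the response of minus sigma squared times the LoG peaks when sigma is comparable to the distance to the boundary. The scale range and sign convention are illustrative assumptions, not the paper's exact filter.

```python
# Hedged sketch of a scale-normalized Laplacian-of-Gaussian medialness
# measure: -sigma^2 * LoG responds strongly near the middle of a bright
# region when sigma roughly matches the distance to the boundary.
import numpy as np
from scipy.ndimage import gaussian_laplace

def medialness(image, scales=(2.0, 4.0, 8.0, 16.0)):
    """Return the per-pixel maximum normalized -LoG response and the best scale."""
    image = image.astype(float)
    stack = np.stack([-(s ** 2) * gaussian_laplace(image, sigma=s) for s in scales])
    best = stack.argmax(axis=0)
    return stack.max(axis=0), np.asarray(scales)[best]
```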
The wide variation in the specific resistance of human tissues suggests that images of this property should show good contrast. However, the relationship between the measurements of voltage which can be made on the surface of a conducting object and the resistivity distribution within the object is non-linear. This means that complete reconstruction of the resistivity distribution from such measurements in general requires the solution of a set of non-linear equations, but useful results have been obtained from linear approximations to the full solution.
One important potential of Magnetic Resonance Imaging (MRI) as compared to other medical imaging modalities is that MRI can produce well-registered multivariate images. By changing operational parameters, MRI can generate different images which emphasize one or more tissue parameters, while maintaining reasonable registration among these images. Currently, these multivariate images are processed or viewed individually. This processing scheme is inconvenient and sometimes gives inaccurate results. By treating multivariate images as vector-valued images, we can process them as a whole. For quantitative analysis of medical images, a discontinuity-preserving or -enhancing smoothing technique becomes important. In this paper, a discontinuity-preserving vector smoothing technique is introduced, which is based on current scalar Mean Field Annealing.
In this paper we examine the possibility of using pure geometrical information from a prior image to assist in the reconstruction of tomographic data sets with a lower number of counts. The situation can arise in dynamic studies, for example, in which the sum image from a number of time frames is available, defining desired regions-of-interest (ROIs) with good accuracy, and the time evolution of uptake in those ROIs needs to be obtained from the low-count individual data sets. The prior information must be purely geometrical in such a case, so that the activity in the ROIs of the prior does not influence the estimated uptake from the individual time frames. It is also desired that the prior does not impose any other conditions on the reconstructions, i.e., no smoothness or deviation from a known set of values is desired. We attack this problem in the framework of Vision Response Functions (VRFs), based on the work done by J. J. Koenderink in Utrecht. We show that there are assemblies of VRFs that can be presented in a form that is invariant with respect to rotations and translations, and that some functions of those invariants can convey the desired geometric prior information independent of the level of activity in the ROIs, except at very low levels. Preliminary results based on a one-dimensional reconstruction problem will be presented. Using the zero crossings of the Gaussian-derivative form of the Laplacian of a prior image at different scales, a variant of the EM algorithm has been found that allows the reconstruction of low-count data sets with those priors. At this time, this involves using a modified Conjugate Gradient (CG) maximization method for the M-step of the algorithm. The results show that the distorted shapes of reconstructions of data sets with low counts are effectively corrected by the method, although many questions exist at this time about basic and computational issues.
Bedside instruments are now available which can transilluminate tissue with near-infrared radiation and measure the boundary flux, resolved both temporally and spatially. Consequently there is an increasing demand for image processing methods that allow reconstruction of the spatial distribution of the absorption and scattering coefficients within the tissue. Iterative algorithms for solving this inverse problem require an accurate forward model. Previous attempts to simulate light propagation within a specific medium have been made either with a Monte Carlo model or by deriving the Green's function for a given geometry, assuming diffuse light propagation. While the former requires extended computing time to achieve a certain precision, the latter is restricted to simple geometries. We present here a Finite Element model that allows the solution of the forward problem for complex geometries within a reasonable time and that could be used in real-time bedside imaging equipment. This model permits fast calculation of the integrated intensity and the mean time of flight. The model is being used to investigate perturbations imposed on the measurement data by absorbing or scattering inhomogeneities to determine the viability of the iterative reconstruction.
A new method of direct 3-D image reconstruction based on many-knot and finite series-expansion techniques is presented. On the basis of the method and a priori 3-D information, a reconstruction algorithm is developed. The problem of reducing the dimensionality of the sparse coefficient matrix is studied. Experimental results showed that the present method is effective for reconstruction from finite directional projection data.
Previous formulations for the noise power spectrum (NPS) of tomographic images have usually been obtained using a radially bandlimited discrete representation of the continuous 2-D function under reconstruction. Also, the same sampling distance is used to represent discrete versions of the function and its projection. In this paper, the expression for the NPS is generalized to spline and bandlimited subspaces of square-integrable functions and to unequal sampling distances for the image and the projection data. The theory was used to predict the NPS obtained using several different sets of basis functions: radially bandlimited (Shepp-Logan) and angular-dependent splines, i.e., B-splines of degree 0 (Haar system), degree 1, degree 3, and separable bandlimited. Measurement of the NPS of simulated images was used to confirm the predictions of the theory. The NPS shows different radial and angular dependent characteristics for each set of basis functions, and for oversampling of the projection data. The magnitude of the aliasing in the reconstructed image depends on the choice of basis functions. Thus the basis functions used and the type of object imaged must be considered in any evaluation of the imaging system.
The purpose of this paper is to identify several important issues in the statistical analysis of serial images of active brain tumors and to offer some approaches and methods to help resolve them. Current serial brain tumor imaging is very strong on data acquisition and display yet appears weak on data analysis and inference. To help bridge the gap between certain theoretical mathematical methods for medical imaging developed over the past several decades and actual clinical practice, we describe a new physical phantom that we have designed and built for our research. We also offer some extensions of several relevant tools and principles from statistical science to the analysis of our serial medical images. Among the tools we discuss are the physical phantom itself, a simple experimental design, methods that help to separate image registration and object deformation effects, and some simple paired t-test ideas for comparison of differences in spatial point processes generated from pixelwise events in serial images. We identify several sources of extraneous variation between paired images and propose a few simple methods to control or eliminate them. Replicated experiments with our physical phantom can be used to study the properties of these methods under controlled and known conditions. Several actual patient and simulated serial SPECT images help to motivate and illustrate our techniques.
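As one concrete form of the "simple paired t-test ideas" mentioned above, the sketch below computes a pixelwise paired t statistic across replicated image pairs (for instance, phantom scans acquired before and after a controlled change); registration and any deformation correction are assumed to have been applied upstream.

```python
# Hedged sketch of a pixelwise paired t-test across replicated image pairs.
import numpy as np
from scipy import stats

def paired_t_map(before_stack, after_stack):
    """before_stack, after_stack: arrays of shape (n_replicates, H, W)."""
    diffs = after_stack.astype(float) - before_stack.astype(float)
    n = diffs.shape[0]
    mean = diffs.mean(axis=0)
    sd = diffs.std(axis=0, ddof=1)
    t = mean / (sd / np.sqrt(n) + 1e-12)          # small constant avoids 0/0
    p = 2 * stats.t.sf(np.abs(t), df=n - 1)       # two-sided p-values per pixel
    return t, p
```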
We describe a nonlinear detector which uses Student's t-test to locate tumors occurring in anatomic background. The detector computes the significance of any observed difference between the mean of features extracted from a small, circular search window and the mean of features belonging to an outer, concentric background window. The t-test is applied to search windows at every pixel location in the image. The t-statistic computed from the sample means and variances of the inner and outer regions is thresholded at a chosen significance level to give a positive detection. The response of the detector peaks when the inner window coincides with a bright spot of the same size. Nonuniform anatomic background activity is effectively suppressed, except for structure of the same size and shape as the tumors being sought. Because the t-statistic is a true measure of significance, it can be applied to any set of features which are likely to distinguish tumors. We apply the test to two features, one related to object intensity and the other to object shape. A final determination on the presence and location of tumors is made by a simple combination of the significance levels generated from each feature. Tests are performed using simulated tumors superimposed on clinical images. Performance curves resembling standard receiver-operating-characteristic (ROC) plots show a slight improvement over the prewhitening matched filter. Unlike the matched filter, however, the t-test detector assumes nothing specific about the tumor apart from its size.
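A minimal sketch of the two-sample t statistic between the inner search window and the concentric background window at a single pixel; sweeping it over all pixel locations and thresholding at the chosen significance level gives the detector map. Window radii here are illustrative.

```python
# Hedged sketch: pooled two-sample t statistic between a circular search
# window and its concentric background annulus at one location.
import numpy as np

def window_t_statistic(image, row, col, r_inner=3, r_outer=6):
    y, x = np.ogrid[:image.shape[0], :image.shape[1]]
    dist2 = (y - row) ** 2 + (x - col) ** 2
    inner = image[dist2 <= r_inner ** 2].astype(float)
    outer = image[(dist2 > r_inner ** 2) & (dist2 <= r_outer ** 2)].astype(float)
    n1, n2 = inner.size, outer.size
    pooled = ((n1 - 1) * inner.var(ddof=1) + (n2 - 1) * outer.var(ddof=1)) / (n1 + n2 - 2)
    return (inner.mean() - outer.mean()) / np.sqrt(pooled * (1 / n1 + 1 / n2))
```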
We have developed generalized ideal observer models relating human performance in detection tasks to physical properties of medical imaging systems, such as spatial resolution and noise power spectrum. Our approach treats detection as a special case of amplitude estimation, with certain other aspects of the signal, e.g., size or location, considered additional unknown parameters. The models are based on the Barankin lower bound on the precision with which the quantities of interest can be determined. We have found the Barankin bound to be particularly promising in predicting human performance in detection with location uncertainty. Its predictions differ from those of other proposed models in two respects. First, our results suggest that the degradation in performance due to location uncertainty depends on resolution. Second, we have shown analytically that for a given search area, the ratio of ideal observer performance when location is unknown to performance when location is known is nearly independent of signal size. This differs from previously proposed models which predict that the effect of location uncertainty depends on the ratio of signal size to search area, but agrees with the results of reported perceptual experiments testing this question.
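For reference, a standard multiple-test-point form of the Barankin bound is stated below; the paper's detection-as-estimation setting adds nuisance parameters such as signal size and location, which are not spelled out in this minimal statement.

```latex
% Standard multiple-test-point Barankin-type bound for an unbiased estimator
% \hat\theta(x), with test points \theta_1,\dots,\theta_M and likelihood
% ratios L_i(x) = p(x\mid\theta_i)/p(x\mid\theta).
\[
  \operatorname{Var}_\theta\!\bigl[\hat\theta(x)\bigr]
  \;\ge\; \sup_{\theta_1,\dots,\theta_M} v^{\mathsf T} B^{-1} v,
  \qquad
  v_i = \theta_i - \theta,
  \qquad
  B_{ij} = \mathbb{E}_\theta\!\bigl[L_i(x)\,L_j(x)\bigr] - 1 .
\]
```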
A psychophysical experiment was conducted to measure the performance of human observers in detecting an exactly known signal against a random, nonuniform background in the presence of noise correlations introduced by post-detection filtering (postprocessing). In order to predict this human performance, a new model observer was synthesized by adding frequency-selective channels to the Hotelling observer model which we have previously used for assessment of image quality. This new `channelized' Hotelling model reduces approximately to a nonprewhitening (NPW) observer for images with uniform background and correlated noise introduced by filtering, and to a Hotelling observer for images with nonuniform background and no postprocessing. For images with both background nonuniformity and postprocessing, the performance of this channelized Hotelling observer agrees well with human performance, while the other two observer models (NPW and Hotelling observer) fail.
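A hedged sketch of the channelized Hotelling computation: images are reduced to a small vector of channel outputs, and detectability is the Hotelling SNR computed from the channel-output mean difference and covariance. The channel profiles themselves (frequency-selective templates) are not specified here and would be an assumption.

```python
# Hedged sketch of a channelized Hotelling observer figure of merit.
import numpy as np

def channelized_hotelling_snr(signal_imgs, noise_imgs, channels):
    """signal_imgs, noise_imgs: (n, H*W) image samples; channels: (H*W, n_channels)."""
    v_s = signal_imgs @ channels            # channel outputs, signal present
    v_n = noise_imgs @ channels             # channel outputs, signal absent
    dv = v_s.mean(axis=0) - v_n.mean(axis=0)
    cov = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(cov, dv)))
```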
Economic conversion is the process of transforming defense industries to commercial purposes. This paper presents the case for the necessity of conversion and discusses particular strategies for accomplishing it. Some opportunities for conversion of military technologies to medical imaging are identified, and a six-step plan is offered for defense contractors contemplating conversion.
In this paper, we present certain computational procedures for measurement of differential geometric quantities of surfaces. A condensed overview of differential geometry is presented and methods for measurement of various quantities are given. Our measurements are made from a stack of pre-segmented image slices, although nothing beyond knowledge of a set of 3-D points in space is explicitly required by our algorithms. We assume that sufficiently small surface patches can be approximated by a biquadric polynomial. As differential characteristics are local properties, local surface fits are all that will be needed. Once the tangent plane (normal to the surface) is estimated from the covariance matrix of the actual coordinate values of a surface patch, any given surface patch may be treated as a height map. This allows for invariant surface fitting where the tangent plane is transformed to align with the x-y plane. Area, curvature, and principal directions can then be computed from surface fits. Knowledge of differential geometric quantities will allow for matching surface features in applications involving registration of 3-D medical images assuming rigid transformations, or for arriving at point correspondences where non-rigid transformations are necessary. In non-rigid motion computation, once initial match vectors are obtained from a bending and stretching model, membrane smoothing with confidences optimizes the flow estimates. To this end, a linear vector equation in terms of components of flow vectors is derived which must be satisfied at all nodes of a finite element grid.
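A minimal sketch of the local surface-fitting step, under the assumptions stated above: the tangent plane is estimated from the patch covariance, the patch is expressed as a height map in that frame, a quadratic polynomial is fitted, and approximate principal curvatures are read from its second-order terms (a small-slope approximation).

```python
# Hedged sketch: tangent plane from the patch covariance, quadratic height-map
# fit, and approximate principal curvatures from the fitted second-order terms.
import numpy as np

def patch_curvature(points):
    """points: (n, 3) array of 3-D surface points in one small patch."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    local = centered @ vt.T                 # rows of vt: tangent1, tangent2, normal
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    a, b, c, *_ = np.linalg.lstsq(A, z, rcond=None)[0]
    shape_op = np.array([[2*a, b], [b, 2*c]])   # Hessian of the height function
    k1, k2 = np.linalg.eigvalsh(shape_op)
    return k1, k2                               # approximate principal curvatures
```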
Several approaches have recently been suggested to recover the non-rigid motion of the left ventricle of the heart based on its curvature features. The adequacy of these approaches depends on the actual characteristics of the curvature of the left ventricle, and particularly on its temporal stability. We address in this paper the assessment and the visualization of the curvature features of the left ventricle. From experimental CT data, we compute the distribution of the curvature over the surface of the left ventricle by using an iterative relaxation scheme. The curvature distribution is visualized through voxel-based surface rendering. This visualization allows us to assess the structural stability of the curvature characteristics with respect to the deformation of the left ventricle.
Novel imaging technologies provide a detailed look at the structure of the tremendously complex and variable human brain. Optimal exploitation of the information stored in the rapidly growing collection of acquired and segmented MRI data calls for robust and reliable descriptions of the individual geometry of the cerebral cortex. A mathematical description and representation of 3-D shape, capable of dealing with form of variable appearance, is the focus of this paper. We base our development on the Medial Axis Transformation (MAT) generalized to three dimensions. Our implementation of the 3-D MAT combines a full 3-D Voronoi tessellation generated by the set of all border points with regularization procedures to obtain geometrically and topologically correct medial manifolds. The proposed algorithm was tested on synthetic objects and has been applied to 3-D MRI data of 1 mm isotropic resolution to obtain a description of the sulci in the cerebral cortex. Description and representation of the cortical anatomy is significant in clinical applications, medical research, and instrumentation developments.
Keywords: neuroanatomy, cortical surface mapping, medial axis transformation, Voronoi tessellation, boundary smoothing, skeleton pruning, 3-D distance transformation, regularization, surface parametrisation, shape description.
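A hedged sketch of the Voronoi route to the medial locus: the Voronoi vertices of densely sampled border points approximate the medial manifold, and only vertices falling inside the object are retained. The regularization and pruning steps that the paper emphasizes are not shown; the `inside` predicate is an assumed helper.

```python
# Hedged sketch: approximate medial points as Voronoi vertices of boundary
# samples that lie inside the object (pruning/regularization omitted).
import numpy as np
from scipy.spatial import Voronoi

def approximate_medial_points(boundary_points, inside):
    """boundary_points: (n, d) samples on the border; inside(p) -> bool (assumed helper)."""
    vor = Voronoi(boundary_points)
    return np.array([v for v in vor.vertices if inside(v)])
```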
This paper presents a new technique to perform automatically the 3-D registration of two 3-D scanner images. The aim is to compute the rigid geometric transform existing between two views of the same object taken in two different positions. The basis of our method is to extract characteristic 3-D lines, called crest lines, from the two images, and to compute the geometric transform that maps one set of lines onto the other. Our method is fully automatic. It is also very fast and robust, because the set of crest lines is a very compact and stable representation of the principal geometric information of a 3-D image. We present in this paper the example of an automatic registration performed with two 3-D images of a skull.
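Once corresponding 3-D points on the crest lines have been paired, the rigid transform can be recovered with the standard least-squares (SVD) construction sketched below; establishing the correspondences between the two crest-line sets is the part this sketch leaves out.

```python
# Hedged sketch: least-squares rigid transform between corresponding 3-D
# point sets via the standard SVD construction.
import numpy as np

def rigid_transform(src, dst):
    """Return R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i|| over all pairs."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```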
Various algorithms for spectral estimation were compared for the task of estimating spectra of NMR signals. These algorithms were the fast Fourier transform, maximum entropy, and an autoregressive model. Both simulated and real data were investigated. The simulated radio frequency (rf) data was designed to mimic data from the human liver using 31P NMR spectroscopy. All algorithms exhibited similar bias and variance of estimates in the simulation. Data from a solution containing water and ethanol was also acquired. Here, the FFT and autoregressive methods exhibited similar bias and variance. Investigations involving maximum entropy are currently underway.
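A hedged sketch of two of the estimators compared above, for a real-valued signal: an FFT periodogram and a Yule-Walker autoregressive spectrum. The model order, normalization, and frequency grid are illustrative choices rather than the study's settings.

```python
# Hedged sketch: FFT periodogram and Yule-Walker AR spectrum of a real signal.
import numpy as np
from scipy.linalg import solve_toeplitz

def periodogram(signal):
    return np.abs(np.fft.fft(signal)) ** 2 / len(signal)

def ar_spectrum(signal, order=8, n_freq=512):
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)   # autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])               # AR coefficients
    noise_var = r[0] - a @ r[1:order + 1]
    freqs = np.arange(n_freq) / n_freq
    denom = 1 - np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1))) @ a
    return freqs, noise_var / np.abs(denom) ** 2
```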
In many current medical applications of image analysis, objects are detected and delimited by boundary curves or surfaces. Yet the most effective multivariate statistics available pertain to labelled points (`landmarks') only. In the finite-dimensional feature space that landmarks support, each case of a data set is equivalent to a deformation map deriving it from the average form. This paper introduces a new extension of the finite-dimensional spline-based approach to incorporate edge information. In this implementation, edgels are restricted to landmark loci: they are interpreted as pairs of landmarks at infinitesimal separation in a specific direction. The effect of changing edge direction is a singular perturbation of the thin-plate spline for the landmarks alone. An appropriate normalization yields a basis for image deformations corresponding to changes of edge direction without landmark movement; this basis complements the basis of landmark deformations ignoring edge information. We derive explicit formulas for these edge warps, evaluate the quadratic form expressing bending energies of their formal combinations, and show the resulting spectrum of edge features in typical scenes. These expressions will aid all investigations into medical images that entail comparisons of anatomical scene analyses to a normative or typical form.
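For context, the sketch below fits the plain landmark thin-plate spline that the edge-warp basis extends: given source landmarks and one target coordinate per landmark, it solves for the kernel weights and the affine part of the interpolating spline. The edge-direction terms and their normalization, which are the paper's contribution, are not included.

```python
# Hedged sketch: plain 2-D landmark thin-plate spline fit (one target
# coordinate at a time); edge warps are not implemented here.
import numpy as np

def tps_kernel(r2):
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r2 > 0, r2 * np.log(r2), 0.0)   # U(r) = r^2 log r^2

def fit_tps(landmarks, values):
    """landmarks: (n, 2); values: (n,) target coordinate; returns (weights, affine)."""
    n = len(landmarks)
    d2 = ((landmarks[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.column_stack([np.ones(n), landmarks])
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([values, np.zeros(3)])
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]          # kernel weights, affine coefficients
```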
Deformable surfaces are useful in a number of biomedical computer vision applications for defining well-behaved object models. Given an initial estimate of a three-dimensional object boundary, we can fit an elastic surface to the image data which has prescribed smoothness and data integrity properties. In this paper, we evaluate four methods for implementing deformable surfaces. The first uses finite differences and a locally greedy gradient-following algorithm to minimize a surface functional similar to the controlled continuity stabilizer described by Terzopoulos. The second solves the Euler-Lagrange equations associated with this functional with a direct iterative method which does not invert these linear equations. The third method uses Gauss-Seidel to iteratively solve these linear equations once per deformation step. The fourth method uses successive over-relaxation (SOR) for this task. All four algorithms have O(n) space and time requirements (where n is the number of points on the parametric surface) and typically converge in time proportional to the distance from the initial estimate to the final surface. These methods are local and iterative in nature and lend themselves well to parallel solution on a fine-grained architecture such as the Connection Machine. We present a comparison of the space, time, and convergence properties of these methods. Applications of deformable models to segmentation of CT, MRI, and nuclear medicine images are also discussed.
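A minimal sketch of the successive over-relaxation update used by the fourth method: one in-place sweep over a linear system A x = b such as the one the discretized Euler-Lagrange equations produce. A dense matrix is used for clarity; the actual systems are sparse, which is what gives the O(n) storage mentioned above.

```python
# Hedged sketch: one SOR sweep for A x = b (dense A shown for clarity only).
import numpy as np

def sor_sweep(A, b, x, omega=1.5):
    """One successive over-relaxation sweep; call repeatedly until the update is small."""
    n = len(b)
    for i in range(n):
        sigma = A[i] @ x - A[i, i] * x[i]            # uses already-updated entries
        x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x
```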
The ability to measure gingival volume growth from dental casts would provide a valuable resource for periodontists. This problem is attractive from a computer vision standpoint due to the complexities of data acquisition, segmentation of gingival and tooth surfaces and boundaries, and extraction of features (such as tooth axes) to help solve the correspondence problem for multiple casts. In this paper, a structured-light 3-D range finder is used to collect raw data. The most complicated subtask is that of detecting discontinuities such as the gingival margin. Discontinuity detection is hindered both by cast anomalies (such as bubbles and holes generated during the process of dental impression) and by the subtle nature of the discontinuities themselves. First, we discuss an approach to segmenting a dental cast into tooth and gingival units using depth and orientation discontinuities. The visible cast surface is reconstructed by obtaining the minimum of a parameterized functional. The first derivative of the energy functional (which corresponds to the Euler-Lagrange equation) is solved using multigrid methods. Both orientation and depth discontinuities are detected by adding a discrete discontinuity functional to the energy functional. The principal axes and boundaries of the teeth provide the information necessary to determine the region to be measured in estimating gingival growth. Finally, voxels corresponding to growth regions are counted to measure the target volume.
The characteristic body is a mathematical tool developed for use in radar imaging. In this paper we extend its use to the seemingly divergent fields of biomagnetic imaging and optical propagation. Biomagnetic imaging is the estimation of electrical current flows that give rise to quasi-static magnetic fields surrounding biological organisms. This estimation incorporates measurements of the field and constraints derived from a priori knowledge and from ad hoc assumptions. One of the interesting constraints is confinement of the electrical-current sources to a surface such as that of the brain's cortex. By combining an appropriate characteristic body with Maxwell's equations and the 3-D vector Fourier transform, an algorithm for calculating the source currents presents itself quite naturally. We show how various constraints can be introduced into this reconstruction process through characteristic bodies. In addition, we show how the formalism can provide a framework for describing optical propagation and diffraction.
Many important imaging applications generate a sequence of images that are (or can be made to be) a spatially invariant image sequence with linearly additive contributions from the components that form the images. They include functional images in nuclear medicine, multiparameter MR imaging, multi-energy x-ray imaging for DR and CT, and multispectral satellite images. Recent results in the modelling and analysis of linearly additive spatially invariant image sequences are based on the inherent structure of such images, and can be used to achieve significant data compression for image storage and still provide good reconstruction. The technique is applied here to a human renogram, with compression of a very noisy 180-image sequence to a 4-image set. The resulting reconstruction illustrates the potential of the method.
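As a hedged illustration of the kind of compression described above, the sketch below flattens the image sequence into a frames-by-pixels matrix, keeps its leading singular vectors, and reconstructs from them; reducing a 180-frame renogram to a 4-image set corresponds to rank 4 here. The SVD route is one plausible realization, not necessarily the paper's exact factorization.

```python
# Hedged sketch: low-rank compression of a linearly additive image sequence.
import numpy as np

def compress_sequence(frames, rank=4):
    """frames: (n_frames, H, W); returns (basis_images, per_frame_weights)."""
    n, h, w = frames.shape
    X = frames.reshape(n, h * w).astype(float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:rank].reshape(rank, h, w)          # the stored "image set"
    weights = U[:, :rank] * s[:rank]               # per-frame mixing coefficients
    return basis, weights

def reconstruct(basis, weights):
    rank, h, w = basis.shape
    return (weights @ basis.reshape(rank, h * w)).reshape(-1, h, w)
```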
A general method for computer aided diagnosis (CAD) of mammograms using a multiscale decomposition is proposed and implemented using several different techniques. The first is the now classic wavelet decomposition in which the smoothed versions of the image at coarser resolutions are given by appropriate orthogonal projections. Then two distinct nonlinear filters are used to provide a decomposition into simplified resumes and corresponding details at several different `scales.' One filter uses a nonlinear partial differential equation to provide smoothing and the other uses a robust weighted majority-minimum range (WMMR) algorithm. Results indicate that these methods will prove valuable in the design of a software package to assist the radiologist.
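A hedged sketch of the nonlinear-PDE smoothing used for one of the decompositions: a few explicit Perona-Malik-style diffusion steps produce a smoothed resume, and the difference from the input serves as the corresponding detail image. The step size, conductance, and iteration count are illustrative, and this is a generic anisotropic-diffusion scheme rather than the paper's specific equation.

```python
# Hedged sketch: explicit anisotropic (Perona-Malik-style) diffusion producing
# a smoothed "resume" and the corresponding detail image.
import numpy as np

def pde_smooth(image, n_iter=10, kappa=20.0, dt=0.2):
    u = image.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u              # differences to four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)      # edge-stopping conductance
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u, image - u                              # resume and detail
```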
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale-space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential, and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
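A minimal sketch of coefficient-domain enhancement in the spirit described above: detail coefficients at chosen scales are amplified by simple weights before reconstruction. Constant gains are used here; the linear and exponential weightings localized in scale space are straightforward variations.

```python
# Hedged sketch: wavelet decomposition, constant-gain boosting of detail
# coefficients per scale, and reconstruction with PyWavelets.
import pywt

def enhance(image, wavelet="db4", levels=3, gains=(2.0, 1.5, 1.0)):
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    boosted = [coeffs[0]]                               # keep the coarse approximation
    for g, (h, v, d) in zip(gains, coeffs[1:]):         # coarse-to-fine detail triplets
        boosted.append((g * h, g * v, g * d))
    return pywt.waverec2(boosted, wavelet=wavelet)
```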