The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as the multiplicative algebraic reconstruction technique (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution, which offers a significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSFs). Reconstruction artifacts are identified, their underlying sources and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
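The FFT-based deconvolution idea above can be illustrated with a minimal 3-D Wiener-style filter. This is a sketch, not the paper's algorithm: the scalar noise-to-signal ratio `nsr` stands in for a full noise model, and the PSF is assumed centered and the same size as the focal stack.

```python
import numpy as np

def wiener_deconvolve_3d(focal_stack, psf, nsr=1e-2):
    """Deconvolve a 3-D focal stack with a system PSF via FFTs.

    Minimal sketch: both arrays share the same shape, the PSF is
    assumed centered, and `nsr` (noise-to-signal ratio) is a scalar
    regularizer standing in for a full noise model.
    """
    H = np.fft.fftn(np.fft.ifftshift(psf))   # transfer function
    G = np.fft.fftn(focal_stack)
    # Wiener filter: conj(H) / (|H|^2 + NSR)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifftn(F))
```

The entire volume is restored with a handful of 3-D FFTs, which is the efficiency argument made against iterative tomographic methods such as MART.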
We present an adaptive image-acquisition methodology by replacing the traditional birefringent filter with slight out-of-focus blur generated by the camera lens. Because many cameras already have adjustable lenses and autofocus systems, our method can exploit existing hardware by simply changing the focusing strategy. During the image acquisition, the optimal defocus setting is automatically adapted to the power spectrum of the scene, which is evaluated by a generic autocorrelation model. We develop a criterion to estimate reconstruction errors without the baseband knowledge of the scene. This metric helps the camera to choose the optimal defocus settings. An optimal Wiener filter then recovers the captured scene and yields sharper images with reduced aliasing. The numerical and visual results show that our method is superior to current acquisition methods used by most digital cameras.
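A toy version of the defocus-selection criterion can be sketched in one dimension. Everything here is an assumption made for illustration: a Gaussian stands in for the lens's defocus OTF, and `S` is a generic 1/(1+f^2)-style power-spectrum model; the criterion trades in-band Wiener residual against out-of-band aliased power.

```python
import numpy as np

def expected_mse(defocus_radius, freqs, S, noise_var, nyquist):
    """Toy error criterion: for a candidate defocus blur, estimate the
    reconstruction MSE as the in-band Wiener residual plus the scene
    energy left beyond the sampling Nyquist rate (which aliases).
    `S` is an assumed scene power-spectrum model."""
    # Gaussian stand-in for the defocus OTF (a real lens gives a
    # circular-aperture OTF; this is an assumption for the sketch).
    H = np.exp(-(np.pi * defocus_radius * freqs) ** 2)
    in_band = freqs <= nyquist
    # In-band Wiener residual: S * n / (S|H|^2 + n)
    wiener_resid = S * noise_var / (S * H**2 + noise_var)
    # Out-of-band: blurred scene power that folds back as aliasing
    aliased = (S * H**2)[~in_band].sum()
    return wiener_resid[in_band].sum() + aliased

def best_defocus(candidates, freqs, S, noise_var, nyquist):
    """Pick the candidate defocus setting minimizing the criterion."""
    return min(candidates, key=lambda r: expected_mse(r, freqs, S, noise_var, nyquist))
```

No defocus leaves all out-of-band power to alias, while heavy defocus destroys in-band content that the Wiener filter cannot recover; an intermediate setting minimizes the criterion, which is the adaptive behavior described above.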
Signal reconstruction using an l1-norm penalty has proven to be valuable in edge-preserving regularization as well as in sparse reconstruction problems. The developing field of compressed sensing typically exploits this approach to yield sparse solutions in the face of incoherent measurements. Unfortunately, sparse reconstruction generally requires significantly more computation because of the nonlinear nature of the problem and because the most common solutions damage any structure that may otherwise exist in the system matrix. In this work, we adopt a majorizing function for the absolute value term that can be used with structured system matrices, so that the regularization term in the matrix to be inverted does not destroy the structure of the original matrix. As a result, a system inverse can be precomputed and applied efficiently at each iteration to speed the estimation process. We demonstrate that this method can yield significant computational advantages when the original system matrix can be represented or decomposed into an efficiently applied singular value decomposition.
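One way to realize this idea can be sketched as follows. The abstract does not specify the majorizer, so the particular splitting below is an assumption made for illustration; the point it demonstrates is that the quadratic term added to A^T A is a scaled identity, so the matrix inverted at each iteration is fixed and its SVD-based inverse is computed once and reused.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def structured_l1_solve(A, b, lam=0.01, beta=1.0, iters=500):
    """Sketch of a splitting scheme for  min ||Ax - b||^2 + lam*||x||_1
    that alternately minimizes the surrogate
        ||Ax - b||^2 + beta*||x - v||^2 + lam*||v||_1
    over v (soft-thresholding) and x. Because the quadratic coupling is
    a scaled identity, (A^T A + beta*I)^{-1} is precomputed once from
    the SVD of A and reused every iteration. (This splitting is an
    assumption for illustration; A is assumed tall and full rank.)"""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_diag = 1.0 / (s**2 + beta)           # eigenvalues of the fixed inverse
    Atb = A.T @ b
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = soft(x, lam / (2.0 * beta))      # l1 proximal step
        rhs = Atb + beta * v
        x = Vt.T @ (inv_diag * (Vt @ rhs))   # apply precomputed inverse
    return x
```

Each iteration costs only matrix-vector products with the stored SVD factors, in contrast to reweighted schemes whose changing diagonal weights force a fresh factorization every pass.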
This paper presents an optimal image acquisition methodology by replacing the traditional birefringent filter with slight out-of-focus blur generated by the camera lens. Since many cameras already have adjustable lenses and auto-focus systems, our method can exploit existing hardware by simply changing the focusing strategy. During the image acquisition, the optimal defocus setting is automatically adapted to the power spectrum of the scene, which is evaluated by a generic autocorrelation model. A criterion to estimate reconstruction errors without the baseband knowledge of the scene is developed in the paper. This metric helps the camera to choose the optimal focus settings. An optimal Wiener filter then recovers the captured scene and yields sharper images with reduced aliasing. The numerical and visual results show that our method is superior to current acquisition methods used by most digital cameras.
By omitting local decay and phase evolution, traditional MRI models each datum as a sample from k-space so that reconstruction can be implemented by FFTs. Single-shot parameter assessment by retrieval from signal encoding (SS-PARSE) acknowledges local decay and phase evolution, so it models each datum as a sample from (k, t)-space rather than k-space. Local decay and frequency vary continuously in space. Because of this, discrete models in space can cause artifacts in the reconstructed parameters. Increasing the resolution of the reconstructed parameters can more accurately capture the spatial variations, but the resolution is limited not only by computational complexity but also by the size of the acquired data. For a limited data set used for reconstruction, simply increasing the resolution may cause the reconstruction to become an underdetermined problem. This paper presents a solution to this problem based on cubic convolution interpolation.
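The interpolator named above can be sketched in one dimension using Keys' piecewise-cubic kernel. This shows only the interpolation machinery itself, not the paper's parameter-map reconstruction; the choice a = -0.5 is the classic third-order-accurate variant.

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Keys' cubic convolution interpolation kernel
    (a = -0.5 gives the classic third-order-accurate variant)."""
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1]**3 - (a + 3) * s[m1]**2 + 1
    out[m2] = a * (s[m2]**3 - 5 * s[m2]**2 + 8 * s[m2] - 4)
    return out

def cubic_interp(samples, x):
    """Interpolate uniformly spaced `samples` at fractional position x
    (assumes 1 <= x <= len(samples) - 2 so the 4-tap stencil fits)."""
    i = int(np.floor(x))
    idx = np.arange(i - 1, i + 3)            # four nearest sample points
    return float(samples[idx] @ keys_kernel(x - idx))
```

Because the kernel has a 4-sample support and reproduces low-degree polynomials exactly, it can raise the effective resolution of a parameter map without inflating the number of unknowns estimated from the data.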
KEYWORDS: Error analysis, Magnetic resonance imaging, Signal processing, Data acquisition, Functional magnetic resonance imaging, Computer programming, K band, Distortion, Data modeling, Interference (communication)
Quantitative and spatially accurate maps of local NMR relaxation rates from single-shot acquisitions are of value for functional MRI and dynamic contrast studies. Addressing this need is SS-PARSE (single-shot parameter assessment by retrieval from signal encoding), a recently introduced MRI technique for mapping magnetization magnitude and phase, frequency, and net transverse decay rate R2* from a single-shot (<70 msec) signal. Instead of implicitly modeling the local signal as arising from a constant magnetization vector, SS-PARSE models the evolution in phase and the decay in amplitude of the local signal and estimates the local parameter maps producing the observed signal. Because the local signal model used is fundamentally more accurate than the model implicitly used in most current MRI methodology, SS-PARSE maps are inherently free from geometric errors due to off-resonance frequencies. The accuracy of the parameter estimates is determined by (a) the information available in the signal (the form of the local signal model, the sampling pattern, and random noise) and (b) the effectiveness of the estimation algorithm in extracting the information present in the signal. Sources of bias and random errors are discussed. The performance of the method is investigated using experimental phantom data.
By acknowledging local decay and phase evolution, single-shot parameter assessment by retrieval from signal encoding (SS-PARSE) models each datum as a sample from (k, t)-space rather than k-space. This more accurate model promises better performance at the price of more complicated reconstruction computations. Normally, the conjugate-gradients method is used to simultaneously estimate local image magnitude, decay, and frequency. Each iteration of the conjugate-gradients algorithm requires several evaluations of the image synthesis function and one evaluation of its gradients. Because of local decay and frequency and the non-Cartesian trajectory, fast algorithms based on the FFT cannot be effectively used to accelerate the evaluation of the image synthesis function and gradients. This paper presents a fast algorithm to compute the image synthesis function and gradients by linear combinations of FFTs. By polynomial approximation of the exponential time function, with local decay and frequency as parameters, the image synthesis function and gradients become linear combinations of non-Cartesian Fourier transforms. To use the FFT, one can interpolate the non-Cartesian trajectories. The quality of images reconstructed by the fast approach presented in this paper is the same as that of the normal conjugate-gradient method, with significantly reduced computation time.
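The polynomial-approximation step can be sketched on a Cartesian grid. This is a toy version under stated assumptions: the paper's trajectories are non-Cartesian, and the choice of a truncated Taylor basis here is an assumption made for illustration; the point shown is that the synthesis becomes a linear combination of ordinary FFTs.

```python
import math
import numpy as np

def synth_signal(f, decay, freq, times, order=8):
    """Synthesize (k,t)-space frames where each pixel contributes
    f * exp(-(R + i*2*pi*df) * t). A truncated Taylor expansion of the
    exponential in t turns the synthesis into `order`+1 ordinary FFTs.
    (Cartesian toy; a Taylor basis is assumed for illustration.)"""
    z = decay + 2j * np.pi * freq            # per-pixel complex rate
    # FFT of f * (-z)^p / p! for each polynomial order p, computed once
    ffts = [np.fft.fft2(f * (-z) ** p / math.factorial(p))
            for p in range(order + 1)]
    # Full k-space frame at each sample time t is a polynomial in t
    # whose coefficients are the precomputed FFTs.
    return [sum(t ** p * F for p, F in enumerate(ffts)) for t in times]
```

Evaluating the signal at any time then costs only a weighted sum of stored arrays rather than a fresh voxel-by-voxel exponential sum, which is where the speedup over direct evaluation comes from.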
Image reconstruction from Fourier-domain measurements is a specialized problem within the general area of image reconstruction using prior information. The structure of the equations in Fourier imaging is challenging, since the observation equation matrix is non-sparse in the spatial domain but diagonal in the Fourier domain. Recently, the Bayesian image reconstruction with prior edges (BIRPE) algorithm has been proposed for image reconstruction from Fourier-domain samples using edge information automatically extracted from a high-resolution prior image. In the BIRPE algorithm, the maximum a posteriori (MAP) estimate of the reconstructed image and edge variables involves high-dimensional, non-convex optimization, which can be computationally prohibitive. The BIRPE algorithm performs this optimization by iteratively updating the estimate of the image and then updating the estimate of the edge variables. In this paper, we propose two techniques for updating the image based on fixed edge variables: one based on iterated conditional modes (ICM) and the other based on Jacobi iteration. ICM is guaranteed to converge but, depending on the structure of the Fourier-domain samples, can be computationally prohibitive. The Jacobi iteration technique is more computationally efficient but does not always converge. In this paper, we study the convergence properties of the Jacobi iteration technique and its parameter sensitivity.
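The convergence caveat for Jacobi iteration can be seen in a generic sketch (this is the textbook update, not BIRPE's specific image system):

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Plain Jacobi iteration for A x = b. Converges when the iteration
    matrix D^{-1}(A - D) has spectral radius < 1 (e.g., for strictly
    diagonally dominant A), but not in general -- the caveat noted in
    the abstract."""
    D = np.diag(A)                  # diagonal part
    R = A - np.diag(D)              # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D         # simultaneous (parallel) update
    return x
```

Each sweep costs one matrix-vector product and all components update simultaneously, which is why it is cheaper than ICM's sequential coordinate updates; the price is the conditional convergence studied in the paper.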
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
We propose the Bayesian image reconstruction with prior edges (BIRPE) algorithm for reconstructing an image from Fourier-domain samples with prior edge information from a higher resolution image. A major difference between BIRPE and previous methods is that all edges are detected automatically, and no segmentation of the prior image is required. Also, an edge found in the prior image does not need to be confirmed by the observations; smoothing is reduced across the edge if either the prior image or the observations suggest an edge. Simulations and results on magnetic resonance spectroscopic data are presented that demonstrate the effectiveness of the BIRPE method.
Digital still cameras typically use a single optical sensor overlaid with RGB color filters to acquire a scene. Only one of the three primary colors is observed at each pixel and the full color image must be reconstructed (demosaicked) from available data. We consider the problem of demosaicking for images sampled in the commonly used Bayer pattern.
The full color image is obtained from the sampled data as a MAP estimate. To exploit the greater sampling rate in the green channel in defining the presence of edges in the blue and red channels, a Gaussian MRF model that considers the presence of edges in all three color channels is used to define a prior. Pixel values and edge estimates are computed iteratively using an algorithm based on Besag's iterated conditional modes (ICM) algorithm. The reconstruction algorithm iterates alternately to perform edge detection and spatial smoothing. The proposed algorithm is applied to a variety of test images and its performance is quantified by using the CIELAB delta E measure.
In general, image restoration problems are ill posed and need to be regularized. For applications such as real-time video, fast restorations are also needed to keep up with the frame rate. Restoration based on 2-D FFTs provides a fast implementation, assuming a constant regularization term over the image. Unfortunately, this assumption creates significant ringing artifacts on edges as well as blurrier edges in the restored image. On the other hand, shift-variant regularization reduces edge artifacts and provides better quality, but it destroys the structure that makes use of the 2-D FFT possible, so the restoration no longer has the computational efficiency of the FFT. In this paper, we use a Bayesian approach, maximum a posteriori (MAP) estimation, to compute an estimate of the original image given the blurred image. To avoid the smoothing of edges, shift-variant regularization must be used. The Huber-Markov random field model is applied to preserve the discontinuities on edges. For fast minimization of the above model, a new algorithm involving the Sherman-Morrison matrix inversion lemma is proposed. This results in a restored image with good edge preservation and less computation. Experiments show restored images with sharper edges. Convergence is fast, and the computational speed can be improved considerably by breaking the image into subimages.
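The role of the Sherman-Morrison lemma can be illustrated in a 1-D toy setting: when the system matrix is circulant plus a low-rank correction (here, a rank-one term standing in for a local, shift-variant change to the regularizer), the dense solve reduces to two FFT-based circulant solves. This is a sketch of the lemma itself, not the paper's full Huber-Markov minimization.

```python
import numpy as np

def circulant_solve(c_first_col, b):
    """Solve C x = b for circulant C (given by its first column) via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c_first_col)))

def solve_rank1_update(c_first_col, u, v, b):
    """Solve (C + u v^T) x = b with circulant C using the
    Sherman-Morrison lemma:
        (C + u v^T)^{-1} b = C^{-1}b - C^{-1}u (v^T C^{-1}b)/(1 + v^T C^{-1}u)
    so the only heavy work is two FFT-based circulant solves."""
    Cb = circulant_solve(c_first_col, b)
    Cu = circulant_solve(c_first_col, u)
    return Cb - Cu * (v @ Cb) / (1.0 + v @ Cu)
```

With k localized corrections the same idea extends via the Sherman-Morrison-Woodbury identity at the cost of a small k-by-k solve, which is why only the pixels with atypical smoothing penalties add work.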
Most consumer-level digital cameras use a color filter array to capture color mosaic data followed by demosaicking to obtain full-color images. However, many sophisticated demosaicking algorithms are too complex to implement on-board a camera. To use these algorithms, one must transfer the mosaic data from the camera to a computer without introducing compression losses that could generate artifacts in the demosaicked image. The memory required for losslessly stored mosaic images severely restricts the number of images that can be stored in the camera. Therefore, we need an algorithm to compress the original mosaic data losslessly so that it can later be transferred intact for demosaicking. We propose a new lossless compression technique for mosaic images in this paper. Ordinary image compression methods do not apply to mosaic images because of their non-canonical color sampling structure. Because standard compression methods such as JPEG, JPEG2000, etc. are already available in most digital cameras, we have chosen to build our algorithms using a standard method as a key part of the system. The algorithm begins by separating the mosaic image into 3 color (RGB) components. This is followed by an interpolation or down-sampling operation--depending on the particular variation of the algorithm--that makes all three components the same size. Using the three color components, we form a color image that is coded with JPEG. After appropriately reformatting the data, we calculate the residual between the original image and the coded image and then entropy-code the residual values corresponding to the mosaic data.
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
KEYWORDS: Convolution, Super resolution, Image restoration, Passive millimeter wave sensors, Real time imaging, Video, Chemical elements, Stereolithography, Signal to noise ratio, Edge roughness
In applications of PMMW imaging such as real-time video, fast restorations are needed to keep up with the frame rate. FFT-based restoration provides a fast implementation but at the expense of assuming that the regularization term is constant over the image. Unfortunately, this assumption can create significant ringing artifacts in the presence of edges as well as edges that are blurrier than necessary. Furthermore, shift-invariant regularization does not allow for the possibility of superresolution.
Shift-variant regularization provides a way to vary the roughness penalty as a function of spatial coordinates to reduce edge artifacts and provide a degree of superresolution. Virtually all edge-preserving regularization approaches exploit this concept. However, this approach destroys the structure that makes the use of the FFT possible, since the deblurring operation is no longer shift-invariant. Thus, the restoration methods available for this problem no longer have the computational efficiency of the FFT.
We propose a new restoration method for the shift-variant regularization approach that can be implemented in a fast and flexible manner. We decompose the restoration into a sum of two independent restorations. One restoration yields an image that comes directly from an FFT-based approach. This image is a shift-invariant restoration containing the usual artifacts. The other restoration involves a set of unknowns whose number equals the number of pixels with a local smoothing penalty significantly different from the typical value in the image. This restoration represents the artifact correction image. By summing the two, the artifacts are canceled. Because the second restoration has a significantly reduced set of unknowns, it can be calculated very efficiently even though no circular convolution structure exists.
KEYWORDS: Magnetic resonance imaging, Data modeling, Imaging spectroscopy, Spectroscopy, Reconstruction algorithms, Image restoration, Signal to noise ratio, Data acquisition, Fourier transforms, Image resolution
Spectroscopic imaging (SI) techniques combine the ability of NMR spectroscopy to identify and measure biochemical constituents with the ability of MR imaging to localize NMR signals. The basic imaging technique acquires a set of spatial-frequency-domain samples on a regular grid and takes an inverse Fourier transform of the acquired data to obtain the spatial-domain image. Unfortunately, the time required to gather the data while maintaining an adequate signal-to-noise ratio (SNR) limits the number of spatial-frequency-domain samples that can be acquired. In this paper, we use a high-resolution MR scout image to obtain edge locations in the sample imaged with MRSI. MRI discontinuities represent boundaries between different tissue types, and these discontinuities are likely to appear in the spectroscopic image as well. We propose a new model that encourages edge formation in the MRSI image reconstruction wherever MR image edges occur. A major difference between our model and previous methods is that an edge found in the MR image need not be confirmed by the data; smoothing is reduced across the edge if either the MR image or the MRSI data suggests an edge. Simulations and results on in vivo MRSI data are presented that demonstrate the effectiveness of the method.
KEYWORDS: Image restoration, Convolution, Passive millimeter wave sensors, Point spread functions, Barium, Real time imaging, Video, Imaging systems, Systems modeling, Signal to noise ratio
In applications of PMMW imaging such as real-time video, fast restorations are needed to keep up with the frame rate. FFT-based restoration provides a fast implementation, but it does so at the expense of assuming that the blurring and deblurring are based on circular convolution. Unfortunately, when the opposite sides of the image do not match up well in intensity, this assumption can create significant artifacts across the image. The mathematically correct way to avoid boundary artifacts is to model the pixels outside the measured image window as unknown values in the restored image. However, this approach destroys the structure that makes the use of the FFT possible, since the unknown image is no longer the same size as the measured image. Thus, the restoration methods available for this problem no longer have the computational efficiency of the FFT. We propose a new restoration method for the unknown boundary approach that can be implemented in a fast and flexible manner. We decompose the restoration into a sum of two independent restorations. One restoration yields an image that comes directly from a modified FFT-based approach. This image can be thought of as a type of FFT restoration containing the usual boundary artifacts. The other restoration involves a set of unknowns whose number equals that of the unknown boundary values. This restoration represents the artifact correction image. By summing the two, the artifacts are canceled. Because the second restoration has a significantly reduced set of unknowns, it can be calculated very efficiently even though no circular convolution structure exists.
Passive millimeter-wave imagery has tremendous potential for imaging in adverse conditions. However, poor resolution and long acquisition times pose serious limitations to this potential. Therefore, an important issue is the optimization of the sampling pattern. Ordinarily, a focal plane sensor array has sensors placed in a rectangular grid pattern at sub-Nyquist density, and the array must be dithered to sample the image plane at the Nyquist density in each dimension. However, the Nyquist density oversamples the image due to the usually circular support of the diffraction-limited image spectrum. We develop an efficient algorithm for optimizing the dithering pattern so that the image can be reconstructed as reliably as possible from a periodic nonuniform set of samples, which can be obtained from a dithered rectangular-grid array. Taking into account the circular frequency support of the image, we sequentially eliminate the least informative array until the minimal number of arrays remains. The resulting algorithm can be used as a tool in exploring the optimal image acquisition strategy.
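The sequential-elimination idea can be sketched in one dimension. This is a toy stand-in for the dither-pattern search: candidate sample positions are dropped one at a time, each time discarding the one whose removal least degrades the conditioning of the Fourier sampling matrix over the assumed in-support frequencies.

```python
import numpy as np

def backward_select(positions, freqs, n_keep):
    """Sequential backward selection: starting from all candidate sample
    positions, repeatedly drop the one whose removal least degrades the
    smallest singular value of the Fourier sampling matrix over the
    in-support frequencies `freqs` (a 1-D toy version of the search)."""
    positions = list(positions)
    while len(positions) > n_keep:
        scores = []
        for k in range(len(positions)):
            trial = positions[:k] + positions[k + 1:]
            E = np.exp(2j * np.pi * np.outer(trial, freqs))
            # Reliability score: worst-case amplification in reconstruction
            scores.append(np.linalg.svd(E, compute_uv=False)[-1])
        positions.pop(int(np.argmax(scores)))  # drop least informative
    return positions
```

Greedy backward elimination evaluates only O(m) candidates per step instead of all subsets, which is what makes the search over dither patterns tractable.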
In passive imaging, the spatial information acquired is strictly bandlimited. Because of this limitation, a number of postprocessing strategies have been proposed to accomplish a measure of superresolution. These strategies incorporate prior information about the image to improve resolution. We show that unless this information is shift-variant, it is unable to contribute to any superresolution. Shift-variant information about the image can be shown to be equivalent to forcing a correlation among the basis images that represent the image. We show that accomplishing superresolution from this correlation is very difficult and has fundamental limitations. Finally, we discuss the potential gains available from using prior information and propose an acquisition strategy that in some cases could improve the potential for superresolution.
Iterative techniques for image restoration are flexible and easy to implement. The major drawback of iterative image restoration is that the algorithms are often slow in converging to a solution, and the convergence point is not always the best estimate of the original image. Ideally, the restoration process should stop when the restored image is as close to the original image as possible. Unfortunately, the original image is unknown, and therefore no explicit fidelity criterion can be computed. The generalized cross-validation (GCV) criterion performs well as a regularization parameter estimator, and stopping an iterative restoration algorithm before convergence can be viewed as a form of regularization. Therefore, we have applied GCV to the problem of determining the optimal stopping point in iterative restoration. Unfortunately, evaluation of the GCV criterion is computationally expensive. Thus, we use a computationally efficient estimate of the GCV criterion after each iteration as a measure of the progress of the restoration. Our experiments indicate that this estimate of the GCV criterion works well as a stopping rule for iterative image restoration.
Image restoration results that are both objectively and subjectively superior can be obtained by allowing the regularization to be spatially variant. Space-variant regularization can be accomplished through iterative restoration techniques. The optimal choice of the regularization parameter is usually unknown a priori. The generalized cross-validation (GCV) criterion has proven to perform well as an estimator of this parameter in a space-invariant setting. However, the GCV criterion is prohibitive to compute for space-variant regularization. In this work, we introduce an estimator of the GCV criterion that can be used to estimate the optimal regularization parameter. The estimator of the GCV measure can be evaluated with a computational effort on the same order as that required to restore the image. Results are presented which show that this estimate works well for space-variant regularization.
Because of the presence of noise in blurred images, an image restoration algorithm must constrain the solution to achieve stable restoration results. Such constraints are often introduced by biasing the restoration toward the minimizer of a given functional. However, a proper choice of the degree of bias is critical to the success of this approach. Generally, the appropriate bias cannot be chosen a priori and must be estimated from the blurred and noisy image. Cross-validation is introduced as a method for estimating the optimal degree of bias for a general form of the constraint functional. Results show that this constraint is capable of improving restoration results beyond the capabilities of the traditional Tikhonov constraint.
Regularization is an effective method for obtaining satisfactory solutions to image restoration problems. The application of regularization necessitates a choice of the regularization parameter as well as the stabilizing functional. For most problems of interest, the best choices are not known a priori. We present a method for obtaining optimal estimates of the regularization parameter and stabilizing functional directly from the degraded image data. The method of generalized cross-validation (GCV) is used to generate the estimates. Implementation of GCV requires the computation of the system eigenvalues. Certain assumptions are made regarding the structure of the degradation so that the GCV criterion can be implemented efficiently. Furthermore, the assumptions on the matrix structure allow the regularization operator eigenvalues to be expressed as simple parametric functions. By choosing an appropriate structure for the regularization operator, we use the GCV criterion to estimate optimal parameters of the regularization operator and thus the stabilizing functional. Experimental results are presented that show the ability of GCV to give extremely reliable estimates for the regularization parameter and operator. By allowing both the degree and the manner of smoothing to be determined from the data, GCV-based regularization yields solutions that would otherwise be unattainable without a priori information.
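The structural assumption described above can be made concrete in a 1-D sketch: when both the blur and the regularization operator are circulant (periodic boundaries assumed), their eigenvalues come straight from FFTs of their first columns, and the GCV criterion is evaluated mode by mode without forming any matrix. This illustrates the eigenvalue mechanics only, not the paper's parametric operator family.

```python
import numpy as np

def gcv_circulant(blurred, psf_col, lap_col, lambdas):
    """Evaluate the GCV criterion for Tikhonov restoration when blur A
    and regularization operator L are circulant, so their eigenvalues
    come straight from FFTs (the structural assumption in the text).
    Returns the lambda minimizing GCV over the candidate grid."""
    n = blurred.size
    a = np.fft.fft(psf_col)       # eigenvalues of A
    l = np.fft.fft(lap_col)       # eigenvalues of L
    bhat = np.fft.fft(blurred)
    best, best_g = None, np.inf
    for lam in lambdas:
        # Residual filter factors 1 - h_k for each Fourier mode
        r = lam * np.abs(l)**2 / (np.abs(a)**2 + lam * np.abs(l)**2)
        num = np.sum(np.abs(bhat * r)**2) / n      # ||(I - H)b||^2
        den = (np.sum(r) / n)**2                   # (tr(I - H)/n)^2
        g = num / den
        if g < best_g:
            best, best_g = lam, g
    return best
```

Each candidate lambda costs O(n) once the three FFTs are cached, which is what makes a fine search over both the parameter and a parametric operator family affordable.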