1. Introduction

There is interest in the development of human-portable surveillance systems capable of observing a wide field of view over long horizontal or near-horizontal paths. In addition, remotely piloted surveillance systems must transmit images to remote users over wireless channels that have limited bandwidth. As a result, currently fielded surveillance systems compress images prior to transmission.1 An image processing system that could reconstruct images corrupted by turbulence prior to compression and transmission would allow such systems to make more efficient use of available bandwidth. Thus, effort must be directed toward the examination and selection of computationally efficient and robust image reconstruction techniques.

Under all but the most benign conditions, temperature inhomogeneities in the atmosphere result in turbulence and variations in the index of refraction along the imaging path. Even small variations in the index of refraction cause changes in the optical path length that result in phase aberrations at the aperture.2 As the optical path length increases, or the turbulence strength increases, the aberrations become stronger and the isoplanatic angle decreases. Terrestrial surveillance scenarios often involve imaging scenes that subtend angles much larger than the isoplanatic angle, a situation referred to as anisoplanatic imaging. Under these conditions, increasing the size of the aperture will not improve the quality of the image. Unless the seeing conditions are very favorable, anisoplanatism will dominate most practical horizontal imaging situations. The resulting distortion will limit the performance of any optical system operating in such a turbulent atmosphere, frequently causing the observed scene to be blurred beyond usefulness.3

Many strategies have been proposed to estimate the true scene from turbulence-corrupted images, not all of which are explored in this work. Each represents a compromise among how quickly the technique produces an estimate of the scene, the accuracy of the estimate, and the robustness of the technique. Desirable characteristics include quick and accurate convergence under a wide range of atmospheric seeing conditions, regardless of the scene illumination, in a package that remains transportable by a single operator. Given currently available technology, power and portability constraints may eliminate traditional adaptive optics approaches that have been applied to similar problems in the imaging of space objects.4 An additional limitation of adaptive optics systems is that they correct well only over a few multiples of the isoplanatic angle, while common horizontal surveillance scenarios observe scenes that extend over many multiples of the isoplanatic angle.

Ayers and Dainty,5 Lohmann et al.,6 and others determined that near diffraction-limited phase information could be recovered from multiple speckle images through the use of Knox–Thompson and bispectrum techniques.7 The recovered phase and intensity information are used to reconstruct nearly diffraction-limited images of astronomical objects. Carrano8 proposed the use of speckle imaging (SI) for horizontal imaging, and EM Photonics (Newark, Delaware) has developed an embedded system using the technique to reconstruct turbulent images over horizontal paths in near real time.9 Bos and Roggemann have extended this technique for use over wide fields of view and have also explored the performance limits of SI over a range of turbulence levels.
Both Carrano and Bos ignore anisoplanatism in their processing but nevertheless produce reconstructions with reduced error.3 Other blind deconvolution techniques have also been applied to reconstructing turbulence-corrupted images. Richardson10 and Lucy11 pioneered the application of an iterative blind deconvolution technique to a single image degraded by atmospheric turbulence. Their method relied on the enforcement of positivity and finite-support constraints on the object estimate. Schulz12 extended that method to include multiple input images and developed a penalized-maximum-likelihood algorithm to avoid the trivial solution that incorrectly concludes that the optical system's point spread function (PSF) is a Dirac delta function and that the most likely object estimate is the observed turbulent image.

Little research has been done to compare the performance characteristics of the multiframe blind deconvolution (MFBD) and SI techniques under anisoplanatic conditions; that comparison is the subject of this work. This article compares the performance of SI techniques and MFBD at estimating object intensities from simulated images exhibiting extreme levels of anisoplanatism, using the number of input frames and the scene illumination level as the operational parameters. To enable this comparison, the authors developed a horizontal imaging simulation model and used it to create three sets of 1000 simulated, turbulent images based on the "Lena" image common in the image processing literature. To explore the techniques' performance under multiple illumination conditions, each set of simulated turbulent images was used to generate a set of speckle images with mean photon counts of 30, 15, and 7 photons per pixel. The effectiveness of SI and MFBD is assessed by calculating the mean-square error (MSE) between the resulting object estimate and the diffraction-limited image. We find that both SI- and MFBD-reconstructed objects show significant improvement in MSE compared with the average MSE over the entire data set. SI reduces the MSE by 46%, 42%, and 47% on average for the low, moderate, and severe turbulence cases, respectively, using 15 input frames under daytime illumination. Similarly, the MFBD method provides 40%, 25%, and 36% improvements in MSE on average under the same conditions. Under low-light conditions (fewer than 30 photons per pixel), improvements of 39%, 29%, and 27% are available using SI with 25 input frames used in the reconstruction, and 38%, 34%, and 33%, respectively, for the MFBD method using 150 input frames.

The remainder of this article is organized as follows. In Sec. 2, we discuss horizontal imaging as it applies to the reconstruction of turbulent images under anisoplanatic conditions and develop the foundations of the bispectrum and the objective functions to be applied to the optimization routine. In Sec. 3, the object recovery methods used for SI and MFBD are described for both Gaussian and Poisson noise models. In Sec. 4, the comparative results of processing for SI and MFBD are presented under all three turbulence cases and, in the case of the Poisson noise model, for three values of mean photon count. Section 5 presents the results of MFBD performance using field data, followed by the results of a processing strategy that uses the bispectrum estimate to jumpstart the MFBD estimator in Sec. 6. Summary conclusions are offered in Sec. 7.
2. Background

2.1 Horizontal Imaging

We begin our discussion by examining some assumptions regarding the horizontal-imaging problem. The atmospheric coherence radius, $r_0$, is commonly used to define the effective aperture radius of an imaging system; increasing the size of the aperture beyond $r_0$ will not improve the resolution of the system when imaging in the presence of anisoplanatism. For spherical wave propagation, the atmospheric coherence radius is defined by Fried13 and Roggemann and Welsh2 as

$$r_0 = \left[ 0.423\,k^2 \int_0^L C_n^2(z) \left( 1 - \frac{z}{L} \right)^{5/3} dz \right]^{-3/5}, \tag{1}$$

where $\lambda$ is the mean wavelength, $k = 2\pi/\lambda$ is the wave number, and $C_n^2(z)$ is the refractive-index structure constant along the path. The limits of the integral run from $z = 0$ at the pupil plane to the scene at $z = L$. For purposes of this study, it is assumed in Eq. (1) that the refractive-index structure constant does not vary with location for paths that are horizontal or at shallow angles from the aperture.14 This assumption simplifies the expression of the atmospheric coherence radius given in Eq. (1) to

$$r_0 = \left[ \frac{3}{8}\, 0.423\, k^2\, C_n^2\, L \right]^{-3/5}. \tag{2}$$

It is further assumed that the strength of the turbulence is such that scintillation effects can be neglected. Evaluating these expressions for the simulated propagation path yields the coherence radii listed in Table 1 for the low ($C_n^2 = 2.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$), moderate ($C_n^2 = 3.75 \times 10^{-14}\ \mathrm{m}^{-2/3}$), and severe ($C_n^2 = 5.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$) turbulence conditions. A complete description of the simulator used to create these data sets is available in Ref. 17. Using Fried's16 definition for the isoplanatic angle, $\theta_0$, and evaluating for the conditions under consideration, we find the isoplanatic angle to be between 10.8 and 6.63 μrad. For the conditions simulated here, a single pixel in the simulated imaging system captures 2.79 μrad. Expressing these values in pixels for the low, moderate, and severe turbulence conditions, we see that the isoplanatic patch covers approximately 4, 3, and 2 pixels in the simulated imaging system, so anisoplanatic effects are present in the simulated images. These results are summarized in Table 1, and samples of the simulated images are shown in Fig. 1.

Table 1 Atmospheric simulation turbulence conditions. This table presents parameters associated with the three turbulence conditions used in the study.
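As a quick cross-check of the quantities summarized in Table 1, the evaluation of Eq. (2) and of Fried's isoplanatic angle for a constant-$C_n^2$ path is easy to script. The Python sketch below uses placeholder wavelength and path-length values (the article's actual simulator parameters are defined in Ref. 17 and are not reproduced here), so the printed numbers are illustrative rather than the Table 1 values; only the $C_n^2$ levels and the 2.79-μrad pixel field of view come from the text.

```python
import numpy as np

# Placeholder optics/path parameters; the simulator in Ref. 17 defines the
# actual values, which are not reproduced in this article.
wavelength = 1.0e-6              # mean wavelength, m (assumed)
L = 3.0e3                        # propagation path length, m (assumed)
k = 2.0 * np.pi / wavelength     # wave number

def r0_spherical(cn2):
    """Eq. (2): spherical-wave coherence radius for constant Cn^2,
    r0 = [(3/8) * 0.423 * k^2 * Cn^2 * L]^(-3/5)."""
    return ((3.0 / 8.0) * 0.423 * k**2 * cn2 * L) ** (-3.0 / 5.0)

def theta0(cn2):
    """Fried's isoplanatic angle for constant Cn^2, using
    integral of z^(5/3) over [0, L] = (3/8) L^(8/3)."""
    return (2.914 * k**2 * cn2 * (3.0 / 8.0) * L ** (8.0 / 3.0)) ** (-3.0 / 5.0)

PIXEL_IFOV = 2.79e-6             # single-pixel field of view, rad (from the text)

for label, cn2 in [("low", 2.25e-14), ("moderate", 3.75e-14), ("severe", 5.25e-14)]:
    t0 = theta0(cn2)
    print(f"{label:8s}  r0 = {100 * r0_spherical(cn2):5.2f} cm   "
          f"theta0 = {1e6 * t0:5.2f} urad   patch ~ {t0 / PIXEL_IFOV:.1f} px")
```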
2.2 Speckle Imaging

Atmospheric turbulence can also be described by a correlation time constant $\tau_0$; the phase perturbation in each interval of length $\tau_0$ can be considered independent of the previous and next intervals. One strategy to mitigate the effects of atmospheric turbulence is the use of short-exposure images, where the detector is exposed to light for a period less than $\tau_0$ to freeze the atmospheric turbulence in a single, independent realization of the phase perturbation. The term "speckle imaging" arises from astronomical imaging, where short-exposure images of stellar objects exhibit high-frequency intensity fluctuations, or "speckles."

Noise-free image formation in the spatial domain is the convolution of the atmospheric PSF and the object intensity distribution,

$$i_k(\mathbf{x}) = \int o(\mathbf{x}')\, h_k(\mathbf{x} - \mathbf{x}')\, d\mathbf{x}', \tag{3}$$

where $\mathbf{x}$ is a two-dimensional (2-D) coordinate system in the image plane and $\mathbf{x}'$ is a 2-D coordinate system in the object plane. A set of $K$ images that have been corrupted by additive Gaussian noise can be described by

$$d_k(\mathbf{x}) = i_k(\mathbf{x}) + n_k(\mathbf{x}), \qquad k = 1, \ldots, K, \tag{4}$$

where $n_k(\mathbf{x})$ represents an additive noise term characterized by an independent, identically distributed Gaussian random variable with zero mean and variance $\sigma_n^2$. Taking the Fourier transform of both sides of Eq. (4) converts the convolution into a multiplication in the Fourier spatial-frequency domain,

$$D_k(\mathbf{f}) = O(\mathbf{f})\, H_k(\mathbf{f}) + N_k(\mathbf{f}). \tag{5}$$

The product of the complex object spectrum $O(\mathbf{f})$ with the atmospheric optical transfer function (OTF) $H_k(\mathbf{f})$ results in the received complex image spectrum,

$$I_k(\mathbf{f}) = O(\mathbf{f})\, H_k(\mathbf{f}). \tag{6}$$

Two independent steps are required to recover the amplitude and phase information associated with the received complex object spectrum. First, inverse pseudo-Wiener filtering the ensemble-average power spectral density (PSD) yields an estimate of the power spectrum of the object in the scene,

$$|\hat{O}(\mathbf{f})|^2 = \frac{\langle |D(\mathbf{f})|^2 \rangle}{|\bar{H}(\mathbf{f})|^2 + \alpha}, \tag{7}$$

where $\bar{H}(\mathbf{f})$ represents the long-exposure OTF of the imaging system and turbulent atmosphere and $\langle \cdot \rangle$ represents an ensemble average. For our processing, the theoretical long-exposure atmospheric OTF of the simulated imaging system is applied to estimate the amplitude of the complex object spectrum,13,17

$$|\hat{O}(\mathbf{f})| = \left[ \frac{\langle |D(\mathbf{f})|^2 \rangle}{|\bar{H}(\mathbf{f})|^2 + \alpha} \right]^{1/2}. \tag{8}$$

The additive term $\alpha$ is a constant adjusted by the user that compensates for the fact that at high spatial frequencies the PSD of the OTF may go to zero.

The amplitude of the PSD is insufficient to form an image; a separate process is required to recover the phase information in the speckle images. The object phase information is recovered by computing the bispectrum,7 defined as

$$B(\mathbf{f}_1, \mathbf{f}_2) = I(\mathbf{f}_1)\, I(\mathbf{f}_2)\, I^*(\mathbf{f}_1 + \mathbf{f}_2), \tag{9}$$

where $I^*(\cdot)$ represents the complex conjugate of $I(\cdot)$ and $\mathbf{f}_2$ represents a constant spatial-frequency offset. It can be shown2 that the object phase information can be recovered from the bispectrum by substituting the image spectrum defined in Eq. (6) into the definition in Eq. (9). The estimate of the object phase $\hat{\phi}$ can be determined at any arbitrary spatial frequency in terms of the bispectrum phase $\phi_B$,

$$\hat{\phi}(\mathbf{f}_1 + \mathbf{f}_2) = \hat{\phi}(\mathbf{f}_1) + \hat{\phi}(\mathbf{f}_2) - \phi_B(\mathbf{f}_1, \mathbf{f}_2), \tag{10}$$

assuming the phases at $\mathbf{f}_1$ and $\mathbf{f}_2$ are known. Because images are real valued, the phase at the origin is zero. The phases at the adjacent frequency points $(0,1)$, $(1,0)$, $(0,-1)$, and $(-1,0)$ can also arbitrarily be set to zero without loss of image quality. This assumption results in the loss of image registration information, and reconstructed images are often not centered within the image frame; further processing can recenter the reconstructed images.
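To make the two-step recovery concrete, the following Python sketch implements both steps in one dimension: the pseudo-Wiener amplitude estimate of Eqs. (7) and (8) and the bispectrum phase recursion of Eq. (10) along a single frequency axis with the unit offset $\mathbf{f}_2$. This is a minimal illustration under simplifying assumptions, not the authors' implementation; a practical 2-D reconstructor would accumulate several bispectrum subplanes and propagate the phase over the full frequency grid.

```python
import numpy as np

def object_modulus(frames, H_le, alpha=1e-3):
    """Eqs. (7)-(8): sqrt of the ensemble-average PSD divided by the
    long-exposure OTF PSD plus the user-adjusted constant alpha.
    `frames` is an (K, N) array of short-exposure signals."""
    D = np.fft.fft(frames, axis=1)
    psd = np.mean(np.abs(D) ** 2, axis=0)
    return np.sqrt(psd / (np.abs(H_le) ** 2 + alpha))

def object_phase(frames):
    """Eq. (10) with the constant offset f2 = 1:
    phi(f1 + 1) = phi(f1) + phi(1) - angle(<I(f1) I(1) I*(f1 + 1)>).
    phi(0) and phi(1) are set to zero, so absolute image position is lost."""
    n = frames.shape[1]
    I = np.fft.fft(frames, axis=1)
    phase = np.zeros(n // 2 + 1)
    for f1 in range(1, n // 2):
        B = np.mean(I[:, f1] * I[:, 1] * np.conj(I[:, f1 + 1]))  # average bispectrum
        phase[f1 + 1] = phase[f1] + phase[1] - np.angle(B)
    return phase
```

The recovered modulus and phase are then combined as $\hat{O}(\mathbf{f}) = |\hat{O}(\mathbf{f})|\, e^{j\hat{\phi}(\mathbf{f})}$ and inverse transformed to form the object estimate.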
2.3 MFBD for the Gaussian Noise Model

MFBD is another common image reconstruction method5 that uses an iterative process to recover object amplitude and phase information. The term $i_k(\mathbf{x})$ is the noise-free image characterized in Eqs. (3) and (4) as the convolution of a deterministic object and the imaging system's incoherent PSF. The incoherent PSF is the modulus squared of the coherent PSF,

$$h_k(\mathbf{x}) = |h_{c,k}(\mathbf{x})|^2, \tag{11}$$

where the coherent PSF is given by

$$h_{c,k}(\mathbf{x}) = \int W(\mathbf{u})\, e^{j\theta_k(\mathbf{u})}\, e^{-j 2\pi (\mathbf{x} \cdot \mathbf{u})}\, d\mathbf{u}, \tag{12}$$

where $W(\mathbf{u})$ is a binary aperture function whose value is zero outside the aperture, $\mathbf{u}$ is a 2-D coordinate system in the aperture plane, the $\cdot$ operator represents the inner product, and

$$\theta_k(\mathbf{u}) = \sum_j a_{kj}\, Z_j(\mathbf{u}). \tag{13}$$

The phase term $\theta_k(\mathbf{u})$ represents the accumulated phase perturbation in the aperture of the imaging system, decomposed into a linear sum of orthonormal basis functions $Z_j(\mathbf{u})$ with weighting coefficients $a_{kj}$ for each term $j$ and image $k$. Zernike polynomials are a convenient set of basis functions commonly used in describing classical aberrations in optical systems.

Using a Gaussian noise model, each point in each image is a random variable with a Gaussian probability density function (PDF). The PDF of $d_k(\mathbf{x})$ is parameterized by the object intensities $o$ and the vector of aberration weighting coefficients $\mathbf{a}$,4

$$p[d_k(\mathbf{x}); o, \mathbf{a}] = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left\{ -\frac{[d_k(\mathbf{x}) - i_k(\mathbf{x}; o, \mathbf{a})]^2}{2\sigma_n^2} \right\}, \tag{14}$$

and the likelihood of the complete data set consisting of all the pixel intensities in all $K$ corrupted images is given by

$$L(o, \mathbf{a}) = \prod_{k=1}^{K} \prod_{\mathbf{x}} p[d_k(\mathbf{x}); o, \mathbf{a}]. \tag{15}$$

Taking the natural log of Eq. (15) makes the analysis more tractable. The resulting summation, neglecting a constant term, yields the log-likelihood function

$$L_G(o, \mathbf{a}) = -\sum_{k=1}^{K} \sum_{\mathbf{x}} \left[ d_k(\mathbf{x}) - i_k(\mathbf{x}; o, \mathbf{a}) \right]^2. \tag{16}$$

The processing objective is to maximize this cost function, parameterized in terms of the true object $o$ and the vector of Zernike coefficients $\mathbf{a}$ that expand the phase terms $\theta_k(\mathbf{u})$. This is a complex, nonconvex optimization problem, and the search may settle in a local rather than the global optimum; further discussion is provided in Sec. 3.1. To maximize this cost function, the optimization routine needs at least an estimate of the gradient, $\nabla L$, and Hessian, $\nabla^2 L$, of the likelihood function. Although an analytic form of the Hessian is not required, optimization is much more efficient if an analytic form of the gradient is provided. The expression for the gradient is derived in Appendix A.

2.4 Poisson Noise Model MFBD

Not all images are taken in full daylight. At low light levels, photon noise may dominate image frames; this is often characterized by a grainy quality of the images due to the random arrival times of discrete photons. Photon noise in images is described by modeling the number of photons detected at each pixel in an image frame as a Poisson random variable with a mean photon count rate proportional to the average pixel intensity. For this simulation, the number of photons detected at each detector pixel is assumed to be an independent, Poisson-distributed random variable with a mean rate given by a noiseless diffraction-limited image; the random nature of the PSF is neglected. The probability of detecting $d_k(\mathbf{x})$ photons at a specific pixel location is given by

$$P[d_k(\mathbf{x})] = \frac{i_k(\mathbf{x})^{d_k(\mathbf{x})}\, e^{-i_k(\mathbf{x})}}{d_k(\mathbf{x})!}, \tag{17}$$

and the distribution over the entire set of pixel locations is given by4

$$L(o, \mathbf{a}) = \prod_{k=1}^{K} \prod_{\mathbf{x}} \frac{i_k(\mathbf{x})^{d_k(\mathbf{x})}\, e^{-i_k(\mathbf{x})}}{d_k(\mathbf{x})!}. \tag{18}$$

As before, taking the natural log yields a modified log-likelihood function

$$L_P(o, \mathbf{a}) = \sum_{k=1}^{K} \sum_{\mathbf{x}} \left[ d_k(\mathbf{x}) \ln i_k(\mathbf{x}) - i_k(\mathbf{x}) - \ln d_k(\mathbf{x})! \right], \tag{19}$$

where the last term is constant with respect to the parameters and can be neglected. The expression for the gradient of the Poisson noise model log-likelihood function is derived in Appendix B.
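A compact sketch of the forward model and both cost functions follows. The `basis` array stands in for a stack of Zernike polynomials sampled over the pupil (any orthonormal pupil basis serves for illustration); the circular FFT-based convolution, the function names, and the PSF normalization are our assumptions for a self-contained example, not the authors' code.

```python
import numpy as np

def incoherent_psf(a_k, basis, W):
    """Eqs. (11)-(13): theta = sum_j a_kj Z_j; h_c = IFFT{W exp(j theta)};
    h = |h_c|^2, normalized to unit volume. `basis` is (N_z, N, N)."""
    theta = np.tensordot(a_k, basis, axes=1)       # pupil phase, (N, N)
    h_c = np.fft.ifft2(W * np.exp(1j * theta))     # coherent PSF
    h = np.abs(h_c) ** 2
    return h / h.sum()

def noise_free_image(obj, h):
    """Eq. (3) implemented as a circular convolution with FFTs (a sketch;
    padding as in Sec. 3.1 would control wraparound in practice)."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(h)))

def gaussian_loglike(obj, coeffs, basis, W, data):
    """Eq. (16): L_G = -sum_k sum_x [d_k(x) - i_k(x)]^2, constants dropped."""
    return -sum(np.sum((d - noise_free_image(obj, incoherent_psf(a, basis, W))) ** 2)
                for d, a in zip(data, coeffs))

def poisson_loglike(obj, coeffs, basis, W, data, eps=1e-12):
    """Eq. (19): L_P = sum_k sum_x [d_k ln(i_k) - i_k], dropping ln(d_k!)."""
    total = 0.0
    for d, a in zip(data, coeffs):
        i = noise_free_image(obj, incoherent_psf(a, basis, W))
        total += np.sum(d * np.log(i + eps) - i)
    return total
```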
3. Methods

Our simulations assume that the propagation occurs over horizontally homogeneous conditions with both the object and the imaging system immersed in a turbulent atmosphere. Furthermore, we assume that the height above ground does not vary significantly between the imaging system and a static scene. In addition, $C_n^2$ is assumed to be constant over the propagation path.14 We assume that the simulated data have effectively frozen the turbulence at the moment each turbulent image is created. Reconstruction processing begins by selecting images from the complete data set in groups of $N_f$ = 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, and 24 frames and then 25, 50, 75, 100, 125, 150, 175, and 200 frames.

Once the number of frames to process, $N_f$, is determined, a stack of $N_f$ images is selected for processing. The number of reconstructions, $N_r = 1000/N_f$, used to determine the average MSE declines as the number of frames used in the reconstruction increases. For example, for $N_f = 2$ through 25, $N_r$ ranges from 500 down to 40; $N_r = 20$ and 10 for $N_f = 50$ and 100; and when 200 frames are applied to the reconstruction algorithm, only five independent trials are available. Prior to applying the simulated turbulent images to the reconstruction algorithms, they must be recentered because tilt is not estimated in the MFBD algorithm. This was accomplished by using a correlation filter to compare each image in the stack to an ensemble-average image and then shifting the turbulent image to recenter it. When the optimization routine exits, the recovered object is compared with the diffraction-limited image, and the MSE is averaged over all pixels after both images are normalized to a maximum value of 255, as follows:

$$\mathrm{MSE} = \frac{1}{N_p} \sum_{\mathbf{x}} \left[ \hat{o}(\mathbf{x}) - o_{\mathrm{DL}}(\mathbf{x}) \right]^2, \tag{20}$$

where $o_{\mathrm{DL}}(\mathbf{x})$ is the normalized diffraction-limited image, $\hat{o}(\mathbf{x})$ is the current normalized estimate of the object, and $N_p$ is the total number of pixels in the image. The image stack is incremented to begin processing the next group of turbulent images, and the process is repeated. When the entire data set has been processed, the average of the vector of MSEs for $N_f$ images processed at a time is calculated. To facilitate direct comparison between the reconstruction methods, exactly the same simulated turbulent images were applied to each estimator in every trial for both methods, and the MSE was calculated from the recovered images.

3.1 Multiframe Blind Deconvolution

In order to reduce edge effects in the reconstructed object, each image in the data set was preprocessed to pad the recentered image by replicating the edges of the image outward and then adding a border of zeros; the images as applied to the estimator were padded with 8 replicated pixels followed by 5 zero pixels at the margins of each image. We jointly process all images and all Zernike coefficients; thus, for a data set of $N_f$ images, using $N_z$ Zernike polynomial terms, there are $N_p + N_f N_z$ parameters that must be jointly estimated. To make the optimization tractable and reduce run times, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS)15 method is used to optimize the cost functions in Eqs. (16) and (19) to find the object and aberration coefficients most likely to have produced the simulated images in the data set. One of the characteristics of line-search methods like L-BFGS is the need for an estimate of the Hessian of the cost function. The limited-memory form of BFGS does not require an explicit expression for the Hessian; it approximates the Hessian by maintaining the last few updates of the parameter vector and its gradient. L-BFGS is a quasi-Newton, "hill-climbing" technique that begins with an initial guess at a solution and then proceeds along a line in the direction indicated by the gradient of the objective function evaluated at each location. For MFBD processing, the spatial average of all the frames used in the trial was used as the starting point for the estimator. In Sec. 6, we propose a hybrid bispectrum-MFBD (B-MFBD) method, in which a bispectrum object estimate is used to provide the starting point for the MFBD estimator. The initial-guess Zernike coefficients applied to the estimator for both techniques are random Gaussian numbers with a mean of 0.5 and unity variance.
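As a sketch of how the joint estimate can be driven, the fragment below packs the object pixels and the per-frame Zernike coefficients into a single parameter vector and hands the negative log-likelihood to SciPy's L-BFGS implementation, reusing `gaussian_loglike` from the sketch above. No analytic gradient is supplied here for brevity, so SciPy falls back to finite differences, far slower than the analytic gradients of Appendices A and B; the initializations follow the text (a spatial-average object, or a supplied starting object, and Gaussian coefficients with mean 0.5 and unit variance). The function names and packing scheme are our illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def mfbd_estimate(data, basis, W, obj0=None, n_iter=25, rng=np.random.default_rng()):
    """Joint MFBD estimate of the object and the N_f x N_z coefficients."""
    n_f, ny, nx = data.shape
    n_z = basis.shape[0]
    if obj0 is None:
        obj0 = data.mean(axis=0)                  # spatial-average starting point
    a0 = 0.5 + rng.standard_normal(n_f * n_z)     # mean 0.5, unit variance
    x0 = np.concatenate([obj0.ravel(), a0])

    def cost(x):                                  # negative log-likelihood
        obj = x[:ny * nx].reshape(ny, nx)
        coeffs = x[ny * nx:].reshape(n_f, n_z)
        return -gaussian_loglike(obj, coeffs, basis, W, data)

    res = minimize(cost, x0, method="L-BFGS-B", options={"maxiter": n_iter})
    return res.x[:ny * nx].reshape(ny, nx)

def mse_255(est, ref):
    """Eq. (20) after normalizing both images to a maximum of 255."""
    est = 255.0 * est / est.max()
    ref = 255.0 * ref / ref.max()
    return np.mean((est - ref) ** 2)
```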
Based on previous work,18 for the image reconstruction processing in this article, the number of iterations was limited to 25 and the number of Zernike terms was fixed at 30.

4. Results

4.1 Gaussian Noise Model

Figure 2 shows the plots of MSE for the Gaussian-case MFBD and bispectrum estimators as the number of images, $N_f$, is varied for the low, moderate, and severe turbulence conditions. For comparison, the average MSE for each data set is also provided. We see that at $N_f = 2$ and thereafter, both estimators can be expected to perform better than the average MSE for all three simulated turbulence conditions. For the low-turbulence case ($C_n^2 = 2.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$), the marginal improvement in MSE declines as frames are added, eventually reaching its maximum improvement over the average MSE of the entire data set. For the moderate-turbulence case ($C_n^2 = 3.75 \times 10^{-14}\ \mathrm{m}^{-2/3}$), the improvement in MSE available by including additional input frames likewise reaches a maximum. For the severe-turbulence case ($C_n^2 = 5.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$), once the improvement in MSE reaches its maximum, neither the MSE nor the standard deviation improves significantly as additional input images are added to the processing stack. However, for the low and moderate turbulence cases, if processing time is not of consequence, the MSE and its standard deviation continue to improve as additional images are included or if the number of iterations is increased.18 A sample of the simulated turbulent image data set is compared with sample reconstructed objects for MFBD and SI in Figs. 3–5.

4.2 Poisson Noise Model, Photon Rate 2 × 10^6, 30 Photons Per Pixel

Each set of 1000 turbulent images, representing the three turbulence cases, was used to generate a set of speckle images with a mean photon count per image of 2 × 10^6. Each set of images was processed using the MFBD and SI methods described above; the MFBD method used the cost function and gradients described in Eqs. (19), (31), and (32). Figure 6 shows the plots of MSE for the MFBD and SI reconstructors as the number of images, $N_f$, is varied for (a) Case 1: low turbulence ($C_n^2 = 2.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$), (b) Case 2: moderate turbulence ($C_n^2 = 3.75 \times 10^{-14}\ \mathrm{m}^{-2/3}$), and (c) Case 3: severe turbulence ($C_n^2 = 5.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$). For comparison, the average MSE for each data set is also provided. For the low-turbulence case, MFBD performance is worse than the data set average until 50 input frames are used in each reconstruction. At $N_f = 50$ and thereafter, the estimator can be expected to produce an estimate with an MSE lower than 2095, the average MSE of the images in the simulated data set. Marginal improvement in MSE continues as additional frames are added to the image stack, reaching a maximum of about 38% improvement over the average MSE across the data set. Bispectrum achieves similar results with far fewer input frames; for the low-turbulence case, MFBD performance approximates that of the bispectrum technique only after 125 images are processed in each image stack. For the moderate and severe turbulence cases, the use of 150 input frames produces a slightly lower MSE than the SI method. The reduced number of trials for large frame groups is conjectured to be the reason why the MSE plots are not consistently monotonic. Processing times for SI can be as short as 1 min, while MFBD takes 30 min or more when run to convergence. A sample simulated turbulence image is compared with MFBD- and SI-reconstructed objects in Figs. 7–9. The MFBD sample was processed using 175 frames; the SI method used 200 frames in the reconstruction.
4.3 Poisson Noise Model, Photon Rate 1 × 10^6, 15 Photons Per Pixel

Figure 10 shows the plots of MSE for the MFBD and bispectrum reconstructors as the number of images, $N_f$, is varied for (a) Case 1: low turbulence ($C_n^2 = 2.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$), (b) Case 2: moderate turbulence ($C_n^2 = 3.75 \times 10^{-14}\ \mathrm{m}^{-2/3}$), and (c) Case 3: severe turbulence ($C_n^2 = 5.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$). For comparison, the average MSE for each turbulence condition is also provided. For the low-turbulence case, MFBD performance is worse than the average of the input images until 50 input frames are used in each reconstruction, and the MFBD estimator produces a lower MSE than the bispectrum estimator once a sufficient number of frames is processed. MFBD error decreases as additional frames are added to the image stack, reaching a maximum of about 38% improvement over the average MSE across the data set. A sample simulated turbulent image is compared with reconstructed objects in Figs. 11–13. The MFBD sample was processed using 175 frames; the bispectrum method used 200 frames in the reconstruction. Processing times for bispectrum are significantly shorter than for MFBD.

4.4 Poisson Noise Model, Photon Rate 5 × 10^5, 8 Photons Per Pixel

Figure 14 shows the plots of MSE for the MFBD and bispectrum estimators as the number of images is varied. For comparison, the average MSE for the entire data set is also provided. On average, MFBD performance is worse than the average of the input images until 25 input frames are used in each reconstruction. At $N_f = 25$ and thereafter, the estimator can be expected to produce an estimate that has a lower MSE than the average MSE of the images in the simulated data set. One possible reason that the SI estimate performance has diminished is that the estimator was optimized for high light levels: only five bispectrum subplanes were used to reconstruct the phase estimate, which is sufficient at high light levels, but the inclusion of higher-frequency offsets may increase the noise in the final estimate, thereby degrading the MSE performance. The MSE produced by the MFBD estimator will on average be less than that of the SI technique once a sufficient number of frames is processed; beyond that point, the MSE is not further reduced by MFBD processing. For the moderate and severe turbulence cases shown in Figs. 14(b) and 14(c), the MFBD estimator likewise produces a lower MSE than SI once enough frames are included. Sample simulated turbulent images are compared with reconstructed objects in Figs. 15–17. The MFBD sample was processed using 175 frames; the bispectrum method used 200 frames in the reconstruction.

5. MFBD Estimator Performance Using Field Data

MFBD estimator performance is also evaluated with field images that were gathered as an ancillary part of a free-space laser communications experiment.19 The transmitter side of the experiment consisted of an 808-nm laser transmitter and a pinwheel target. The receiver end of the experiment was a 300-cm Celestron telescope located 3.046 km downrange. The imaging path extended horizontally, above sea level, over both land and water. As part of the communications experiment, simultaneous images and turbulence measurements were taken using a Shack–Hartmann wavefront sensor and a Point Grey CCD camera. The images used in MFBD reconstruction were recorded in the absence of continuous wavefront sensor (WFS) measurements because the active laser present in the images recorded for the communications experiment regularly saturated the image sensor. Turbulence measurements were instead taken for three 27-s intervals pre- and postimage collection, so the WFS data were not concurrent with the images.
Our assumption is that the atmospheric turbulence did not change more rapidly than the data could be recorded. The Fried parameter estimates, $r_0$, are shown in Table 2.

Table 2 Date, time, and wavefront sensor turbulence measurement range ($r_0$), averaged over 27-s intervals pre- and postimage collection, as the images in data sets 1, 2, and 3 were recorded.
Images from the July 1, 2009, data set were processed using the MFBD estimator. A region of interest was extracted from the recorded images and padded by replicating the pixels at the margin of the image outward. Fifteen images were processed using a spatially averaged initial estimate; the images were applied to the estimator for 20 iterations, and the result is shown in Figs. 18 and 19. Images from the July 24, 2009, 1900 EDT data set were processed using the MFBD estimator in the same manner. The measured $r_0$ for this data set ranged from 2.3 to 2.4 cm, substantially smaller than for data set 1. To provide more information for the estimator to work with, 25 images were processed using 35 Zernike coefficients and 20 iterations of the estimator. A spatially averaged image was used as the starting point for the optimization routine, and the result is shown in Fig. 20.

6. Using Bispectrum as the MFBD Initial Estimate

Both bispectrum and MFBD have strengths to recommend them. Bispectrum produces rapid results but requires an estimate of the atmospheric coherence radius $r_0$ and the long-exposure OTF to get started. MFBD converges more slowly and has the potential to produce a higher-quality image reconstruction, but it requires an initial estimate of the object to get started and is subject to trapping in a local minimum of the cost function. The reduced run time of the bispectrum estimator suggests that it may offer a quick way to provide a better starting point for the MFBD estimator. Reconstruction processing in Sec. 3.1 used the spatial average of the input images as the starting point for the L-BFGS optimization routine. A hybrid strategy was attempted to combine the best of both techniques: a bispectrum estimate of the object intensities was used to replace the ensemble average of the input images as the starting point for the MFBD estimator, and the MSE was calculated as a function of the number of input frames used in the estimate. Setting the number of processing iterations at 40 and limiting the number of input frames to 15 (Table 3), the hybrid method, B-MFBD, reduced the MSE by 16.9%, 18.6%, and 14.9% relative to MFBD processing that used the ensemble average as the starting point, for the low, moderate, and severe turbulence data sets, respectively. When compared with SI processing, reductions of 9.4%, 8.4%, and 6.6% in MSE were observed. Reconstruction times for each frame group were shortest for the bispectrum estimator and averaged 10 min for MFBD and 35 min for the B-MFBD hybrid. These results are summarized in Table 3. The hybrid B-MFBD improves on the performance of the standard MFBD estimator and approximates the performance of the SI estimator; the significant characteristic of the B-MFBD estimator is a 4% to 6% improvement in the variance of the estimate over SI and MFBD applied by themselves. The plots of MSE versus the number of frames for the Gaussian noise model are shown in Fig. 21 for (a) Case 1: low turbulence, (b) Case 2: moderate turbulence, and (c) Case 3: severe turbulence. Further efforts to explore the performance characteristics of this technique used a threshold on the gradient of the cost function as the exit criterion for the L-BFGS optimization routine. With this tolerance set, the B-MFBD estimator was allowed to run until convergence, producing the object estimates seen in Figs. 22(c), 23(c), and 24(c). Because processing times are then both long and unpredictable, this variant is unlikely to see application in an embedded system.
Table 3 Mean and variance of MSE for multiframe blind deconvolution (MFBD), bispectrum, and the hybrid B-MFBD methods for low ($C_n^2 = 2.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$), moderate ($C_n^2 = 3.75 \times 10^{-14}\ \mathrm{m}^{-2/3}$), and severe ($C_n^2 = 5.25 \times 10^{-14}\ \mathrm{m}^{-2/3}$) turbulence conditions. Processing was accomplished with $N_f = 15$, $N_z = 15$, and the exit criterion for the optimization routine set at 40 iterations.
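In terms of the sketches introduced earlier, the hybrid amounts to a one-line change of starting point. Here `speckle_estimate` stands in for the SI amplitude/phase recovery of Sec. 2.2, and `mfbd_estimate` is the L-BFGS driver sketched in Sec. 3.1; the composition below is our illustration of the strategy, with the frame count and iteration limit taken from Table 3.

```python
def b_mfbd_estimate(frames, basis, W, H_le):
    """Hybrid B-MFBD sketch: a fast bispectrum reconstruction replaces the
    spatial average as the L-BFGS starting point for MFBD (Sec. 6).
    Per Table 3, frames would hold N_f = 15 images and the optimizer
    exits after 40 iterations."""
    obj0 = speckle_estimate(frames, H_le)   # quick SI object estimate (hypothetical helper)
    return mfbd_estimate(frames, basis, W, obj0=obj0, n_iter=40)
```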
7. Conclusions

The performance of an SI estimator and an unconstrained optimization-based MFBD estimator was compared in terms of the MSE between the object reconstructed by each method and a diffraction-limited image. Three 1000-image data sets of a single image distorted by low, moderate, and severe turbulence that include anisoplanatic effects were applied to both methods, and their MSE performance was evaluated. At normal illumination levels, a wide range of turbulence cases would be well served by either SI or MFBD. Point performance estimates, using a data set of 1000 simulated turbulence-corrupted images, indicate that speckle-imaging techniques reduce the MSE by 46%, 42%, and 47% on average for the low, moderate, and severe turbulence conditions, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40%, 29%, and 36% improvements in MSE on average under the same conditions, but the speckle techniques are significantly faster. The comparison was repeated under decreasing illumination conditions (less than 100 photons per pixel), where improvements of 39%, 29%, and 27% are available using SI methods and 25 input frames, and 38%, 34%, and 33%, respectively, for the MFBD method using 150 input frames, under the assumption that the phase errors can be characterized as a Gaussian distribution. For all simulated turbulence cases, significant reductions were observed with as few as two input images. For the Poisson case, significant results were achieved with as few as 50 frames for MFBD and 2 frames for speckle. Processing with field data using MFBD produced qualitatively similar results over a modest range of atmospheric turbulence. The B-MFBD estimator, which uses the SI estimator to provide the MFBD processing algorithm with an initial starting point, produces a modest MSE performance improvement over SI alone using 15 input frames, 15 Zernike terms, and 40 iterations of the MFBD algorithm. The B-MFBD method produced more substantial improvement when compared with MFBD processing that used a spatial average of input frames as the optimizer's starting point with similar inputs and exit criteria. Qualitative improvements were achieved by letting the optimization run until convergence, but this technique may not be suitable for embedded-system implementation because of its long, unpredictable processing time.

Appendices

Appendix A: Gradient of the Gaussian Noise Model Log-Likelihood Cost Function

With respect to the pixel intensities, the gradient of the Gaussian log-likelihood function20 can be represented as

$$\frac{\partial L_G}{\partial o(\mathbf{x}')} = 2 \sum_{k=1}^{K} \sum_{\mathbf{x}} \left[ d_k(\mathbf{x}) - i_k(\mathbf{x}) \right] \frac{\partial i_k(\mathbf{x})}{\partial o(\mathbf{x}')}, \tag{21}$$

and the derivative of $i_k(\mathbf{x})$ with respect to the object is given by

$$\frac{\partial i_k(\mathbf{x})}{\partial o(\mathbf{x}')} = h_k(\mathbf{x} - \mathbf{x}'), \tag{22}$$

where $h_k(\mathbf{x})$ is the incoherent PSF, so that

$$\frac{\partial L_G}{\partial o(\mathbf{x}')} = 2 \sum_{k=1}^{K} \sum_{\mathbf{x}} \left[ d_k(\mathbf{x}) - i_k(\mathbf{x}) \right] h_k(\mathbf{x} - \mathbf{x}'). \tag{23}$$

The gradient with respect to the Zernike coefficients is

$$\frac{\partial L_G}{\partial a_{kj}} = 2 \sum_{\mathbf{x}} \left[ d_k(\mathbf{x}) - i_k(\mathbf{x}) \right] \frac{\partial i_k(\mathbf{x})}{\partial a_{kj}}. \tag{24}$$

Since the object is constant with respect to the Zernike coefficients, Eq. (24) reduces to a convolution of the object with the derivative of the PSF,

$$\frac{\partial i_k(\mathbf{x})}{\partial a_{kj}} = \left[ o * \frac{\partial h_k}{\partial a_{kj}} \right](\mathbf{x}). \tag{25}$$

From Eqs. (11)–(13), we can express the derivative of the incoherent PSF with respect to $a_{kj}$ in terms of the coherent PSF,

$$\frac{\partial h_k(\mathbf{x})}{\partial a_{kj}} = h_{c,k}(\mathbf{x}) \left( \frac{\partial h_{c,k}(\mathbf{x})}{\partial a_{kj}} \right)^* + h_{c,k}^*(\mathbf{x})\, \frac{\partial h_{c,k}(\mathbf{x})}{\partial a_{kj}}, \tag{26}$$

where $(\cdot)^*$ represents the complex conjugate of the function in parentheses. Equation (26) can be represented as

$$\frac{\partial h_k(\mathbf{x})}{\partial a_{kj}} = 2\, \mathrm{Re}\!\left\{ h_{c,k}^*(\mathbf{x})\, \frac{\partial h_{c,k}(\mathbf{x})}{\partial a_{kj}} \right\}, \tag{27}$$

where $\mathrm{Re}\{\cdot\}$ takes the real part of a complex value. Differentiating Eq. (12) gives

$$\frac{\partial h_{c,k}(\mathbf{x})}{\partial a_{kj}} = \int j\, W(\mathbf{u})\, e^{j\theta_k(\mathbf{u})}\, \frac{\partial \theta_k(\mathbf{u})}{\partial a_{kj}}\, e^{-j 2\pi (\mathbf{x} \cdot \mathbf{u})}\, d\mathbf{u}, \tag{28}$$

and, from Eq. (13),

$$\frac{\partial \theta_k(\mathbf{u})}{\partial a_{kj}} = Z_j(\mathbf{u}). \tag{29}$$

Combining the results of Eqs. (27)–(29) yields the final result

$$\frac{\partial h_k(\mathbf{x})}{\partial a_{kj}} = 2\, \mathrm{Re}\!\left\{ h_{c,k}^*(\mathbf{x}) \int j\, Z_j(\mathbf{u})\, W(\mathbf{u})\, e^{j\theta_k(\mathbf{u})}\, e^{-j 2\pi (\mathbf{x} \cdot \mathbf{u})}\, d\mathbf{u} \right\}. \tag{30}$$

Appendix B: Gradient of the Poisson Noise Model Log-Likelihood Cost Function

Taking the derivative with respect to the pixel intensities, the gradient of the Poisson log-likelihood function20 described in Eq. (19) can be represented as

$$\frac{\partial L_P}{\partial o(\mathbf{x}')} = \sum_{k=1}^{K} \sum_{\mathbf{x}} \left[ \frac{d_k(\mathbf{x})}{i_k(\mathbf{x})} - 1 \right] h_k(\mathbf{x} - \mathbf{x}'). \tag{31}$$

With respect to the Zernike coefficients, the gradient of the Poisson log-likelihood function can be represented as

$$\frac{\partial L_P}{\partial a_{kj}} = \sum_{\mathbf{x}} \left[ \frac{d_k(\mathbf{x})}{i_k(\mathbf{x})} - 1 \right] \left[ o * \frac{\partial h_k}{\partial a_{kj}} \right](\mathbf{x}), \tag{32}$$

where the PSF derivative $\partial h_k / \partial a_{kj}$ was shown in Eq. (30).
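When implementing analytic gradients such as Eqs. (30)–(32), a finite-difference check is a cheap safeguard against sign and indexing mistakes. The helper below is a generic verification aid of our own, not part of the article's processing chain; `loglike` and `grad` are any matched pair of cost and gradient functions over a flat parameter vector.

```python
import numpy as np

def check_gradient(loglike, grad, x0, eps=1e-6, n_probe=10,
                   rng=np.random.default_rng()):
    """Compare an analytic gradient against central finite differences
    at a few randomly chosen coordinates of the parameter vector x0."""
    g = grad(x0)
    for i in rng.choice(x0.size, size=n_probe, replace=False):
        e = np.zeros_like(x0)
        e[i] = eps
        fd = (loglike(x0 + e) - loglike(x0 - e)) / (2.0 * eps)
        print(f"coord {i}: analytic {g[i]:+.6e}   finite-diff {fd:+.6e}")
```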
References

1. S. Baiotti et al., "Advances in UAV data links: analysis of requirement evolution and implications on future equipment," in Proc. RTO SCI Symposium on Warfare Automation: Procedures and Techniques for Unmanned Vehicles, B10-1–B10-14 (2000).
2. M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).
3. J. P. Bos and M. C. Roggemann, "Mean squared error performance of speckle-imaging using the bispectrum in horizontal imaging applications," Proc. SPIE 8056, 805603 (2011). http://dx.doi.org/10.1117/12.884093
4. R. G. Paxman, T. J. Schulz, and J. R. Fienup, "Joint estimation of object and aberrations by using phase diversity," J. Opt. Soc. Am. A 9(7), 1072–1085 (1992). http://dx.doi.org/10.1364/JOSAA.9.001072
5. G. R. Ayers and J. C. Dainty, "Iterative blind deconvolution method and its applications," Opt. Lett. 13(7), 547–549 (1988). http://dx.doi.org/10.1364/OL.13.000547
6. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, "Speckle masking in astronomy: triple correlation theory and applications," Appl. Opt. 22(24), 4028–4037 (1983). http://dx.doi.org/10.1364/AO.22.004028
7. G. R. Ayers, M. J. Northcott, and J. C. Dainty, "Knox–Thompson and triple-correlation imaging through atmospheric turbulence," J. Opt. Soc. Am. A 5(7), 963–985 (1988). http://dx.doi.org/10.1364/JOSAA.5.000963
8. C. J. Carrano, "Speckle imaging over horizontal paths," Proc. SPIE 4825, 109–120 (2002). http://dx.doi.org/10.1117/12.453519
9. P. E. Curt et al., "Real-time embedded atmospheric compensation for long-range imaging using the average bispectrum speckle method," Proc. SPIE 7244, 724404 (2009).
10. W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am. 62(1), 55–59 (1972). http://dx.doi.org/10.1364/JOSA.62.000055
11. L. B. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79(6), 745–754 (1974). http://dx.doi.org/10.1086/111605
12. T. J. Schulz, "Multiframe blind deconvolution of astronomical images," J. Opt. Soc. Am. A 10(5), 1064–1073 (1993). http://dx.doi.org/10.1364/JOSAA.10.001064
13. D. L. Fried, "Limiting resolution looking down through the atmosphere," J. Opt. Soc. Am. 56(10), 1380–1384 (1966). http://dx.doi.org/10.1364/JOSA.56.001380
14. J. C. Wyngaard, Y. Izumi, and S. A. Collins, "Behavior of the refractive-index-structure parameter near the ground," J. Opt. Soc. Am. 61(12), 1646–1650 (1971). http://dx.doi.org/10.1364/JOSA.61.001646
15. J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Classics in Applied Mathematics 16), Society for Industrial and Applied Mathematics/Prentice-Hall, Englewood Cliffs, New Jersey (1996).
16. D. L. Fried, "Anisoplanatism in adaptive optics," J. Opt. Soc. Am. 72(1), 52–61 (1982). http://dx.doi.org/10.1364/JOSA.72.000052
17. J. P. Bos and M. C. Roggemann, "Technique for simulating anisoplanatic image formation over long horizontal paths," Opt. Eng. 51(10), 101704 (2012). http://dx.doi.org/10.1117/1.OE.51.10.101704
18. G. E. Archer, J. P. Bos, and M. C. Roggemann, "Mean squared error performance of MFBD nonlinear scene reconstruction using speckle imaging in horizontal imaging applications," Proc. SPIE 8399, 83990Q (2012). http://dx.doi.org/10.1117/12.920801
19. A. V. Sergeyev, M. C. Roggemann, and C. Demars, "Near the ground laser communication system: anisoplantic studies based on the PSF measurements," Proc. SPIE (2011). http://dx.doi.org/10.1117/12.882576
20. P. A. Billings, M. R. Stribling, and B. Stribling, "Mitigating turbulence-induced image blur using multi-frame blind deconvolution," in Proc. AMOS Technical Conf., 506–512 (2001).
Biography

Glen Archer received his PhD in electrical engineering from Michigan Technological University in December of 2013. He received his BS in electrical engineering from Texas Tech University in 1986 and received a commission as a second lieutenant in the United States Air Force. In 2001, he retired from the Air Force to take a position in the Electrical and Computer Engineering Department at Michigan Tech, where he currently serves as a principal lecturer and the associate chair. His research interests include image processing and engineering education.

Jeremy P. Bos received his PhD in electrical and computer engineering from Michigan Technological University in August of 2012. Before returning to pursue his doctorate at Michigan Tech, he worked in the automotive and defense industries for nearly 10 years. During this time, he became a professional engineer (2006) and earned his MS in electrical engineering from Villanova University (2003). He received his BS in electrical engineering from Michigan Technological University in 2000. He is currently a postdoctoral fellow at the Air Force Research Laboratory under the National Research Council's Research Associateship Program. His research interests are in the areas of atmospheric optics, image processing, and machine intelligence.

Michael C. Roggemann is a professor of electrical engineering at Michigan Tech. He is coauthor of the book Imaging Through Turbulence and has authored or coauthored over 60 journal articles and over 50 conference papers. He is a member of the IEEE and a fellow of both the Optical Society of America and SPIE. He was briefly affiliated with Boeing Corporation, where he served as a senior research scientist from 2002 to 2005 and was a technical fellow. Roggemann was an associate professor of engineering physics at the Air Force Institute of Technology, Wright-Patterson AFB, Ohio, from 1992 to 1997. He is an honorably retired Air Force officer at the rank of major.