Fusion of interpolated frames superresolution in the presence of atmospheric optical turbulence
Abstract

An extension of the fusion of interpolated frames superresolution (FIF SR) method to perform SR in the presence of atmospheric optical turbulence is presented. The goal of such processing is to improve the performance of imaging systems impacted by turbulence. We provide an optical transfer function analysis that illustrates regimes where significant degradation from both aliasing and turbulence may be present in imaging systems. This analysis demonstrates the potential need for simultaneous SR and turbulence mitigation (TM). While the FIF SR method was not originally proposed to address this joint restoration problem, we believe it is well suited for this task. We propose a variation of the FIF SR method with a fusion parameter that allows it to transition from traditional diffraction-limited SR to pure TM with no SR, as well as a continuum in between. This fusion parameter balances subpixel resolution, needed for SR, with the amount of temporal averaging, needed for TM and noise reduction. In addition, we develop a model of the interpolation blurring that results from the fusion process, as a function of this tuning parameter. The blurring model is then incorporated into the overall degradation model that is addressed in the restoration step of the FIF SR method. This innovation benefits the FIF SR method in all applications. We present a number of experimental results to demonstrate the efficacy of the FIF SR method in different levels of turbulence. Simulated imagery with known ground truth is used for a detailed quantitative analysis. Three real infrared image sequences are also used. Two of these include bar targets that allow for a quantitative resolution enhancement assessment.

1. Introduction

Designing imaging systems involves a complex trade space. The focal plane array detector pitch determines the spatial sampling frequency, and the f-number of the optics and wavelength determine the diffraction-limited optical cutoff frequency. The Nyquist sampling criterion dictates that the sampling frequency must exceed twice the cutoff frequency in order to guarantee that there will be no aliasing in the acquired imagery. The desire for a wide field of view and high optical resolution calls for a low f-number and high optical cut-off frequency. However, practical limitations associated with a small detector size often lead to employing focal plane arrays that do not meet the Nyquist criterion under diffraction-limited imaging conditions.1 Such undersampling may prevent the imaging system from achieving the full resolution afforded by the optics. A wide variety of superresolution (SR) algorithms have been proposed to successfully address such undersampling using multiple unique frames with relative motion between the scene and camera.2,3

Another potential source of image degradation, especially for long-range imaging, is atmospheric optical turbulence. Random fluctuations in the index of refraction along the optical path result in spatially and temporally varying image blur and warping.4–6 A variety of turbulence mitigation (TM) methods have been developed to address this issue by processing multiple short exposure images.7–9 There is an important connection between optical turbulence and the undersampling problem. With heavy turbulence, the blurring acts as an anti-aliasing low-pass filter that lowers the effective cut-off frequency of the optical system and limits or prevents aliasing. However, in light to moderate turbulence, significant aliasing artifacts may be present simultaneously with the turbulence degradation. In Sec. 3 of this paper, we provide an optical transfer function (OTF) analysis to illustrate this point and demonstrate the potential need for simultaneous SR and TM. This joint restoration problem has received more limited attention in the literature than SR and TM alone.10–15

It is interesting to compare and contrast how SR and TM methods exploit multiple frames for restoration. In the case of SR, spatial sampling diversity is provided by the multiple input frames. For a static scene, video provides “excess” temporal resolution that can be traded for increased single-frame spatial resolution to combat the system’s native undersampling. In the case of spatial-domain TM, the use of multiple frames is important for two main reasons. The first is that averaging multiple globally registered frames can help to reveal the correct scene geometry that is otherwise distorted in each frame by the quasiperiodic and spatially-varying warping from turbulence. The second benefit of using multiple frames for TM is that temporal averaging of registered input frames tends to produce a prototype image with a more spatially invariant residual blur than the individual frames.9,16 A spatially invariant point spread function (PSF) blur can be addressed with a relatively simple restoration method, such as a Wiener filter. Thus, one can see that averaging in some form is key for spatial-domain TM. However, this averaging is at odds with how SR exploits the unique information in each frame for sampling diversity. Thus, any method that seeks to accomplish both SR and TM jointly needs to balance temporal averaging with preservation of spatial sampling diversity.

A framework that is well suited to balance the factors described above is the fusion of interpolated frames (FIF) SR method, recently proposed by two of the current authors.17 The FIF SR method is a multiframe SR algorithm that fuses interpolated versions of each low resolution (LR) input frame. The fusion weights are based on the subpixel alignment for each interpolated pixel, any color information that may be available, and estimated local scene motion activity. A Wiener filter is then applied to the fused image to address the modeled OTF blurring. Here, we extend the FIF SR approach in two important ways to allow it to effectively treat SR and TM simultaneously. First, we incorporate an atmospheric OTF model into the overall degradation model used for the restoration step. Specifically, we use the approach that incorporates an estimate of the level of image registration, or atmospheric tilt reduction, as described by Hardie et al.9 The second key aspect of our extended FIF SR method is that we employ a tunable subpixel fusion weighting parameter that may be set according to the level of turbulence. Under light turbulence, the parameter may be set so as to provide a large weight only to pixels that lie very close to the high resolution (HR) grid. This provides maximum SR and minimum interpolation blurring. However, it also effectively reduces the amount of temporal averaging. When the turbulence is stronger, the parameter may be set to be less selective in the weighting to increase the effective averaging of the frames. While this is beneficial for TM, there tends to be more interpolation blurring in the fused image in this case. To help address this, we introduce and model an interpolation blurring OTF component. Thus, the overall OTF model used here in the restoration step smartly incorporates knowledge of key aspects of the algorithm processing steps that precede the OTF restoration. In particular, it incorporates the level of registration that impacts the residual atmospheric blurring and the level of subpixel weighting that impacts the interpolation blur. We believe this smart OTF model allows the Wiener filter to better restore the fused image and provide improved performance.

We would like to note that this paper represents a greatly expanded version of recent conference papers by the same authors.18,19 Major additions include the development of the interpolation OTF, all new and expanded simulation results, and new results with real image sequences. The remainder of this paper is organized as follows. In Sec. 2, we describe the FIF SR method and present the development that models the PSF and OTF of the interpolation and fusion steps. In Sec. 3, we introduce the overall OTF model used by the FIF SR method that incorporates diffraction, turbulence, and interpolation blurring. This section also includes an analysis of the impact of turbulence on undersampling and aliasing. Experimental results are presented in Sec. 4. These results include both simulated and real imagery. The simulated data are generated with a numerical wave propagation method and allow for a detailed quantitative analysis with ground truth.20 This analysis shows that the FIF SR method can effectively perform TM and SR simultaneously for a range of scenarios. Furthermore, one particularly interesting result we show is that the tilt variance from turbulence can actually improve SR results, compared with no turbulence, when no camera platform motion is present. In this case, the random wavefront tilts provide the critical relative motion between the scene and camera for SR sampling diversity, as described by Fishbain et al.10 and Yaroslavsky et al.11 We believe our simulation study is the first quantitative error analysis of its kind to demonstrate this phenomenon in the literature. Finally, we offer conclusions in Sec. 5.

2. Fusion of Interpolated Frames Superresolution

2.1. Algorithm Description

Figure 1 shows a block diagram summarizing the FIF SR method, originally from Karch and Hardie,17 and adapted here for joint SR and TM. In our implementation, short-exposure LR observed frames are registered using one of a variety of registration methods appropriate to the application. Next, single-frame interpolation is used to upsample the individual input images by a factor of M to the Nyquist sampling grid, based on the diffraction-limited optical cut-off frequency. The interpolated images are formed in alignment to a common reference frame or frame average.9,16 We shall consider different interpolation kernels here in our analysis. The next block is where the multiframe fusion takes place. Each interpolated pixel in each frame gets a weight based on its subpixel alignment. Finally, a Wiener filter is applied to the fused image to produce a single restored image. The Wiener filter makes use of a PSF model that incorporates atmospheric parameters, optical system parameters, the level of tilt reduction in the registration step, and the interpolation blurring.

Fig. 1

Block diagram of the FIF SR method. Observed frames are registered, interpolated individually, and then fused based on a subpixel weighting. A Wiener filter provides restoration based on an OTF model that incorporates registration accuracy and subpixel weighting.


Note that the FIF SR algorithm may be viewed as a type of nonuniform interpolation SR method.2 The FIF SR method and other nonuniform interpolation SR methods simplify the SR problem by separating it into registration and nonuniform interpolation followed by OTF restoration. Applying the Wiener filter for OTF restoration after the nonuniform interpolation is justified by the assumption that the warping and blurring operators commute in the degradation observation model. In the case of translational motion, the assumption is fully valid.2 For some other types of motion, such as affine, the warping and blurring operators have been shown by Hardie et al.21 to approximately commute.

Let us now consider the heart of the FIF SR method, which is the fusion step. Assume there are K input frames and one fused output frame. We define the interpolated input pixel i in frame k as $f_k(i)$, and this corresponds to a sample of $f_k(x,y)$ in Fig. 1. Let the fused pixel i on the Nyquist grid be denoted as $g(i)$. This represents a sample of $g(x,y)$ in Fig. 1. With this notation, the fused image pixels are given by

Eq. (1)

$$g(i) = \frac{\sum_{k=1}^{K} w_{i,k}(\beta)\, f_k(i)}{\sum_{k=1}^{K} w_{i,k}(\beta)},$$
where

Eq. (2)

$$w_{i,k}(\beta) = e^{-\left(d_x(i,k)^2 + d_y(i,k)^2\right)/\beta^2}$$
is the subpixel interpolation weighting function with parameter β. The weight for frame k at HR output position i is based on the distance between that output position and the nearest noninterpolated pixel from frame k. The horizontal distance is denoted $d_x(i,k)$ and the vertical distance $d_y(i,k)$. These distances are illustrated in Fig. 2.

Fig. 2

Nyquist interpolation grid (red squares) and the noninterpolated pixel positions from a single LR frame (blue circles). A larger distance implies a larger interpolation error for that frame at that pixel, and consequently, a lower weight using Eq. (2).


Note that Eq. (2) gives a Gaussian weighting, with more weight given to frames that have a smaller distance to an observed pixel. This is because larger distances tend to give larger interpolation errors.22 The weighting function is plotted in Figs. 3(a) and 3(b) for β=0.25 and 0.10, respectively. One can see that increasing β makes the fusion less selective, giving increased weight to frames with larger interpolation distances. In the limiting case of β=∞, equal weight is given to all frames and the fusion becomes a simple temporal average. Using a larger β provides more temporal averaging in the fusion process that can help to reveal the proper scene geometry in moderate to heavy turbulence, and make the atmospheric blurring more spatially invariant in the fused image.9,16 This also helps to attenuate temporal noise. On the other hand, a small β gives a high level of selectivity, where significant weight is only given to interpolated pixels that are close to observed samples in that frame. A small β would be expected to give the best SR results with minimal interpolation error, provided a sufficient diversity of samples is available with a high signal-to-noise ratio.17
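To make the fusion step concrete, the following sketch implements Eqs. (1) and (2) for the simplified case of known global translational motion. It is a minimal illustration, not the authors' implementation; the function names and the assumption that the registered shifts are given are ours.

```python
import numpy as np

def fusion_weights(dx, dy, beta):
    # Gaussian subpixel weighting of Eq. (2). dx and dy are distances
    # (in LR pixel spacings) from an HR grid position to the nearest
    # observed (noninterpolated) sample of a given frame.
    return np.exp(-(dx**2 + dy**2) / beta**2)

def fuse_frames(interp_frames, shifts, M, beta):
    # interp_frames: (K, H, W) stack of frames already interpolated to
    # the HR grid and aligned to a common reference (f_k in Eq. (1)).
    # shifts: (K, 2) global (row, col) shift of each frame in LR pixel
    # spacings (a simplification; real data needs full registration).
    K, H, W = interp_frames.shape
    rows, cols = np.mgrid[0:H, 0:W] / float(M)  # HR positions in LR units
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for k in range(K):
        dy = rows - shifts[k, 0]
        dx = cols - shifts[k, 1]
        dy -= np.round(dy)   # distance to nearest LR sample, in [-0.5, 0.5]
        dx -= np.round(dx)
        w = fusion_weights(dx, dy, beta)
        num += w * interp_frames[k]
        den += w
    return num / den  # Eq. (1); den > 0 since the Gaussian never reaches 0
```

With a large β the weights flatten and the fusion approaches a simple temporal average; with a small β only near-coincident samples contribute, matching the selectivity behavior described above.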

Fig. 3

Interpolation weighting from Eq. (2) for (a) β=0.25 and (b) β=0.10. Binning weighting from Eq. (3) is shown in (c) for M=4.


The tunability provided by the parameter β sets it apart from most other nonuniform interpolation SR methods2 and makes it well suited to perform SR in the presence of turbulence. For example, it is interesting to compare the FIF SR approach of fusing interpolated frames with methods that use binning. Binning methods register the input LR frames and populate an HR grid by putting the LR pixels into discrete bins on the HR grid.2,21 LR pixels are assigned to the nearest HR bin in a simple quantization process or by interpolation to the nearest HR bin. With such binning methods, care must be taken to address the very common scenario of empty bins.21 By fusing interpolated frames and using Gaussian weighting, the FIF SR method does not have this issue and will never have any empty output pixels. Using a different weighting function, however, it is possible for the FIF SR framework to perform fusion that is equivalent to binning. Specifically, this binning operation is achieved with the weighting function,

Eq. (3)

$$w_{i,k} = \begin{cases} 1 & |d_x(i,k)| < \frac{1}{2M} \ \text{and} \ |d_y(i,k)| < \frac{1}{2M} \\ 0 & \text{otherwise,} \end{cases}$$
depicted in Fig. 3(c) for M=4. While we do not recommend this weighting function for joint SR and TM because of lack of tunability, we believe it is insightful to see the relationship between binning and Gaussian weighting of interpolated frames in Fig. 3.

Further insight may be gained with regard to the parameter β by considering what we term the “averaging power” of the fusion. By this, we mean the variance reduction factor for independent and identically distributed (i.i.d.) temporal samples, relative to a standard average. Consider the fusion of K i.i.d. temporal samples with variance σ². The output variance depends on the fusion weights and is given by

Eq. (4)

$$\sigma_i^2(\beta) = \frac{\sigma^2}{P_i(\beta)\,K},$$
where $P_i(\beta)$ is the averaging power, and $\sigma^2/K$ is the output variance for a standard average. The averaging power factor can range from 0 to 1, with 1 providing the same variance reduction as a standard average, and 0 providing none. As a weighted sum, the averaging power for the FIF SR fusion is given by

Eq. (5)

$$P_i(\beta) = \frac{\left(\sum_{k=1}^{K} w_{i,k}(\beta)\right)^{2}}{K\sum_{k=1}^{K} w_{i,k}^{2}(\beta)}.$$
If we assume uniform subpixel distances, a large number of input frames (i.e., a uniform continuum of subpixel shifts), and weights given by Eq. (2), the averaging power factor approaches the following for all pixels:

Eq. (6)

$$P(\beta) = \frac{\left(\int_{-0.5}^{0.5}\int_{-0.5}^{0.5} e^{-(x^2+y^2)/\beta^2}\,dy\,dx\right)^{2}}{\int_{-0.5}^{0.5}\int_{-0.5}^{0.5}\left(e^{-(x^2+y^2)/\beta^2}\right)^{2}\,dy\,dx}.$$
The averaging power in Eq. (6) is plotted in Fig. 4 as a function of β. Note that smaller values of β produce a smaller averaging power. This is a result of the greater spatial selectivity. Interestingly, we have observed good algorithm performance in many cases near the inflection point of the curve in Fig. 4, at approximately β=0.25. For comparison, the averaging power for the binning method given by Eq. (3), and shown in Fig. 3(c), is simply P = 1/M² for all pixels. This result is based on the fraction of frames expected to fall into each bin under a uniform subpixel displacement assumption.
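As a numerical check, Eq. (6) can be evaluated directly. The short sketch below (our own illustration) integrates over the unit cell on a fine grid and reproduces the qualitative behavior in Fig. 4: P near 1 for large β, falling toward 0 as β shrinks.

```python
import numpy as np

def averaging_power(beta, n=801):
    # Numerically evaluate Eq. (6) over the unit cell [-0.5, 0.5]^2.
    s = np.linspace(-0.5, 0.5, n)
    x, y = np.meshgrid(s, s)
    w = np.exp(-(x**2 + y**2) / beta**2)
    num = np.trapz(np.trapz(w, s, axis=1), s) ** 2   # squared integral of w
    den = np.trapz(np.trapz(w**2, s, axis=1), s)      # integral of w^2
    return num / den

for beta in (0.10, 0.25, 0.50, 2.0):
    # Approaches 1 (full temporal averaging) as beta grows.
    print(f"beta = {beta:4.2f}: P = {averaging_power(beta):.3f}")
```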

Fig. 4

Averaging power factor from Eq. (6) as a function of β.


It should be noted that the original FIF SR paper17 considers other weighting components, in addition to that in Eq. (2). One additional weighting term is designed to exploit color correlation. Another term is included to address in-scene motion to minimize motion blur and distortion.17 However, since our focus here is on the simultaneous SR and TM, we limit the scenarios under consideration to single-band imagery (i.e., no color), and no in-scene motion. Treating color and in-scene motion are certainly important problems that we hope to address in future work, in the context of joint TM and SR.

2.2. Interpolation Impulse Response

In this subsection, we derive an OTF model for the fused interpolation blur. First, note that each sample in g(x,y) in Fig. 1 may be expressed as a weighted sum of observed pixels. The specific weighting will depend on the exact motion and interpolation kernel used as well as the weighting from Eq. (2). For anything other than rigid translational motion, the weights will vary spatially.21,23 However, if we assume a large number of frames with motion significantly greater than one LR pixel, we believe it is reasonable to assume a uniform subpixel distribution of motion. This allows us to model the interpolation blur with a spatially invariant impulse response. This is very similar to how the turbulence blur can be treated as spatially invariant after fusing a large number of frames.9,16 Computing a spatially invariant interpolation impulse response for the fused image g(x,y) is powerful in that it can be incorporated into the overall degradation model, as shown in Fig. 1. This allows us to mitigate some of the nonideal aspects of the interpolation step. It also adds to our overall understanding of the FIF SR method and further informs the selection of β.

The choice of interpolation kernel impacts the resulting interpolation blur model. Consider three common 1-D interpolation kernels24 that are illustrated in Fig. 5. The zero-order hold (ZOH), or nearest neighbor, interpolation kernel is given by

Eq. (7)

$$F_{\text{zoh}}(x) = \mathrm{rect}(x) = \begin{cases} 1 & |x| < 1/2 \\ 0 & \text{otherwise,} \end{cases}$$
where x is position in LR pixel spacings. A linear interpolation kernel is given by

Eq. (8)

$$F_{\text{lin}}(x) = \begin{cases} 1 - |x| & |x| < 1 \\ 0 & \text{otherwise.} \end{cases}$$
The last interpolation kernel we consider here is the cubic kernel, given by

Eq. (9)

$$F_{\text{cub}}(x) = \begin{cases} 1.5|x|^3 - 2.5|x|^2 + 1 & |x| \le 1 \\ -0.5|x|^3 + 2.5|x|^2 - 4|x| + 2 & 1 < |x| < 2 \\ 0 & \text{otherwise.} \end{cases}$$
All of these kernels may be used in multiple dimensions as separable functions. Thus, our analysis is done in 1-D, and then extended to 2-D.
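For reference, Eqs. (7)–(9) translate directly to code. The following is a minimal sketch (our own illustration, not code from the paper) that defines each 1-D kernel and forms a separable 2-D version:

```python
import numpy as np

def f_zoh(x):
    # Zero-order hold (nearest neighbor) kernel, Eq. (7).
    return np.where(np.abs(x) < 0.5, 1.0, 0.0)

def f_lin(x):
    # Linear interpolation kernel, Eq. (8).
    ax = np.abs(x)
    return np.where(ax < 1, 1 - ax, 0.0)

def f_cub(x):
    # Cubic interpolation kernel, Eq. (9).
    ax = np.abs(x)
    inner = 1.5 * ax**3 - 2.5 * ax**2 + 1
    outer = -0.5 * ax**3 + 2.5 * ax**2 - 4 * ax + 2
    return np.where(ax <= 1, inner, np.where(ax < 2, outer, 0.0))

# 2-D kernels follow by separability, F(x, y) = F(x) F(y):
x = np.linspace(-2, 2, 401)
kern_2d = f_cub(x)[:, None] * f_cub(x)[None, :]
```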

Fig. 5

Continuous interpolation kernels as a function of position in LR pixel spacings for ZOH, linear, and cubic interpolation.


Next, let the LR sampling function (i.e., integers corresponding to LR sample positions) be expressed as

Eq. (10)

$$\mathrm{samp}(x) = \sum_{k=-\infty}^{\infty} \delta(x - k),$$
where δ(·) is a Dirac delta function. The interpolation weights, for a shift of s between the LR and interpolated grids, are obtained by sampling the interpolation kernel F(x) giving

Eq. (11)

$$F_s(x) = F(x)\,\mathrm{samp}(x - s) = F(x)\sum_{k=-\infty}^{\infty} \delta(x - k - s) = \sum_{k=-\infty}^{\infty} F(k+s)\,\delta(x - k - s).$$
The interpolation kernel, F(x), may be one of those presented in Eqs. (7)–(9) or any other. Integrating over a uniform subpixel shift (assuming temporal frames provide a uniform distribution of subpixel shifts) with distance weighting w(s), the 1-D frame-averaged interpolation impulse response is given by

Eq. (12)

$$h(x) = \int_{-0.5}^{0.5} w(s)\,F_s(x)\,ds = \int_{-0.5}^{0.5} w(s)\left(\sum_{k=-\infty}^{\infty} F(k+s)\,\delta(x-k-s)\right)ds = \sum_{k=-\infty}^{\infty}\int_{-0.5}^{0.5} w(s)\,F(k+s)\,\delta(x-k-s)\,ds.$$
The integral in Eq. (12) is solved using the sifting property, yielding

Eq. (13)

$$\int_{-0.5}^{0.5} w(s)\,F(k+s)\,\delta(x-k-s)\,ds = \begin{cases} w(x-k)\,F(x) & |x-k| < 0.5 \\ 0 & \text{otherwise} \end{cases} = F(x)\,w(x-k)\,\mathrm{rect}(x-k).$$
Combining the result in Eq. (13) with the summation in Eq. (12), we get

Eq. (14)

$$h(x) = \sum_{k=-\infty}^{\infty} F(x)\,w(x-k)\,\mathrm{rect}(x-k) = F(x)\sum_{k=-\infty}^{\infty} w(x-k)\,\mathrm{rect}(x-k).$$

Using a 1-D version of the weighting function from Eq. (2), we have $w(s) = e^{-s^2/\beta^2}$. Putting this into Eq. (14) gives

Eq. (15)

$$h_\beta(x) = F(x)\sum_{k=-\infty}^{\infty} e^{-(x-k)^2/\beta^2}\,\mathrm{rect}(x-k).$$
It is interesting to note that if we employ uniform fusion weights, with β=∞ and w(s)=1, the resulting fusion is a simple frame average and the interpolation impulse response in Eq. (14) reduces to the interpolation kernel itself:

Eq. (16)

$$h(x) = F(x)\sum_{k=-\infty}^{\infty} \mathrm{rect}(x-k) = F(x).$$
Using separability, we can get the 2-D PSF as

Eq. (17)

$$h_\beta(x,y) = h_\beta(x)\,h_\beta(y).$$
The corresponding transfer function is

Eq. (18)

$$H_\beta(u,v) = \mathrm{FT}\{h_\beta(x,y)\},$$
where FT{·} represents the 2-D Fourier transform. A discrete equivalent model can be found by sampling a band-limited version of Eq. (15) by virtue of impulse invariance.25

Examples of the frame-averaged interpolation impulse response from Eq. (15) are plotted in Fig. 6 for ZOH, linear, and cubic interpolation kernels. One can see that with β=∞, the impulse response is simply the interpolation kernel, as shown in Eq. (16). As β gets smaller, the width of the impulse response also gets smaller. The interpolation OTFs are shown in Fig. 7 for the same kernels and values of β as those in Fig. 6. Note that in these plots, the folding frequency for SR with upsampling by a factor of M is 0.5M cycles/(LR spacing). From Fig. 7, it is clear that the interpolation blur for all but very small β can be quite significant. The good news is that much of the frequency content, while attenuated, is available for restoration, provided that the interpolation OTF is included in the overall degradation model. To allow for SR restoration of the higher frequencies, it is important to use the smallest β possible, while meeting the other algorithm requirements.
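Because rect(x−k) tiles the real line, only the nearest integer k contributes for any x, which makes Eq. (15) easy to evaluate. The sketch below (our own illustration, using the linear kernel as an example) samples hβ(x) and views the OTF cross section of Eq. (18) with an FFT:

```python
import numpy as np

def h_beta(x, F, beta):
    # Frame-averaged interpolation impulse response, Eq. (15). Only the
    # nearest integer k survives the rect(x - k) gating.
    k = np.round(x)
    return F(x) * np.exp(-(x - k) ** 2 / beta**2)

F_lin = lambda t: np.where(np.abs(t) < 1, 1 - np.abs(t), 0.0)  # Eq. (8)

dx = 0.01                              # grid step, in LR pixel spacings
x = np.arange(-4, 4, dx)
h = h_beta(x, F_lin, beta=0.25)
h /= h.sum() * dx                      # normalize to unit area (unit DC gain)
H = np.abs(np.fft.rfft(np.fft.ifftshift(h))) * dx
freqs = np.fft.rfftfreq(x.size, d=dx)  # cycles per LR pixel spacing
```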

Fig. 6

Frame-averaged interpolation impulse response from Eq. (15) with multiple β for (a) ZOH interpolation, (b) linear interpolation, and (c) cubic interpolation.


Fig. 7

Frame-averaged interpolation frequency response cross-section from Eq. (18) with multiple β for (a) ZOH interpolation, (b) linear interpolation, and (c) cubic interpolation.


3. Atmospheric Optical Transfer Function and Aliasing

3.1. Overall OTF Model

In this subsection, we present the overall OTF model used for the FIF SR restoration. The model is spatially invariant and is meant to capture the degradation as seen in the fusion image g(x,y) in Fig. 1. The spatial invariance is justified because g(x,y) represents a weighted sum of many short-exposure images.9,16 The overall model and its components are expressed as

Eq. (19)

$$H_{\alpha,\beta}(u,v) = H_{\text{dif}}(u,v)\,H_{\text{det}}(u,v)\,H_{\text{atm},\alpha}(u,v)\,H_{\beta}(u,v).$$
The diffraction-limited optics component is given by $H_{\text{dif}}(u,v)$, the detector OTF is $H_{\text{det}}(u,v)$, the atmospheric OTF is $H_{\text{atm},\alpha}(u,v)$, and finally, $H_{\beta}(u,v)$ is the interpolation OTF from Sec. 2.2. For an optical system with a circular exit pupil, the diffraction-limited OTF is given by26

Eq. (20)

$$H_{\text{dif}}(\rho) = \begin{cases} \dfrac{2}{\pi}\left[\cos^{-1}\!\left(\dfrac{\rho}{\rho_c}\right) - \dfrac{\rho}{\rho_c}\sqrt{1 - \left(\dfrac{\rho}{\rho_c}\right)^{2}}\,\right] & \rho \le \rho_c \\ 0 & \text{otherwise,} \end{cases}$$
where $\rho = \sqrt{u^2 + v^2}$, $\rho_c = 1/(\lambda N)$ is the spatial cut-off frequency, λ is the wavelength, and N is the f-number of the optics. Note that N = l/D, where l is the focal length and D is the aperture diameter. The detector OTF is the Fourier transform of the detector active area shape.1 Finally, the atmospheric turbulence model is based on that originally derived by Fried:5

Eq. (21)

$$H_{\text{atm},\alpha}(\rho) = \exp\left\{-3.44\left(\frac{\lambda l \rho}{r_0}\right)^{5/3}\left[1 - \alpha\left(\frac{\lambda l \rho}{D}\right)^{1/3}\right]\right\},$$
where r0 is the atmospheric coherence diameter or Fried parameter.

In Fried’s derivation, α=0 in Eq. (21) gives the long exposure OTF. The average tilt-corrected short-exposure OTF under near-field conditions is given by Eq. (21) when α=1. The near-field condition is defined by Fried as $D \gg \sqrt{L\lambda}$, where L is the optical path length. As shown by Tofsted’s analysis,27,28 most sensors operate in regimes where Fried’s near-field condition should be invoked for the short-exposure case (even for a far-field optical condition). Since we are applying this OTF to partially tilt-corrected imagery resulting from the potentially imperfect registration step in Fig. 1, we follow the approach in Hardie et al.9 and treat the parameter α as a continuous tilt reduction factor. At the extremes, a value of α=0 is used for no registration (i.e., the long exposure OTF), and a value of α=1 would be used for ideal registration (i.e., the short-exposure OTF). This parameter may be estimated based on the type of registration used.9
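The sketch below codes the diffraction and atmospheric components of Eqs. (20) and (21); multiplying them together with the detector and interpolation OTFs (omitted here for brevity) gives the overall model of Eq. (19). This is our own transcription under the conventions above, not the authors' code, and the r0 value used in the example is only illustrative.

```python
import numpy as np

def otf_diffraction(rho, lam, N):
    # Diffraction-limited OTF for a circular exit pupil, Eq. (20);
    # rho_c = 1/(lam*N) is the optical cut-off frequency.
    rho_c = 1.0 / (lam * N)
    r = np.clip(rho / rho_c, 0.0, 1.0)
    H = (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r**2))
    return np.where(rho <= rho_c, H, 0.0)

def otf_atmosphere(rho, lam, l, r0, D, alpha):
    # Fried-derived atmospheric OTF with tilt-reduction factor alpha,
    # Eq. (21): alpha = 0 (long exposure) through 1 (ideal tilt removal).
    arg = lam * l * rho
    return np.exp(-3.44 * (arg / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * (arg / D) ** (1.0 / 3.0)))

# Example with the Table 1 system, out to the optical cutoff:
lam, N, l, D = 0.787e-6, 5.506, 0.500, 0.0908
rho = np.linspace(0.0, 1.0 / (lam * N), 512)   # cycles/m at the focal plane
H = otf_diffraction(rho, lam, N) * otf_atmosphere(rho, lam, l, 0.03, D, 0.5)
```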

Referring to the block diagram in Fig. 1, the ideal image, z(x,y), relates to the fusion image as

Eq. (22)

$$g(x,y) = z(x,y) * h_{\alpha,\beta}(x,y),$$
where

Eq. (23)

$$h_{\alpha,\beta}(x,y) = \mathrm{FT}^{-1}\{H_{\alpha,\beta}(u,v)\}.$$
As illustrated in Fig. 1, a Wiener filter29 is employed in an effort to provide deconvolution of the overall blurring impulse response in Eq. (23). The Wiener frequency response is given by

Eq. (24)

$$H_W(u,v) = \frac{H_{\alpha,\beta}^{*}(u,v)}{\left|H_{\alpha,\beta}(u,v)\right|^{2} + \Gamma},$$
where Γ represents a constant noise-to-signal power spectral density ratio. One of the unique aspects of our approach is that the Wiener filter is designed not only to mitigate the degradation of the camera system and turbulence but also to account for the level of registration efficiency (using the parameter α), and the level of interpolation blurring (using the parameter β). By accounting for the impact of these preceding steps in the algorithm itself, we believe improved restoration may be achieved.
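In code, the restoration step is a standard frequency-domain Wiener filter, sketched below under the assumption that the overall OTF of Eq. (19) has already been sampled on the FFT grid of the fused image:

```python
import numpy as np

def wiener_restore(g, H, gamma):
    # Wiener deconvolution of Eq. (24). g is the fused image, H is the
    # overall OTF of Eq. (19) on g's FFT grid (zero frequency at the
    # corner), and gamma is the constant NSR parameter.
    G = np.fft.fft2(g)
    HW = np.conj(H) / (np.abs(H) ** 2 + gamma)
    return np.real(np.fft.ifft2(HW * G))
```

Because α and β enter only through H, the same routine automatically compensates for the registration efficiency and interpolation blurring once those effects are folded into the OTF model.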

An example OTF from Eq. (19), showing the various components, is provided in Fig. 8 for α=0.5 and β=0.3 using cubic interpolation. The optical parameters for this figure are listed in Table 1. The parameters in Table 1 are also used for the simulated data presented in Sec. 4.1. Also shown in Fig. 8 is the native sensor folding frequency for this particular example. Note that any signal energy above the folding frequency will be aliased during sampling. It is interesting to note from Fig. 8 that the frame-averaged interpolation OTF, |Hβ(u,0)| (shown in red), is comparable to the isolated atmospheric OTF from Eq. (21) (shown in green). Thus, it is clear that with β values on this order, mitigating the impact of the interpolation is important for restoration, especially for SR, where frequency content out to the diffraction-limited cutoff frequency is sought.

Fig. 8

Example OTF from Eq. (19) showing the various components for α=0.5 and β=0.3. The optical parameters for this figure are listed in Table 1.


Table 1

Optical parameters used for the OTF plot in Fig. 8 and for the simulation results in Sec. 4.1.

Parameter                              Value
Aperture                               D = 0.0908 m
Focal length                           l = 0.500 m
F-number                               f/# = 5.506
Wavelength                             λ = 0.787 μm
Object distance                        L = 5 km
Optical cut-off frequency              230.78 cycles/mm
Sampling frequency                     153.85 cycles/mm
Folding frequency                      76.92 cycles/mm
Nyquist pixel spacing (focal plane)    δf = 2.167 μm
Nyquist pixel spacing (object plane)   δo = 0.02167 m
Pixel pitch (focal plane)              δ̄f = 6.501 μm
Pixel pitch (object plane)             δ̄o = 0.06501 m
Undersampling factor                   M = 3 (Q = 2/3)

As can be seen in Eqs. (19)–(21), the parametric form of the OTF used for FIF SR restoration has several parameters. In practice, it is generally reasonable to assume that the optical parameters, λ, D, and l, would be known a priori. The remaining parameters are α, β, and r0. We have observed generally good results with β=0.25 for a wide range of datasets. The tilt reduction parameter, α, typically ranges from 0 for no registration to about 0.5 for optical flow registration. Imperfect values of α can usually be reasonably well compensated for by altering the employed value of r0. Thus, the bottom line is that OTF estimation and resulting restoration performance are mainly sensitive to r0. Fixing the other parameters and searching over the r0 space can often lead to very useful results. Subjective evaluation or other no-reference metrics30 can be used for selection. Other scene-based methods for estimating r0 can be found in the literature.31,32
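One simple way to operationalize this r0 search is sketched below. The gradient-energy score is only an illustrative stand-in for the subjective or no-reference evaluation mentioned above (Ref. 30), and build_otf is an assumed helper that returns the overall OTF of Eq. (19) for a candidate r0; neither is from the original paper.

```python
import numpy as np

def sharpness(img):
    # Illustrative no-reference score (mean gradient magnitude); a crude
    # proxy that can over-reward noise and ringing, so use with care.
    gy, gx = np.gradient(img)
    return float(np.mean(np.hypot(gx, gy)))

def search_r0(g, build_otf, gamma, r0_grid):
    # Fix alpha and beta, sweep r0, restore with the Wiener filter of
    # Eq. (24), and keep the best-scoring restoration.
    best_score, best_r0, best_img = -np.inf, None, None
    G = np.fft.fft2(g)
    for r0 in r0_grid:
        H = build_otf(r0)                         # overall OTF, Eq. (19)
        HW = np.conj(H) / (np.abs(H) ** 2 + gamma)
        rest = np.real(np.fft.ifft2(HW * G))
        score = sharpness(rest)
        if score > best_score:
            best_score, best_r0, best_img = score, r0, rest
    return best_r0, best_img
```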

3.2. Aliasing and Turbulence

As noted earlier, the Nyquist criterion dictates that the sampling frequency be greater than two times the highest spatial frequency in the image to guarantee no aliasing. The sampling frequency is given by ρs = 1/p, where p is the detector pitch, and the highest spatial frequency is limited by the diffraction-limited optical cutoff frequency ρc from Eq. (20). Thus, the diffraction-limited sampling status of an imaging system can be characterized by the diffraction-limited sampling factor,33

Eq. (25)

$$Q = \frac{\lambda N}{p} = \frac{\lambda l}{D p} = \frac{\rho_s}{\rho_c}.$$
Note that when Q>2 the system is guaranteed to be Nyquist sampled (i.e., ρs>2ρc). One may also consider the quantity M=2/Q as the undersampling factor, while Q/2 is the oversampling factor.

While the metric in Eq. (25) is very useful, it does not take atmospheric turbulence into account. As can be seen from Eq. (21), the atmospheric optical turbulence acts like a nonideal low pass filter with no absolute cutoff frequency. Nevertheless, the OTF signal energy above the folding frequency diminishes in Eq. (21) as r0 goes down, reducing the potential aliasing for a given sampling frequency. In order to capture this effect, we propose a modified Q parameter, where we substitute r0 for the aperture, D. This gives rise to what we term the turbulence-limited sampling factor,

Eq. (26)

$$\tilde{Q} = \frac{\lambda l}{r_0 p} = \frac{\rho_s}{\rho_0},$$
where $\rho_0 = r_0/(\lambda l)$ may be viewed as a pseudo cut-off frequency for the long exposure turbulence OTF. From Eq. (21), we see that $H_{\text{atm},0}(\rho_0) = \exp(-3.44) = 0.0321$. We see from Eq. (26) that a larger Q̃ indicates a lower pseudo cut-off frequency from the turbulence, relative to the sampling frequency, and a reduced level of potential aliasing. Note that longer focal lengths and smaller r0 values increase Q̃ for a fixed sampling frequency. Finally, it may be helpful to consider that the maximum of Q and Q̃ indicates the dominant factor in limiting potential aliasing.
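Both factors are trivial to compute. The snippet below reproduces the Table 1 value of Q = 2/3 and evaluates Q̃ for an assumed Fried parameter; the r0 value here is only an example, not taken from Table 1.

```python
lam = 0.787e-6   # wavelength (m), from Table 1
N = 5.506        # f-number
l = 0.500        # focal length (m)
p = 6.501e-6     # detector pitch (m)
r0 = 0.030       # Fried parameter (m); illustrative example value

Q = lam * N / p               # Eq. (25): diffraction-limited factor
Q_tilde = lam * l / (r0 * p)  # Eq. (26): turbulence-limited factor
print(Q, Q_tilde)             # ~0.667 (= 2/3, so M = 3) and ~2.02
```

Since Q̃ > Q in this example, turbulence, not diffraction, would be the dominant factor limiting potential aliasing.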

Several overall OTFs are shown in Fig. 9 to illustrate the relationship between the focal length and r0 for a fixed diffraction-limited sampling factor of Q=2/3. For these plots, the optical parameters in Table 1 are used along with α=0.5 and β=0.3. Focal lengths of l=0.20 m, l=0.50 m, and l=1.00 m are shown in Figs. 9(a)–9(c), respectively. Note that with the short focal length (wide field of view), the overall OTF is less sensitive to the atmospheric effects represented by r0. Each of the OTF curves in Fig. 9(a) has a significant amount of the OTF above the folding frequency. Furthermore, we see in this plot that the level of aliasing is more often limited by diffraction than turbulence (i.e., Q > Q̃). However, with increased focal length (reduced field of view), the effects of turbulence are magnified, as can be seen in Fig. 9(c). For the higher levels of turbulence in Fig. 9(c), the atmosphere acts as an anti-aliasing low-pass filter, essentially eliminating the possibility of aliasing. Most of the curves in this figure show that aliasing is limited more by turbulence than diffraction (i.e., Q̃ > Q).

Fig. 9

Overall OTFs for a system with optical parameters listed in Table 1, α=0.5, β=0.3, and (a) a focal length of l=0.20  m, (b) l=0.50  m, and (c) l=1.00  m.


The plots in Fig. 9 also illustrate that many scenarios exist where both turbulence and aliasing may be present. With short focal lengths, light to moderate turbulence is less problematic and traditional SR may be appropriate. For long focal length systems, all but light turbulence tends to effectively eliminate aliasing, making traditional TM appropriate. However, both aliasing and turbulence degradations are significant factors in heavy turbulence with short focal lengths, light turbulence with long focal lengths, and moderate focal lengths with a wide range of turbulence. This demonstrates the importance and relevance of developing algorithms capable of performing SR in the presence of turbulence, such as the one presented here.

4. Experimental Results

In this section, we present a number of experimental results to demonstrate the efficacy of the FIF SR method with both undersampling and turbulence. The results in Sec. 4.1 use simulated data that allow for a quantitative performance analysis. The results in Sec. 4.2 use real data from three different sensors.

4.1. Simulated Data

The simulated data have been generated using the anisoplanatic optical turbulence simulation tool recently developed by Hardie et al.20 The simulation tool uses numerical wave propagation and has performed well in reproducing key image statistics in validation studies.20 The specific simulation presented here is novel in that we are emulating an undersampled imaging system. The turbulence-degraded images are first simulated at the Nyquist rate for the diffraction-limited optical system.20 Next, we simulate detector integration and follow this with downsampling by a factor of M=3. After downsampling, we introduce additive Gaussian noise with a standard deviation of two digital units to imagery having a 0–255 original dynamic range. The optical system parameters used in the simulation are listed in Table 1, and the simulation parameters are listed in Table 2. We have simulated seven levels of turbulence, each with K=200 temporally independent frames.
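The degradation applied after the turbulence simulation can be sketched as follows. Block averaging is used here as a simple stand-in for the detector integration step, and the PSNR function matches the metric used for Tables 3 and 4; neither is taken verbatim from the simulation tool of Ref. 20.

```python
import numpy as np

def degrade(nyquist_frame, M=3, sigma_n=2.0, rng=None):
    # Emulate the undersampled sensor: average M x M blocks (a simple
    # stand-in for detector integration), downsample by M, and add
    # Gaussian noise, as in the Sec. 4.1 setup (M = 3, sigma_n = 2).
    rng = np.random.default_rng() if rng is None else rng
    H, W = nyquist_frame.shape
    H, W = H - H % M, W - W % M
    blocks = nyquist_frame[:H, :W].reshape(H // M, M, W // M, M)
    lr = blocks.mean(axis=(1, 3))
    return lr + rng.normal(0.0, sigma_n, lr.shape)

def psnr(truth, estimate, peak=255.0):
    # Peak signal-to-noise ratio used for the quantitative comparisons.
    truth = np.asarray(truth, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    mse = np.mean((truth - estimate) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```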

Table 2

Simulation parameters used for the simulated imagery in Sec. 4.1.

Parameter                     Value
Object distance               L = 5 km
Propagation step              Δz = 500 m
Cropped screen samples        N = 256
Propagation screen width      X = 1.00 m
Pupil plane point spread      D̃ = 1.00 m
Propagation sample spacing    Δx = 0.0039 m
Number of phase screens       n = 10 (9 nonzero)
Phase screen type             Modified von Kármán with subharmonics
Inner scale                   l0 = 0.01 m
Outer scale                   L0 = 300 m
Image size (pixels)           301 × 301 pixels
Number of frames              K = 200
Image size (object plane)     2.3218 × 2.3218 m
Downsampling factor           M = 3 (3× undersampling)
Dynamic range                 0–255 digital units
Noise standard deviation      σn = 2 digital units

Quantitative results for two different truth images are provided in Tables 3 and 4 for several algorithm variations. Table 3 is for the Kodak lighthouse image,17 and Table 4 is for the stream and bridge image.9 The metric we use to evaluate the simulated data results is peak signal-to-noise-ratio (PSNR). The PSNRs for the lighthouse image are also plotted in Fig. 10. For the results with parameter optimization, the parameters are found by a search to maximize the PSNR and these parameters are listed in Table 5. Note that results are shown with and without camera jitter. When camera jitter is on, we simulate camera platform motion by providing additional uniform random subpixel shifts between frames with no motion blur. The FIF SR results include the full OTF model, except where no β restoration is indicated. The true lighthouse image and three levels of degradation are shown in Fig. 11. Various restored images are shown in Fig. 12 using the optimum parameters in Table 5. Global affine registration to the average frame is used for all of the FIF SR results reported on the simulated data.

Table 3

Lighthouse image PSNR (dB) results using K=200 simulated frames with ση=2.0 and M=3.

Cn² × 10⁻¹⁵ (m⁻²/³):                  0.00    0.20    0.50    1.00    2.00    5.00    10.0

Method (camera jitter on)
Single-frame bicubic                  22.67   22.22   22.09   22.25   21.11   21.11   20.33
Average + bicubic                     22.43   22.41   22.31   22.13   21.89   21.28   20.69
Affine + bicubic + average            22.74   22.72   22.62   22.46   22.18   21.61   21.01
FIF SR (α=0.50, β=0.25)               28.46   28.03   27.40   26.75   25.44   23.87   22.69
FIF SR (α=0.50, β=0.25, no β rest.)   27.41   27.22   26.65   26.15   25.18   23.83   22.73
FIF SR (α=0.50, optimum β)            28.57   28.07   27.41   26.77   25.44   23.90   22.76

Method (camera jitter off)
Single-frame bicubic                  22.67   22.60   22.47   22.25   21.89   21.19   20.54
Average + bicubic                     22.70   22.68   22.56   22.37   22.04   21.37   20.75
Affine + bicubic + average            22.70   22.71   22.62   22.47   22.20   21.61   21.02
FIF SR (α=0.50, β=0.25)               23.46   24.06   24.71   25.15   25.10   23.87   22.74
FIF SR (α=0.50, β=0.25, no β rest.)   23.42   24.11   24.72   25.05   24.91   23.80   22.77
FIF SR (α=0.50, optimum β)            23.46   25.07   25.68   25.63   25.17   23.89   22.79

Table 4

Stream bridge image PSNR (dB) results using 200 simulated frames with ση=2.0 and M=3.

Cn² × 10⁻¹⁵ (m⁻²/³):                  0.00    0.20    0.50    1.00    2.00    5.00    10.0

Method (camera jitter on)
Single-frame bicubic                  23.56   22.74   22.67   22.71   20.91   20.85   19.62
Average + bicubic                     23.17   23.15   22.99   22.69   22.32   21.36   20.47
Affine + bicubic + average            23.68   23.68   23.51   23.26   22.81   21.89   20.96
FIF SR (α=0.50, β=0.25)               29.64   29.15   28.67   27.94   26.98   25.58   23.99
FIF SR (α=0.50, β=0.25, no β rest.)   28.66   28.45   28.02   27.46   26.70   25.48   24.05
FIF SR (α=0.50, optimum β)            29.74   29.16   28.67   27.94   26.98   25.59   24.06

Method (camera jitter off)
Single-frame bicubic                  23.56   23.44   23.14   22.71   22.08   20.96   20.06
Average + bicubic                     23.60   23.57   23.37   23.07   22.55   21.50   20.57
Affine + bicubic + average            23.60   23.62   23.48   23.25   22.83   21.89   20.97
FIF SR (α=0.50, β=0.25)               24.87   25.47   26.03   26.54   26.71   25.52   24.06
FIF SR (α=0.50, β=0.25, no β rest.)   24.75   25.37   25.94   26.38   26.47   25.44   24.11
FIF SR (α=0.50, optimum β)            24.98   26.17   26.80   26.98   26.76   25.52   24.12

Fig. 10

Plot of lighthouse image PSNR (dB) results from Table 3 using 200 simulated frames with ση=2.0 and M=3.


Table 5

Optimum FIF SR parameters for lighthouse image results.

Cn² × 10⁻¹⁵ (m⁻²/³):   0.00        0.20        0.50        1.00        2.00        5.00        10.0
β (jitter on)          0.091       0.282       0.270       0.308       0.244       0.231       0.346
β (jitter off)         0.321       0.091       0.091       0.116       0.180       0.244       0.270
Γ (jitter on)          7.01×10⁻⁴   2.07×10⁻⁴   2.81×10⁻⁴   2.07×10⁻⁴   3.87×10⁻⁴   4.21×10⁻⁴   2.07×10⁻⁴
Γ (jitter off)         2.43×10⁻²   7.16×10⁻³   6.00×10⁻³   3.52×10⁻³   1.26×10⁻³   3.87×10⁻⁴   2.81×10⁻⁴

Fig. 11

Lighthouse image: (a) truth, (b) single-frame bicubic interpolation of a degraded image with no turbulence, (c) Cn² = 1×10⁻¹⁵ m⁻²/³, and (d) Cn² = 1×10⁻¹⁴ m⁻²/³. Degraded images have additive Gaussian noise with ση=2 digital units and downsampling of M=3.


Fig. 12

FIF SR restoration results with lighthouse image. (a) No turbulence and no jitter, (b) no turbulence with jitter, (c) Cn² = 1×10⁻¹⁵ m⁻²/³ and no jitter, (d) Cn² = 1×10⁻¹⁵ m⁻²/³ with jitter, (e) Cn² = 1×10⁻¹⁴ m⁻²/³ and no jitter, and (f) Cn² = 1×10⁻¹⁴ m⁻²/³ with jitter.


The quantitative results show that the FIF SR method provides a significant boost over simple methods, such as single-frame interpolation and simple averages. The boost is seen over a wide range of turbulence levels. Another very interesting phenomenon can be seen in Fig. 10. Note that when platform jitter is turned off, increasing the Cn² turbulence level from zero actually increases the PSNR of the SR output. This may seem counterintuitive, as turbulence is generally considered to be a degradation, and not a benefit. However, what we see is that the wavefront tilt variance provided by light turbulence acts to shift the image relative to the camera, allowing for the necessary sampling diversity for SR. This phenomenon was first described by Fishbain et al.10 and Yaroslavsky et al.11 As the turbulence level gets higher, we see in Fig. 10 that the degrading impact of the turbulence outweighs this sampling benefit, and the PSNR drops. Also, note that this effect is not seen when there is camera platform jitter. This is because the platform jitter provides sampling diversity more effectively than the turbulence and without the turbulence blurring. Platform jitter outperforms the corresponding no-jitter scenario, and turbulence is never a benefit when platform jitter is present. As turbulence levels increase, we do see the benefit of jitter diminish as SR becomes increasingly difficult. At high turbulence levels, very little signal energy is available above the folding frequency to be recovered. Another point of interest in Tables 3 and 4, as well as Fig. 10, is that including the interpolation OTF in the Wiener filter is a clear benefit. When the optimum β for restoration is used, this always outperforms the “no β restoration” case. To the best of our knowledge, no simulation study and quantitative error analysis of this kind for joint TM and SR has been reported in the literature previously.

Subjective analysis of the images in Figs. 11 and 12 appears in line with the quantitative results. Figures 12(a) and 12(b) show the FIF SR output with no turbulence. Figure 12(a) has no camera jitter and Fig. 12(b) includes camera jitter. With no jitter and no turbulence in Fig. 12(a), there is a lack of sampling diversity and little is achieved in the way of true SR. Moiré patterns on the fence and jagged edges on the roof line are still quite visible here. On the other hand, Fig. 12(b) shows significant SR enhancement on the fence and roof edges. This result represents traditional multiframe SR without turbulence and is well understood.2

Perhaps the most interesting result is that in Fig. 12(c). Here, we have light turbulence and no camera jitter. This result appears to be far better than the no-turbulence no-jitter case in Fig. 12(a). A significant amount of aliasing reduction is exhibited here, as a result of entirely turbulence-induced motion. This result is nearly as good as that with additional camera jitter in Fig. 12(d). Finally, at high turbulence levels, platform jitter makes little difference, as seen by comparing Figs. 12(e) and 12(f). It is also clear, at this high level of turbulence, that details in the fence and elsewhere are lost, but not aliased, as a result of the low-pass filtering from the turbulence.

4.2. Real Data

In this subsection, we present results for real data using three different sensors. A summary of the data and processing parameters for each of the datasets is provided in Table 6. The Fried parameters and noise-to-signal ratios (NSRs) have been selected based on subjective evaluation of the results. In the case of the truck sequence, an edge target is used to estimate r0.

Table 6

Real image data parameters for the results in Sec. 4.2.

Parameter                           Coaster    Truck          Airborne
Diffraction-limited sampling (Q)    0.40       1.95           0.47
Turbulence-limited sampling (Q̃)     1.20       3.44           0.20
F-number                            2.5        16.19          2.3
Wavelength (λ)                      4 μm       0.787 μm       4 μm
Fried (r0)                          0.080 m    0.0325 m       0.050 m
Registration                        Affine     Affine (BMA)   Perspective
Tilt reduction (α)                  0.25       0.25 (0.50)    0.25
FIF interpolation (β)               0.25       0.25
NSR (Γ)                             0.001      0.0004         0.002
Number of frames (K)                200        500            100
Upsampling (M)                      3          2              3

The first dataset (coaster) is shown in Fig. 13. These data are from a midwave infrared (MWIR) sensor and show a portion of a wooden roller coaster with a significant amount of turbulence and aliasing (Q=0.40 and Q̃=1.20). Single-frame bicubic interpolation images of two regions of interest (ROIs) are shown in Figs. 13(a) and 13(c). The corresponding FIF SR outputs are shown in Figs. 13(b) and 13(d), using the parameters listed in Table 6. The impact of turbulence is quite evident in the input frames, such as the warped handrail visible in Fig. 13(a) near the power line pole. At the same time, aliasing artifacts are also clearly visible in the form of Moiré patterns on the groups of suspended wires. In contrast, the handrail appears to have corrected geometry in Figs. 13(b) and 13(d), as a result of the FIF SR processing. Furthermore, the individual suspended wires are clearly distinguishable and free from aliasing artifacts in the FIF SR output. We believe these data provide an excellent demonstration of successful joint TM and SR.

Fig. 13

Roller coaster infrared image sequence results. (a) ROI 1 single-frame bicubic interpolation, (b) ROI 1 FIF SR output, (c) ROI 2 single-frame interpolation, and (d) ROI 2 FIF SR output. Sensor and algorithm parameters are listed in Table 6.


The next dataset (truck) shows a truck and bar target in heavy turbulence imaged with a near-IR sensor. In contrast with the previous example, this sensor is nearly Nyquist sampled under diffraction-limited conditions (Q=1.95). With turbulence added, no significant aliasing is expected (Q̃=3.44). However, we still apply upsampling of M=2 for our restoration. Single-frame bicubic interpolation is shown in Fig. 14(a), and the FIF SR output with affine registration is shown in Fig. 14(b). The result using block matching algorithm (BMA) optical flow registration9 is shown in Fig. 14(c). Both restorations appear to be significantly better than the single-frame interpolation, but the bar target with BMA does appear noticeably better because of the high level of local warping. This finding is consistent with previous studies using these data.9 One noticeable artifact is the ringing on the truck cab. This is a result of the Wiener filter operating on solar glint. Ringing is also present, to a lesser degree, in the restored images near strong edges. We are currently exploring methods for reducing these artifacts. One possibility is to employ an adaptive Wiener filter with spatially varying NSR.34,35

Fig. 14

Truck infrared image sequence results. (a) Single-frame bicubic interpolation, (b) FIF SR using global affine registration, and (c) FIF SR using BMA registration. Sensor and algorithm parameters are listed in Table 6.


The final dataset (airborne) is an MWIR dataset of a bar target acquired from an airborne platform.36 These data have been used for SR studies in prior work,17,21 but without considering turbulence. The bar target is a series of four-bar patterns. The scaling factor between bar groups is designed to be $2^{1/6}$. In contrast to the truck sequence, aliasing is a much more significant problem than turbulence in the airborne data, with Q=0.47 and Q̃=0.20. Since the platform is moving between frames and the scene is approximately planar, we use global perspective registration.21 Single-frame bicubic interpolation is shown in Fig. 15(a) with M=3. The corresponding FIF SR output is shown in Fig. 15(b). The Moiré patterns are quite evident within the bicubic image in Fig. 15(a). After FIF SR processing, there appears to be an approximately 2× resolution enhancement based on the resolvable bar patterns.

Fig. 15

Airborne image sequence results. (a) Single-frame bicubic interpolation and (b) FIF SR output. Sensor and algorithm parameters are listed in Table 6.


5. Conclusions

In this paper, we have provided a study of SR in the presence of optical turbulence. The OTF analysis presented in Sec. 3 demonstrates scenarios where significant levels of aliasing may be present simultaneously with turbulence degradation. This provides motivation for the development of restoration methods that can provide joint SR and TM, such as the FIF SR method presented here.

We have also introduced a turbulence-limited sampling parameter in Sec. 3.2, Q̃, to complement the previously defined diffraction-limited sampling factor Q. We believe Q̃ is helpful in describing how a given turbulence level impacts the potential for aliasing in an imaging system. A larger Q̃ indicates a lower pseudo cut-off frequency from turbulence, relative to the sampling frequency, and a reduced level of potential aliasing. Also, the maximum of Q and Q̃ represents the dominant factor in limiting aliasing (i.e., either diffraction or turbulence).

In addition, we have extended the FIF SR method with an OTF model that equips it to operate in the presence of turbulence. The atmospheric component has a parameter, α, that accounts for the level of tilt reduction provided by the registration step. In Sec. 2.2, we have derived an OTF component that models the blurring from the fusion of interpolated frames, as a function of the parameter β. This allows us to mitigate the blurring from the interpolation operation. This compensation is particularly important for the higher values of β that might be employed when addressing turbulence. Together, the α and β parameters give the OTF model a high level of flexibility to effectively address a wide range of SR and TM scenarios.

Our experimental results in Sec. 4 include a ground-truth-based quantitative error analysis with simulated images generated with a numerical wave propagation method.20 The results demonstrate quantitatively that the proposed FIF SR method is able to effectively perform SR and TM in the scenarios considered. One particularly interesting result presented in Sec. 4.1 shows that turbulence-induced warping motion alone can provide the sampling diversity necessary for effective multiframe SR. However, our results also show that camera platform motion or jitter, when present, appears to be more effective at this task. The real data results in Sec. 4.2 show the versatility of the FIF SR method with three distinct scenarios. The truck sequence is dominated by turbulence. The airborne sequence is dominated by aliasing. Finally, the coaster data include a significant combination of both turbulence and aliasing.

Acknowledgments

This work has been supported in part under Air Force Research Labs (AFRL) Award No. FA8650-17-D-1801. Approved for public release under case number 88ABW-2019-2309.

References

1. R. C. Hardie et al., “Impact of detector-element active-area shape and fill factor on super-resolution,” Front. Phys. 3, 31 (2015). https://doi.org/10.3389/fphy.2015.00031
2. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20, 21–36 (2003). https://doi.org/10.1109/MSP.2003.1203207
3. R. C. Hardie, R. R. Schultz, and K. E. Barner, “Super-resolution enhancement of digital video,” EURASIP J. Adv. Signal Process. 2007, 020984 (2007). https://doi.org/10.1155/2007/20984
4. R. E. Hufnagel and N. R. Stanley, “Modulation transfer function associated with image transmission through turbulent media,” J. Opt. Soc. Am. 54, 52–61 (1964). https://doi.org/10.1364/JOSA.54.000052
5. D. L. Fried, “Optical resolution through a randomly inhomogeneous medium for very long and very short exposures,” J. Opt. Soc. Am. 56(10), 1372–1379 (1966). https://doi.org/10.1364/JOSA.56.001372
6. M. Roggemann and B. Welsh, Imaging through Turbulence, CRC Press, Boca Raton (1996).
7. A. W. M. van Eekeren et al., “Turbulence compensation: an overview,” Proc. SPIE 8355, 83550Q (2012). https://doi.org/10.1117/12.918544
8. K. Schutte et al., “An overview of turbulence compensation,” Proc. SPIE 8542, 85420O (2012). https://doi.org/10.1117/12.981942
9. R. C. Hardie et al., “Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis,” Opt. Eng. 56(7), 071503 (2017). https://doi.org/10.1117/1.OE.56.7.071503
10. B. Fishbain, L. P. Yaroslavsky, and I. A. Ideses, “Real time turbulent video perfecting by image stabilization and super-resolution,” in Seventh IASTED Int. Conf. Visualization, Imaging and Image Process., 213–218 (2007).
11. L. Yaroslavsky et al., “Superresolution in turbulent videos: making profit from damage,” Opt. Lett. 32, 3038–3040 (2007). https://doi.org/10.1364/OL.32.003038
12. L. P. Yaroslavsky et al., “Super-resolution of turbulent video: potentials and limitations,” Proc. SPIE 6812, 681205 (2008). https://doi.org/10.1117/12.765580
13. B. Fishbain et al., “Superresolution in color videos acquired through turbulent media,” Opt. Lett. 34, 587–589 (2009). https://doi.org/10.1364/OL.34.000587
14. R. C. Hardie et al., “Real-time video processing for simultaneous atmospheric turbulence mitigation and super-resolution and its application to terrestrial and airborne infrared imaging,” in Proc. Mil. Sens. Symp. (MSS), Passive Sens. (2012).
15. D. R. Droege et al., “A real-time atmospheric turbulence mitigation and super-resolution solution for infrared imaging systems,” Proc. SPIE 8355, 83550R (2012). https://doi.org/10.1117/12.920323
16. D. Fraser, G. Thorpe, and A. Lambert, “Atmospheric turbulence visualization with wide-area motion-blur restoration,” J. Opt. Soc. Am. A 16, 1751–1758 (1999). https://doi.org/10.1364/JOSAA.16.001751
17. B. K. Karch and R. C. Hardie, “Robust super-resolution by fusion of interpolated frames for color and grayscale images,” Front. Phys. 3, 28 (2015). https://doi.org/10.3389/fphy.2015.00028
18. R. C. Hardie et al., “Super-resolution in the presence of atmospheric optical turbulence,” Proc. SPIE 10650, 106500H (2018). https://doi.org/10.1117/12.2303657
19. M. Rucci and R. C. Hardie, “A holistic registration approach to fusion of interpolated frames,” in Proc. OSA Imaging and Appl. Opt. (COSI, IS, MATH, pcAOP) (2019). https://doi.org/10.1364/ISA.2019.IM3B.4
20. R. C. Hardie et al., “Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis,” Opt. Eng. 56(7), 071502 (2017). https://doi.org/10.1117/1.OE.56.7.071502
21. R. C. Hardie, K. J. Barnard, and R. Ordonez, “Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging,” Opt. Express 19, 26208–26231 (2011). https://doi.org/10.1364/OE.19.026208
22. R. C. Hardie and K. J. Barnard, “Fast super-resolution using an adaptive Wiener filter with robustness to local motion,” Opt. Express 20, 21053–21073 (2012). https://doi.org/10.1364/OE.20.021053
23. R. C. Hardie, “A fast super-resolution algorithm using an adaptive Wiener filter,” IEEE Trans. Image Process. 16, 2953–2964 (2007). https://doi.org/10.1109/TIP.2007.909416
24. P. Getreuer, “Linear methods for image interpolation,” Image Process. On Line 1, 238–259 (2011). https://doi.org/10.5201/ipol.2011.g_lmii
25. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 3rd ed., Prentice-Hall, Upper Saddle River, New Jersey (2010).
26. J. W. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts and Company Publishers, Englewood, Colorado (2004).
27. D. H. Tofsted, “Analytic improvements to the atmospheric turbulence optical transfer function,” Proc. SPIE 5075, 281–292 (2003). https://doi.org/10.1117/12.488594
28. D. H. Tofsted, “Reanalysis of turbulence effects on short-exposure passive imaging,” Opt. Eng. 50(1), 016001 (2011). https://doi.org/10.1117/1.3532999
29. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice-Hall, Upper Saddle River, New Jersey (2006).
30. J. P. Bos and M. C. Roggemann, “Robustness of speckle-imaging techniques applied to horizontal imaging scenarios,” Opt. Eng. 51(8), 083201 (2012). https://doi.org/10.1117/1.OE.51.8.083201
31. S. Zamek and Y. Yitzhaky, “Turbulence strength estimation from an arbitrary set of atmospherically degraded images,” J. Opt. Soc. Am. A 23, 3106–3113 (2006). https://doi.org/10.1364/JOSAA.23.003106
32. F. Molina-Martel, R. Baena-Gallé, and S. Gladysz, “Fast PSF estimation under anisoplanatic conditions,” Proc. SPIE 9641, 96410I (2015). https://doi.org/10.1117/12.2194570
33. R. D. Fiete, “Image quality and λFN/p for remote sensing systems,” Opt. Eng. 38(7), 1229–1240 (1999). https://doi.org/10.1117/1.602169
34. M. Rucci, R. C. Hardie, and K. J. Barnard, “Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter,” Appl. Opt. 53, C1–C13 (2014). https://doi.org/10.1364/AO.53.0000C1
35. B. N. Narayanan, R. C. Hardie, and E. J. Balster, “Multiframe adaptive Wiener filter super-resolution with JPEG2000-compressed images,” EURASIP J. Adv. Signal Process. 2014, 55 (2014). https://doi.org/10.1186/1687-6180-2014-55
36. F. O. Baxley et al., “Flight test results of a rapid step-stare and microscan midwave infrared sensor concept for persistent surveillance,” in Proc. Mil. Sens. Symp. (MSS), Passive Sens. (2010).

Biography

Russell C. Hardie is a full professor in the Department of Electrical and Computer Engineering at the University of Dayton and holds a joint appointment in the Department of Electro-Optics and Photonics. He received the University of Dayton’s top university-wide teaching award, the 2006 Alumni Award in teaching. He also received the Rudolf Kingslake Medal and Prize from SPIE in 1998 for the work on superresolution. His research interests include a wide range of topics in digital signal and image processing, medical image processing, and machine learning.

Michael Rucci is a research engineer at the Air Force Research Laboratory, Wright-Patterson AFB, Ohio. His current research includes day/night passive imaging, turbulence modeling and simulation, and image processing. He received his MS and BS degrees in electrical engineering from the University of Dayton in 2014 and 2012, respectively.

Barry K. Karch is a principal research electronics engineer in the Multispectral Sensing & Detection Division, Sensors Directorate of the Air Force Research Laboratory, Wright-Patterson AFB, OH. He received his BS degree in electrical engineering (1987), MS degrees in electro-optics and electrical engineering (1992/1994), and PhD degree in electrical engineering (2015) from the University of Dayton, Dayton, Ohio. He has worked in the areas of EO/IR remote sensor system and processing development for 29 years.

Alexander J. Dapore is a senior image processing engineer at L3Harris Technologies. He received his BSEE and MSEE degrees from the University of Illinois at Urbana-Champaign in 2008 and 2010, respectively. He has worked on research and development projects in many areas of digital image processing. His specific areas of interest are image restoration, image enhancement, object/threat detection and tracking, multiview computer vision, and the real-time implementation of digital image processing algorithms on GPGPU platforms.

Douglas R. Droege is a director of advanced programs for the ISR Systems Segment of L3Harris Technologies. He has 19 years of experience designing military infrared systems and in his current role, he develops advanced system concepts involving both imaging and electronic warfare technologies. He holds a PhD degree in electrical engineering from the University of Dayton. He received the 2013 L3 Corporate Engineer of the Year award and the 2012 L3 Integrated Sensor Systems Leadership award.

Joseph C. French is a research engineer in electro/optics with Leidos and currently supports the Air Force Research Laboratory at Wright-Patterson AFB, Ohio. He received his BS and MS degrees in electrical engineering from Missouri University of Science and Technology, Rolla, Missouri, and a PhD degree in electrical engineering from the University of Dayton, Dayton, Ohio, in 2016. His work includes infrared signatures, optical system modeling, camera calibration and model generation, georeferencing uncertainty analysis, and orthorectification/image projection.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Russell C. Hardie, Michael Rucci, Barry K. Karch, Alexander J. Dapore, Douglas R. Droege, and Joseph C. French "Fusion of interpolated frames superresolution in the presence of atmospheric optical turbulence," Optical Engineering 58(8), 083103 (22 August 2019). https://doi.org/10.1117/1.OE.58.8.083103
Received: 14 May 2019; Accepted: 2 August 2019; Published: 22 August 2019
Keywords: turbulence, optical transfer functions, atmospheric optics, optical turbulence, super resolution, image fusion, cameras