Open Access
3 December 2020

Detailed characterization of a mosaic based hyperspectral snapshot imager
Abstract

Some widely used optical measurement systems require a scan in wavelength or in one spatial dimension to measure the topography in all three dimensions. Novel hyperspectral sensors based on an extended Bayer pattern have a high potential to solve this issue as they can measure all three dimensions in a single shot. This paper presents a detailed examination of such a hyperspectral sensor, including a description of the measurement setup. The evaluated sensor (Ximea MQ022HG-IM-SM5X5-NIR) offers 25 channels based on Fabry–Pérot filters. The setup illuminates the sensor with discrete wavelengths under a specified angle of incidence. This allows the spatial and angular response to the illumination to be characterized for every channel of each macropixel of the tested sensor. The results of the characterization form the basis for a spectral reconstruction of the signal, which is essential to obtain an accurate spectral image. It turned out that irregularities of the signal response of the individual filters are present across the whole sensor.

1.

Introduction

Hyperspectral sensors are used today in a wide range of applications. The beginnings of the technology go back to P.J.C. Janssen, who observed the corona of the sun with a slit monochromator in 1869. Until the 1980s, the use of hyperspectral systems remained rare due to the lack of image sensors. Driven mainly by astronomy, a large number of hyperspectral systems were developed from then on. However, the basic principle remained the same and was based on modified Czerny–Turner, Offner, and Michelson spectrometer approaches. In addition to these sensors, a new group of hyperspectral systems, the so-called single-shot or snapshot hyperspectral sensors, has been developed. This class of sensors is capable of recording the spectrum for any point in a two-dimensional scene without a scan over the wavelength or along a spatial axis. A good overview of those sensors has been given by Hagen and Kudenov.1,2 Due to the scan-free and, therefore, fast image acquisition, these sensors are of high interest for a variety of applications, such as surface metrology. For this application and others, the measuring time is often a critical parameter as it makes the measurement susceptible to environmental influences, for example, vibrations.3

A subgroup of snapshot sensors, the so-called mosaic sensors, has recently become commercially available. They are based on an extended Bayer pattern, which means they offer an increased number of channels compared to the classic color imager. This makes it possible to detect a spectrum instead of only one color impression at any point of the scene. To investigate to what extent the mosaic sensors meet the requirements for metrological use and to reconstruct the measured spectra, they must be characterized in detail. Several publications have already proposed setups for the characterization of mosaic sensors. Agrawal et al.4 developed a first system that provides a tunable collimated illumination based on a halogen lamp. The angle of incidence of the illumination is adjustable. Dittrich et al.5 presented a system that allows collimated illumination at different angles of incidence with the help of a pinhole array. The illumination is tunable by the use of a monochromator. This paper presents a laser-based characterization setup, which allows an angle-dependent sensor characterization without the need for a pinhole array. A stabilized white-light laser with a high spectral irradiance is used for illumination. We present the exemplary characterization of a snapshot mosaic sensor (Ximea MQ022HG-IM-SM5X5-NIR).6–8 The investigated sensor addresses a wide range of applications such as agriculture, food inspection, or medical imaging.9 With the presented characterization, a first impression can be gained as to whether or not these sensors are also suitable for quantitative spectral measurements, as required, for example, in the field of optical metrology.

2.

Measurement and Reconstruction Principle

Common monochrome cameras record the intensity I as a function of the spatial coordinates x and y. Color cameras, which are mostly based on a Bayer pattern, are able to record intensities not only as a function of the spatial coordinates, but also as a function of the wavelength λ. However, the spectral resolution is very poor since only a few, normally three, broadband channels are available.10 With these cameras, in addition to the color image acquisition, applications in the field of single-shot surface metrology can be carried out.11 However, the limited spectral resolution is not sufficient for a variety of applications. This is why sensors with an increased number of spectral channels have been developed and are now available. The sensor examined in the following has 25 channels.

Instead of absorption-based color filters, as are normally used in color cameras,12 Fabry–Pérot filters are applied to the chip’s 2/3-in. pixel array, which consists of 2048×1088 pixels. Each pixel has a size of 5.5 μm. Figure 1 shows a schematic drawing of the characterized sensor. A pattern of 5×5 filters is mapped to one so-called macropixel. Each filter consists of a bottom mirror, a top mirror, and a cavity in between. The heights of the cavities define the transmission spectra of the monolithically mounted filters.6

Fig. 1

Schematic drawing of the discussed sensor. Each macropixel consists of 25 spectral channels.


The signal S_n of a channel n depends on the illumination spectrum I(λ) and the individual transmission T_n(λ) of the mounted filter

Eq. (1)

S_n = \int I(\lambda) \, T_n(\lambda) \, \mathrm{d}\lambda.
As interference filters are not ideally narrowband, a direct determination of the spectrum from a single channel signal is impossible; the signals of all channels must be considered instead. Based on a discrete form of Eq. (1)

Eq. (2)

S_n = \sum_{i=0}^{24} I(\lambda_i) \, T_n(\lambda_i),
the sensor response can be described in a vectorial form

Eq. (3)

\vec{S} = M_{V \times N} \cdot \vec{I},
where M is a matrix with the shape V, the number of virtual bands, times N, the number of filters within a single macropixel. The discretization interval of the transmission curves defines the virtual bands. By inverting M, the illumination spectrum I can be reconstructed as follows:

Eq. (4)

\vec{I} = M^{-1} \cdot \vec{S}.
Due to manufacturing irregularities and different incidence angles of the light at different positions on the chip, Tn(λ) varies across the sensor. This means that a single matrix M is not sufficient for the spectral reconstruction of all macropixels. To determine M, the transmission Tn(λ) must be examined for each macropixel.
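To make the reconstruction step concrete, the following minimal sketch (Python with NumPy; all function and variable names are ours, not from the paper) solves Eq. (4) for one macropixel. Since the matrix is generally not square, the inversion is carried out as a least-squares/pseudoinverse solution rather than a direct inverse.

```python
import numpy as np

def reconstruct_spectrum(S: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Reconstruct the illumination spectrum of one macropixel.

    S : (N,)   measured signals of the N = 25 channels
    T : (N, V) transmission of channel n sampled at the V virtual bands
    Solves S = T @ I for I in the least-squares sense, replacing the
    matrix inverse of Eq. (4) by a pseudoinverse for non-square T.
    """
    I, *_ = np.linalg.lstsq(T, S, rcond=None)
    return I

# Hypothetical usage with V = N = 25 and a well-conditioned synthetic
# transmission matrix; in practice, T must be measured per macropixel.
rng = np.random.default_rng(0)
T = np.eye(25) + 0.1 * rng.random((25, 25))
I_true = rng.random(25)
S = T @ I_true                      # forward model, Eq. (3)
assert np.allclose(reconstruct_spectrum(S, T), I_true)
```

Because Sec. 5 shows that the filter curves vary across the sensor, one such matrix per macropixel is required rather than a single global one.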

3.

Characterization Setup

The aim of our setup is the measurement of the transmission of each filter as a function of wavelength over the sensor’s operation range (684 to 965 nm). Since interference filters are sensitive to the angle of light incidence, the setup provides collimated illumination under an adjustable angle. A white-light laser (Leukos Samba W) with a continuous spectrum from 450 to 2400 nm was used as light source. The measured standard deviation of the output power is about 0.5% with a peak-to-valley value of 1.5%. To guarantee stable operation, a warm-up time of 250 s was taken into account. The output power was monitored with a calibrated power meter (Newport 2936-R, Newport 918D-SL-OD3R).

The setup is divided into two parts: a monochromator and the actual illumination unit. The monochromator is used to select a narrow spectral range of the continuous laser spectrum. The fiber output of the light source is equipped with a broadband collimator. The light leaves the collimator with a beam diameter of 1.5 mm and a divergence of <5 mrad (half angle). As shown in the schematic drawing in Fig. 2(a) and in the picture of the setup in Fig. 2(b), the light enters the setup at point (I). Afterward, it is filtered by a longpass filter (II), blocking all light below 590 nm. The blocked light is not needed for the characterization and would lead to overlapping diffraction orders at the following blazed grating (III) (Thorlabs GR25-0608) with 600 lines per mm. An achromatic lens (IV) (Thorlabs AC254-050-B-ML) with a focal length of 50 mm focuses the diffracted light onto a 300-μm multimode fiber (V) (Thorlabs M12L02).

Fig. 2

(a) Schematic drawing and (b) picture of the monochromator unit: light input (I), longpass filter (II), diffraction grating (III), lens (IV), and multimode fiber (V).


The grating is mounted on a rotational actuator (Standa 8MR190-2). By rotating the actuator, the wavelength coupled into the multimode fiber can be adjusted. The spectral bandwidth Δλ that can be coupled into the fiber is given by twice the acceptance half-angle θ of the fiber divided by the angular dispersion w

Eq. (5)

\Delta\lambda = \frac{2 \theta}{w}.
w is given as

Eq. (6)

w = \frac{\mathrm{d}\beta}{\mathrm{d}\lambda} = \frac{m}{g \cos(\beta)},
with the diffraction order m and the grating period g. β defines the diffraction angle with respect to the grating normal and can be calculated using

Eq. (7)

g \left[ \sin(\beta) + \sin(\alpha) \right] = m \lambda,
depending on the incidence angle α of the light with respect to the grating normal and on the wavelength λ. The acceptance half-angle θ is determined by the focal length f of the lens and the fiber diameter d

Eq. (8)

\theta = \arctan\!\left( \frac{d}{2f} \right).

In the setup, the angle between the incident and the diffracted beams at the grating is

Eq. (9)

\beta - \alpha = 40\,\mathrm{deg}.

With f=50 mm, d=0.3 mm, a grating period g of 1/600 mm, and λ=743 nm, Eqs. (5)–(9) lead to a spectral bandwidth of Δλ=8.4 nm. This is very close to the measured bandwidth Δλ of 7.9 nm (FWHM) at the central wavelength of 743 nm, which is shown in Fig. 3. The signal is represented in arbitrary units (a.u.).
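As a numerical cross-check of Eqs. (5)–(9), the short script below (Python; our own, not part of the paper) solves the grating equation for the stated deviation angle of 40 deg using a sum-to-product identity and evaluates the bandwidth; it reproduces the quoted value to within rounding of the geometry (it yields ≈8.3 nm).

```python
import numpy as np

# Parameters from the text (Sec. 3); grating period g = 1/600 mm.
m = 1                        # diffraction order
g = 1e6 / 600                # grating period in nm (600 lines per mm)
lam = 743.0                  # wavelength in nm
dev = np.deg2rad(40.0)       # beta - alpha = 40 deg        [Eq. (9)]
f = 50.0                     # focal length of the lens in mm
d = 0.3                      # fiber diameter in mm

# Grating equation g (sin beta + sin alpha) = m lam with beta = alpha + dev
# => 2 sin(alpha + dev/2) cos(dev/2) = m lam / g   (sum-to-product)
alpha = np.arcsin(m * lam / g / (2 * np.cos(dev / 2))) - dev / 2
beta = alpha + dev

w = m / (g * np.cos(beta))       # angular dispersion in rad/nm [Eq. (6)]
theta = np.arctan(d / (2 * f))   # fiber acceptance half-angle  [Eq. (8)]
print(f"bandwidth = {2 * theta / w:.1f} nm")   # Eq. (5); ~8.3 nm
```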

Fig. 3

Measured spectrum at the monochromator output for a central wavelength of 743 nm with an FWHM of 7.9 nm.


After selecting the wavelength with the monochromator, the light enters the illumination setup shown in Fig. 4 at point (I). After passing a speckle reducer (II) (Optotune LSR-3005-6D-NIR) and a rotating diffusor disc (III), the light is collimated by a lens with a focal length of 200 mm (IV). Before the light reaches the camera (VI), which is mounted on a rotational stage, it is again filtered by a longpass filter (V) [Edmund Optics 675-nm longpass filter (OD 4), #84-747]. This filter, which is specified by the camera manufacturer, is needed for proper camera operation as it suppresses secondary filter peaks. Therefore, it is included in all characterizations shown in this paper. Since our setup illuminates the entire sensor, the response of each pixel is recorded without any extra effort compared to a single-macropixel characterization. Only the data must be processed separately for each pixel, which can be done quickly with parallelized evaluation algorithms.

Fig. 4

(a) Schematic drawing and (b) picture of the illumination setup: light input (I), speckle reducer (II), rotating diffusor disc (III), lens (IV), longpass filter (V), and camera on a rotational stage (VI).


The coherence of the light source leads to speckles on the detector [Fig. 5(a)]. To reduce this effect, three speckle-reducing devices were inserted: in addition to the already mentioned speckle reducer (II) and the rotating diffusor disc (III), a stepper motor constantly moves the fiber connecting the monochromator and the illumination device while the sensor averages 100 images. Figure 5(b) shows the signal after the speckle reduction. The remaining standard deviation of the speckle noise is below 0.6%. The remaining periodic pattern appears to be fixed-pattern noise of the hyperspectral sensor, since it is no longer visible when a different image sensor is used.

Fig. 5

(a) Monochrome sensor signal without and (b) with speckle-reducing devices. The remaining standard deviation of the speckle noise is below 0.6%.


4.

Measurement Procedure

For a successful spectrum reconstruction, exact knowledge of the individual sensor characteristics is essential. In the following, the hyperspectral sensor Ximea MQ022HG-IM-SM5X5-NIR is examined with the presented laboratory setup as an example. Figure 6 shows the responses of the different filters as provided by the camera manufacturer.

Fig. 6

Filter response for the different channels (0 to 24). Data provided by the manufacturer.


The peak filter responses vary between 3% and 20%. Therefore, the integration time must be different for each channel for a given illumination wavelength. For automation, a high dynamic range (HDR) imaging technique was used: images were always taken at three different integration times of 2.5, 10, and 40 ms. At each integration time, 100 images were taken and averaged. Afterward, each pixel value of the highest integration time is compared with a certain threshold. The threshold serves to avoid values close to sensor saturation, because in this range, sensors often deviate from a linear characteristic curve. The threshold value must be determined experimentally for each individual sensor. If a pixel value is below the threshold, it is placed into the HDR image to be generated. If it is above, the next lower integration time for the pixel is evaluated. If that value is below the threshold, it is multiplied by 4, since the integration time is 4 times lower, and placed into the HDR image. Otherwise, the integration time is lowered again and the value is multiplied by 16 to compensate for the 16 times lower integration time. For the evaluated sensor and setup, the integration times are chosen such that every pixel is exposed correctly for at least one integration time. This HDR approach is only valid for a linear sensor response, as is the case for the tested hyperspectral sensor.
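The per-pixel merge can be summarized in a few lines. The following sketch (Python/NumPy; array names and the threshold argument are illustrative assumptions, as the threshold must be determined experimentally) builds the HDR frame from the three averaged exposures as described: take the longest non-saturating integration time and rescale by the 4× or 16× exposure ratio.

```python
import numpy as np

def merge_hdr(avg_40ms: np.ndarray, avg_10ms: np.ndarray,
              avg_2p5ms: np.ndarray, threshold: float) -> np.ndarray:
    """Merge three averaged exposures into one HDR frame.

    Per pixel, the value from the longest integration time below the
    saturation threshold is used; values from shorter exposures are
    scaled by the integration-time ratio (valid only for a linear
    sensor response).
    """
    hdr = avg_40ms.astype(np.float64)
    over_40 = avg_40ms >= threshold               # too bright at 40 ms
    hdr[over_40] = 4.0 * avg_10ms[over_40]        # 10 ms = 40 ms / 4
    over_10 = over_40 & (avg_10ms >= threshold)   # still too bright
    hdr[over_10] = 16.0 * avg_2p5ms[over_10]      # 2.5 ms = 40 ms / 16
    return hdr
```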

A measured filter curve is shown in Fig. 7(a). For this purpose and for all filter curves presented in this paper, the complete wavelength range was scanned step by step and the signal was recorded at each wavelength using the described HDR method. In the marked areas, slight discontinuities in slope are observable. These are all located at the threshold level of 1000 counts, which indicates a nonlinearity for high signals. When lowering the threshold, the discontinuities in slope are no longer present, as shown in Fig. 7(b).

Fig. 7

Response of a single pixel: (a) high threshold leading to kinks. (b) HDR with reduced threshold.


For the characterization, the response for wavelengths from 671 to 981 nm was measured in 1-nm steps using the described technique. The acquired data set was then processed as follows: First, the dark values for the different integration times were subtracted from the raw pixel values. A beam profile correction was then carried out. To this end, the irradiance distribution for each wavelength was recorded by a monochrome sensor placed at the same position as the hyperspectral sensor. A two-dimensional Gaussian curve was fitted and used to correct the hyperspectral images. Figure 8(a) shows an exemplary monochrome image. Figure 8(b) shows that after subtracting the illumination profile, a homogeneous intensity distribution is obtained. Figures 8(c) and 8(d) show a channel of the hyperspectral imager before and after correction. In comparison to Fig. 8(b), the scene in Fig. 8(d) does not appear more homogeneous, indicating a possible inhomogeneity of the spectral channels’ response.
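The beam-profile model can be obtained with a standard two-dimensional Gaussian fit. The sketch below (Python with NumPy/SciPy; function and parameter names are ours) fits the monochrome image and returns the fitted profile, which is then used to correct the hyperspectral frames as described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    """Elliptical 2D Gaussian evaluated on a flattened coordinate grid."""
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                       + (y - y0) ** 2 / (2 * sy ** 2))) + offset
    return g.ravel()

def fit_beam_profile(mono_img: np.ndarray) -> np.ndarray:
    """Fit a 2D Gaussian to a monochrome image of the illumination."""
    h, w = mono_img.shape
    y, x = np.mgrid[0:h, 0:w]
    p0 = (mono_img.max() - mono_img.min(), w / 2, h / 2,
          w / 4, h / 4, mono_img.min())          # rough initial guess
    popt, _ = curve_fit(gauss2d, (x, y), mono_img.ravel(), p0=p0)
    return gauss2d((x, y), *popt).reshape(h, w)
```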

Fig. 8

Irradiance profile before and after correction: (a) monochrome image; (b) corrected version of (a); (c) single channel of the hyperspectral camera; (d) corrected version of (c).


Due to the spectral characteristic of the light source, the illumination intensity varies for each wavelength. This effect must be compensated to achieve a meaningful characterization. Therefore, the optical power was measured with a calibrated power meter (Newport 2936-R, Newport 918D-UV-OD3R) and the camera signal was divided by the measured power. The response of a single channel before and after correction is shown in Fig. 9.

Fig. 9

Signal of one channel as a function of wavelength before (solid) and after (dashed) power correction.


As can be seen in Fig. 8, there are some artifacts in the images, which are caused by particles on the optical elements. To reduce these disturbances, a median filter with a kernel size of 11×11  pixels was applied. To make the images of the different wavelengths comparable to each other, the data set was finally normalized to the highest value of all channels.9
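These last two steps are straightforward; a possible implementation (Python with SciPy’s ndimage module; the data layout is an assumption) is:

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_and_normalize(cube: np.ndarray) -> np.ndarray:
    """Suppress particle artifacts and normalize the data set.

    cube : (n_wavelengths, height, width) corrected channel responses.
    An 11x11 median filter is applied in the spatial dimensions only,
    and the result is scaled to the highest value of all channels.
    """
    filtered = median_filter(cube, size=(1, 11, 11))
    return filtered / filtered.max()
```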

5.

Perpendicular Incidence of Light on the Sensor

Figure 10(a) shows the corrected signal of channel 1 under perpendicular illumination at a wavelength of 914 nm. According to the manufacturer, the channel is most sensitive at this wavelength. Figure 10(a) shows a clear variation of the signal across the sensor. For each marked area, the filter curves of 100 pixels inside the area are plotted in Fig. 10(b). The curves within one area match so closely that just three distinct curves are visible. The discrepancy between the transmission curves of different areas is obvious.

Fig. 10

(a) Corrected image of channel 1 for illumination with 914 nm; (b) 100 signal curves for each area marked in (a) as a function of wavelength.


To verify that this effect originates from the hyperspectral sensor, it must be ensured that it is not caused by the illumination setup. For this purpose, the camera was mounted on a linear axis, which allows it to be moved perpendicular to the optical axis. Images were taken at different positions with the hyperspectral camera and a monochrome camera (FLIR Grasshopper 3 GS3-U3-23S6M, CMOS, 5.86-μm pixel size, 76% quantum efficiency). With the monochrome camera, the illumination profile moves through the image without any variations, as shown in Figs. 11(a)–11(e). In Figs. 11(f)–11(j), the illumination profile changes for the different positions of the hyperspectral camera. The area marked in red with a size of 21×21 pixels marks a fixed point in space.

Fig. 11

Images taken for different lateral displacements: (a)–(e) monochrome sensor; (f)–(j) channel 1 of the hyperspectral imager. The fixed point in space is marked in red.


Figure 12 shows the average of all values inside the marked areas as a function of displacement for the two cameras. This clearly shows that the sensitivity changes across the hyperspectral sensor for one channel, whereas the signal of the monochrome camera remains constant.

Fig. 12

Mean signal of the marked areas in Fig. 11 for the monochrome and the hyperspectral camera as a function of displacement.


Since irregularities in the recorded signals are obviously not caused by the characterization system, it must be assumed that they originate directly from the sensor. By recording the filter curves for each pixel on the sensor, a maximum sensitivity of 87% was measured for channel 6 in relation to the globally most sensitive pixel. In contrast, the sensitivity for channel 24 is at most 15%. In addition, it was found for this channel that the most sensitive wavelength changes by approximately 100 nm. The minimum, maximum, and average sensitivities and peak wavelengths of the channels are summarized in Table 1.

Table 1

Peak wavelength and sensitivity for all channels across the sensor.

Channel | Peak wavelength (nm)    | Peak signal (a.u.)
        | Min     Mean     Max    | Min    Mean   Max
0       | 901.1   904.3   906.3   | 0.25   0.28   0.33
1       | 911.6   913.8   915.8   | 0.27   0.36   0.46
2       | 892.7   895.3   898.0   | 0.31   0.34   0.38
3       | 883.3   884.9   888.5   | 0.30   0.39   0.45
4       | 683.0   685.0   687.2   | 0.19   0.33   0.43
5       | 809.9   812.3   816.2   | 0.47   0.62   0.79
6       | 822.5   825.2   827.7   | 0.65   0.79   0.87
7       | 799.4   801.1   804.6   | 0.53   0.63   0.74
8       | 785.8   788.1   792.1   | 0.51   0.73   0.80
9       | 695.6   698.1   702.9   | 0.20   0.24   0.37
10      | 759.5   762.2   766.9   | 0.52   0.61   0.68
11      | 774.2   775.8   780.5   | 0.59   0.73   0.91
12      | 748.0   750.5   755.4   | 0.63   0.76   0.87
13      | 734.4   736.6   740.7   | 0.36   0.48   0.76
14      | 709.2   712.7   717.6   | 0.34   0.55   0.73
15      | 940.9   942.0   943.0   | 0.20   0.24   0.26
16      | 940.9   944.1   946.2   | 0.13   0.17   0.23
17      | 934.7   936.9   938.8   | 0.26   0.33   0.35
18      | 925.2   928.1   931.5   | 0.22   0.31   0.35
19      | 671.5   673.8   676.7   | 0.24   0.58   1.00
20      | 862.3   864.1   867.5   | 0.34   0.41   0.52
21      | 872.8   875.1   879.1   | 0.44   0.53   0.58
22      | 851.8   854.2   858.1   | 0.36   0.40   0.49
23      | 840.3   842.8   846.6   | 0.34   0.41   0.52
24      | 841.3   940.9   949.3   | 0.08   0.09   0.15

On average, the wavelength shifts by 5.7 nm; channel 24 was excluded from this average due to its huge shift. The spatial gradient of the most sensitive wavelength differs from channel to channel. For example, the most sensitive wavelength for channel 9 decreases from left to right, whereas it increases for channel 16. For each channel, the sensitivity at the most sensitive wavelength changes by at least 6%; channel 19 varies by more than 70%. The sensitivity distribution also varies differently for each channel.

Since both wavelength and sensitivity change, these influences must also be considered in combination. For this purpose, the mean peak wavelength was taken from Table 1 and the signal at exactly this wavelength was determined for each channel by interpolation. Figure 13 shows the spread of the interpolated signals. The intensity varies differently for each channel. For example, the lowest and highest values of channel 19 are separated by 60.4%, whereas the spread of channel 0 is below 10%. The extension of the box for channel 9 is only a few percent, but includes many outliers: the majority of the values lie within a narrow band, but a few small areas close to the sensor edge vary significantly. All percentage values refer to the maximum sensor value, which is located in channel 19.
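The interpolation itself is a one-liner; as a minimal sketch (Python/NumPy; names are ours), the per-pixel signal at a channel’s mean peak wavelength from Table 1 can be read off the sampled filter curve as follows:

```python
import numpy as np

def signal_at_peak(wavelengths: np.ndarray, response: np.ndarray,
                   mean_peak_wl: float) -> float:
    """Linearly interpolate a pixel's measured filter curve at the
    channel's mean peak wavelength (basis of the spread in Fig. 13)."""
    return float(np.interp(mean_peak_wl, wavelengths, response))

# Example: 1-nm sampling from 671 to 981 nm, as in the characterization.
wl = np.arange(671.0, 982.0)
resp = np.exp(-0.5 * ((wl - 904.3) / 10.0) ** 2)   # synthetic curve
print(signal_at_peak(wl, resp, 904.3))              # ~1.0
```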

Fig. 13

Signal distribution at the individual channels’ mean peak wavelengths. The orange line inside each box marks the median value. Each box contains 50% of all values; the median separates it into the lower and upper quartile with the same number of values. The whiskers extend on each side by at most 1.5 times the box’s extent and end at the last value inside this range. All values outside the whiskers are marked with black circles as outliers.


Table 2 relates the lowest to the highest sensor value of each channel and thus describes the fluctuation of the values within the channel. The minimum sensor value is at least 20% lower than the corresponding maximum signal. Averaged over all channels, the spread is 57.9%.

Table 2

Percentage signal spread extracted from Fig. 13.

Channel | 0    1    2    3    4    5    6    7    8
Min/Max | 78%  57%  80%  67%  43%  47%  63%  73%  65%

Channel | 9    10   11   12   13   14   15   16   17
Min/Max | 33%  56%  47%  48%  32%  28%  77%  57%  75%

Channel | 18   19   20   21   22   23   24
Min/Max | 62%  21%  60%  63%  70%  65%  80%

6.

Comparison of the Measured and the Manufacturer Data

To establish a relationship between the measured filter curves and the filter curves provided by the manufacturer, they are compared with each other in the following. The comparison is based on the data for perpendicular light incidence.

First, the measured filter curves of the 25 channels were averaged individually over the entire sensor area. Since, as described above, the filter curves vary strongly over the sensor area, the filter curves were additionally averaged over a central area of 10×10 pixels for each channel. The result is a set of channel-dependent filter curves that reflect a region in the center of the sensor.

Table 3 shows the peak wavelengths and peak signals for the data provided by the manufacturer, the data averaged over all pixels, and the data from the sensor center. With respect to the peak signals, it should be noted that all three data sets were scaled so that their respective maximum is 1. This procedure is necessary because the manufacturer data refer to the actual quantum efficiency, while the hyperspectral data set was scaled to 1 at its maximum. Regarding the peak wavelength, the averaged data set and the central data set are largely congruent, so that the deviation from the peak wavelength of the manufacturer’s data is on average 1.8 and 1.9 nm, respectively. In channel 24, a large difference of about 20 nm can be observed. Excluding this value, the average deviation drops below 1.3 nm. Accordingly, the peak wavelengths of the manufacturer’s data and of the recorded hyperspectral data agree well. A comparison of the maximum filter responses shows an average deviation of 10.3% between the manufacturer’s data and the data averaged over the entire sensor. If only the central sensor area is considered, the deviation from the manufacturer’s data amounts to 8.3% on average. Overall, the measured signals at low wavelengths are generally lower than the manufacturer’s data, while the signals at high wavelengths peak higher than the manufacturer’s data.

Table 3

Channel-specific comparison of manufacturer’s data with the averaged data for all pixels and with data from the sensor center.

Channel | Peak wavelength (nm)       | Peak signal (a.u.)
        | Manuf.   Mean     Center   | Manuf.  Mean    Center
0       | 905      904.2    904.2    | 0.26    0.368   0.33
1       | 914      913.7    913.7    | 0.33    0.45    0.42
2       | 896      895.9    894.8    | 0.33    0.43    0.40
3       | 886      885.4    884.3    | 0.37    0.50    0.48
4       | 685      685.1    685.1    | 0.69    0.42    0.41
5       | 813      812.0    812.0    | 0.69    0.79    0.71
6       | 825      824.6    825.6    | 0.82    1.00    1.00
7       | 802      801.5    801.5    | 0.71    0.80    0.73
8       | 789      787.9    787.9    | 0.80    0.93    0.92
9       | 697      697.7    697.7    | 0.39    0.30    0.28
10      | 763      762.7    761.6    | 0.76    0.76    0.71
11      | 776      775.3    775.3    | 0.87    0.93    0.84
12      | 750      750.1    750.1    | 1.00    0.95    0.98
13      | 736      736.5    736.5    | 0.65    0.60    0.53
14      | 712      712.4    712.4    | 0.88    0.68    0.73
15      | 945      942.0    942.0    | 0.24    0.31    0.30
16      | 952      944.1    945.1    | 0.25    0.22    0.21
17      | 937      936.8    936.8    | 0.30    0.42    0.41
18      | 928      928.4    927.3    | 0.28    0.40    0.37
19      | 677      673.6    673.6    | 0.72    0.64    0.66
20      | 864      863.4    863.4    | 0.40    0.51    0.44
21      | 876      874.9    874.9    | 0.52    0.67    0.66
22      | 855      854.2    853.9    | 0.42    0.51    0.44
23      | 844      843.4    842.4    | 0.43    0.51    0.44
24      | 959      940.9    940.9    | 0.17    0.11    0.10

7.

Oblique Incidence of Light on the Sensor

The smaller the F-number of a classical imaging system, the larger the angles of the light beams reaching the detector. Since the filters applied to the sensor are interference filters whose characteristics strongly depend on the angle of incidence, the behavior under oblique light incidence was investigated. The wavelength at which constructive interference appears inside a Fabry–Pérot filter can be calculated with the following formula

Eq. (10)

\lambda = \frac{2 n L}{m} \cos(\theta),
with the cavity length L, the angle of incidence θ, and the diffraction order m.5,13 The wavelength λ is obviously proportional to the cosine of θ. This relation is confirmed by the recorded hyperspectral data: Fig. 14 shows the averaged filter curves of four data sets with incidence angles of 0 deg, 5 deg, 10 deg, and 15 deg. Figure 14(a) shows channel 8 and Fig. 14(b) shows channel 21. The peak wavelength of channel 8 is 787.9 nm for perpendicular incidence; for incidence at 15 deg, it is reduced to 780.5 nm. Equivalently, the peak wavelength of channel 21 shifts from 874.9 to 866.5 nm.
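For a quick plausibility check of the measured shifts, the following sketch (Python; our own) evaluates Eq. (10) with Snell’s law folded in via an effective refractive index, as is common for thin-film Fabry–Pérot filters (cf. Refs. 14 and 15). The value n_eff ≈ 1.9 is chosen here to match the channel 8 data and is not a manufacturer figure.

```python
import numpy as np

def tilted_peak(lambda_0: float, angle_deg: float, n_eff: float) -> float:
    """Peak wavelength of a Fabry-Perot filter under external tilt.

    Eq. (10) with the internal angle obtained from Snell's law:
    lambda(theta) = lambda_0 * sqrt(1 - (sin(theta) / n_eff)**2).
    n_eff is an assumed effective cavity index, not a datasheet value.
    """
    s = np.sin(np.deg2rad(angle_deg)) / n_eff
    return lambda_0 * np.sqrt(1.0 - s ** 2)

# Channel 8: 787.9 nm at normal incidence; with n_eff = 1.9 the model
# predicts ~780.6 nm at 15 deg, close to the measured 780.5 nm.
print(tilted_peak(787.9, 15.0, 1.9))
```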

Fig. 14

Filter response under oblique light incidence for (a) channel 8 and (b) channel 21.


Figure 14(b) shows a side peak at about 845 nm, which becomes more pronounced at higher angles of incidence. To trace the origin of this side peak, Fig. 15 sketches the oblique light incidence on a section of the sensor. The bars of different heights symbolize the corresponding interference filters with different cavity lengths. The pixels of channels 20 to 24 are located in one line, and their arrangement is repeated in both directions. A secondary peak may occur due to light passing through an adjacent filter: the side peak of channel 21 is located at the peak wavelength of channel 22. In addition to the shift of the central wavelength, Fig. 14 shows a decreasing signal level with increasing angle. Several factors can lead to this effect. For example, the projected area decreases with oblique incidence of light, which reduces the energy input per pixel. In addition, due to the structure of the interference filters on the chip, it is conceivable that shading further reduces the intensity in some channels. However, when the sensor is used in a classical imaging system, light hits a pixel not at one angle but from many angles within the numerical aperture simultaneously. Thus, a large number of filter curves contribute to the signal of a pixel, and measuring all of them would take a lot of time. Goossens et al.14,15 presented a mathematical method that accounts for the shift of the filter curves for each angle within the numerical aperture so that not all angles must be measured. However, this method does not consider crosstalk between the individual channels.

Fig. 15

Schematic representation of the filters with different filter heights and oblique light incidence.


8.

Conclusion

Hyperspectral snapshot sensors have a high potential in a variety of applications. They can especially be an alternative to scanning systems in applications where a scan process disturbs the measurement result. For quantitative results, a detailed characterization of these sensors is necessary to avoid large measurement errors. This requires a dedicated setup that provides spectrally narrowband, collimated illumination of the sensor. The presented setup enables a homogeneous illumination of the sensor under a variable angle of incidence.

The systematic examination of the sensor showed strong variations of the response within one channel. As an example, the maximum sensitivity of channel 24 varies by 80% over the sensor area. In addition, the most sensitive wavelength of the individual channels changes across the sensor. On average, the wavelength varies by 5.7 nm, with the exception of channel 24, which shows a variation of more than 100 nm. If the correction matrix provided by the manufacturer, which is the same for all macropixels, is applied, an insufficient reconstruction of the spectrum will occur. To avoid this, an individual matrix must be created for each macropixel.

Under oblique incidence of light, crosstalk from adjacent channels could be observed, which already occurs at an angle of incidence of 5 deg.

Acknowledgments

We thank the German Federal Ministry for Economic Affairs and Energy (BMWi) for financial support within the ZIM project “Mobimik” (Grant No. 16KN075722). Disclosures: All of the authors’ institutions received funding from the Federal Ministry for Economic Affairs and Energy (BMWi). Apart from these grants, this submission was prepared free of other conflicts of interest.

References

1. N. Hagen, “Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems,” Opt. Eng. 51(11), 111702 (2012). https://doi.org/10.1117/1.OE.51.11.111702

2. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52, 090901 (2013). https://doi.org/10.1117/1.OE.52.9.090901

3. R. Hahn et al., “Single-shot low coherence pointwise measuring interferometer with potential for in-line inspection,” Meas. Sci. Technol. 28, 025009 (2017). https://doi.org/10.1088/1361-6501/aa52f1

4. P. Agrawal et al., “Characterization of VNIR hyperspectral sensors with monolithically integrated optical filters,” in IS&T Int. Symp. Electron. Imaging Sci. and Technol., 1–7 (2016).

5. P.-G. Dittrich et al., “Measurement principle and arrangement for the determination of spectral channel-specific angle dependencies for multispectral resolving filter-on-chip CMOS cameras,” Proc. SPIE 11144, 111440S (2019). https://doi.org/10.1117/12.2527871

6. B. Geelen, N. Tack, and A. Lambrechts, “A compact snapshot multispectral imager with a monolithically integrated per-pixel filter mosaic,” Proc. SPIE 8974, 89740L (2014). https://doi.org/10.1117/12.2037607

7. B. Geelen, N. Tack, and A. Lambrechts, “A snapshot multispectral imager with integrated tiled filters and optical duplication,” Proc. SPIE 8613, 861314 (2013). https://doi.org/10.1117/12.2004072

8. N. Tack, “A compact, high-speed, and low-cost hyperspectral imager,” Proc. SPIE 8266, 82660Q (2012). https://doi.org/10.1117/12.908172

9. R. Hahn et al., “Detailed characterization of a hyperspectral snapshot imager for full-field chromatic confocal microscopy,” Proc. SPIE 11352, 213–226 (2020). https://doi.org/10.1117/12.2556797

10. G. Sharma, “Digital color imaging,” IEEE Trans. Image Process. 6(7), 901–932 (1997). https://doi.org/10.1109/83.597268

11. A. K. Ruprecht et al., “Chromatic confocal detection for high-speed microtopography measurements,” Proc. SPIE 5302, 53 (2004). https://doi.org/10.1117/12.525658

12. B. Bayer, “Color imaging array,” U.S. Patent 3,971,065 (1976).

13. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics, John Wiley & Sons, Inc., New York (1991).

14. T. Goossens et al., “Finite aperture correction for spectral cameras with integrated thin-film Fabry–Perot filters,” Appl. Opt. 57(26), 7539 (2018). https://doi.org/10.1364/AO.57.007539

15. T. Goossens et al., “Vignetted-aperture correction for spectral cameras with integrated thin-film Fabry–Perot filters,” Appl. Opt. 58(7), 1789 (2019). https://doi.org/10.1364/AO.58.001789

Biography

Robin Hahn is a PhD student at the Institut für Technische Optik at the University of Stuttgart. He received his master’s degree in photonic engineering from the University of Stuttgart in 2016. His main research interests are in the field of optical 3D metrology, in particular interferometry and confocal microscopy. He is vice president of the SPIE Student Chapter of the University of Stuttgart.

Tobias Haist studied physics and received his PhD in engineering from the University of Stuttgart. Currently, he is leading the group 3D Surface Metrology at the Institut für Technische Optik, where he is working on new applications for spatial light modulators and 3-D measurement systems. His main research interests include optical and digital image processing, computer generated holography, and optical measurement systems.

Otto Hauler is currently employed as a research associate with project responsibility at the Reutlingen Research Institute (RRi) of Reutlingen University. His research interests include the development of process automation of chemically challenging systems, the development of chemometric models for applications in the field of hyperspectral imaging, and the simulation of quantum mechanical systems for the breakdown of important process steps.

Karsten Rebner is full professor of chemistry at Reutlingen University. After his studies in Chemistry and his PhD in spectroscopy, he worked for several years at BASF SE at the center of technical expertise of process analytics in Ludwigshafen. At present, he is head of the center for research and education “Process Analysis & Technology.” His main research areas are in optical spectroscopy, process analytical technology (PAT), on-line hyperspectral imaging and multivariate statistics.

Marc Brecht has been an experimental physicist at Reutlingen University and the University of Tübingen since 2016. He studied physics in Tübingen and Berlin, received his doctorate from the TU Berlin, and habilitated at the FU Berlin. In 2010, he was awarded a Heisenberg scholarship and moved to the University of Tübingen. In 2013, he accepted a position at the ZHAW in Winterthur (Switzerland) and moved to his current position in 2016.

Wolfgang Osten is a retired full professor at the University of Stuttgart and former director of the Institut für Technische Optik. His research work is focused on new concepts for machine vision combining modern principles of optical metrology, sensor technology, and digital image processing. He is a fellow of OSA, SPIE, EOS, and SEM and a senior member of IEEE. He is a recipient of the Gabor Award of SPIE, the Kingslake Medal, the Vikram Award, and the Leith Medal of OSA.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Robin Hahn, Freya-Elin Hämmerling, Tobias Haist, David Fleischle, Oliver Schwanke, Otto Hauler, Karsten Rebner, Marc Brecht, and Wolfgang Osten "Detailed characterization of a mosaic based hyperspectral snapshot imager," Optical Engineering 59(12), 125102 (3 December 2020). https://doi.org/10.1117/1.OE.59.12.125102
Received: 18 June 2020; Accepted: 12 November 2020; Published: 3 December 2020
KEYWORDS: Sensors, Hyperspectral imaging, Imaging systems, Cameras, Manufacturing, Optical filters, Optical engineering