Event-based imaging polarimeter

Open Access | Published 18 May 2022
Abstract

The design and initial experimental results from an event-based, rotating-polarizer imaging polarimeter are presented. The speed of division-of-time imaging polarimeters is traditionally constrained by the frame rate of the focal plane sensor and the need to sample several polarization angles during rotation. The asynchronous event reporting of event-based cameras (EBCs) removes these constraints but introduces new challenges. The Stokes vectors used to describe polarization are based on irradiance, but EBCs are inherently change-detection systems that struggle to estimate irradiance. Two methods of estimating the polarization state without first recovering irradiance images are presented and compared. The DAVIS 346 sensor allows events to be recorded simultaneously with conventional frame-based images, enabling direct comparison of event-based polarization estimates to traditional techniques.

1. Introduction

Event-based cameras (EBCs) use in-pixel analog processing to detect changes in the photocurrent. When a pixel sees a change that exceeds a user-defined threshold, the pixel generates a signal that is used by a peripheral circuit to create an event report. Each pixel generates events independently of other pixels, with no fixed frame rate or integration time. This asynchronous operation mode offers many advantages including low latency, reduced power draw and data bandwidth, and high dynamic range.1
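As a concrete sketch of this operating principle, consider the following minimal simulation of an idealized event pixel. It assumes a single threshold Θ applied to log intensity and ignores noise, latency, and the refractory period; the function and variable names are ours, for illustration only.

```python
import numpy as np

def simulate_events(log_intensity, theta=0.1):
    """Idealized event pixel: emit +1/-1 events whenever the log intensity
    moves more than the threshold `theta` away from the last event level."""
    events = []                     # list of (sample index, polarity) pairs
    ref = log_intensity[0]          # reference level at the last event
    for k, value in enumerate(log_intensity):
        while value - ref >= theta:      # brightness increased past threshold
            ref += theta
            events.append((k, +1))
        while ref - value >= theta:      # brightness decreased past threshold
            ref -= theta
            events.append((k, -1))
    return events

# a static scene produces no events; only changes do
signal = np.log(1.6 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, 500)))
print(len(simulate_events(signal)))
```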

These benefits do come at a cost, of course. One issue is that EBCs struggle to estimate the scene irradiance, particularly for static portions of the scene. Algorithms to recover a conventional image from the event stream have been an area of active research.2,3 The active pixel sensor (APS) circuit in the DAVIS camera series is the most robust solution to this problem, but it introduces another set of issues.4,5 Another possible solution is to induce a controlled scene change. Anyone who has tapped on their EBC to see the scene while adjusting focus and pointing has already used this concept. Here, we propose the use of a rotating polarizer to induce changes. We expect the number of events generated in each pixel to depend on the degree of polarization in the scene as well as the polarizer rotation rate.

Polarimetric images have many applications in remote sensing.6–9 The polarization state of each pixel is computed from multiple observations. With a rotating polarizer, the observations are made at different times, an arrangement known as a division-of-time polarimeter. (Schott10 presents a good comparison of the other types.) Division-of-time instruments are generally considered the simplest way to make polarization measurements but are subject to artifacts caused by changes in the scene between observations.11 This can be mitigated by reducing the overall measurement time, which suggests the low latency of EBC pixels may be advantageous.

The proposed device is similar to one demonstrated by Lu et al.,12 which showed that manmade objects (which are in general more strongly polarizing than natural surfaces) created more events. In this paper, we attempt to quantify the polarization and instrument performance more systematically.

An alternative to division-of-time-based polarimetry is division-of-focal-plane. In this scheme, all measurements are made simultaneously, with different pixels measuring light of different polarizations. Haessig et al.13 have demonstrated an event-based polarimetric imager of this type by placing polarizing filters directly on the pixels similar to RGB color imaging. The polarization state at each pixel is computed from surrounding pixels by a deep neural network. They used a rotating polarizer to generate a changing scene rather than as part of the sensor. By changing the rotation speed of their target, they demonstrate a clear benefit for event-based sensors for fast-moving targets. In this work, we present and compare different analytic methods for a temporally-modulated signal rather than spatial demosaicing.

2. Theory

2.1. Polarization

The polarization state of incident light can be quantified using the Stokes vector notation14

Eq. (1)

$$S = \left(E_\mathrm{tot},\; E_H - E_V,\; E_{45} - E_{135},\; E_\mathrm{RHC} - E_\mathrm{LHC}\right)^T,$$
where Ex is the irradiance in polarization state x. The subscripts correspond to the horizontal, vertical, +45 deg, −45 deg, right-hand circular, and left-hand circular polarization states.

The change in polarization due to reflection or transmission can be described using Mueller matrices, which are 4×4 matrices in which element Mij describes how input light in state j couples into output state i, so that

Eq. (2)

$$S_\mathrm{out} = M\,S_\mathrm{in}.$$
The Mueller matrix15 for an ideal linear polarizer with a transmission axis at an angle θ relative to the x axis is

Eq. (3)

$$M(\theta) = \frac{1}{2}\begin{bmatrix} 1 & \cos 2\theta & \sin 2\theta & 0 \\ \cos 2\theta & \cos^2 2\theta & \sin 2\theta \cos 2\theta & 0 \\ \sin 2\theta & \sin 2\theta \cos 2\theta & \sin^2 2\theta & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
Equations (2) and (3) will predict the polarization state of the light entering the camera, but the camera is only sensitive to the total energy (i.e., the S0 component). The expected irradiance is, therefore

Eq. (4)

$$E(\theta) = \sum_{j=0}^{3} M_{0j}(\theta)\, S_j = \frac{1}{2}\left(S_0 + S_1\cos 2\theta + S_2\sin 2\theta\right).$$
Notice that this system is completely insensitive to circular polarization because M03=0 for all θ. In many remote sensing applications, it is common to neglect S3 and only measure the linear polarizations. This choice is made partly out of convenience (linear polarizations are simpler to measure) and partly because elliptical polarization is relatively uncommon in natural scenes. Henceforth, we will use the shortened (linear-only) notation, which omits S3 and Mi3. In this case, the polarization state is also often described using the degree of linear polarization (DoLP) and angle of polarization (AoP), defined as

Eq. (5)

$$D = \mathrm{DoLP} = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}, \qquad \psi = \mathrm{AoP} = \frac{1}{2}\arctan\left(\frac{S_2}{S_1}\right).$$
In this notation, we can rewrite the Stokes vector as $S = S_0\left[1,\; D\cos 2\psi,\; D\sin 2\psi\right]^T$.16 Combining this with Eq. (4) yields

Eq. (6)

$$E(\theta) = \frac{1}{2} S_0\left[1 + D\cos 2(\theta - \psi)\right].$$
For completely polarized light where D=1, this equation reduces to the familiar form of Malus’ law.17
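As a concrete numerical reference for Eqs. (5) and (6), the following is a minimal sketch; the function and parameter names are ours, not part of any published code.

```python
import numpy as np

def dolp_aop(S):
    """Eq. (5): degree and angle of linear polarization from S = [S0, S1, S2]."""
    dolp = np.hypot(S[1], S[2]) / S[0]
    aop = 0.5 * np.arctan2(S[2], S[1])   # arctan2 resolves the quadrant
    return dolp, aop

def expected_irradiance(theta, s0, dolp, aop):
    """Eq. (6): irradiance behind an ideal linear polarizer at angle theta.
    With dolp = 1 this reduces to Malus's law."""
    return 0.5 * s0 * (1.0 + dolp * np.cos(2.0 * (theta - aop)))
```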

The irradiance can be measured at multiple different rotation angles. We can use Eq. (4) to construct a system of equations

Eq. (7)

$$\mathbf{E} = g\,M\,S_\mathrm{in} = \frac{g}{2}\begin{bmatrix} 1 & \cos 2\theta_1 & \sin 2\theta_1 \\ 1 & \cos 2\theta_2 & \sin 2\theta_2 \\ \vdots & \vdots & \vdots \\ 1 & \cos 2\theta_n & \sin 2\theta_n \end{bmatrix} S_\mathrm{in},$$
where Ej represents the irradiance observed while the polarizer is at angle θj, and g is a scalar that accumulates all the constant multiplicative factors such as optics transmission and digital gain. Each row of matrix M is simply the top row of the Mueller matrix for the corresponding polarizer angle.

We can get a least-squared-error estimate of the polarization state from Eq. (7) by using the Moore–Penrose pseudoinverse

Eq. (8)

$$S_\mathrm{in} = \frac{1}{g}\,M^{+}\mathbf{E} = \frac{1}{g}\left(M^{\top}M\right)^{-1}M^{\top}\mathbf{E}.$$
If we consider a single rotation of the polarizer divided evenly into n samples, the matrix M is well-conditioned for all n ≥ 6. The condition number decreases monotonically with increasing n, so increasing the number of samples should only improve the estimates.
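A minimal sketch of this estimator, assuming unit gain g and evenly spaced angles (the names are illustrative only):

```python
import numpy as np

def polarizer_rows(thetas):
    """Stack the top row of Eq. (3) for each polarizer angle: the matrix in Eq. (7)."""
    return 0.5 * np.column_stack(
        [np.ones_like(thetas), np.cos(2 * thetas), np.sin(2 * thetas)])

def stokes_from_samples(E, thetas):
    """Least-squares Stokes estimate via the Moore-Penrose pseudoinverse, Eq. (8)."""
    return np.linalg.pinv(polarizer_rows(thetas)) @ E

# self-check with a synthetic pixel: S0 = 2, DoLP = 0.4, AoP = 0.3 rad
n = 8                                               # n >= 6 keeps M well-conditioned
thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
S_true = 2.0 * np.array([1.0, 0.4 * np.cos(0.6), 0.4 * np.sin(0.6)])
S_est = stokes_from_samples(polarizer_rows(thetas) @ S_true, thetas)  # recovers S_true
```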

This result highlights one issue with division-of-time polarimeters—the need for a scene that is static over the time needed to collect multiple measurements. If the input changes during the collection of the n samples, Eq. (8) is no longer correct. With conventional focal plane array (FPA) sensors, the goal of measuring more samples for accuracy conflicts with the frame rate and exposure time limitations of the FPA. Even with fast framing cameras, motion of the polarizer during integration will reduce the accuracy and sensitivity of the system.13 The use of an EBC frees us from the frame rate constraints, although the refractory period of the pixels still limits the effective bandwidth of the system.18

2.2. Event Rate

The Stokes vector approach assumes we are measuring intensity, but event-based sensors do not directly report intensity. The pixel circuit reports changes in the logarithm of the photocurrent. Using Lichtsteiner’s model,18 the event rate n can be approximated as

Eq. (9)

$$n(t) = \frac{\partial N}{\partial t} \approx \frac{1}{\Theta}\,\frac{\partial}{\partial t}\left[\ln I(t)\right],$$
where Θ is the comparator threshold. This continuous function obviously cannot exactly predict the discrete event output, but it is a helpful model for the moment.

Substituting Eq. (6) into Eq. (9), we expect the event rate for a rotating polarizer to be

Eq. (10)

$$n(t) \approx \frac{-2\omega}{\Theta}\left(\frac{\sin 2\theta}{b + \cos 2\theta}\right),$$
where θ is the angle between the polarizer axis and the polarization of the incident light, or θ = ωt + ϕ − ψ. The constant b is a bias term proportional to 1/D. In real measurements, b will also include any other constant offset such as detector bias or leakage through the polarizer. For small values of b, n behaves like tan 2θ with a regularization term to avoid singularities where the cosine goes to zero. For b ≫ 1, the denominator is approximately constant and the event rate approaches a sine function, as shown in Fig. 1. Values of b ≤ 1 cause singularities in Eq. (10) that no real camera can produce; limitations such as the refractory period and buffer size will always produce event rates that look like the b > 1 curves in Fig. 1. This equation predicts a zero-mean event rate, which can only make sense if we use signed values for events to account for polarity. In the work below, all events are assigned a value of ±1 unless otherwise noted.
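Equation (10) is easy to explore numerically. The sketch below generates the family of curves shown in Fig. 1 (plotting omitted); the values of ω and Θ are arbitrary placeholders, and the sign convention follows our form of Eq. (10) above.

```python
import numpy as np

def model_event_rate(theta, b, omega=5.0, threshold=0.1):
    """Eq. (10): modeled event rate under a rotating polarizer with bias b."""
    return -2.0 * omega / threshold * np.sin(2 * theta) / (b + np.cos(2 * theta))

theta = np.linspace(0.0, 2.0 * np.pi, 1000)
curves = {b: model_event_rate(theta, b) for b in (1.05, 1.5, 3.0, 10.0)}  # b <= 1 is singular
curves = {b: n / n.std() for b, n in curves.items()}  # unit variance, as in Fig. 1
```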

Fig. 1

The theoretical event rate (normalized to unit variance for display scaling) in response to horizontal polarization over one full rotation of the polarizer, plotted for different values of bias, b. As the bias increases, the event rate becomes more harmonic and the phase offset of the event rate relative to Malus’s law irradiance (dashed black line) approaches π.


The phase term in the definition of θ above is determined by the AoP relative to the polarizer axis at time t=0. The experiments conducted so far do not have any absolute time reference, making ϕ arbitrary.

The cumulative sum of events (considering events to have value ±1 to indicate polarity) is the discrete analog of computing the integral of a derivative, so that

Eq. (11)

$$N(t) = \sum_{\tau=0}^{t} n(\tau) \approx \frac{1}{\Theta}\ln I(t) + \mathrm{const}.$$
We can therefore recover an estimate of the photocurrent as

Eq. (12)

$$\hat{I} = \exp\left[\Theta\, N(t)\right] \propto \frac{1}{2} S_0\left[1 + D\cos 2\theta\right].$$
The experiments described below were intended to verify these relationships and show an event-based sensor can be used to recover estimates of the polarization state.
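A sketch of this recovery for a single pixel’s signed event stream follows. The threshold Θ is assumed known, and the constant of integration means the result is only a relative photocurrent; the function and argument names are our own.

```python
import numpy as np

def photocurrent_from_events(times, polarities, t_grid, threshold=0.1):
    """Eqs. (11)-(12): cumulative signed event count -> relative photocurrent.
    times, polarities: one pixel's event timestamps and +/-1 polarity signs."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    N = np.cumsum(np.asarray(polarities, dtype=float)[order])   # Eq. (11)
    N_grid = np.interp(t_grid, t, N, left=0.0)                  # resample uniformly
    return np.exp(threshold * N_grid)        # Eq. (12), known only up to scale
```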

3. Experiment

3.1. Instrument

We assembled two event-based imaging polarimeters using a Prophesee EVK and a DAVIS 346, each imaging through a 1-in.-diameter thin-film polarizer (Thorlabs LPVIS100) mounted in an Elliptec rotation stage (Thorlabs ELL14K). The results presented here all use the DAVIS 346 camera because its APS imaging mode enables a comparison between conventional and event-based polarization analysis. The rotating polarizer was mounted about 2 mm in front of the lens (Edmund Optics 67714; 16 mm, f/1.4). This gap allowed some light to leak around the polarizer, which is expected to contribute to the bias term b. Figure 2 shows the prototype. The camera and rotation stage are mounted to a custom-printed plastic mount.

Fig. 2

Prototype imager consisting of a DAVIS 346 EBC behind a rotating polarizer, shown here attached to a Manfrotto tripod mounting plate.


3.2. Highly Polarized Target

The camera was pointed at a Samsung quantum-dot light-emitting diode (QLED) monitor as an initial test to verify the equations above. The monitor is strongly polarized in the horizontal direction. Figure 3 shows the values reported from a single pixel as the polarizer rotates through 540 deg. The APS video, the event rate n, and the event count N all show good agreement with the theory above [Eqs. (6), (10), and (11), respectively]. Figure 4 shows the results of using Eq. (8) to compute the Stokes vector for each pixel in the scene. The hue in this visualization indicates AoP, with the saturation controlled by DoLP, so that white is unpolarized, pale pastels are weakly polarized, and bright colors are strongly polarized.19 We would expect the entire QLED region to be strongly polarized with the same AoP, while the wall above the monitor should be unpolarized or weakly polarized.

Fig. 3

Data from a single pixel (black) during one 540-deg rotation of the polarizer, with fits to the equations above (red). (a) Conventional image brightness from the APS readout, fit by Malus’s law. (b) Cumulative sum of signed events, N(t), fit by Eq. (11). (c) Relative photocurrent, approximated as exp(N), again fit to Malus’s law (with N normalized to unit maximum). (d) Event rate, n(t), fit by Eq. (10). Plots (b)–(d) used events summed over a 5×5 pixel neighborhood for smoothing.


Fig. 4

(a) A visual image (single frame from APS) at one polarizer angle. (b) A visualization of the polarization computed from APS values, using hue to indicate AoP and saturation for DoLP. (c) Similar view of polarization computed from the DVS event stream. Both false-color images use the same color scale, which is illustrated by the color wheel on the far right with the radial dimension representing DoLP.


Based on the equations above, all pixels observing the QLED screen should show the same AoP regardless of brightness. Figure 4 shows general agreement with this, although the darker letters are still noticeable in the AoP images. The calculated AoP for the QLED pixels is 0.505 ± 0.035 rad when using event data, compared with 0.405 ± 0.031 rad for APS data (mean ± standard deviation). The 0.1-rad angle difference is due in part to the time delay between the first event and the first frame. In this case, the delay was about 17.4 ms and the polarizer rotation rate was 5 rad/s, which leads to about 0.09 rad of offset in the polarizer angle between the two datasets.

Note that AoP in this data includes an unknown (but constant) offset because we cannot directly measure θ(t). We estimate it from the motor speed as θ = ωt + ϕ − ψ, where ϕ is a phase offset due to the polarizer angle at time t=0. The phase and the incident AoP should be constant for any pixel in the scene, but we cannot differentiate one from the other in this data without some a priori information about the scene. In this example, we know the QLED pixels are horizontally polarized, so we could subtract the mean angle to correct this offset.

3.3. Complex Scene

The QLED screen was helpful for confirming the theory using a source with a high degree of polarization in a known state. The method was also applied to a more complex indoor scene. The scene includes a textbook, a small plant, a mug, a plastic stapler, and a steel tumbler. Figure 5 shows a side-by-side comparison of images computed from the two different data streams, showing APS imagery on the left and dynamic vision sensor (DVS) event data on the right. The polarization images show general agreement, but the S0 image recovered from events is poor. This result is not surprising, given the challenge of measuring static images with EBCs. As the polarizer rotates, the S0 information produces no changes and therefore no events. While Eq. (8) implies we can estimate S0 from the event stream, we are not actually measuring any information related to S0. This ambiguity is fundamentally tied to the constant of integration in Eq. (11).

The polarization data for Fig. 5 were computed using Eq. (8). Alternatively, the fit coefficients from Eq. (11) could be used as estimates of the phase and magnitude of the oscillation, which can be related to AoP and DoLP, respectively. We could also apply Fourier analysis to find (at each pixel) the magnitude and phase of the oscillations at the polarizer rotation frequency. In experiments so far, we find those methods to be less robust than the matrix inverse.

Fig. 5

Side-by-side comparison of images computed from the APS image data (a and c) to the same images computed from event data (b and d). Panels (a) and (b) show S0 images, while (c) and (d) show a polarization visualization with hue indicating AoP and saturation set by DoLP, as in Fig. 4.


3.4. Analysis

When using Eq. (8), we found the results are actually more robust and consistent when applying the equation to N(t), which is proportional to log intensity, rather than directly to the intensity from Eq. (12). Because each pixel can generate at most one event per time step, we estimate the event rate as the inverse of the time between events, which results in a noisy estimate of n(t). The noise can be reduced by increasing the time step and also by counting events in a small neighborhood around each pixel (usually 3×3 or 5×5 pixels), as in the sketch below.
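A sketch of that spatiotemporal binning follows; the array names and the SciPy box filter are our choices for illustration, not taken from the original processing code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smoothed_event_rate(x, y, t, p, shape, t_bins, box=5):
    """Signed event count per pixel per time bin, averaged over a box x box
    neighborhood. x, y, t, p: event columns, rows, timestamps, +/-1 polarities."""
    counts = np.zeros((len(t_bins) - 1,) + shape)
    idx = np.digitize(t, t_bins) - 1
    ok = (idx >= 0) & (idx < counts.shape[0])
    np.add.at(counts, (idx[ok], y[ok], x[ok]), p[ok])   # accumulate signed events
    return uniform_filter(counts, size=(1, box, box))    # spatial smoothing only
```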

In our experiments so far, the value of S0 resulting from Eq. (8) has many negative values, which is obviously non-physical. The negative and near-zero values of S0 then also corrupt estimates of DoLP. To correct this, we shifted the distribution of S0 to be strictly positive using S0′ = S0 − min(S0) + ϵ, where ϵ is a regularization factor to avoid division by small, noisy numbers when computing DoLP. In practice, we find that using S0′ = |S0| + ϵ is also effective. The choice of regularization constant has a large effect on S0 estimates. While it is easy to get plausible results by tuning ϵ, we hope to develop a data-driven way to estimate correct values of ϵ. Ultimately, this uncertainty arises from the constant of integration in Eq. (11). Future work will include a set of experiments to see whether this constant is repeatable from one collection to the next.
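Both corrections are one-liners; a minimal sketch (ϵ = 1 was used for the results shown here):

```python
import numpy as np

def regularize_s0(s0, eps=1.0, mode="shift"):
    """Force S0 strictly positive before it appears in the DoLP denominator."""
    if mode == "shift":
        return s0 - s0.min() + eps   # shift the whole distribution up
    return np.abs(s0) + eps          # alternative: fold negative values over
```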

The APS circuit of the DAVIS camera allows us to develop “truth” estimates of the input state to verify the equations presented above. Figure 6 shows one example of this. Equation (10) predicts that the event rate should increase with DoLP (through the bias term b). The DoLP at each pixel was computed by applying Eq. (8) to the APS frames. The vertical axis of Fig. 6 shows a count of events (ignoring polarity) at each pixel as the polarizer rotated through 3π radians at the motor’s maximum rotation rate. The fit (blue dashed line) in Fig. 6 is based on summing unsigned events as predicted by Eq. (10), or

Eq. (13)

$$N_\mathrm{tot}(D) = \sum_i \left|\frac{a_0 \sin 2\theta_i}{(a_1/D) + a_2 + \cos 2\theta_i}\right| \Delta\theta,$$
where Δθ is the angular sample spacing, and the fit coefficients ai were chosen to minimize the Euclidean distance between Eq. (13) and the median points, with distances weighted by the population at each value of Ntot. The fit line in Fig. 6 uses a0 = 3.15, a1 = 1.57, and a2 = 0.68, which gives a fit R² = 0.87.

Fig. 6

A comparison of the pixel-level correlation between DoLP and the number of events at each pixel. For each value of the event count (vertical axis), the horizontal lines in (a) show the range of DoLP values for those pixels. The thin gray line represents the 0.1 to 0.9 quantile range, the thick gray line is the interquartile (0.25 to 0.75 quantile) range, and the black square is the median. The dashed blue line is Eq. (13) fit to the median points, weighted according to the population shown by the histogram in (b).


As with the event data, our image-based DoLP calculations include a regularization term in the denominator. Even though the APS image values are strictly positive, the S0 values computed from Eq. (8) can be small or negative. The values plotted in Fig. 6 used ϵ = 1. As an alternative, we can also estimate DoLP from the contrast in pixel intensity as the polarizer rotates:

Eq. (14)

$$\mathrm{DoLP} \approx \frac{\max(E) - \min(E)}{\max(E) + \min(E)}.$$
The matrix inversion should be more accurate, since it is a least-squared-error solution using all the data, whereas Eq. (14) uses only two values for each pixel. In the current datasets, we find the two estimates are well correlated (R² = 0.93), but the contrast estimate is much faster to compute.
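A sketch of the contrast estimate over a stack of APS frames spanning one rotation; the ϵ guard mirrors the regularization discussed above, and the names are illustrative.

```python
import numpy as np

def dolp_contrast(frames, eps=1.0):
    """Eq. (14): per-pixel modulation contrast across the polarizer rotation.
    frames: array of shape (n_angles, height, width)."""
    e_max = frames.max(axis=0)
    e_min = frames.min(axis=0)
    return (e_max - e_min) / (e_max + e_min + eps)   # eps guards dark pixels
```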

To summarize, we have four ways to estimate DoLP: two from images and two from events. These are

  • A. from S computed by applying Eq. (8) to image data [E(t)],

  • B. using image contrast, as in Eq. (14),

  • C. from S computed by applying Eq. (8) to event data [N(t)], and

  • D. from the event count, by interpolating Eq. (13).

Figure 7 compares the results of all four methods applied to the same dataset, and the correlation of the resulting values is compared in Table 1. The first three methods (all except method D, shown in the bottom-right panel of Fig. 7) require a regularization term to reduce the effects of small, noisy numbers in the denominator. For a fair comparison, all three used ϵ = 1, though there is no a priori reason for that choice.

Fig. 7

A comparison of different DoLP estimates. Panels (a) and (c) are computed from APS image values, while panels (b) and (d) are computed from event data. Panels (a) and (b) are computed using Eq. (8); i.e., methods A (a) and C (b). Panel (c) shows image contrast from Eq. (14) (method B), and panel (d) shows event count as in Eq. (13) (method D). All images use the same color scale.


Table 1

R² correlation coefficients between DoLP estimation methods.

Method               A      B      C      D
A. Matrix (image)    1      0.93   0.59   0.74
B. Image contrast           1      0.55   0.69
C. Matrix (events)                 1      0.60
D. Event count                            1

4. Results and Discussion

The method applied here is effective at recovering polarization information from a scene using an EBC. The DoLP and AoP images are useful for augmenting contrast between objects but are not yet sufficiently accurate for quantitative polarimetry. Future experiments will focus on removing or estimating some of the unknown parameters that limit the accuracy of the current work, such as the value of the threshold.

The theory presented above suggests the possibility of recovering an accurate intensity (S0) image via Eq. (8), but experimental results have been disappointing. The accuracy of the S0 images reconstructed in this way is poor thus far, which aligns with the conventional wisdom about EBCs.

The Elliptec motor used for these experiments has several advantages (notably high speed and low cost) but imposes some limitations as well. For characterizing and calibrating the instrument, we require more precise angle data. Also, the motor is not intended to be operated continuously, so each rotation must be treated as a separate collection in the processing and analysis pipeline.

The algorithm has not yet been optimized and does not run in real time. Estimating the DoLP and AoP for a full image currently takes over a minute. As each pixel is treated independently by the algorithm, we expect vectorization and parallelization to speed it up significantly. The slow algorithm obviously detracts from the promise of EBCs to speed up data collection, but the implementation is very immature at this point. Using the event count at each pixel as a proxy for DoLP is much faster (about 20 ms) and correlates more closely with the image-based DoLP estimates than the event-based matrix method does. The matrix calculation is still necessary for estimating AoP, which is not affected by the uncertainty in S0.

These initial experiments show that the device has some promise for applications where polarization is used to enhance contrast or object detection. It may be less suited to applications that require precise polarization measurements. Future work will emphasize collecting data in more controlled conditions and over a broader range of conditions to determine which of the findings here are repeatable and robust.

Data, Materials, and Code Availability

The code and data are property of the U.S. Air Force. Contact the authors to arrange access.

References

1. G. Gallego et al., “Event-based vision: a survey,” IEEE Trans. Pattern Anal. Mach. Intell. 44, 154–180 (2022). https://doi.org/10.1109/TPAMI.2020.3008413

2. R. W. Baldwin et al., “Time-ordered recent event (TORE) volumes for event cameras” (2021).

3. F. Paredes-Vallés and G. C. H. E. de Croon, “Back to event basics: self-supervised learning of image reconstruction for event cameras via photometric constancy” (2021).

4. C. Brandli et al., “A 240 × 180 130 dB 3 μs latency global shutter spatiotemporal vision sensor,” IEEE J. Solid-State Circuits 49(10), 2333–2341 (2014). https://doi.org/10.1109/JSSC.2014.2342715

5. Y. Hu, S.-C. Liu, and T. Delbruck, “v2e: from video frames to realistic DVS events” (2020).

6. J. A. Martin and K. C. Gross, “Estimating index of refraction for specular reflectors using passive polarimetric hyperspectral radiance measurements,” Opt. Eng. 56(8), 081812 (2017). https://doi.org/10.1117/1.OE.56.8.081812

7. D. B. Chenault et al., “Infrared polarimetric sensing of oil on water,” Mar. Technol. Soc. J. 52(6), 13–22 (2018). https://doi.org/10.4031/MTSJ.52.6.8

8. W. Zhang et al., “Review of passive polarimetric dehazing methods,” Opt. Eng. 60(3), 030901 (2021). https://doi.org/10.1117/1.OE.60.3.030901

9. W. Du, W. Jia, and Z. Zhang, “Optimization of infrared Stokes imaging polarimeter for detection of camouflaged target,” Proc. SPIE 11196, 111961W (2019). https://doi.org/10.1117/12.2539378

10. J. R. Schott, Fundamentals of Polarimetric Remote Sensing, SPIE Press, Bellingham, Washington (2009).

11. D. LeMaster, “Fundamental estimation bounds for polarimetric imagery,” Opt. Express 16(16), 12018–12036 (2008). https://doi.org/10.1364/OE.16.012018

12. X. Lu et al., “Polarization imaging detection method based on dynamic vision sensor,” Proc. SPIE 11763, 117630Z (2021). https://doi.org/10.1117/12.2586190

13. G. Haessig et al., “Bio-inspired polarization event camera” (2021).

14. M. Born and E. Wolf, Principles of Optics, 7th ed., Cambridge University Press, New York (1999).

15. W. H. McMaster, “Matrix representation of polarization,” Rev. Mod. Phys. 33, 8–28 (1961). https://doi.org/10.1103/RevModPhys.33.8

16. D. Goldstein, Polarized Light, 3rd ed., CRC Press, Boca Raton, Florida (2011).

17. E. Hecht, Optics, 4th ed., Addison-Wesley, San Francisco (2002).

18. P. Lichtsteiner, C. Posch, and T. Delbrück, “A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor,” IEEE J. Solid-State Circuits 43, 566–576 (2008). https://doi.org/10.1109/JSSC.2007.914337

19. J. S. Tyo, B. M. Ratliff, and A. S. Alenin, “Adapting the HSV polarization-color mapping for regions with low irradiance and high polarization,” Opt. Lett. 41(20), 4759–4762 (2016). https://doi.org/10.1364/OL.41.004759

Biography

Michael Hawks is a research assistant professor at the Air Force Institute of Technology (AFIT). He received his BS degree in astrophysics from Michigan State University in 1991, his MS degree in engineering physics from AFIT in 1993, and his PhD in optical science from AFIT in 2006. His current research interests include computer vision, computational physics, physics-based modeling, and physics-constrained machine learning.

Michael Dexter is an assistant professor at the Air Force Institute of Technology (AFIT) and the director of AFIT’s Center for Technical Intelligence Studies & Research. He received his BS degree in applied physics from the University of Omaha, Nebraska, in 2004, his MS degree in engineering physics from AFIT in 2009, and his PhD in nuclear physics from AFIT in 2015. His current research interests include high energy/density plasma physics, nuclear weapons effects, remote sensing, laser physics, and digital image processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Michael Hawks and Michael Dexter "Event-based imaging polarimeter," Optical Engineering 61(5), 053101 (18 May 2022). https://doi.org/10.1117/1.OE.61.5.053101
Received: 2 February 2022; Accepted: 29 March 2022; Published: 18 May 2022
KEYWORDS: Polarization, Polarizers, Polarimetry, Sensors, Cameras, Optical engineering, Imaging systems
