The design and initial experimental results from an event-based, rotating-polarizer imaging polarimeter are presented. The speed of division-of-time imaging polarimeters is traditionally constrained by the frame rate of the focal plane sensor and the need to sample several polarization angles during rotation. The asynchronous event reporting of event-based cameras (EBCs) lifts these constraints but introduces its own challenges. The Stokes vectors used to describe polarization are based on irradiance, but EBCs are inherently change-detection systems that struggle to estimate irradiance. Two methods of estimating the polarization state without first recovering irradiance images are presented and compared. The DAVIS 346 sensor records events simultaneously with conventional frame-based images, enabling direct comparison of event-based polarization estimates to traditional techniques.
1. Introduction

Event-based cameras (EBCs) use in-pixel analog processing to detect changes in the photocurrent. When a pixel sees a change that exceeds a user-defined threshold, it generates a signal that a peripheral circuit uses to create an event report. Each pixel generates events independently of the others, with no fixed frame rate or integration time. This asynchronous operation offers many advantages, including low latency, reduced power draw and data bandwidth, and high dynamic range.1

These benefits come at a cost, of course. One issue is that EBCs struggle to estimate scene irradiance, particularly for static portions of the scene. Algorithms to recover a conventional image from the event stream have been an area of active research.2,3 The active pixel sensor (APS) circuit in the DAVIS camera series is the most robust solution to this problem, but it introduces another set of issues.4,5 Another possible solution is to induce a controlled scene change. Anyone who has tapped on their EBC to see the scene while adjusting focus and pointing has already used this concept. Here, we propose the use of a rotating polarizer to induce changes. We expect the number of events generated in each pixel to depend on the degree of polarization in the scene as well as the polarizer rotation rate.

Polarimetric images have many applications in remote sensing.6–9 The polarization state of each pixel is computed from multiple observations. With a rotating polarizer, the observations are made at different times; such an instrument is known as a division-of-time polarimeter. (Schott10 presents a good comparison of the other types.) Division-of-time instruments are generally considered the simplest way to make polarization measurements but are subject to artifacts caused by changes in the scene between observations.11 This can be mitigated by reducing the overall measurement time, which suggests the low latency of EBC pixels may be advantageous.
The proposed device is similar to one demonstrated by Lu et al.,12 which showed that manmade objects (which are in general more strongly polarizing than natural surfaces) created more events. In this paper, we attempt to quantify the polarization and instrument performance more systematically.

An alternative to division-of-time polarimetry is division-of-focal-plane. In this scheme, all measurements are made simultaneously, with different pixels measuring light of different polarizations. Haessig et al.13 demonstrated an event-based polarimetric imager of this type by placing polarizing filters directly on the pixels, similar to RGB color imaging. The polarization state at each pixel is computed from surrounding pixels by a deep neural network. They used a rotating polarizer to generate a changing scene rather than as part of the sensor. By changing the rotation speed of their target, they demonstrated a clear benefit of event-based sensors for fast-moving targets. In this work, we present and compare analytic methods for a temporally modulated signal rather than spatial demosaicing.

2. Theory

2.1. Polarization

The polarization state of incident light can be quantified using the Stokes vector notation14

$$\mathbf{S} = \begin{bmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{bmatrix} = \begin{bmatrix} I_H + I_V \\ I_H - I_V \\ I_{45} - I_{135} \\ I_R - I_L \end{bmatrix}, \tag{1}$$

where $I_x$ is the irradiance in polarization state $x$. The subscripts correspond to the horizontal, vertical, $45$ deg, $135$ deg, right-hand circular, and left-hand circular polarization states.

The change in polarization due to reflection or transmission can be described using Mueller matrices, which are $4 \times 4$ matrices in which element $m_{ij}$ describes how input light in state $j$ couples into output state $i$, so that

$$\mathbf{S}^{\mathrm{out}} = M\,\mathbf{S}^{\mathrm{in}}. \tag{2}$$

The Mueller matrix15 for an ideal linear polarizer with a transmission axis at an angle $\theta$ relative to the horizontal axis is

$$M(\theta) = \frac{1}{2}\begin{bmatrix} 1 & \cos 2\theta & \sin 2\theta & 0 \\ \cos 2\theta & \cos^2 2\theta & \cos 2\theta \sin 2\theta & 0 \\ \sin 2\theta & \cos 2\theta \sin 2\theta & \sin^2 2\theta & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \tag{3}$$

Equations (2) and (3) predict the polarization state of the light entering the camera, but the camera is only sensitive to the total energy (i.e., the $S_0$ component).
The expected irradiance is therefore

$$I(\theta) = \frac{1}{2}\left(S_0 + S_1 \cos 2\theta + S_2 \sin 2\theta\right). \tag{4}$$

Notice that this system is completely insensitive to circular polarization because the $S_3$ coefficient is zero for all $\theta$. In many remote sensing applications, it is common to neglect $S_3$ and only measure the linear polarizations. This choice is made partly out of convenience—linear polarizations are simpler to measure—and partly because elliptical polarization is relatively uncommon in natural scenes. Henceforth, we will use the shortened (linear-only) notation, which omits $S_3$ and the corresponding matrix elements. In this case, the polarization state is also often described using the degree of linear polarization (DoLP) and angle of polarization (AoP), defined as below:

$$P = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}, \qquad \psi = \frac{1}{2}\arctan\!\left(\frac{S_2}{S_1}\right). \tag{5}$$

In this notation, we can rewrite the Stokes vector as $\mathbf{S} = S_0\,[1,\; P\cos 2\psi,\; P\sin 2\psi]^T$.16 Combining this with Eq. (4) yields

$$I(\theta) = \frac{S_0}{2}\left[1 + P\cos(2\theta - 2\psi)\right]. \tag{6}$$

For completely polarized light, where $P = 1$, this equation reduces to the familiar form of Malus' law.17

The irradiance can be measured at multiple polarizer rotation angles. We can use Eq. (4) to construct a system of equations

$$\mathbf{y} = g\,A\,\mathbf{s}, \tag{7}$$

where element $y_i$ of $\mathbf{y}$ represents the irradiance observed while the polarizer is at angle $\theta_i$, and $g$ is a scalar that accumulates all the constant multiplicative factors such as optics transmission and digital gain. Each row of matrix $A$ is simply the top row of the Mueller matrix for the corresponding polarizer angle.

We can get a least-squared-error estimate of the polarization state from Eq. (7) by using the Moore–Penrose pseudoinverse

$$\hat{\mathbf{s}} = \frac{1}{g}\left(A^T A\right)^{-1} A^T \mathbf{y} = \frac{1}{g}\,A^{+}\mathbf{y}. \tag{8}$$

If we consider a single rotation of the polarizer divided evenly into $N$ samples, the matrix $A$ is well-conditioned for all $N \geq 3$. The condition number decreases monotonically with increasing $N$, so increasing the number of samples should only improve estimates.

This result highlights one issue with division-of-time polarimeters—the need for a scene that is static over the time needed to collect multiple measurements. If the input changes during the collection of the $N$ samples, Eq. (8) is no longer correct.
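The least-squares recovery in Eq. (8) is straightforward to prototype. The sketch below is a minimal illustration, not the authors' processing code; the function name and sample counts are our own. It builds the measurement matrix $A$ from the top row of the polarizer Mueller matrix at each sample angle and recovers the linear Stokes vector with a pseudoinverse:

```python
import numpy as np

def stokes_lstsq(angles, irradiance):
    """Least-squares linear Stokes estimate [Eq. (8)] from irradiance
    samples taken at the given polarizer angles (radians)."""
    # Each row of A is the top row of the ideal-polarizer Mueller matrix.
    A = 0.5 * np.stack([np.ones_like(angles),
                        np.cos(2 * angles),
                        np.sin(2 * angles)], axis=1)
    s_hat, *_ = np.linalg.lstsq(A, irradiance, rcond=None)
    return s_hat  # [S0, S1, S2], up to the overall gain g

# Synthetic check against Eq. (6): S0 = 1, P = 0.6, psi = 30 deg.
angles = np.linspace(0, np.pi, 12, endpoint=False)
psi = np.deg2rad(30)
I = 0.5 * (1 + 0.6 * np.cos(2 * angles - 2 * psi))
s0, s1, s2 = stokes_lstsq(angles, I)
dolp = np.hypot(s1, s2) / s0
aop = 0.5 * np.arctan2(s2, s1)
```

On noise-free samples, the estimate recovers the input exactly; the interesting behavior appears when noise is added and $N$ is varied, which is where the conditioning argument above matters.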
With conventional focal plane array (FPA) sensors, the goal of measuring more samples for accuracy conflicts with the frame rate and exposure time limitations of the FPA. Even with fast framing cameras, motion of the polarizer during integration will reduce the accuracy and sensitivity of the system.13 The use of an EBC frees us from the frame rate constraints, although the refractory period of the pixels still limits the effective bandwidth of the system.18

2.2. Event Rate

The Stokes vector approach assumes we are measuring intensity, but event-based sensors do not directly report intensity. The pixel circuit reports changes in the log of the photocurrent. Using Lichtsteiner's model,18 the event rate can be approximated as

$$r(t) = \frac{1}{\theta_C}\,\frac{d}{dt}\ln I(t), \tag{9}$$

where $\theta_C$ is the comparator threshold. This continuous function obviously cannot exactly predict the discrete event output but is a helpful model for the moment.

Substituting Eq. (6) into Eq. (9), we expect the event rate for a rotating polarizer to be

$$r(t) = \frac{-2\omega P \sin 2\phi}{\theta_C\left(1 + P\cos 2\phi\right)} + C, \tag{10}$$

where $\phi$ is the angle between the polarizer axis and the polarization of the incident light, or $\phi = \omega t + \phi_0$. The constant $C$ is a bias term; in real measurements, $C$ will also include any other constant offset such as detector bias or leakage through the polarizer. For values of $P$ near 1, $r$ behaves like a tangent function, with the denominator acting as a regularization term to avoid singularities where the cosine goes to zero. For $P \ll 1$, the denominator is approximately constant and the event rate approaches a sine function, as shown in Fig. 1. Values of $P = 1$ cause singularities in Eq. (10) that no real camera can produce; limitations such as refractory period and buffer size will always produce event rates that look like the $P < 1$ values in Fig. 1. This equation predicts a zero-mean event rate, which can only make sense if we use signed values for events to account for polarity. In the work below, all events are assigned a value of $\pm 1$ unless otherwise noted.

The phase term $\phi_0$ in the definition of $\phi$ above is determined by the AoP relative to the polarizer axis at time $t = 0$.
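The two regimes of Eq. (10) are easy to visualize numerically. The sketch below is our own illustration (the threshold and rotation-rate values are arbitrary assumptions, not instrument parameters); it evaluates the predicted rate for a weakly and a strongly polarized pixel:

```python
import numpy as np

theta_c = 0.1            # comparator threshold (assumed value)
omega = 2 * np.pi        # polarizer rotation rate, rad/s (assumed)
phi = np.linspace(0, np.pi, 1000, endpoint=False)

def event_rate(P):
    # Eq. (10) without the bias term C.
    return (-2 * omega * P * np.sin(2 * phi)
            / (theta_c * (1 + P * np.cos(2 * phi))))

r_weak = event_rate(0.05)    # nearly sinusoidal
r_strong = event_rate(0.95)  # sharp, tangent-like excursions
```

For $P = 0.05$ the curve is indistinguishable from a sine at plotting scale, while $P = 0.95$ produces the sharp peaks described above; the peak rate of the model grows as $1/\sqrt{1 - P^2}$.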
The experiments conducted so far do not have any absolute time reference, making $\phi_0$ arbitrary. The cumulative sum of events (considering events to have values $\pm 1$ to indicate polarity) is the discrete analog of computing the integral of a derivative, so that

$$\sum_{t_k \leq t} e_k \approx \int_0^t r(t')\,dt' = \frac{1}{\theta_C}\left[\ln I(t) - \ln I(0)\right]. \tag{11}$$

We can therefore recover an estimate of the photocurrent as

$$\hat{I}(t) = I(0)\,\exp\!\left(\theta_C \sum_{t_k \leq t} e_k\right). \tag{12}$$

The experiments described below were intended to verify these relationships and show that an event-based sensor can be used to recover estimates of the polarization state.

3. Experiment

3.1. Instrument

We assembled two event-based imaging polarimeters using a Prophesee EVK and a DAVIS 346, imaging through a 1-in. diameter thin-film polarizer (Thorlabs LPVIS100) mounted in an Elliptec rotation stage (Thorlabs ELL14K). The results presented here all use the DAVIS 346 camera because the APS imaging mode enables a comparison between conventional and event-based polarization analysis. The rotating polarizer was mounted about 2 mm in front of the lens (Edmund Optics 67714; 16 mm, f/1.4). This gap allows some light leakage around the polarizer, which is expected to contribute to the bias term $C$. Figure 2 shows the prototype. The camera and rotation stage are mounted to a custom-printed plastic mount.

3.2. Highly Polarized Target

The camera was pointed at a Samsung quantum-dot light-emitting diode (QLED) monitor as an initial test to verify the equations above. The monitor is strongly polarized in the horizontal direction. Figure 3 shows the values reported from a single pixel as the polarizer rotates. The APS video, the event rate, and the event count all show good agreement with the theory above [Eqs. (6), (10), and (11), respectively]. Figure 4 shows the results of using Eq. (8) to compute the Stokes vector for each pixel in the scene.
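Equations (11) and (12) can also be checked with a toy simulation. The sketch below is our own illustration, assuming the idealized threshold-crossing model of Eq. (9); the threshold and signal constants are arbitrary. It generates signed events from a modulated photocurrent and reconstructs the intensity from the cumulative event sum:

```python
import numpy as np

theta_c = 0.1                                  # comparator threshold (assumed)
t = np.linspace(0.0, 1.0, 2000)
I = 1.0 + 0.5 * np.cos(2 * np.pi * 4 * t)      # modulated photocurrent

# Idealized event generation: emit signed events whenever log(I) moves
# a full threshold step away from the level at the last event.
events = np.zeros_like(t)
ref = np.log(I[0])
for k in range(1, len(t)):
    steps = int((np.log(I[k]) - ref) / theta_c)  # whole steps crossed
    if steps != 0:
        events[k] = steps
        ref += steps * theta_c

# Eq. (12): recover intensity from the cumulative signed event sum.
I_hat = I[0] * np.exp(theta_c * np.cumsum(events))
max_err = np.max(np.abs(I_hat - I))
```

The reconstruction tracks the true intensity to within one threshold step in log space, which is the best Eq. (12) can do; on real data, the unknown $I(0)$ and accumulated drift dominate, which is the ambiguity discussed in Sec. 3.4.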
The hue in the Fig. 4 visualization indicates AoP, with the saturation controlled by DoLP, so that white is unpolarized, pale pastels are weakly polarized, and bright colors are strongly polarized.19 We would expect the entire QLED region to be strongly polarized with the same AoP, whereas the wall above the monitor should be unpolarized or weakly polarized. Based on the equations above, all pixels observing the QLED screen should show the same AoP regardless of brightness. Figure 4 shows general agreement with this, although the darker letters are still noticeable in the AoP images. The calculated AoP for the QLED pixels differs by about 0.1 rad between the event data and the APS data. This difference is due in part to the time delay between the first event and the first frame; in this case, the delay was about 17.4 ms, which at the polarizer rotation rate used corresponds to about 0.09 rad of offset in the polarizer angle between the two datasets.

Note that AoP in this data includes an unknown (but constant) offset because we cannot directly measure the polarizer angle $\theta$. We estimate it from the motor speed as $\theta(t) = \omega t + \theta_0$, where $\theta_0$ is a phase offset due to the polarizer angle at time $t = 0$. The phase offset and the incident AoP should be constant for any pixel in the scene, but we cannot differentiate one from the other in this data without some a priori information about the scene. In this example, we know the QLED pixels are horizontally polarized, so we could subtract the mean angle to correct this offset.

3.3. Complex Scene

The QLED screen was helpful for confirming the theory using a source with a high degree of polarization in a known state. The method was also applied to a more complex indoor scene. The scene includes a textbook, a small plant, a mug, a plastic stapler, and a steel tumbler. Figure 5 shows a side-by-side comparison of images computed from the two different data streams, with APS imagery on the left and dynamic vision sensor (DVS) event data on the right.
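The hue/saturation mapping described above can be sketched in a few lines. This is our own minimal single-pixel illustration, not the full low-irradiance-aware mapping of Tyo et al.:19 hue encodes AoP, saturation encodes DoLP, and value encodes intensity (assumed pre-normalized), so a bright unpolarized pixel renders white:

```python
import colorsys
import math

def polarization_to_rgb(s0, s1, s2):
    """Map one linear-Stokes pixel to RGB: hue = AoP, saturation = DoLP,
    value = intensity (assumed pre-normalized to [0, 1])."""
    aop = 0.5 * math.atan2(s2, s1)          # radians, in (-pi/2, pi/2]
    dolp = min(math.hypot(s1, s2) / max(s0, 1e-9), 1.0)
    hue = (aop / math.pi) % 1.0             # one hue cycle per pi rad of AoP
    return colorsys.hsv_to_rgb(hue, dolp, min(max(s0, 0.0), 1.0))

white = polarization_to_rgb(1.0, 0.0, 0.0)  # bright, unpolarized
red = polarization_to_rgb(1.0, 1.0, 0.0)    # bright, horizontally polarized
```

The modulo on the hue handles the wrap of AoP at $\pm\pi/2$, so angles just above and just below the wrap map to nearby colors.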
The polarization images in Fig. 5 show general agreement, but the intensity image recovered from events is poor. This result is not surprising, given the challenge of measuring static scenes with EBCs. As the polarizer rotates, the unpolarized ($S_0$) portion of the signal produces no changes and therefore no events. While Eq. (8) implies we can estimate $S_0$ from the event stream, we are not measuring any information related to $S_0$. This ambiguity is fundamentally tied to the constant of integration in Eq. (11).

The polarization data for Fig. 5 were computed using Eq. (8). Alternatively, the fit coefficients from Eq. (11) could be used as estimates of the phase and magnitude of the oscillation, which can be related to AoP and DoLP, respectively. We could also apply Fourier analysis to find (at each pixel) the magnitude and phase of the oscillations at the polarizer rotation frequency. In experiments so far, we find those methods to be less robust than the matrix inverse.

3.4. Analysis

When using Eq. (8), we found the results are actually more robust and consistent when applying the equation to the cumulative event sum of Eq. (11), which is proportional to log intensity, rather than directly to the intensity estimate from Eq. (12). Because each pixel can generate at most one event per timestep, we estimate the event rate as the inverse of the time between events, which results in a noisy estimate of $r(t)$. The noise can be reduced by increasing the time step and also by counting events in a small neighborhood around each pixel (typically a few pixels across).

In our experiments so far, the value of $S_0$ resulting from Eq. (8) has many negative values, which is an obviously nonphysical result. The negative and near-zero values of $S_0$ then also corrupt estimates of DoLP. To correct this, we shifted the distribution of $S_0$ to be strictly positive by subtracting its minimum and adding a regularization factor $\epsilon$ to avoid division by small, noisy numbers when computing DoLP. In practice, we find other choices of the shift are also effective. The choice of the regularization constant has a large effect on DoLP estimates.
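The shift-and-regularize step can be written compactly. The sketch below is our own illustration (the default $\epsilon$ is an arbitrary placeholder, not a calibrated choice): the $S_0$ channel is shifted so its minimum sits at $\epsilon$, keeping the DoLP denominator strictly positive:

```python
import numpy as np

def dolp_regularized(s0, s1, s2, eps=0.05):
    """DoLP from linear Stokes images, with S0 shifted strictly positive.

    eps sets the smallest allowed denominator; it trades bias for
    robustness against small, noisy S0 estimates."""
    s0_shifted = s0 - s0.min() + eps
    return np.hypot(s1, s2) / s0_shifted

# Event-derived S0 often contains nonphysical negative values:
s0 = np.array([-0.2, 0.1, 0.8])
s1 = np.array([0.0, 0.05, 0.4])
s2 = np.array([0.0, 0.0, 0.0])
d = dolp_regularized(s0, s1, s2)
```

Note that the shift biases every pixel's DoLP, not just the negative ones, which is one reason the choice of $\epsilon$ has such a large effect on the estimates.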
While it is easy to get plausible results by tuning $\epsilon$, we hope to develop a data-driven way to estimate correct values of $\epsilon$. Ultimately, this uncertainty arises from the constant of integration in Eq. (11). Future work will include a set of experiments to see if this constant is repeatable from one collection to the next.

The APS circuit of the DAVIS camera allows us to develop "truth" estimates of the input state to verify the equations presented above. Figure 6 shows one example of this. Equation (10) predicts that the event rate should be proportional to DoLP (through $P$). The DoLP at each pixel was computed by applying Eq. (8) to the APS frames. The vertical axis of Fig. 6 shows a count of events (ignoring polarity) at each pixel as the polarizer rotated at the motor's maximum rotation rate. The fit (blue dashed line) in Fig. 6 is based on summing unsigned events as predicted by Eq. (10), or

$$N(P) = a \sum_i \left|\frac{2 b P \sin 2\phi_i}{1 + b P \cos 2\phi_i}\right| \Delta\theta + c, \tag{13}$$

where $\Delta\theta$ is the angular sample spacing and the fit coefficients $a$, $b$, and $c$ were chosen to minimize the Euclidean distance between Eq. (13) and the median points, with distances weighted by population at each value of $P$. The fit line in Fig. 6 uses the resulting best-fit coefficients.

As with the event data, our image-based DoLP calculations include a regularization term in the denominator. Even though the APS image values are strictly positive, the $S_0$ values computed from Eq. (8) can still be small or negative. As an alternative, we can also estimate DoLP as the contrast in pixel intensity as the polarizer rotates

$$P \approx \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}. \tag{14}$$

The matrix inversion should be more accurate, since it is a least-squared-error solution using all the data, whereas Eq. (14) uses only two values for each pixel. In the current datasets, we find the two estimates are well correlated, but the contrast estimate is much faster to compute.

To summarize, we have four ways to estimate DoLP: two from images and two from events. These are

1. matrix inversion [Eq. (8)] applied to APS frames;
2. intensity contrast [Eq. (14)] applied to APS frames;
3. matrix inversion [Eq. (8)] applied to cumulative event sums; and
4. the unsigned event count at each pixel [Eq. (13)].
Figure 7 compares the results of all four methods applied to the same dataset, and the correlations of the resulting values are compared in Table 1. The first three methods (all except the event-count method) require a regularization term in the denominator to reduce the effects of small, noisy values. For a fair comparison, all three used the same regularization constant $\epsilon$, though there is no a priori reason for that choice.

Table 1. $R^2$ correlation coefficients between DoLP estimation methods.
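The relationship between the contrast estimate of Eq. (14) and the least-squares estimate of Eq. (8) is easy to demonstrate on synthetic data. The sketch below is our own check (the sample count and polarization values are arbitrary): on noise-free samples of Eq. (6), the least-squares DoLP is exact, while the contrast estimate is biased slightly low because the discrete samples straddle the true extremes:

```python
import numpy as np

angles = np.linspace(0, np.pi, 16, endpoint=False)
P_true, psi = 0.4, np.deg2rad(60)
I = 0.5 * (1 + P_true * np.cos(2 * angles - 2 * psi))  # Eq. (6), S0 = 1

# Contrast estimate, Eq. (14): uses only the two extreme samples.
P_contrast = (I.max() - I.min()) / (I.max() + I.min())

# Least-squares estimate, Eq. (8): uses every sample.
A = 0.5 * np.stack([np.ones_like(angles),
                    np.cos(2 * angles),
                    np.sin(2 * angles)], axis=1)
s0, s1, s2 = np.linalg.lstsq(A, I, rcond=None)[0]
P_lstsq = np.hypot(s1, s2) / s0
```

With noise, the comparison reverses in character: the contrast estimate inherits the full noise of its two samples, while the least-squares estimate averages over all of them, which is the accuracy argument made above.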
4. Results and Discussion

The method applied here is effective at recovering polarization information from a scene using an EBC. The DoLP and AoP images are useful for augmenting contrast between objects but are not yet sufficiently accurate for quantitative polarimetry. Future experiments will focus on removing or estimating some of the unknown parameters that limit the accuracy of the current work, such as the value of the threshold $\theta_C$.

The theory presented above suggests the possibility of recovering an accurate intensity ($S_0$) image via Eq. (8), but experimental results have been disappointing. The accuracy of the images reconstructed in this way is poor thus far, which aligns with the conventional wisdom about EBCs.

The Elliptec motor used for these experiments has several advantages (notably high speed and low cost) but imposes some limitations as well. For characterizing and calibrating the instrument, we require more precise angle data. Also, the motor is not intended to be operated continuously, so each rotation is a separate event in terms of the processing and analysis pipeline.

The algorithm has not yet been optimized and does not run in real time. Estimating the DoLP and AoP for a full image currently takes over a minute. As each pixel is treated independently, we expect vectorization and parallelization to speed it up significantly. The slow algorithm obviously detracts from the promise of EBCs to speed up data collection, but the algorithm is very immature at this point. Using the event count at each pixel as a proxy for DoLP is much faster and is more closely correlated to the image-based DoLP calculations. The matrix calculation is still necessary for estimating AoP, which is not affected by the uncertainty in $S_0$.

These initial experiments show that the device has promise for applications where polarization is used to enhance contrast or object detection.
It may be less suited to applications that require precise polarization measurements. Future work will emphasize collecting data in more controlled conditions and over a broader range of conditions to determine which of the findings here are repeatable and robust.

Data, Materials, and Code Availability

The code and data are property of the U.S. Air Force. Contact the authors to arrange access.

References

1. G. Gallego et al., "Event-based vision: a survey," IEEE Trans. Pattern Anal. Mach. Intell. 44, 154–180 (2022). https://doi.org/10.1109/TPAMI.2020.3008413
2. R. W. Baldwin et al., "Time-ordered recent event (TORE) volumes for event cameras," (2021).
3. F. Paredes-Vallés and G. C. H. E. de Croon, "Back to event basics: self-supervised learning of image reconstruction for event cameras via photometric constancy," (2021).
4. C. Brandli et al., "A 240 × 180 130 dB 3 µs latency global shutter spatiotemporal vision sensor," IEEE J. Solid-State Circuits 49(10), 2333–2341 (2014). https://doi.org/10.1109/JSSC.2014.2342715
5. Y. Hu, S.-C. Liu and T. Delbruck, "v2e: from video frames to realistic DVS events," (2020).
6. J. A. Martin and K. C. Gross, "Estimating index of refraction for specular reflectors using passive polarimetric hyperspectral radiance measurements," Opt. Eng. 56(8), 081812 (2017). https://doi.org/10.1117/1.OE.56.8.081812
7. D. B. Chenault et al., "Infrared polarimetric sensing of oil on water," Mar. Technol. Soc. J. 52(6), 13–22 (2018). https://doi.org/10.4031/MTSJ.52.6.8
8. W. Zhang et al., "Review of passive polarimetric dehazing methods," Opt. Eng. 60(3), 030901 (2021). https://doi.org/10.1117/1.OE.60.3.030901
9. W. Du, W. Jia and Z. Zhang, "Optimization of infrared Stokes imaging polarimeter for detection of camouflaged target," Proc. SPIE 11196, 111961W (2019). https://doi.org/10.1117/12.2539378
10. J. R. Schott, Fundamentals of Polarimetric Remote Sensing, SPIE Press, Bellingham, Washington (2009).
11. D. LeMaster, "Fundamental estimation bounds for polarimetric imagery," Opt. Express 16(16), 12018–12036 (2008). https://doi.org/10.1364/OE.16.012018
12. X. Lu et al., "Polarization imaging detection method based on dynamic vision sensor," Proc. SPIE 11763, 117630Z (2021). https://doi.org/10.1117/12.2586190
13. G. Haessig et al., "Bio-inspired polarization event camera," (2021).
14. M. Born and E. Wolf, Principles of Optics, 7th ed., Cambridge University Press, New York (1999).
15. W. H. McMaster, "Matrix representation of polarization," Rev. Mod. Phys. 33, 8–28 (1961). https://doi.org/10.1103/RevModPhys.33.8
16. D. Goldstein, Polarized Light, 3rd ed., CRC Press, Boca Raton, Florida (2011).
17. E. Hecht, Optics, 4th ed., Addison-Wesley, San Francisco (2002).
18. P. Lichtsteiner, C. Posch and T. Delbrück, "A 128 × 128 120 dB 15 µs latency asynchronous temporal contrast vision sensor," IEEE J. Solid-State Circuits 43, 566–576 (2008). https://doi.org/10.1109/JSSC.2007.914337
19. J. S. Tyo, B. M. Ratliff and A. S. Alenin, "Adapting the HSV polarization-color mapping for regions with low irradiance and high polarization," Opt. Lett. 41(20), 4759–4762 (2016). https://doi.org/10.1364/OL.41.004759
Biography

Michael Hawks is a research assistant professor at the Air Force Institute of Technology (AFIT). He received his BS degree in astrophysics from Michigan State University in 1991, his MS degree in engineering physics from AFIT in 1993, and his PhD in optical science from AFIT in 2006. His current research interests include computer vision, computational physics, physics-based modeling, and physics-constrained machine learning.

Michael Dexter is an assistant professor at the Air Force Institute of Technology (AFIT) and the director of AFIT's Center for Technical Intelligence Studies & Research. He received his BS degree in applied physics from the University of Omaha, Nebraska, in 2004, his MS degree in engineering physics from AFIT in 2009, and his PhD in nuclear physics from AFIT in 2015. His current research interests include high energy/density plasma physics, nuclear weapons effects, remote sensing, laser physics, and digital image processing.