Histogram formation and noise reduction in biaxial MEMS-based SPAD light detection and ranging systems
Abstract

In many applications, there is a great demand for reliable, small, and low-cost three-dimensional imaging systems. Light detection and ranging (lidar) systems based on the direct time-of-flight principle are promising candidates for applications such as automotive sensing and safe human robotic collaboration. Especially for covering a large field of view or providing long-range capabilities, the previously used polygon scanners are being replaced by microelectromechanical systems (MEMS)-scanners. A more recent development is the replacement of the typically used avalanche photodiodes with single-photon avalanche diodes (SPADs). The combination of both technologies into an MEMS-based SPAD lidar system promises a significant performance increase and cost reduction compared with other approaches. To distinguish between signal and background/noise photons, SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. In this article, a signal and data processing method is proposed that considers the time-dependent scanning trajectory of the MEMS-scanner during histogram formation. Based on known reconstruction processes used in stereo vision setups, an estimate for an accumulated time-resolved measurement is derived, which allows it to be classified as signal or noise. In addition to the theoretical derivation of the signal and data processing, an implementation is experimentally verified in a proof-of-concept MEMS-based SPAD lidar system.

1. Introduction

For the realization of reliable, small, and low-cost three-dimensional (3D) imaging systems, light detection and ranging (lidar) systems based on the direct time-of-flight (dtof) principle are considered to be one of the most promising technologies. Especially for automotive applications as well as safe human robotic collaboration, many proof-of-concept systems are currently being built and tested extensively. A new trend in scanning lidar systems is the replacement of the bulky and expensive polygon scanners by microelectromechanical systems (MEMS)-scanners. These MEMS-scanners have the advantage that they can be fabricated using standard CMOS processes, and their incorporation offers the opportunity to greatly reduce the overall size and cost of the system. Furthermore, many systems are now testing the replacement of the typically used avalanche photodiodes with single-photon avalanche diodes (SPADs). Major advantages of SPADs are that they can also be fabricated using standard CMOS processes and that they can be integrated into large photodetector matrices, which significantly increases the spatial resolution of the lidar system. Especially in challenging applications that require long-range capabilities or a large field of view (FOV) to be covered, the combination of an MEMS-scanner and a SPAD detector promises a significant performance increase. One of the most recent examples of a proof-of-concept MEMS-based SPAD lidar system was reported by Sony in Ref. 1, with a range of up to 300 m.

Even though MEMS-based SPAD lidar systems are becoming more and more prominent, the authors are not aware of any prior publication that connects the statistical detection process necessary for the working principle of SPAD-based detectors with the time-dependent scanning trajectory of the MEMS-scanner. On the one hand, these systems often utilize a pointwise illumination of the FOV, especially for long-range applications. On the other hand, SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. Since a SPAD cannot distinguish between a signal and a noise/background photon, further distinguishing criteria based on the time-dependent scanning trajectory must be considered during the formation of a histogram.

In Sec. 2, the acquisition statistics of a system utilizing an MEMS-scanner driven in resonance are formulated. Furthermore, the concepts used in the reconstruction process of triangulation-based sensors are briefly summarized, which will be used to derive an analogous concept for MEMS-based SPAD lidar systems.

Based on the results of Sec. 2, Sec. 3 describes a proof-of-concept MEMS-based SPAD lidar system and proposes an implementation of a signal and data processing chain for the formation of a histogram that exploits the biaxial system configuration and utilizes the time-dependent scanning trajectory of the MEMS-scanner to further discriminate signal and background photons.

Section 4 applies the proposed signal and data processing chain to measured values. For the validation, two different experiments with varying lighting conditions are conducted. Section 5 summarizes the results and provides an outlook for further improvements.

2. Model and Method

The following extends the statistical detection process of SPADs to consider the time-dependent scanning trajectory of the MEMS-scanner. For simplicity, the geometry is reduced to a two-dimensional (2D) problem, but the same arguments and correspondences hold for the 3D case. Furthermore, the imaging optic is assumed to be distortion-free. After a brief overview of a reconstruction method used in triangulation-based stereo vision setups, an analogous concept is derived for the detection process of MEMS-scanner systems. This method combines the spatial and timing information of a SPAD with the time-dependent scanning trajectory of the MEMS-scanner. To provide a further distinguishing feature between signal and noise/background photons, this information is checked for consistency.

2.1. Acquisition Statistics Considering the Time-Dependent Scanning Trajectory of MEMS-Scanners

SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. The statistical detection method for SPAD-based detectors is extensively covered in recent publications, see, e.g., Refs. 2 and 3.

In contrast to SPAD-based flash lidar systems, where the number of accumulations per histogram in every pixel is simply given by the laser pulse repetition frequency frep and the frame rate, in a system utilizing a scanning illumination it also depends on the scan trajectory, the FOV, and the required spatial resolution. The definition of and correspondences between the mechanical scan angle θmech(t), an initial rotation angle θ0, and the resulting scan angle θ(t) of an MEMS-scanner are shown in Fig. 1. In addition, the normalized direction of the laser emission dlaser, the normal vector nMEMS of the MEMS-scanner, and the resulting scan direction dscan are given.

Fig. 1

Definition and correspondences between the mechanical scan angle θmech(t), an initial rotation angle θ0, and the resulting scan angle θ(t) of an MEMS-scanner. In addition, the normal vector of the MEMS-scanner nMEMS(θ(t)), the direction of the laser emission dlaser, and the optical scan direction dscan are given.


The determination of the mean number of accumulations for an MEMS-scanner driven in resonance can be considered a sampling problem. The sampling points of the mechanical scan angle θmech, which can be represented in the time domain by a periodic cosinusoidal oscillation, are spaced in time by the reciprocal of the laser pulse repetition frequency frep and may be expressed as

Eq. (1)

$$\theta_{\mathrm{mech},k} = \theta_{\mathrm{mech,max}} \cdot \cos\left(2\pi f_{\mathrm{mech}} \cdot \frac{k}{f_{\mathrm{rep}}} + \varphi_0\right),$$
where k is an integer, φ0 represents an arbitrary constant phase, and θmech,max is the maximum mechanical scan angle. In the case of an electrostatically driven scanner, the mechanical scan angle is a function of the geometry of the scanner, its driving voltage, and its scan frequency fmech. Using the sampled points, a distribution of the number of measurements may be stated or, by weighting it with the number of measurements, a mean number of accumulations per histogram and pixel may be obtained. For a one-dimensional oscillation, a frequency ratio of fmech/frep=0.0785, and 400 consecutive accumulations, both representations are shown exemplarily in Fig. 2.

Fig. 2

Exemplary distribution of the number of measurements considering the scan trajectory of an MEMS-scanner driven in resonance. The distribution is determined for a frequency ratio of fmech/frep=0.0785.

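To make the sampling problem concrete, the following Python sketch (an illustration, not the authors' code) samples the cosinusoidal trajectory of Eq. (1) at the laser pulse rate and bins the resulting angles into angular pixels, reproducing the kind of distribution shown in Fig. 2. The number of angular bins and the zero initial phase are assumptions; the frequency values follow Table 1 and the ratio fmech/frep=0.0785 given above.

```python
import numpy as np

f_mech = 1570.0               # mirror oscillation frequency in Hz (Table 1)
f_rep = f_mech / 0.0785       # pulse repetition frequency from the stated ratio (about 20 kHz)
theta_max = np.deg2rad(11.9)  # maximum mechanical scan angle (Table 1)
n_pulses = 400                # consecutive accumulations, as in Fig. 2
n_pixels = 192                # angular bins; assumed to match the sensor resolution

# Eq. (1) with phi_0 = 0: mechanical scan angle at every laser pulse k
k = np.arange(n_pulses)
theta_k = theta_max * np.cos(2.0 * np.pi * f_mech * k / f_rep)

# Distribution of the number of measurements over the angular pixels; dividing
# by the number of covered scan periods yields a mean number of accumulations
# per histogram and pixel.
counts, _ = np.histogram(theta_k, bins=n_pixels, range=(-theta_max, theta_max))
periods = n_pulses * f_mech / f_rep
print("mean accumulations per pixel and scan period:", counts / periods)
```

As expected for a resonant scanner, the counts pile up near the turning points of the trajectory, where the angular velocity approaches zero.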

2.2. 3D Reconstruction in Triangulation Sensors

The following gives a brief summary of the reconstruction of 3D points using a stereo camera setup. The reconstruction based on a purely geometric solution is used here as an example because of its simplicity and to illustrate the basic concepts for the subsequent discussion. The basic geometric relations and notations required for the geometric solution are shown in Fig. 3. Assume a 3D point x is observed from two different camera locations, whose projection centers O and O′ are separated by the baseline b. This point corresponds to two image points u=P(x) and u′=P′(x) in the image planes, where the correspondence is given by the camera-specific projection matrices P and P′. The line equations l and l′, which are given in Eqs. (2) and (3), can be constructed in 3D space. Their origins are the respective image points, and their direction vectors d and d′ are determined using the respective projection centers. If the epipolar constraint d·(b×d′)=0 is fulfilled, both lines l and l′ intersect in 3D space and an exact solution for the scalars k1 and k2 exists. If the epipolar constraint is violated, the geometric solution solves the system of linear equations given in Eq. (4) and estimates the 3D point x as the mid-point of the shortest line segment that joins both lines l and l′.4 More sophisticated estimates for the point x take into account uncertainties in the imaging process and can be shown to be statistically optimal. Examples of these estimators can be found in Refs. 4 and 5.

Fig. 3

Schematic representation of the setup for the 3D reconstruction process used in triangulation sensors based on the stereo matching principle.


Apart from the estimation of the intersection point, the epipolar constraint can be used to reduce the search space for correspondences in the image pairs from a 2D space to a one-dimensional (1D) space.4

Eq. (2)

$$\mathbf{l} = \mathbf{u} + k_1 \mathbf{d},$$

Eq. (3)

$$\mathbf{l}' = \mathbf{u}' + k_2 \mathbf{d}',$$

Eq. (4)

$$\begin{bmatrix} (\mathbf{u} + k_1\mathbf{d} - \mathbf{u}' - k_2\mathbf{d}') \cdot \mathbf{d} \\ (\mathbf{u} + k_1\mathbf{d} - \mathbf{u}' - k_2\mathbf{d}') \cdot \mathbf{d}' \end{bmatrix} = \mathbf{0}.$$
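As an illustration of the mid-point method, the short Python sketch below solves the 2×2 linear system of Eq. (4) for k1 and k2 and averages the two closest points; the function name and the example rays are hypothetical, and u, d and u_p, d_p denote the unprimed and primed quantities of Eqs. (2) and (3).

```python
import numpy as np

def triangulate_midpoint(u, d, u_p, d_p):
    """Mid-point estimate of the 3D point from the rays l = u + k1*d and
    l' = u_p + k2*d_p by solving Eq. (4) for k1 and k2."""
    # Eq. (4) rearranged into A @ [k1, k2] = b:
    #   (u + k1*d - u' - k2*d') . d  = 0
    #   (u + k1*d - u' - k2*d') . d' = 0
    A = np.array([[d @ d,   -(d_p @ d)],
                  [d @ d_p, -(d_p @ d_p)]])
    b = np.array([(u_p - u) @ d, (u_p - u) @ d_p])
    k1, k2 = np.linalg.solve(A, b)
    # Mid-point of the shortest segment joining both lines
    return 0.5 * ((u + k1 * d) + (u_p + k2 * d_p))

# Example: two skew rays that nearly intersect at (0, 0, 10)
x = triangulate_midpoint(np.array([-0.5, 0.0, 0.0]), np.array([0.05, 0.0, 1.0]),
                         np.array([0.5, 0.0, 0.0]), np.array([-0.05, 0.001, 1.0]))
print(x)  # approximately [0, 0.005, 10]
```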

2.3. 3D Reconstruction in dtof Measurement

In the following, a reconstruction method for MEMS-based scanning lidar systems utilizing the dtof method is outlined. The reconstruction process may be formulated analogously to the reconstruction presented in the previous subsection, where one of the projection centers is replaced by the MEMS-scanner. The top-view of this geometry is shown in Fig. 4.

Fig. 4

Schematic representation of the proof-of-concept lidar system and necessary geometric definitions.


As shown in Fig. 4, the global coordinate frame is fixed to the center of the sensor and the optical axis of the receiving optics coincides with the z axis. The optical axis of the transmitter is defined by the MEMS-scanner at a mechanical scan angle θmech of zero. The rotation angle θ0 of the MEMS-scanner is chosen such that the optical axes of the transmitter and receiver intersect at half the maximum distance, defined here as the working distance W. If the MEMS-scanner and the projection center are separated by a baseline b=[xM0,0,0], the rotation angle θ0 may be expressed as

Eq. (5)

$$\theta_0 = \frac{\pi}{4} - \arctan\left(\frac{x_{M0}}{2W}\right).$$
Using the time-dependent deflection angle θ(t), the normal vector nMEMS=[cos(θ(t)),sin(θ(t))] may be defined. The normalized reflection direction dscan follows from the vector reflection law and is determined as

Eq. (6)

$$\mathbf{d}_{\mathrm{scan}} = \mathbf{d}_{\mathrm{laser}} - 2 \cdot (\mathbf{n}_{\mathrm{MEMS}} \cdot \mathbf{d}_{\mathrm{laser}}) \cdot \mathbf{n}_{\mathrm{MEMS}},$$
where dlaser is the normalized direction of the laser emission.
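The geometry of Eqs. (5) and (6) can be sketched as follows. The relation θ(t)=θ0+θmech(t), the laser propagating along the negative x axis, and the value of the working distance are illustrative assumptions; the baseline offset matches the 12.5 cm of Sec. 3.1.

```python
import numpy as np

x_M0 = 0.125                  # baseline offset in m (Sec. 3.1)
W = 4.0                       # working distance in m (assumed value)
theta_0 = np.pi / 4.0 - np.arctan(x_M0 / (2.0 * W))  # Eq. (5)
theta_max = np.deg2rad(11.9)  # mechanical amplitude (Table 1)

def scan_direction(theta, d_laser=np.array([-1.0, 0.0])):
    """Reflect the laser direction at the mirror deflected by theta, Eq. (6)."""
    n_mems = np.array([np.cos(theta), np.sin(theta)])   # mirror normal n_MEMS
    return d_laser - 2.0 * (n_mems @ d_laser) * n_mems  # vector reflection law

# Instantaneous scan direction, assuming theta(t) = theta_0 + theta_mech(t)
print(scan_direction(theta_0 + np.deg2rad(5.0)))
```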

Using the approximation of an ideal projection with a thin lens, the line l may be expressed analogously to Eq. (2).

In the 1D pointwise scanning case with an active illumination and a single line sensor, the epipolar constraint must be fulfilled and may be stated as dscan·(b×d)=0, with d being the direction vector of the line l. The importance of the epipolar constraint becomes obvious when applying it to the case of a 2D pointwise scanning system with an active illumination and a 2D array detector. Since the current scan angle, and therefore the scan direction dscan, is known, only the pixels fulfilling this constraint must be read out, which greatly reduces the amount of data that needs to be transferred, stored, and processed.

In the absence of background radiation and noise in the sensor, invoking the epipolar constraint would be sufficient to uniquely specify the point x. Since this is usually not the case, a further criterion needs to be specified. A measurement utilizing the dtof method contains timing information for every pixel. Considering the geometry shown in Fig. 4, the time of flight tTOF must be equal to

Eq. (7)

$$c \cdot t_{\mathrm{TOF}} = |\mathbf{x} - \mathbf{x}_{\mathrm{scanner}}| + |\mathbf{O} - \mathbf{x}|,$$
where c is the speed of light through the medium and xscanner is the position of the scanner. Without any prior knowledge of the scene, an analytical solution for the distance Z can be derived for a given pixel position u, a measured time of flight tTOF, and scan angle θ. For the 2D geometry, this solution is given in Eq. (8). Combining the distance Z with the imaging equation, given in Eq. (9), the 2D point x=[X,Z] can be determined:

Eq. (8)

$$Z = \frac{\left[(c \cdot t_{\mathrm{TOF}})^2 - x_M^2\right] \cdot \cos\theta \cdot \left(c \cdot t_{\mathrm{TOF}} + x_M \cdot \sin\theta\right)}{2\left[(c \cdot t_{\mathrm{TOF}})^2 - (x_M \cdot \sin\theta)^2\right]},$$

Eq. (9)

$$X = \frac{u_x \cdot (f - Z)}{f}.$$
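For illustration, the sketch below computes (X,Z) from a measured time of flight, scan angle, and pixel coordinate. Instead of the expanded form of Eq. (8), it uses an algebraically simpler expression for Z obtained by inserting the scan ray X=xM−Z·tanθ into Eq. (7) and solving; the sign conventions for θ follow the geometry of Fig. 4 as interpreted here and are assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s (propagation in air/vacuum assumed)

def reconstruct_point(t_tof, theta, u_x, x_M=0.125, f=8e-3):
    """Return (X, Z) for time of flight t_tof, scan angle theta, and pixel
    coordinate u_x in meters on the sensor; x_M and f follow Sec. 3."""
    ct = C * t_tof
    # Closed form for Z from Eq. (7) combined with the scan ray geometry
    Z = np.cos(theta) * (ct**2 - x_M**2) / (2.0 * (ct - x_M * np.sin(theta)))
    X = u_x * (f - Z) / f  # imaging equation, Eq. (9)
    return X, Z

# Example: a return after 20 ns at a scan angle of 1 deg
print(reconstruct_point(20e-9, np.deg2rad(1.0), u_x=-1e-4))  # Z is about 3 m
```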

2.4. Spatial Uncertainties in the dtof Measurement

The spatial uncertainty of a dtof measurement arises from both spatial and temporal contributions. Spatial contributions stem from the uncertainty involved in determining the current scan angle and from the finite pixel size. Temporal contributions stem from the pulse-to-pulse timing jitter between subsequent laser pulse emissions and from the minimum resolvable time of the detector's time-to-digital converter. The current scan angle is monitored with a sampling frequency much higher than the oscillation frequency; therefore, this uncertainty is neglected in the following. In addition, we consider the case where the pulse-to-pulse timing jitter between the laser pulse emissions is smaller than the minimum resolvable time, so it is also neglected.

The finite pixel size gives rise to a spatial uncertainty in the x direction of the received photon. Under the assumption of a laterally uniform pixel response, this yields a uniform distribution over the active pixel area. The mean μpix is equal to the center of the pixel and can be expressed, for the sensor considered later, as

Eq. (10)

$$\mu_{\mathrm{pix}} = (96.5 - u) \cdot w_{\mathrm{pix}},$$
where u is the pixel coordinate and wpix is the spacing between two pixels as given schematically in Fig. 6. Its variance σpix2 is given as

Eq. (11)

$$\sigma_{\mathrm{pix}}^2 = \frac{d_{\mathrm{SPAD}}^2}{12},$$
where dSPAD is the diameter of an SPAD. Usually, this assumption is not valid for conventional photodiodes,6 but for SPADs the uniformity of the response to photons impinging at different positions in the active area is a key parameter. Through careful design of the device, a uniform pixel response, in terms of the photon detection efficiency and the detection delay, may be achieved.7

The minimum resolvable time represented by the bin width tbin gives rise to a spatial uncertainty in the z direction of the received photon. The discretized time of flight tTOF is equal to the bin number Nbin multiplied by the minimum resolvable time tbin of the time-to-digital converter. This discretization causes a quantization error. In a time-to-digital converter, and with only minor assumptions about the underlying statistics of the photon detection, the time of arrival in a bin is uniformly distributed. A necessary and sufficient condition for this may be found in Ref. 8, and its application to a commonly used time-to-digital converter architecture may be found in Ref. 9. Therefore, the mean μTOF is the center of the bin given as

Eq. (12)

$$\mu_{\mathrm{TOF}} = \left(N_{\mathrm{bin}} - \frac{1}{2}\right) \cdot t_{\mathrm{bin}},$$
and its variance σTOF2 may be expressed as

Eq. (13)

$$\sigma_{\mathrm{TOF}}^2 = \frac{t_{\mathrm{bin}}^2}{12}.$$

In the following, the first-order second-moment method is used to propagate the uncertainties of the measured pixel coordinate u and its time of flight tTOF from the image into the object space. To achieve this, the point x=[X,Z] is first expressed in polar coordinates using the known correspondences. The mean of the 2D point in (r,φ) space is then

Eq. (14)

$$\mu_r = \sqrt{X(\mu_{\mathrm{pix}}, \mu_{\mathrm{TOF}})^2 + Z(\mu_{\mathrm{TOF}})^2},$$
and

Eq. (15)

$$\varphi_{\mathrm{TOF}} = \operatorname{atan2}\left[Z(\mu_{\mathrm{TOF}}),\, X(\mu_{\mathrm{pix}}, \mu_{\mathrm{TOF}})\right],$$
where atan2 is the two-argument arctangent function. Under the assumption of negligible covariance between μpix and μTOF, which is the case if the pixel response is assumed to be uniform in terms of the photon detection efficiency and the detection delay, applying the first-order second-moment method to the point in (r,φ) space results in an uncertainty in the r direction of

Eq. (16)

$$\sigma_r^2 = \left(\frac{\mathrm{d}r}{\mathrm{d}u}\right)^2 \cdot \sigma_{\mathrm{pix}}^2 + \left(\frac{\mathrm{d}r}{\mathrm{d}t_{\mathrm{TOF}}}\right)^2 \cdot \sigma_{\mathrm{TOF}}^2,$$
and in the φ direction of

Eq. (17)

$$\sigma_\varphi^2 = \left(\frac{\mathrm{d}\varphi_{\mathrm{TOF}}}{\mathrm{d}u}\right)^2 \cdot \sigma_{\mathrm{pix}}^2 + \left(\frac{\mathrm{d}\varphi_{\mathrm{TOF}}}{\mathrm{d}t_{\mathrm{TOF}}}\right)^2 \cdot \sigma_{\mathrm{TOF}}^2.$$
As an example, resulting uncertainty bounds in Cartesian coordinates for a bin width tbin of 312.5 ps, different pixel numbers u and distances Z are shown in Fig. 5.

Fig. 5

Exemplary uncertainty bounds in Cartesian coordinates. In (a), bounds are shown for different pixel numbers u and bin numbers Nbin. The uncertainty bound for pixel number u=120 and bin number Nbin=16 is shown in (b).

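The propagation of Eqs. (10) to (17) may be sketched numerically as follows. The forward-difference derivatives and the conversion of the pixel variance into pixel-index units are implementation choices, and reconstruct_point() refers to the sketch from Sec. 2.3.

```python
import numpy as np

w_pix, d_spad, t_bin = 40.56e-6, 12e-6, 312.5e-12  # sensor parameters (Table 1)

def polar(u, n_bin, theta):
    """Mean point in (r, phi) space for pixel u and bin n_bin, Eqs. (10)-(15)."""
    u_x = (96.5 - u) * w_pix        # Eq. (10): pixel center on the sensor
    t_tof = (n_bin - 0.5) * t_bin   # Eq. (12): bin center
    X, Z = reconstruct_point(t_tof, theta, u_x)
    return np.hypot(X, Z), np.arctan2(Z, X)

def propagate(u, n_bin, theta, eps=1e-3):
    """First-order second-moment propagation, Eqs. (16) and (17), using
    forward differences for the derivatives."""
    var_u = (d_spad**2 / 12.0) / w_pix**2  # Eq. (11), expressed in pixel units
    var_tof = t_bin**2 / 12.0              # Eq. (13)
    r0, p0 = polar(u, n_bin, theta)
    ru, pu = polar(u + eps, n_bin, theta)
    rt, pt = polar(u, n_bin + eps, theta)
    dr_du, dp_du = (ru - r0) / eps, (pu - p0) / eps
    dr_dt, dp_dt = (rt - r0) / (eps * t_bin), (pt - p0) / (eps * t_bin)
    var_r = dr_du**2 * var_u + dr_dt**2 * var_tof  # Eq. (16)
    var_p = dp_du**2 * var_u + dp_dt**2 * var_tof  # Eq. (17)
    return (r0, p0), (np.sqrt(var_r), np.sqrt(var_p))

# Uncertainty for the example of Fig. 5(b): pixel u = 120, bin N_bin = 16
print(propagate(u=120, n_bin=16, theta=np.deg2rad(1.0)))
```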

3. System Description and Implementation of the Signal and Data Processing

After a brief description of a proof-of-concept MEMS-based SPAD lidar system, an implementation of the signal and data processing chain based on the results of the previous section follows.

3.1. Sensor and System Description

The sensor used here is the SPADEye2 from the Fraunhofer Institute for Microelectronic Circuits and Systems.10 It is a 2×192 pixel SPAD-based dtof line sensor, where only one of these lines is actively illuminated. For timing measurements, a time-to-digital converter with a resolution of tbin=312.5 ps and a full range of 1.28 μs, which corresponds to a total dtof detection range of 192 m, is implemented in each pixel. As schematically depicted in Fig. 6, each pixel consists of four vertically arranged SPADs with a diameter dSPAD of 12 μm. The height hpix of a pixel is 209.6 μm and its width wpix is 40.56 μm.11 The coordinate system for the following discussion is fixed to the center of the active sensor area. The relation between the Cartesian coordinates (x,y) and the pixel units (u,v) is given as

Eq. (18)

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} (96.5 - u) \cdot w_{\mathrm{pix}} \\ 0 \end{pmatrix},$$
where u is the pixel number in the range of 1 to 192. Since only the line in the center of the sensor is actively illuminated here, the y- or v-component of the vector is zero.

Fig. 6

Schematic representation of the SPAD-based dtof line sensor, its dimensions, and the placement of the origin of the reference coordinate frame.


To illuminate the scene, a collimated laser beam is deflected in the horizontal direction by a single-axis resonant MEMS-scanner with an electrostatic drive. The scanner is the Fovea3D sending mirror; it and its driving and monitoring electronics SiMEDri are fabricated at the Fraunhofer Institute for Photonic Microsystems.12,13 The laser source is a pulsed laser diode with a center wavelength of 659 nm, an optical peak power of 80 mW, and a temporal pulse width of 15 ns. Although this center wavelength is not commonly used in lidar applications, it greatly simplifies the alignment procedure, and the discussion is valid for arbitrary optical wavelengths. The overall lidar system is a biaxial arrangement, and the distance between the center of the sensor and the center of the MEMS-scanner is 12.5 cm. A list summarizing the component parameters is given in Table 1.

Table 1

Summary of the components and their parameters used in the proof-of-concept system.

| Component | Parameter | Symbol | Value |
| --- | --- | --- | --- |
| Laser source | Pulse repetition frequency | frep | 20 kHz |
| | Wavelength | λ | 659 nm |
| | Peak optical power (pulsed) | | 80 mW |
| | Temporal pulse width | | 14.9 ns |
| Detector SPADEye2 | Pixels | | 2 × 192 pixels |
| | SPAD diameter | dSPAD | 12 μm |
| | Pixel width | wpix | 40.56 μm |
| | Pixel height | hpix | 209.6 μm |
| | Bin width | tbin | 312.5 ps |
| | TDC full range | | 1.28 μs |
| MEMS-scanner Fovea3D sending mirror and SiMEDri driving electronics | Mirror aperture | | 3.3 mm × 3.6 mm |
| | Oscillation frequency | fmech | 1570 Hz |
| | Mech. torsion amplitude | θmech,max | 11.9° |
| | Reflectivity at 659 nm | | 81.25% |
| Receiver optics | Focal length | f | 8 mm |
| | f-number | | 2 |
| | Optical bandpass filter (FWHM) | | 10 nm |

3.2. Signal and Data Processing

A block diagram of the proposed signal and data processing chain is shown in Fig. 7. For every measurement, the sensor outputs a vector Nbin that contains the measured bin values of every pixel and a measurement timestamp Tmeas, which is used to determine the current scan angle θ. The zero crossings of the mechanical scan angle θmech are monitored using a piezoresistive sensor placed on the torsional bar of the scanner; a vector ZCmech containing these zero crossings is used for the determination of the current scan angle θ. As outlined in Sec. 2.4, the determined scan angle θ and the vector of measured bin values Nbin are used to estimate the mean distances μr and angles μφ as well as their respective uncertainties σr and σφ for every pixel. These estimates are further tested for consistency using the estimated distance Z, the known position of the scanner xscanner, and its current scan angle θ. If the determined point lies within the uncertainty bound, the measured value is classified as signal S. Otherwise, the measured value is stored in a vector N and labeled as noise, which may further be used, for example, to estimate the background radiation impinging on the sensor.

Fig. 7

Block diagram showing the signal and data processing chain.

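A minimal sketch of this chain is given below. The reconstruction of the scan angle from the last zero crossing (assuming an ideal sinusoid at fmech and θ(t)=θ0+θmech(t)) and the 3σ consistency gate are illustrative assumptions, not values from the article; polar(), propagate(), and the parameters are taken from the earlier sketches.

```python
import numpy as np

def scan_angle(t_meas, t_zero_cross):
    """Scan angle theta from the measurement timestamp T_meas and the last
    rising zero crossing of theta_mech (ideal sinusoid assumed)."""
    return theta_0 + theta_max * np.sin(2.0 * np.pi * f_mech * (t_meas - t_zero_cross))

def classify(u, n_bin, theta, x_M=0.125, n_sigma=3.0):
    """True (signal) if the point reconstructed from pixel u and bin n_bin
    lies on the current scan ray within the propagated uncertainty."""
    (r, phi), (sig_r, sig_phi) = propagate(u, n_bin, theta)
    X, Z = r * np.cos(phi), r * np.sin(phi)
    X_ray = x_M - Z * np.tan(theta)  # lateral position expected on the scan ray
    sig_x = np.hypot(np.cos(phi) * sig_r, r * np.sin(phi) * sig_phi)
    return abs(X - X_ray) <= n_sigma * sig_x

# One readout: sort every pixel's bin value into the signal histogram S or
# the noise store N; random bins serve as a stand-in for the sensor output.
S, N = {}, {}
theta = scan_angle(t_meas=1e-4, t_zero_cross=0.0)
measured_bins = np.random.randint(1, 257, size=192)
for u, n_bin in enumerate(measured_bins, start=1):
    target = S if classify(u, int(n_bin), theta) else N
    target.setdefault(u, []).append(int(n_bin))
```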

4. Experimental Verification

To verify the signal and data processing, different experiments were conducted, which are described in the following. First, a proof-of-concept measurement with room lighting as background radiation was conducted. The scene, with annotated distances to the objects and the lighting conditions, is shown in Fig. 8(a). For better visibility in the picture, the laser source was set to a constant optical output power of 10 mW. The raw measurement data of 800 consecutive accumulations are shown in Fig. 8(b); for better visibility, only the first 256 bins are displayed. As expected under room lighting conditions, the three different objects are clearly visible even without any further processing or classification. Applying the signal and data processing as outlined in Sec. 3 yields the histogram of the measured values classified as signal S shown in Fig. 8(c).

Fig. 8

(a) Scene and lighting conditions, (b) raw measurement data of 800 consecutive accumulations, (c) measurement data classified as signal, and (d) measurement data classified as noise.


The measurements labeled as noise N are shown in Fig. 8(d). Comparing Figs. 8(b) and 8(c), it can be seen that most of the noise is removed and only some spurious outliers that randomly satisfy the conditions of the uncertainty bounds remain. Furthermore, the total number of counts in the raw measurement data is reduced from 2737 to 879, which corresponds closely to the number of accumulations, while the peak counts, for example those encountered in pixels 14 and 81, are not altered.

A second measurement was conducted with the same scene as shown in Fig. 8(a), but additionally a 1-kW halogen floodlight was used to artificially increase the background radiation. Using a maximum likelihood estimator, the average background rate generated by the additional illumination was estimated to be around 6 MHz per pixel (for comparison, without the additional illumination a background rate of about 100 kHz per pixel was estimated). For better visibility, 3200 consecutive accumulations were used; the raw measurement data are shown in Fig. 9(a). As can be seen, the scene is barely visible and is dominated by the background noise. Figure 9(b) shows the histogram of the measured values classified as signal S. Before further processing both histograms with a simple peak detector, the same moving average filter and threshold were applied to both. The resulting filtered outputs are shown in Figs. 9(c) and 9(d); the histogram processed using the data processing outlined in Sec. 3 closely resembles the measurement without additional background radiation shown in Fig. 8(c).
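The filtering step mentioned above may be sketched as follows; the window length and the threshold value are illustrative, as the article does not specify them.

```python
import numpy as np

def detect_peaks(hist, window=5, threshold=4.0):
    """Moving average along the bin axis, fixed threshold, and an argmax
    peak detector per pixel; returns the peak bin or -1 if below threshold."""
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda h: np.convolve(h, kernel, mode="same"), 1, hist.astype(float))
    peak_bin = smoothed.argmax(axis=1)
    return np.where(smoothed.max(axis=1) >= threshold, peak_bin, -1)

# Example: 192 pixels x 256 bins with a synthetic return in pixel 0, bin 40
hist = np.random.poisson(1.0, size=(192, 256))
hist[0, 40] += 30
print(detect_peaks(hist)[0])  # expected output: 40
```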

Fig. 9

Same scene as before, but with a 1-kW halogen floodlight used to artificially increase the background illumination. (a) Raw measurement data of 3200 consecutive accumulations. (b) Measurement data classified as signal. (c) Thresholding and peak detection applied to (a). (d) Thresholding and peak detection applied to (b).


5. Conclusion and Outlook

In this article, an extension of the statistical detection process of SPAD-based detectors that considers the time-dependent scanning trajectory of MEMS-scanners was derived. Based on this, a signal and data processing strategy was presented that uses principles known from the 3D reconstruction process of triangulation sensors to distinguish between signal and noise/background photons. The signal and data processing strategy was implemented in a proof-of-concept MEMS-based SPAD lidar system, and its functionality was verified experimentally. Furthermore, it was shown that the influence of strong background radiation is largely attenuated, and an evaluation of the data was still possible.

Utilizing an SPAD-based 2D array detector and an MEMS-scanner with a 2D scan trajectory, the presented method may be easily extended to distinguish between signal and noise/background photons in the 3D case. As briefly mentioned in Sec. 2.3, by checking the epipolar constraint, the amount of data to be transmitted, stored, and processed can be greatly reduced.

Acknowledgment

The authors declare no conflict of interest.

References

1. O. Kumagai et al., "7.3 A 189 × 600 back-illuminated stacked SPAD direct time-of-flight depth sensor for automotive LiDAR systems," in IEEE Int. Solid-State Circuits Conf. (ISSCC), 110–112 (2021). https://doi.org/10.1109/ISSCC42613.2021.9365961

2. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics, 2nd ed., John Wiley & Sons, Chichester (2013).

3. M. Fox, Quantum Optics: An Introduction, Oxford University Press, Oxford and New York (2007).

4. W. Förstner and B. Wrobel, Photogrammetric Computer Vision: Statistics, Geometry, Orientation, and Reconstruction, Springer, Cham (2016).

5. R. I. Hartley and P. Sturm, "Triangulation," Comput. Vision Image Understanding 68(2), 146–157 (1997). https://doi.org/10.1006/cviu.1997.0547

6. P. C. D. Hobbs, Building Electro-Optical Systems: Making It All Work, 1st ed., Wiley-Interscience (2000).

7. I. Prochazka et al., "Photon counting timing uniformity: unique feature of the silicon avalanche photodiodes K14," J. Mod. Opt. 54(2–3), 141–149 (2007). https://doi.org/10.1080/09500340600791814

8. A. Sripad and D. Snyder, "A necessary and sufficient condition for quantization errors to be uniform and white," IEEE Trans. Acoust. Speech Signal Process. 25(5), 442–448 (1977). https://doi.org/10.1109/TASSP.1977.1162977

9. T. Maeda and T. Tokairin, "Analytical expression of quantization noise in time-to-digital converter based on the Fourier series analysis," IEEE Trans. Circuits Syst. I: Regul. Pap. 57(7), 1538–1548 (2010). https://doi.org/10.1109/TCSI.2009.2035411

11. M. Beer et al., "SPAD-based flash LiDAR sensor with high ambient light rejection for automotive applications," Proc. SPIE 10540, 105402G (2018). https://doi.org/10.1117/12.2286879

12. T. Sandner et al., "Hybrid assembled micro scanner array with large aperture and their system integration for a 3D ToF laser camera," Proc. SPIE 9375, 937505 (2015). https://doi.org/10.1117/12.2076440

13. Fraunhofer IPMS, "Driving electronics for the evaluation of 1D and 2D resonant MEMS scanners," (2017). https://www.ipms.fraunhofer.de/content/dam/ipms/common/products/AMS/simedri-e.pdf

Biography

Roman Burkard received his BSc and MSc degrees in electrical engineering and information technology from the University of Duisburg-Essen in 2016 and 2018, respectively, where he is currently working toward his PhD in electrical engineering at the Chair of Electronic Components and Circuits. His research focuses on the development of light detection and ranging systems and on the challenges posed by the combination of single-photon avalanche diodes and a scanning illumination.

Manuel Ligges received his diploma and doctorate in physics from the University of Duisburg-Essen. Until 2019, he worked as a research assistant and assistant professor in the field of solid-state physics. Currently, he leads the group of optical systems at the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) in Duisburg.

Thilo Sandner studied electrical engineering at the Technical University of Dresden, Germany, where he received his doctorate in 2003. Since 2003, he has been working as a scientist at the Fraunhofer IPMS, where he headed the R&D group for MEMS scanning mirrors for more than 10 years. Currently, he works as a project manager and key researcher for the development of innovative MOEMS components, system design, and new applications of photonic microsystems such as MEMS-based LiDAR.

Reinhard Viga received his diploma degree in electrical engineering and the Dr.-Ing. degree from Gerhard Mercator University of Duisburg in 1990 and 2003, respectively. Since 1990, he has been with the Chair of Electromechanical System Design, working on medical sensor system topologies and application aspects. Currently, he is the group manager in the Chair of Electronic Components and Circuits of the University of Duisburg-Essen. Besides sensor technology, his research interests cover the design of embedded systems for medical diagnostics and medical image processing.

Anton Grabmaier studied physics at the University of Stuttgart and specialized in semiconductor physics and measurement technology. His dissertation was focused on laser diodes. Since 2006, he has been a professor at the University of Duisburg-Essen and is working as the director of the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) in Duisburg.

André Merten: Biography is not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Roman Burkard, Manuel Ligges, André Merten, Thilo Sandner, Reinhard Viga, and Anton Grabmaier "Histogram formation and noise reduction in biaxial MEMS-based SPAD light detection and ranging systems," Journal of Optical Microsystems 2(1), 011005 (31 January 2022). https://doi.org/10.1117/1.JOM.2.1.011005
Received: 29 July 2021; Accepted: 6 January 2022; Published: 31 January 2022
KEYWORDS: Sensors, LIDAR, Signal processing, Data processing, Denoising, Photons, Microelectromechanical systems
