Proceedings Article | 27 April 2020
KEYWORDS: Cameras, Optical filters, Matrices, CMOS sensors, Imaging systems, System integration, RGB color model, Image sensors, Image acquisition, Sensors
The combination of pixel-assigned spectral filter matrices and standardized CMOS sensors enables the production and application of miniaturized spatially and spectrally resolving sensors which, when used in snapshot-mosaic cameras, represent an innovative alternative to whiskbroom, staring or pushbroom cameras. These cameras are characterized by CMOS sensors on which a matrix of spectral filters is applied and repeated in the x- and y-direction across the entire sensor surface. Multispectral filter-on-chip snapshot-mosaic cameras currently available on the market work with 1.3-, 2.0- or 4.0-megapixel CMOS sensors equipped with 4, 9, 16 or 25 different spectrally selective filters in the visible (VIS) or near-infrared (NIR) spectral range. The combination of pixel-assigned spectral filter matrices and CMOS sensors increases the integration density and system complexity of multispectral snapshot-mosaic cameras many times over compared to established cameras with monochromatic or RGB Bayer-pattern image sensors. For an objective comparison of multispectral snapshot-mosaic cameras, it is necessary to describe their pixel-related, wavelength-dependent image acquisition channels with suitable parameters. In particular, the method for determining spectral sensitivity curves in accordance with the EMVA 1288 standard is shown and explained. This method is applied to two kinds of snapshot-mosaic cameras, referred to as monolithic and hybrid, and is extended by multiple measurements, comparisons and evaluations of spectral sensitivity curves from different areas of the sensor. The paper provides a systematic presentation of how to measure the spectral sensitivity curves of different multispectral cameras, how to compare the measured results and how to evaluate them in order to choose the more appropriate camera for a desired application.

The EMVA 1288 standard, developed by camera manufacturers and research institutes, distinguishes itself from other standards by considering the camera as a linear model. The camera is treated as a black box of which only the pixel size and the exposure time need to be known. The recording of standardized test images is also omitted, so the camera can be characterized without optics. The only input variable of the linear camera model of the EMVA 1288 standard is the number of photons that hit a pixel of the image sensor during the exposure time. Therefore, the correct determination of the photon count is essential for calculating important camera parameters from the linear camera model, such as quantum efficiency or signal-to-noise ratio. To determine the number of photons, the irradiance of the radiation incident on the image sensor must be measured; this is usually accomplished with a radiometer in place of the camera. The number of photons per pixel during the exposure time can then be calculated from the irradiance, the wavelength of the incident radiation, the pixel area and the exposure time of the camera.
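As a minimal sketch of this last step, the following Python snippet computes the mean photon count per pixel from a monochromatic irradiance measurement, and derives the quantum efficiency from it using the linear relation of the EMVA 1288 model. The function and variable names (and the example values) are illustrative assumptions, not taken from the paper.

```python
# Sketch of the EMVA 1288 photon-count and quantum-efficiency calculation,
# assuming monochromatic irradiance measured at the sensor plane with a
# radiometer. All names and example numbers are illustrative.

H = 6.62607015e-34   # Planck constant [J*s]
C = 299792458.0      # speed of light in vacuum [m/s]

def photons_per_pixel(irradiance_w_m2, wavelength_m, pixel_area_m2, exposure_s):
    """Mean number of photons mu_p hitting one pixel during the exposure time.

    mu_p = E * A * t_exp / (h * c / lambda)
    """
    photon_energy = H * C / wavelength_m                            # one photon [J]
    radiant_energy = irradiance_w_m2 * pixel_area_m2 * exposure_s   # per pixel [J]
    return radiant_energy / photon_energy

def quantum_efficiency(mean_gray, mean_gray_dark, gain_dn_per_e, mu_p):
    """Quantum efficiency eta from the linear camera model:
    mu_y - mu_y.dark = K * eta * mu_p  ->  eta = (mu_y - mu_y.dark) / (K * mu_p)
    """
    return (mean_gray - mean_gray_dark) / (gain_dn_per_e * mu_p)

# Example: 0.1 W/m^2 at 550 nm, 5.5 um square pixel, 10 ms exposure (assumed values)
mu_p = photons_per_pixel(0.1, 550e-9, (5.5e-6) ** 2, 10e-3)
eta = quantum_efficiency(mean_gray=620.0, mean_gray_dark=100.0,
                         gain_dn_per_e=0.12, mu_p=mu_p)
print(f"mu_p = {mu_p:.0f} photons/pixel, eta = {eta:.2f}")
```

For a per-channel characterization of a snapshot-mosaic camera, such a calculation would be repeated for each filter position of the mosaic and for each wavelength of the monochromatic illumination, yielding one spectral sensitivity curve per acquisition channel.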