Open Access
17 November 2022

Methods of visual analysis in the design of the stray light protection of optical devices
Abstract

We provide an analysis of the existing methods for calculating, analyzing, and visualizing parasitic illumination. To search for and analyze the sources of stray light in optical devices, a software model of the ray propagation criterion in an optical device is proposed, which makes it possible to select rays that satisfy a given criterion. We consider the possibility of using the method of bidirectional progressive stochastic ray tracing with backward photon maps to calculate the stray illumination of an image and analyze the causes of its occurrence. To analyze the sources of stray illumination on the radiation detector, it is proposed to use progressive photon maps, which calculate the caustic and secondary components of the illumination and record the image illumination on a static regular mesh attached to the surfaces of the optical device. Special visualization tools allow the display of this illumination over the image of an optical device and determination of the elements of the device that have the greatest impact on the level of stray illumination of the image. We present examples of calculation, analysis, and visualization of parasitic illumination for a number of lens optical systems with real mechanical structures.

1. Introduction

Designing systems for protecting optical devices from background radiation, or stray light falling on the image formed by an optical device, is one of the most important tasks in the design of image-forming optical devices. The components of such a design are computer simulation of stray light propagation in an optical device and methods for visualizing the sources of stray light that form stray illumination in the image.

To solve these problems, computational optics tools based on nondeterministic ray tracing methods are used. In this case, diffraction phenomena, as a rule, are considered as a factor that determines the image quality rather than as a contribution to stray illumination caused by diffractive scattering. A new approach to computer simulation of stray light propagation in optical devices is the use of realistic rendering methods, which make it possible to synthesize and visualize images formed by optically complex scenes containing a large number of objects. These rendering methods ensure not only the realism of the generated image but also the physical correctness of the computer simulation results.

There are many approaches that optimally solve the rendering problem for specific types of scenes. A number of solutions are optimized for calculating the indirect (diffuse) lighting component; others are focused on calculating the caustic lighting component; still others are optimized for minimal use of computing resources, etc. Image flare calculation can be considered as one of the tasks of realistic rendering in which the stray flare image (the distribution of stray light illumination over the image surface) is the target of rendering. In this case, the image is formed both by the directly visible luminance of stray radiation objects (for example, ghosts or glare in lens systems, or direct illumination in catadioptric systems) and by the luminance of light scattered on the elements of an optical device (on diaphragms, lens frames, nonworking surfaces of lenses, lens defects, and inside lens materials, for example, due to fluorescence). All these effects can be physically correctly simulated with modern programs for realistic rendering.

Another issue is the visualization of the stray light sources leading to the appearance of parasitic illumination of the image. The rendering software systems usually do not pay much attention to this point because their main task is to synthesize a realistic image.

Within the framework of this study, an attempt was made to combine algorithmic and software solutions of computational optics and realistic rendering to solve the problem of computer simulation of stray light propagation in an optical device and visual representation of its effect on the level of background illumination of an image. For synthesizing the background illumination image, two main methods are used: the forward stochastic ray tracing method1 and the bidirectional stochastic ray tracing method based on progressive photon maps.2,3 Both methods make it possible to form an image from stray light sources in an optical device physically correctly and to visualize possible sources of stray light on the elements of an optical device. Each of these methods has its own advantages and disadvantages.

The method of forward stochastic ray tracing makes it possible to efficiently form an image caused by direct illumination or glare in the optical system. In addition, the visualization of the paths of forward rays that correspond to a given cause of stray light makes it possible to find sources of light scattering and evaluate their effect on the level of illumination at the radiation receiver. However, forward stochastic ray tracing methods are inefficient for estimating the stray light caused by single or multiple diffuse scattering on the elements of an optical device.

In this case, the most suitable solution is the bidirectional ray tracing method using photon maps. This method makes it possible not only to calculate the distribution of stray radiation on the image surface efficiently and physically correctly but also to visualize the sources of this radiation. To analyze the causes of stray light, it is proposed to visualize the illumination maps of the optical device elements that form the background illumination in the image. In this method, only those traces of ray paths that have formed stray illumination on the receiver are visualized. Moreover, it is not the illumination on the elements of the optical device that is accumulated, but the illumination formed on the image by the given act of light scattering. Because this value represents illuminance, it does not depend on the observation conditions, and the map can be visualized at any camera position. The visual brightness of this map indicates the influence of a scene element on the level of stray illumination of the image and allows the designer of the light protection device to pay special attention to the corresponding bright areas in the image of the optical device.

2. Related Works

As noted in Sec. 1, one of the most important points in the design of an optical system is its protection from stray radiation. Leaving aside the effects of diffraction scattering, the most appropriate solutions for computer simulation of the background illumination that occurs at the image sensor are radiation transfer methods based on stochastic ray tracing.4

The simplest and most effective solutions are the methods of unidirectional (forward or backward)1,5 ray tracing, which solve the radiation transfer equations or calculate the radiance using Monte Carlo integration methods6 based on algorithms for generating pseudorandom variables. Unidirectional Monte Carlo ray tracing methods are well known7 and are a simple and reliable way of computer simulation of stray light propagation in an optical device and calculation of stray illumination in the image plane. In the case of forward ray tracing, rays are randomly emitted by light sources, as a rule, in accordance with the probability density corresponding to the spatial distribution density of radiation intensity. When a ray hits a surface of the optical device, it is transformed: a new direction is determined at the surface point, as a rule, by the importance sampling method8 in accordance with the bidirectional scattering distribution function (BSDF),9,10 and this direction forms the further trajectory of the ray. If the surface is a receiver of radiation, then the energy carried by the ray contributes to the image. If the radiation source has a large spatial and angular size, then a more efficient solution is ray tracing from the radiation receiver toward the light source (backward ray tracing). Backward ray tracing by the Monte Carlo method is performed similarly to forward ray tracing, with the only difference that, when transforming rays on a surface, it is necessary to observe the condition of reversibility of ray traces.11 When the ray hits the surface of the radiation source, its luminance is transformed into illuminance at the receiver. In Ref. 7, a method for accelerating the calculation is proposed, which consists in the forced direction of the scattered rays toward the radiation receiver.
This solution is achieved by creating a virtual image of the radiation receiver, which determines the significance function for calculating the direction of the rays scattered on the surface of the optical device. Such creation of virtual images must be performed for all device surfaces for which it is necessary to analyze the effect of scattered light on image quality. Although this method improves computational efficiency, it has a number of disadvantages, particularly the lack of automatic selection and imaging of virtual detectors. Another disadvantage is that a physically correct analysis of the level of scattered light is difficult because the construction of the probability density function of light scattering in the direction of a virtual detector is a nontrivial task that requires significant computational resources, and solutions based on the rejection sampling method significantly reduce the efficiency of this approach.
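For illustration, unidirectional forward Monte Carlo tracing can be sketched as follows. This is a toy model, not the actual implementation: the scene geometry is replaced by a fixed probability of reaching the receiver, the surfaces are ideal diffuse scatterers handled by Russian roulette, and all names and numbers are illustrative.

```python
import math
import random

def sample_cosine_hemisphere(rng):
    """Cosine-weighted direction about the +z surface normal: the standard
    importance-sampling density pdf = cos(theta)/pi for an ideal diffuse BRDF."""
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def trace_forward(n_rays, albedo=0.1, p_detector=0.05, max_events=8, seed=1):
    """Toy forward Monte Carlo trace: every ray carries flux 1/n_rays; at each
    diffuse event Russian roulette with the surface albedo decides survival,
    and a fixed probability stands in for the geometry of reaching the receiver."""
    rng = random.Random(seed)
    flux = 0.0
    for _ in range(n_rays):
        for _ in range(max_events):
            if rng.random() >= albedo:                 # ray absorbed on the surface
                break
            _direction = sample_cosine_hemisphere(rng)  # new scatter direction
            if rng.random() < p_detector:              # ray reaches the image plane
                flux += 1.0 / n_rays
                break
    return flux

stray_transmittance = trace_forward(200_000)  # small value, as expected for stray light
```

The estimate converges slowly when both the albedo and the detector probability are small, which is exactly the regime of stray light analysis and motivates the acceleration techniques discussed above.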

In addition, in Ref. 7, a method was proposed to determine the illuminated and critical surfaces to analyze the influence of scattered light of the first order. In this method, first, a backward ray tracing is performed, and a list of surfaces that receive flux from the image plane is compiled, and then forward ray tracing from an external light source forms a list of surfaces to which the light flux has come. This makes it possible to find the intersections of illumination and observation regions but does not solve the problem of physically correct estimation of stray illumination in the image plane because the BSDF parameters of the areas of illumination and observation intersection cannot be used to calculate the value of stray illumination. In addition, this method is limited to direct and caustic illumination and observation models and does not allow one to estimate the effect of secondary illumination on the level of stray illumination.

A convenient and efficient tool for analyzing scattered light sources in an optical device is the visualization of ray paths. However, visualization of all ray paths may be unacceptable if the transmission of the optical device for stray radiation is 10⁻⁵ or lower. To solve this problem, one can use a ray path selection criterion for the visualization problem.12 In accordance with the specified criterion, the ray paths (both forward and backward) traced by the Monte Carlo method through the optical device are analyzed for compliance with the criterion, for example, double reflection from lens surfaces (second-order flare) or single scattering on lens frames (caustic illumination), and the rays that meet the criterion are selected and rendered. As a result, instead of millions of rays, the designer sees a few or dozens of rays, which can indicate the cause of stray lighting in the image. This method is very convenient for searching for the causes of stray light and building systems for protection against stray radiation. The main disadvantage of this approach is the inability to estimate the level of stray illumination at the radiation receiver. The designer sees the possible causes of stray radiation, but quantitative analysis is not available in this solution.
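A path selection criterion of this kind can be sketched as a predicate over a recorded event history. The event encoding below (surface name, event type pairs) is purely illustrative; the example selects second-order flare paths with exactly two specular reflections on lens surfaces.

```python
def is_second_order_ghost(path):
    """Select ray paths with exactly two specular reflections on lens surfaces
    that end at the image receiver (event names here are illustrative)."""
    speculars = sum(1 for e in path if e == ("lens", "specular_reflect"))
    return speculars == 2 and bool(path) and path[-1] == ("receiver", "hit")

paths = [
    [("lens", "refract"), ("receiver", "hit")],                  # nominal imaging path
    [("lens", "refract"), ("lens", "specular_reflect"),
     ("lens", "specular_reflect"), ("receiver", "hit")],         # second-order flare
    [("barrel", "diffuse_scatter"), ("receiver", "hit")],        # scattering on a frame
]
selected = [p for p in paths if is_second_order_ghost(p)]        # only the flare path
```

Only the selected paths would then be passed to the visualization stage, reducing millions of traced rays to the handful that explain a given stray light mechanism.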

Physically correct and efficient solutions to the problem of numerical calculation of the parasitic radiation level can be found in modern approaches used in computer graphics. Methods for calculating global illumination used in computer graphics make it possible to collect the integral surface illumination from illumination sources over the entire sphere. In addition, these methods make it possible to physically correctly consider the secondary illumination that occurs on surfaces with a complex BSDF representation. As with stray light calculations, computer graphics use Monte Carlo methods; however, these methods are not limited to unidirectional stochastic ray tracing. Several classifications can be built for these methods, for example, according to the method of integration (using ordinary Monte Carlo integration or integration according to the Markov chain scheme) or according to the principles of ray representation in radiance calculation schemes (working with thin rays in terms of radiance or rays that form elements of a finite size and operating in terms of radiant flux).4

The main methods organized according to the scheme of Markov chains are the methods of light path trace mutations built on the Metropolis principle.13–17 The main idea of this method is to mutate the ray paths that form the illumination of the image. This method explores the areas of the scene that create the final illumination of the image but, as a rule, does not find these areas itself. Therefore, the Metropolis method works in tandem with methods for calculating the illumination of an image built according to the ordinary Monte Carlo integration scheme. In the Metropolis methods, special attention should be paid to the choice of the correct strategy for ray mutation18 because the effectiveness of the method depends on it. Choosing the right ray mutation strategy in the Metropolis method for rendering scenes with complex lighting provides an advantage in convergence speed compared with ordinary Monte Carlo integration methods. However, choosing the right strategy is a rather difficult task, especially in the case of calculating the stray illumination of an image, so the application of this method to the problem of calculating scattered light in optical devices is difficult. In addition, the Metropolis method does not always provide good convergence of the calculation of caustic illumination, the level of which may dominate in the stray illumination. Another disadvantage of the Metropolis method is that it must work in tandem with ordinary Monte Carlo integration methods, first, to search for the original ray traces that it will mutate and, second, to normalize the result obtained to absolute image illumination values.

Monte Carlo methods that work with thin rays in terms of radiance can also be inefficient in calculating the effect of stray light on the level of stray illumination. These methods include methods based on backward path tracing and bidirectional ray tracing.19 Bidirectional ray tracing is an attempt to combine the advantages of forward and backward stochastic ray tracing. Forward and backward ray paths are connected by shadow rays at points located on surfaces with diffuse properties. As a result, the radiance from the radiation receiver to the light source is calculated over the total ray path and is then converted into irradiance in the image. This method makes it possible to calculate the irradiance generated by secondary illumination with high efficiency, but it is not suitable for calculating the caustic illumination radiance, especially if the light source has a small angular or spatial size. The high efficiency of this method can be ensured by the correct choice of weighting factors when averaging the visible radiance of the light source obtained by connecting forward and backward ray paths at different points of their trajectories. However, the correct choice of weight coefficients is a nontrivial task, which is practically unattainable when rendering complex scenes, such as calculating stray light illumination in optical devices.

The most suitable methods for physically correct estimation of irradiance caused by stray illumination are those that work with elements of a finite size in terms of radiant flux. The method of forward photon maps20 can be considered the main one. The main advantage of these methods is that they make it possible to compute complex scenes with global illumination, including caustic illumination, and to quickly calculate the image irradiance estimate. The method consists of two stages. At the first stage, light sources emit rays that propagate across the scene and form the distribution of photons on the scene surfaces (photon maps). At the second stage, the radiation receiver emits backward rays, captures the photons that fall into the integration sphere of the backward rays, and estimates the irradiance from the captured photons.21 The disadvantage of this method is that it forms a "grainy" image structure and distorts the irradiance at the boundaries of objects. To eliminate the distortion at the boundaries of objects, a final gathering method was proposed22 in which random rays are emitted from the observation point on the surface of the optical device, and photons are collected at the points of intersection of these rays with the scene surfaces.23 This method removes the graininess of the image but significantly slows down the algorithm. To speed up the convergence of this method, the illumination cache method24 was proposed, which interpolates the secondary illumination between surface points. This makes it possible, during final gathering, to obtain the secondary radiance from the irradiance cache of the corresponding surfaces. Illumination map methods are well suited for the analysis of stray light in an optical device, but their main disadvantage is the redundancy of the data that must be stored in these maps.
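The second-stage density estimate can be sketched as follows. This is the classic photon-map estimate (flux of the photons inside the integration sphere divided by the projected disc area πr²); the brute-force linear search stands in for the kd-tree lookup a real implementation would use, and the photon data are illustrative.

```python
import math

def irradiance_estimate(photons, center, radius):
    """Classic photon-map density estimate: sum the flux of all photons inside
    the integration sphere and divide by the projected disc area pi*r^2."""
    r2 = radius * radius
    captured = sum(flux for pos, flux in photons
                   if sum((a - b) ** 2 for a, b in zip(pos, center)) <= r2)
    return captured / (math.pi * r2)

# Photons stored as (position, flux) pairs on a diffuse surface.
photons = [((0.00, 0.0, 0.0), 1e-3),
           ((0.05, 0.0, 0.0), 1e-3),
           ((1.00, 0.0, 0.0), 1e-3)]   # the last photon lies outside the sphere
E = irradiance_estimate(photons, (0.0, 0.0, 0.0), 0.1)
```

The "grainy" structure mentioned above comes directly from this estimate: where few photons fall inside the sphere, the local value fluctuates strongly between neighboring estimation points.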

An alternative to the method of forward photon maps is backward maps, or scene visibility maps.25 These maps determine the directions of the light rays that guarantee that a light beam hits the radiation receiver. However, the map data represent a multidimensional spatial function, and its practical application is difficult in complex scenes, such as an optical device.

Based on the presented methods, the method of progressive backward photon maps26 was developed, which combines the advantages of the methods presented above. In this method, the collection of irradiance is carried out on scene visibility maps, and the visibility may not be direct but may be established after a series of diffuse scattering events. In this case, the optimal number of diffuse scattering events can be selected based on an analysis of the lighting and observation conditions at the secondary illumination radiance collection point. In addition, progressiveness of the calculations was implemented in this method; that is, the entire calculation is divided into phases that provide backward ray tracing with simultaneous calculation of forward illumination, formation of visibility maps, light ray tracing, and calculation of secondary and caustic illumination at scene visibility points. The phases are repeated many times, and the final illumination is the result of averaging all the phases of the calculation. An important advantage of the method of backward photon maps in comparison with the method of forward photon maps is the possibility of determining the optimal radii of the integration spheres in the process of generating visibility maps, which can significantly speed up the calculation of the radiance of secondary and caustic illumination. The use of photon mapping methods for stray light analysis in its initial form was partially presented at an SPIE conference.27
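The per-phase radius refinement can be sketched in the style of the progressive photon mapping literature, where a fraction α of the newly captured photons is kept and the integration radius shrinks accordingly. The update rule and constants below are a generic illustration, not the specific rule of the method described in this paper.

```python
def progressive_update(radius, n_accum, m_new, alpha=0.7):
    """One phase of a progressive radius update: keep fraction alpha of the
    newly captured photons and shrink the integration radius so that the
    density estimate tightens as the phases are averaged."""
    if n_accum + m_new == 0:
        return radius, n_accum                       # nothing captured this phase
    n_next = n_accum + alpha * m_new                 # photons retained so far
    r_next = radius * (n_next / (n_accum + m_new)) ** 0.5
    return r_next, n_next

r, n = 0.1, 0.0
for _ in range(100):              # 100 phases, 50 photons captured in each
    r, n = progressive_update(r, n, 50.0)
```

Because the radius decreases monotonically while the retained photon count grows, the variance and the bias of the estimate both shrink as phases accumulate, which is the essence of the progressive scheme.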

It should be noted that the methods based on photon maps are biased;20 however, the correct choice of the radius of the integration sphere and of the collection point of secondary illumination (shifting the integration point onto a larger surface, selecting integration points on surfaces with a smoother bidirectional reflectance distribution function (BRDF) and far from bright light sources) makes it possible to reduce the bias error of this method to a minimum.

The efficiency of progressive backward photon maps can be significantly improved using parallel and distributed computing. A multilevel system for parallel computing28 can effectively use all the resources of multiprocessor workstations connected to a local network, which is extremely important for calculating and analyzing the effect of scattered radiation on the level of stray illumination.

3. Selection and Visualization of Ray Paths

Because the task of designing the protection of optical devices from stray radiation is extremely complex, approaches are used that make it possible to reduce the number of traced rays while maintaining the physical correctness of the calculation results. One of the most reliable and simple methods for calculating stray radiation in an image is stochastic unidirectional ray tracing. In this study, we restrict ourselves to the method of forward tracing of stochastic rays from the source to the radiation receiver. This method is most effective for analyzing direct glare and ghosts if the light source has a small spatial or angular size. For the analysis of diffusely scattered light, it is proposed to use the ray path selection method, that is, the implementation of special criteria that select rays that have certain properties and are the subject of analysis of light propagation in an optical system. Here, the ray propagation criterion is a special software object that can analyze the history of ray propagation in the optical system and decide whether the ray satisfies the specified conditions. If the ray path satisfies the specified condition, then it is marked accordingly and its trajectory is saved if necessary. Ray path selection conditions can be formed from several events that occur with the ray during its propagation in the optical system.

  • The ray starts from the specified light source.

  • The ray hits the specified radiation receiver.

  • The ray hits the specified surfaces of the optical system.

  • The ray experiences a certain transformation on the surfaces of the optical system (e.g., diffuse scattering, specular reflection, refraction, reflection according to a given BRDF, etc.). The number of given events can be fixed; for example, two specular reflections in a lens system can reveal a second-order flare.

  • The ray experiences special effects in the medium or on the surface (absorbed due to incorrect interaction with the elements of an optical device, experiences volumetric scattering in the optical material, etc.).

These elementary events can be combined using logical operations. An example of combining a series of events with the AND operation and visualizing a series of rays that meet this criterion is shown in Fig. 1.

Fig. 1

An example of visualization of ray paths that satisfy the criterion of ray emission from a stray radiation source AND hitting the lens entrance window AND hitting the image receiver.

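The combination of elementary events by logical operations can be sketched as small composable predicates over a ray history. The history encoding and surface names below are hypothetical; the composed criterion mirrors the one visualized in Fig. 1.

```python
def starts_from(source):
    """Leaf event: the ray was emitted by the given light source."""
    return lambda hist: hist["source"] == source

def hits(surface):
    """Leaf event: the ray hit the given surface at some point of its path."""
    return lambda hist: surface in hist["surfaces"]

def AND(*preds): return lambda h: all(p(h) for p in preds)
def OR(*preds):  return lambda h: any(p(h) for p in preds)
def NOT(pred):   return lambda h: not pred(h)

# The criterion of Fig. 1: emitted by the stray source AND hitting the
# entrance window AND hitting the image receiver.
criterion = AND(starts_from("stray_source"),
                hits("entrance_window"),
                hits("image_receiver"))

history = {"source": "stray_source",
           "surfaces": ["entrance_window", "lens_1", "image_receiver"]}
selected = criterion(history)    # True: this ray path would be visualized
```

Because each combinator returns another predicate, arbitrarily deep AND/OR/NOT combinations can be built without changing the tracing code that applies the criterion.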

Elementary events can be combined using the logical operations OR and AND, and the unary negation operator NOT can also be used. In addition, events can be combined not only by purely logical operations, but also by a temporal sequence of elementary events. In this case, for the criterion to be fulfilled, one event or group of events must precede another event or group of events. For example, it is possible to prescribe a sequence of passage through several surfaces of an optical device and thereby form conditions for analyzing the effect of light scattering that occurs on a given ray path. An example of the formation of a criterion for analyzing the ghost that occurs between the surfaces of the first lens is shown in Fig. 2.

Fig. 2

An example of visualization of ray paths that satisfy a sequence of two criteria where the first criterion is the emission of rays from the source of stray radiation AND hitting the input window of the lens AND hitting the first lens surface, and the second criterion is rehitting the first lens surface AND hitting the image receiver.


To effectively implement the interaction of the ray propagation criterion with its propagation history, the criterion is implemented as a tree, the leaves of which are elementary events that can happen to the ray during its tracing, and the nodes of this tree are logical operations that determine the logical connection between elementary events. Elementary events are similar to the events of the ray propagation history in an optical system, which simplifies the application of the ray propagation criterion to its history. The criterion tree for the second case of the formation of stray illumination is shown in Fig. 3.

Fig. 3

An example of building a ray trace criterion tree.


In this case, two temporal stages are formed. First, the condition must be satisfied of ray emission from the stray light source AND hitting the input window of the optical system AND hitting the first surface of the first lens (the order of these events is arbitrary). After the successful execution of the first event group, the second must be executed, consisting of rehitting the first surface of the first lens AND hitting the image receiver. This sequence of events is possible when a ghost from the first lens occurs. In the case of successful successive execution of the two event groups, the ray path is selected and can be visualized.
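A criterion tree with a temporal sequence node can be sketched as follows. The SEQ node checks that the first sub-criterion is satisfied on some prefix of the event list and the second on the remaining suffix; the surface names are hypothetical, and the example encodes the first-lens ghost criterion of Fig. 3.

```python
def hit(surface):
    """Leaf: the given sub-history contains a hit on `surface`."""
    return lambda events: surface in events

def AND(*children):
    """Node: all child criteria hold on the same sub-history."""
    return lambda events: all(c(events) for c in children)

def SEQ(first, second):
    """Temporal node: `first` must hold on some prefix of the event list
    and `second` on the remaining suffix."""
    def check(events):
        return any(first(events[:i]) and second(events[i:])
                   for i in range(len(events) + 1))
    return check

# Ghost between the surfaces of the first lens (cf. Fig. 3):
criterion = SEQ(AND(hit("entrance_window"), hit("lens1_surf1")),
                AND(hit("lens1_surf1"), hit("receiver")))

ghost_path = ["entrance_window", "lens1_surf1", "lens1_surf2",
              "lens1_surf1", "receiver"]    # rehits the first surface: selected
```

A path that passes the first surface only once cannot be split into two matching sub-histories, so the SEQ node correctly rejects the nominal imaging path.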

4. Calculation of the Illumination Distribution for Selected Ray Paths

In some cases, the visualization of ray paths that satisfy a given condition for the formation of stray illumination at the radiation receiver is insufficient because it does not answer the question about the nature of the distribution of stray radiation and its absolute value. Therefore, an important point in modeling is the calculation of the illumination distribution at the radiation receiver. Forward stochastic ray tracing by the Monte Carlo method makes it possible to calculate the illumination distribution physically correctly, and if the ray selection criterion is applied in the illumination calculation, the result will contain only the illumination distribution that was formed by the selected rays. This approach makes it possible to physically correctly determine the effect of a given event on the level and distribution of stray illumination. However, this solution is rather inefficient because the transmittance of stray radiation for an optical system with good light protection can reach 10⁻⁶ to 10⁻⁷, and the analysis of the influence of several sources of scattered light on the level of stray radiation may take considerable time. To eliminate this shortcoming, an approach was proposed that makes it possible to evaluate all possible sources of scattered light during one calculation. The essence of the approach is that a set of identical radiation receivers is placed in the optical system (all characteristics of the receivers are the same); the only difference is that each receiver has its own ray tracing criterion. During the calculation, each ray that hits a radiation receiver is analyzed for compliance with the ray path criterion specified for that receiver. Rays that satisfy the criterion are recorded only at the given radiation receiver. This approach makes it possible, during one calculation, to collect on the receivers the results corresponding to all the studied sources of stray radiation.
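The multiple-receiver scheme can be sketched as a set of receiver objects sharing one ray stream, each filtering the rays by its own criterion. The criteria, path encoding, and flux values here are illustrative only.

```python
class Receiver:
    """A radiation receiver that records only the rays passing its criterion."""
    def __init__(self, name, criterion):
        self.name, self.criterion = name, criterion
        self.flux = 0.0

    def record(self, path, flux):
        if self.criterion(path):
            self.flux += flux

# Three identical receivers differing only in their ray path criterion,
# mirroring the three images of Fig. 4.
receivers = [
    Receiver("main",  lambda p: "diaphragm_scatter" not in p),
    Receiver("stray", lambda p: "diaphragm_scatter" in p),
    Receiver("total", lambda p: True),
]

# One pass of forward-traced rays feeds every receiver simultaneously.
traced = [(["lens1", "receiver"], 0.9),
          (["lens1", "diaphragm_scatter", "receiver"], 0.01)]
for path, flux in traced:
    for r in receivers:
        r.record(path, flux)
```

Because every ray is tested against all criteria in the same pass, the cost of adding a receiver is only the criterion check, which is consistent with the roughly 10% overhead for 10 criteria reported below.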

Figure 4 shows the result of simultaneous modeling of the main image formed by a two-lens optical system (left side of the figure), the stray illumination caused by light scattering on the diaphragm (right side of the figure), and the total result combining the main image and the light scattered on the diaphragm (central part of the figure).

Fig. 4

The result of the simultaneous calculation of the image illuminance for three different criteria. From left to right: main image, main image and stray illumination caused by light scattering on the aperture, and stray illumination caused by light scattering on the aperture.


During the calculation, the lens surfaces did not have antireflection coatings, and their reflectance was determined by the Fresnel formulas. The structural elements had a reflectance of 10% and a Gaussian scattering pattern with a half-width of 5 deg.

It should be noted that the application of individual ray tracing criteria to each radiation receiver practically does not slow down the calculation. Thus, the use of 10 criteria applied to 10 radiation detectors gives a total slowdown of only about 10%, which makes it possible to achieve a roughly tenfold (or greater, with a larger number of criteria) acceleration of the calculation and analysis of scattered light for complex optical devices.

5. Using the Method of Backward Photon Maps for the Analysis of Scattered Light in Optical Devices

The use of unidirectional ray tracing methods for calculating and analyzing image stray illumination in optical devices is not always an effective solution. The use of multiple criteria for visualizing the paths of stray rays and calculating the corresponding images is, on the one hand, a rather laborious task and, on the other hand, finds only those sources of scattered light in an optical device that were listed in the ray path selection criteria. Sources that were not listed are not included in the calculation and cannot participate in the analysis of the causes of scattered light. Therefore, for a full-fledged automatic analysis of the sources of parasitic radiation and the calculation of scattered light, it is advisable to use methods based on bidirectional ray tracing. Because the classical unbiased methods of bidirectional stochastic ray tracing do not allow one to effectively solve the problem of calculating the caustic illumination radiance, methods based on photon maps are the most suitable for stray light analysis.

5.1. Photon Mapping Method for the Analysis of Stray Light in Optical Devices

The classical method of forward photon maps forms the distribution of photons (the forward rays emitted by a light source are shown in red in Fig. 5) on diffuse scene objects (optical parts and structural elements). These photons, within some given sphere of integration, form the distribution of caustic and secondary illumination on the elements of the optical device. In Fig. 5, the integration spheres of the forward photons are shown in red. The centers of the integration spheres are located at the points of scattering of the forward rays on the elements of the optical device. When the optical device is observed from the side of the radiation receiver (in Fig. 5, the backward rays coming from the radiation receiver are shown in blue), the photon irradiance is read when a backward ray enters an integration sphere. The read value is then converted into the radiance visible by the radiation receiver (in Fig. 5, the borders of the integration spheres hit by the backward rays are marked with a bold red outline), and the radiance is then integrated at each point over the lens output aperture and converted into irradiance of the radiation receiver surface (at the points that emitted the corresponding backward rays). Because the number of photons required for an adequate assessment of the level of stray illumination is quite large, the method of progressive photon maps is used, in which the phases of photon formation and collection of their irradiance by the radiation detector are periodically repeated, and the accumulated result is saved at the radiation detector. Naturally, the photon maps (the spheres highlighted in red in Fig. 5) are updated at each phase of the calculation. Schematically, the method of forward photon maps is shown in Fig. 5.

Fig. 5

Stray light analysis using the forward photon maps method.


This solution has several disadvantages. First, it is difficult to choose the correct radius of the integration sphere: the radius can only be determined from the geometric features of the object on which the sphere is located because the solid angle at which the receiver sees the photon is unknown. Second, an excessive number of integration spheres is formed because not all photons are visible to the radiation detector. Third, the depth of forward photon tracing can be very large due to multiple diffuse reflections inside the body of the optical device.

Therefore, we propose to use the method of progressive backward photon mapping to solve the problems of parasitic radiation analysis in optical devices. The method of backward progressive photon maps is essentially the mirror image of the forward progressive photon mapping method and is shown schematically in Fig. 6.

Fig. 6

Stray light analysis using the backward photon maps method.


The backward rays emitted by the radiation receiver (shown in blue in Fig. 6) propagate in the optical device and form a scene visibility map on the scattering surfaces of the optical device. The backward ray tracing depth in an optical device can vary from 0 (the first diffuse scattering event) to infinity. The scene visibility map is a set of integration spheres (marked in blue in Fig. 6). The centers of the spheres of integration are located at the points of intersection of the backward rays with the scattering surfaces of the optical device. Then, the forward rays emitted from the light sources are traced in the optical device, and some of them intersect the spheres of integration of the backward rays (in Fig. 6, these spheres are highlighted with a bold blue outline). For these spheres, the illuminance is calculated at the image points from which the corresponding backward rays were emitted. In the case of a progressive calculation, visibility maps and light paths are updated at each phase of the calculation.
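The two stages above — building a visibility map of integration spheres at backward-ray hit points, then splatting forward rays into them — can be sketched as follows. This is an illustrative Python toy (function names and tuple layouts are our assumptions); the conversion of flux to illuminance via Eqs. (1)–(3) is omitted:

```python
def build_visibility_map(backward_hits, radius):
    """Scene visibility map: one integration sphere per backward-ray hit,
    remembering the image pixel that emitted the backward ray."""
    return [(center, radius, pixel) for center, pixel in backward_hits]

def splat_forward_rays(visibility_map, forward_hits, image):
    """Credit the flux of every forward-ray hit that falls inside a sphere
    to the image pixel associated with that sphere (conversion of flux to
    illuminance is omitted in this sketch)."""
    for center, radius, pixel in visibility_map:
        r2 = radius * radius
        for pos, flux in forward_hits:
            if sum((p - c) ** 2 for p, c in zip(pos, center)) <= r2:
                image[pixel] = image.get(pixel, 0.0) + flux
    return image

# Toy example: two backward hits with radius 0.3; the first forward hit lands
# in the first sphere, and the second forward hit misses both spheres
vis = build_visibility_map([((0.0, 0.0, 0.0), (5, 5)), ((1.0, 0.0, 0.0), (6, 5))], 0.3)
image = splat_forward_rays(vis, [((0.1, 0.0, 0.0), 1.0), ((2.0, 0.0, 0.0), 1.0)], {})
```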

The progressive backward photon mapping method is free from the shortcomings of the forward photon mapping method. It generates only those photons that can form illumination at the radiation receiver. In addition, the radius of the integration sphere can be chosen not only on the basis of the specific design features of the optical device but also on the basis of the solid angle at which the radiation receiver sees the collection point of the secondary or caustic illumination radiance. Finally, the depth of the backward ray path can be limited by the number of diffuse events, which makes it possible to significantly reduce the volume of the photon maps and speed up the entire stray illumination calculation.

The illumination in the image plane is calculated through a series of sequential steps. First, the local illuminance on the surface inside the integration sphere is calculated from the light rays that fall within its radius R

Eq. (1)

$$dE_l(\mathbf{i}_l, c) = \frac{dF_l(\mathbf{i}_l, c)}{\pi R^2},$$
where $dE_l(\mathbf{i}_l, c)$ is the local illuminance of the surface from the $l$'th light ray propagating in the direction $\mathbf{i}_l$ and having a spectral flux distribution $dF_l(\mathbf{i}_l, c)$ over wavelengths $c$.

Next, the local luminance at the image point that formed the given integration sphere is calculated on the ray emitted from this image point to the specified output aperture of the optical device:

Eq. (2)

$$dL_j(\mathbf{i}_l, \mathbf{v}_j, c) = \frac{1}{\pi}\, dE_l(\mathbf{i}_l, c)\, \mathrm{BRDF}(\mathbf{i}_l, \mathbf{v}_j, c),$$
where $\mathrm{BRDF}(\mathbf{i}_l, \mathbf{v}_j, c)$ is the BSDF represented as the luminance factor of the surface when it is illuminated from the direction $\mathbf{i}_l$ and observed in the direction $\mathbf{v}_j$ for the spectral composition of the radiation $c$.

For axisymmetric systems, the recalculation of the local luminance of the image into its local illuminance is carried out using the following normalization:

Eq. (3)

$$dE_l(\mathbf{p}_l, c) = \frac{A}{d^2}\, dL_l(\mathbf{i}_l, \mathbf{v}_j, c) \left(\frac{\mathbf{a} \cdot \overrightarrow{\mathbf{w}_l \mathbf{p}_l}}{\left|\overrightarrow{\mathbf{w}_l \mathbf{p}_l}\right|}\right)^{4},$$
where $dE_l(\mathbf{p}_l, c)$ is the local illuminance of the image at the point $\mathbf{p}_l$ from the backward ray $l$, $A$ is the area of the output aperture, $d$ is the distance from the image to the output aperture along the optical axis, $\mathbf{a}$ is the direction of the optical axis, and $\mathbf{w}_l$ is the point on the output aperture (the vector $\overrightarrow{\mathbf{w}_l \mathbf{p}_l}$ defines the initial direction of the ray).
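As a numerical sanity check, Eqs. (1)–(3) can be transcribed directly into code. This is a sketch of the formulas only (function names are ours); it does not reproduce the tracing machinery:

```python
import math

def local_illuminance(dF, R):
    # Eq. (1): dE = dF / (pi * R^2) inside an integration sphere of radius R
    return dF / (math.pi * R * R)

def local_luminance(dE, brdf):
    # Eq. (2): dL = (1/pi) * dE * BRDF, with the BRDF as a luminance factor
    return dE * brdf / math.pi

def image_illuminance(dL, A, d, cos_theta):
    # Eq. (3): dE_image = (A / d^2) * dL * cos^4(theta), where cos_theta is
    # the cosine between the optical axis and the normalized ray direction
    return (A / (d * d)) * dL * cos_theta ** 4

# Unit-consistency checks: each stage returns 1.0 for these inputs
E_unit = local_illuminance(math.pi, 1.0)
L_unit = local_luminance(math.pi, 1.0)
E_img = image_illuminance(1.0, 4.0, 2.0, 1.0)
```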

Within one phase of the calculation (a portion of rays is emitted from the image, an inverse photon map is formed, a batch of forward rays is emitted, and the local luminance and illuminance of the image are calculated), the illuminance values $E(\mathbf{p}, c)$ of image points $\mathbf{p}$ are averaged

Eq. (4)

$$E(\mathbf{p}, c) = F_{\Sigma}(c)\, \frac{\sum_{l=1}^{M} dE_l(\mathbf{p}_l, c)}{N_F\, N_B^{\mathbf{p}}},$$
where $F_{\Sigma}(c)$ is the total luminous flux emitted by all light sources, $M$ is the number of "successful" (leading to a nonzero illuminance value) intersections of forward-ray paths with integration spheres, $N_F$ is the total number of emitted forward light rays, and $N_B^{\mathbf{p}}$ is the total number of rays emitted from image point $\mathbf{p}$.

It should be noted that each image cell can emit its own number of rays, which depends on several factors, for example, the local calculation error in a given area of the image. In addition, importance sampling is performed when rays are emitted from a light source. As a result, all local luminous fluxes $dF_l(\mathbf{i}_l, c)$ from Eq. (1) emitted by the light sources are equal to 1. This solution makes it possible to avoid significant jumps in local illuminance values during the calculation.

Because the calculation uses the method of progressive photon maps, the result is averaged over all K phases of the calculation, and the final illuminance value takes the following form:

Eq. (5)

$$E(\mathbf{p}, c)_{\Sigma} = F_{\Sigma}(c)\, \frac{\sum_{n=1}^{K} \sum_{l=1}^{M_n} dE_l(\mathbf{p}_l, c)}{\sum_{n=1}^{K} N_F^{n}\, N_B^{n,\mathbf{p}}}.$$
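The progressive accumulation in Eq. (5) amounts to keeping running sums of the numerator and denominator over the phases. A minimal sketch (function name is ours; the $F_{\Sigma}(c)$ factor is applied by the caller):

```python
def progressive_average(phases):
    """Eq. (5) as running sums: each phase contributes its summed local
    illuminance (numerator) and the product of its forward- and backward-ray
    counts (denominator)."""
    num = 0.0
    den = 0.0
    for dE_sum, NF, NB in phases:
        num += dE_sum
        den += NF * NB
    return num / den if den else 0.0

# Two phases of 1000 forward and 100 backward rays each
E_avg = progressive_average([(10.0, 1000, 100), (14.0, 1000, 100)])
```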

5.2.

Results of the Stray Light Simulation in Short and Long Focal Length Lenses with Use of the Photon Mapping Method

Using the method of progressive backward photon maps, calculations were made for a number of camera lenses. This paper presents the stray illumination results for two optical devices: a long-focus lens and a lens with a variable focal length.

In the first case, a long-focus six-lens optical system with a 457-mm focal length (f/1.8) and a ±3.5-deg rectangular field of view (FOV) is shown in Fig. 7. An out-of-field source of stray light (the Sun) illuminates the lens entrance pupil at an incidence angle of ωSun=5  deg.

Fig. 7

Diagram of a long-focus six-element lens.


Parasitic illumination can reach the radiation receiver only after scattering on the diffuse objects of the device (lens edges and structural elements) or after re-reflection from the optical surfaces. In this work, we studied the scattering of light on the diffuse surfaces of the optical device; therefore, all optical surfaces were assigned the properties of an ideal antireflection coating with a zero reflection coefficient.

Figure 8(a) shows the simulated caustic and indirect illuminance distribution caused by sunlight scattering on the lens mounts of the optical system. In this simulation, the reflectance of all lens mounts and the lens barrel is set to 10% Lambertian, and the reflectance of the lens surfaces is set to zero. Figure 8(b) shows the simulated caustic and indirect illuminance distribution for the same scattering geometry, but with the reflectance of all lens mounts and the lens barrel set to 5% Gaussian with a 5-deg halfwidth. Here, the reflectance value means the total integrated scatter (TIS = total scattered power/incident power), and the Gaussian distribution is assumed to be centered on the specularly reflected ray. In both cases, the calculation time was 1 h. Note that the calculation was performed on the central processor in parallel computing mode; all calculations used double-precision floating-point numbers, so the GPU, which works with single precision (float), was not used. Calculations were performed on a workstation with dual AMD EPYC 7281 16-core processors operating at 2.10 GHz in hyper-threading (HT) mode and 256 GB of random-access memory (RAM). The calculation involved 64 parallel threads.
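The quasi-mirror surface model used here can be illustrated with a small sampling routine. This is our own sketch, not the production code: it treats directions as plane angles for brevity, converts the stated Gaussian width to a sigma, and uses TIS as the survival probability of a scattered ray.

```python
import math

def fwhm_to_sigma(fwhm_deg):
    # Convert a full width at half maximum (degrees) to a Gaussian sigma (radians)
    return math.radians(fwhm_deg) / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def sample_gaussian_scatter(specular_angle, tis, fwhm_deg, rng):
    """Sample one scattered direction for a quasi-mirror surface: with
    probability TIS (total scattered power / incident power) the ray survives
    and its angle is drawn from a Gaussian lobe centered on the specular
    direction; otherwise it is absorbed."""
    if rng.random() >= tis:
        return None  # absorbed
    return specular_angle + rng.gauss(0.0, fwhm_to_sigma(fwhm_deg))

class FixedRng:
    """Deterministic stand-in for the random module, for demonstration only."""
    def __init__(self, u, g):
        self.u, self.g = u, g
    def random(self):
        return self.u
    def gauss(self, mu, sigma):
        return self.g

sigma5 = fwhm_to_sigma(5.0)  # sigma for a 5-deg FWHM lobe, in radians
hit = sample_gaussian_scatter(0.0, 0.05, 5.0, FixedRng(0.01, 0.02))
miss = sample_gaussian_scatter(0.0, 0.05, 5.0, FixedRng(0.90, 0.0))
```

In production, `rng` would be `random` or a per-thread generator, and directions would be 3D vectors sampled around the specular reflection.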

Fig. 8

Stray light illumination distribution for long-focus six-element lens presented in Fig. 7 in the case of (a) Lambertian scattering on the lens mounts and (b) Gaussian scattering on the lens mounts.


The simulation results showed that the level of stray illumination depends on the optical properties of the surfaces on which scattering occurs, and the difference is determined not only by the level of average illumination but also by the nature of its distribution. In the case of quasi-mirror surface properties described by the Gaussian function, the level of stray illumination is on average two orders of magnitude lower than in the case of materials with optical properties close to Lambert’s law.

The second example is a high-aperture zoom lens (f/1.5) with a 14- to 146-mm focal length and a 30-deg to 3-deg FOV. The lens layout for the short-focal-length configuration is shown in Fig. 9.

Fig. 9

Schematic of a fast zoom lens (short focal length configuration).


Two simulations were carried out for different properties of the structural elements. In the first case, Lambertian surface properties with a TIS of 10% were specified; in the second case, the optical properties were given by a Gaussian function with TIS = 5% and a 5-deg full width at half maximum. The results of modeling the illumination distribution on the image plane from a parasitic out-of-field radiation source are shown in Fig. 10. In both cases, the calculation time was 1 h on the same workstation described above (dual AMD EPYC 7281 16-core processors, 64 parallel threads).

Fig. 10

Stray light illumination distribution for zoom lens presented in Fig. 9 in the case of (a) Lambertian scattering on the lens mounts and (b) Gaussian scattering on the lens mounts.


In the first case (the lens barrel has Lambertian properties), the stray illumination is distributed fairly uniformly over the image plane. When quasi-mirror optical properties are used for the structural elements, the distribution of stray illumination becomes more concentrated and structured; however, its level is much lower than in the case of materials with Lambertian surface properties.

6.

Visualization of Stray Radiation Sources on the Image Plane

Obviously, the direct simulation of stray illumination can only answer the question about the level of this illumination; it says nothing about the source of its formation. The method of bidirectional stochastic ray tracing using photon maps allows one to save the traces of light-ray propagation in an optical device. If these traces are associated with some local area, for example, the geometric primitive that the ray hits, then the light flux carried by the ray can be converted into local illumination of the surface, which can be accumulated and visualized after tracing a large number of rays. This solution makes it possible to visualize stray illumination on the diffuse surfaces of an optical device. However, it does not allow one to isolate the stray illumination on the surfaces of the optical device that actually affects the illumination of the image; and even when that stray illumination does affect the image, its contribution to the image stray illumination level remains unclear. A more appropriate quantity would be the luminance of the diffuse elements of the optical device in the direction of the radiation receiver. However, calculating this luminance is technically very complicated because, when a diffuse element is illuminated, the conditions for observing this element from the side of the radiation receiver are, as a rule, unknown.

To solve the problem of visualizing the light-scattering sources on the elements of an optical device that cause stray illumination on the radiation receiver, we propose to accumulate the stray illumination of the radiation receiver on the elements of the optical device that create this illumination. The method of bidirectional stochastic ray tracing with progressive inverse photon maps is the most suitable for this task. The map of visible scene elements, created at the first phase of the calculation during backward ray tracing, defines the conditions for observing the optical device from the side of the radiation receiver. The next phase of the calculation performs forward ray tracing from the light sources and creates an illuminance map of the optical device in the areas where it overlaps the observation map computed in the previous phase. Because both the illumination conditions and the observation conditions of the optical elements are preserved in the map overlap area, the conversion of the light flux carried by a forward ray into the luminance visible at the radiation receiver is physically correct. Next, the visible luminance is recalculated into the local illuminance of the radiation receiver element. Because the calculation of scattered light requires tracing billions of rays, the observation and illumination maps are constantly updated during the calculation, and a static map is formed to store the accumulated information about the effect of optical elements on the level of stray illumination, as shown in Fig. 11.

Fig. 11

Method for generating a static map of the distribution of stray illumination of an image on objects of an optical device.


A static map is a regular grid in whose nodes this illuminance is accumulated. When a light ray enters a ray integration sphere stored in the observation map, its light flux is recalculated into the local illumination of the image, and this illumination is accumulated in the grid node determined by the coordinates of the observation ray. To provide more compact storage of the illuminance values, the cells of the regular grid are stored in hash tables. This makes it possible to exclude the storage of empty cells that contain no illuminance values, whose share exceeds 90% of the total grid volume. After the calculation is completed, the results stored in the cells of the regular grid are reduced to physical values (normalized by the number of traced rays, the luminous flux of the light sources, and the parameters of the optical system). Because the results accumulated in the cells of this grid represent illuminance, their values do not depend on the direction of observation and can be visualized from any direction suitable for analyzing and searching for sources of stray illumination.
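The hash-table storage of the static map can be sketched as follows. This is a minimal Python illustration (the class and method names are ours): nonempty cells of a regular grid, keyed by quantized coordinates, accumulate illuminance and are normalized once at the end.

```python
class SparseGrid:
    """Static map: a regular grid over the device whose nonempty cells live
    in a hash table (a dict), so the >90% empty cells cost nothing."""
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}  # (ix, iy, iz) -> accumulated illuminance

    def _key(self, point):
        # Quantize a 3D point to integer cell coordinates
        return tuple(int(p // self.cell_size) for p in point)

    def splat(self, point, illuminance):
        k = self._key(point)
        self.cells[k] = self.cells.get(k, 0.0) + illuminance

    def normalize(self, factor):
        # After tracing: scale accumulated values to physical units
        for k in self.cells:
            self.cells[k] *= factor

g = SparseGrid(0.1)
g.splat((0.05, 0.0, 0.0), 2.0)
g.splat((0.07, 0.02, 0.0), 1.0)  # lands in the same cell as the first splat
g.splat((0.95, 0.0, 0.0), 4.0)
g.normalize(0.5)
```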

Normalization of the illuminance values in the cells of the regular grid is carried out using equations similar to Eqs. (1)–(5). The two main differences are as follows.

  1. The result is accumulated not only in the cells of the image but also in the cells of the regular grid.

  2. The number of rays emitted by each image cell must be constant for all image cells within one phase.

The last requirement is explained by the fact that a grid cell accumulates and averages contributions from all image points and therefore cannot perform individual normalization to the number of rays emitted from each image cell. As a result, Eqs. (4) and (5) take a slightly different form

Eq. (6)

$$E(\mathbf{r}, c) = F_{\Sigma}(c)\, \frac{\sum_{l=1}^{M} dE_l(\mathbf{p}_l, c)}{N_F\, N_B},$$
where $\mathbf{r}$ is the image cell and $N_B$ is the number of backward rays emitted from each image cell.

Equation (5) changes in a similar way

Eq. (7)

$$E(\mathbf{r}, c)_{\Sigma} = F_{\Sigma}(c)\, \frac{\sum_{n=1}^{K} \sum_{l=1}^{M_n} dE_l(\mathbf{p}_l, c)}{\sum_{n=1}^{K} N_F^{n}\, N_B^{n}}.$$

To visualize sources of stray illumination, we used standard computer graphics tools. The visualization of the optical device was carried out using OpenGL, and the distribution of stray illumination, stored in the nodes of a regular grid, was projected onto this image. In this case, cells containing a zero illuminance value are considered transparent for observation, and a nonzero value is considered opaque. Figure 12 visually demonstrates the sources of stray illumination and their influence on the illumination of the radiation detector (the brighter the area is, the greater its influence on the level of stray illumination of the radiation detector is) for the optical device shown in Fig. 7.

Fig. 12

Visualization of the distribution of the stray illumination sources on the elements of an optical device.


The stray illumination is represented in levels of green. In the user interface, the color in which the stray light distribution is presented can be selected; here, green was chosen for better contrast with the gray color of the optical device parts. The visible color of this map depends linearly on the illuminance value, that is, the higher the illuminance is, the brighter the green is. The color-level scale bar allows one to estimate the relative illuminance distribution on the receiver caused by stray light on the lens parts. To form this color map, a special algorithm determines the optimal minimum and maximum illuminance values for visualizing the distribution; these values can also be selected manually. The displayed quantity is the average illuminance of the image received from a scene element and accumulated in the spatial cell of the scene covering that element. No normalization to the size of the scene cell is performed: because all scene cells in this solution have the same size, size normalization makes little sense when evaluating the effect of a scene element on the amount of stray light.
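The linear green mapping with transparent empty cells can be sketched as an RGBA helper. This is an illustrative assumption on our part (names and the 8-bit channel range are ours), not the paper's visualization code:

```python
def illuminance_to_green(value, vmin, vmax):
    """Map an accumulated illuminance to an RGBA green level for overlay
    rendering: zero-value cells are fully transparent, nonzero cells are
    opaque, and the green channel is linear in illuminance, clamped to the
    chosen display range [vmin, vmax]."""
    if value <= 0.0:
        return (0, 0, 0, 0)  # transparent: no stray light in this cell
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)  # clamp to the display range
    return (0, int(round(255 * t)), 0, 255)  # opaque green, linear in E

c_zero = illuminance_to_green(0.0, 0.0, 10.0)   # transparent
c_mid = illuminance_to_green(5.0, 0.0, 10.0)    # half-bright green
c_high = illuminance_to_green(20.0, 0.0, 10.0)  # clamped to full green
```

In the actual tool, such per-cell colors would be uploaded as a texture or vertex colors and blended over the OpenGL rendering of the device.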

For the convenience of analyzing stray light sources, the optical device can be arbitrarily oriented relative to the observer. Figure 13 shows the result of the same calculation, but observed from a different angle.

Fig. 13

Visualization of the distribution of the stray illumination sources on the elements of an optical device for a different observation angle.


As a simple example of finding and eliminating the causes of stray illumination, the doublet lens in a mount, shown in Fig. 1, was considered. The proposed method identified the lens aperture stop as the main source of stray illumination. Figure 14 visualizes this source of stray light.

Fig. 14

Visualization of a stray light source in the doublet lens.


Although the visualization of ray paths using an acceptance criterion also allows one to determine the source of stray light, the proposed method works in automatic mode and does not require formulating special criteria to search for the sources of stray illumination of the image.

To check the correctness of the identified cause of stray illumination, a test calculation of the image illumination from the out-of-field light source shown in Fig. 1 was performed. In this calculation, the reflectance of the lens surfaces and the diaphragm was set to zero (all other optical properties of the device elements were preserved). The result is shown in Fig. 15: the left image corresponds to the initial parameters of the optical device, whereas the right image corresponds to the parameters with the lens and aperture reflectances set to zero.

Fig. 15

Stray illuminance of the image. The left image corresponds to the original parameters of the optical device, whereas the right image corresponds to the parameters of the optical device with zero reflection values of the lenses and aperture stop.


Although this method automatically determines the sources of stray illuminance in the image and detects their influence on the illuminance level, it does not make it possible to determine the influence of the elements of an optical device on the local illuminance: the evaluation is made on the illuminance averaged over the entire image. To account for the dependence of the stray illuminance contributions of device elements on image coordinates, one can either select local zones in the image (by first computing the distribution of stray illuminance over the image surface and then repeating the calculation for the local areas of interest) or create a four-dimensional grid of the stray illuminance distribution in the space of the optical device by adding two dimensions corresponding to the image coordinates.

7.

Conclusion

The proposed methods for calculating and visualizing the stray illumination of an image can be used effectively in designing the light protection of optical systems. The use of ray-path selection criteria and the means of their visualization makes it possible to analyze possible sources of stray illumination on the image receiver and to find means to eliminate them. Assigning multiple radiation receivers, each with individual ray-path selection criteria accumulated at the receiver, allows several different conditions for the formation of stray illumination to be analyzed in a single calculation. In addition, the proposed methods of bidirectional stochastic ray tracing with progressive photon maps can significantly speed up the calculation of image stray illumination caused by caustic or secondary light scattering on structural elements and on defects in the optical elements of the device. The use of methods based on progressive photon maps makes it possible not only to significantly speed up the calculation of stray illumination but also to detect the sources of stray light, physically correctly assess their effect on the level of stray illumination of the image, and visualize them against the background of the optical device model. A further extension of the proposed approach should include studying the possibility of assessing how the distribution of stray illumination of the image relates to the corresponding distribution of illumination over selected zones on the elements of the optical device.

Acknowledgment

The work was supported by the Russian Science Foundation (Grant No. 22-11-00145).


Biography

Dmitry Zhdanov received his PhD in applied mathematics from Keldysh Institute of Applied Mathematics (KIAM) RAS in 2006. He is working as an associate professor at ITMO University. Also, he cooperates with Vavilov State Optical Institute and KIAM RAS. His research interests include photonics, computer graphics, photorealistic rendering, virtual prototyping, illumination system design, and stray light analysis. He is the author of more than 90 scientific publications in the areas of photonics, optics, and computer graphics.

Igor Potemin received his PhD in optical and optoelectronic instruments from Saint-Petersburg Electrotechnical University in 2015. He is working as an associate professor at ITMO University. Also, he cooperates with the Keldysh Institute of Applied Mathematics and Vavilov State Optical Institute. His research interests include photonics, optical design and testing, computer graphics, virtual prototyping, illumination optics design, and stray light analysis. He is the author of more than 50 scientific publications in the areas of photonics, optical design, and computer graphics.

Andrey Zhdanov received his BS and MS degrees in applied mathematics and informatics from ITMO University in 2006 and 2008, respectively, and his PhD in computer systems from ITMO University in 2020. He is working as an associate professor at ITMO University. His research interests include realistic rendering, virtual prototyping, virtual and mixed reality, and optical systems design. He is the author of more than 70 journal papers. He is a member of SPIE.

Igor Kinev received his BS degree in computer science from Perm National Research University in 2019 and his MS degree in applied mathematics and informatics from ITMO University in 2022. He is working as a research engineer at ITMO University. His current research interests include computer graphics, optical modeling, and virtual and augmented reality technologies. He is the author of three journal papers.

CC BY: © 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)
Dmitry Zhdanov, Igor Potemin, Andrey Zhdanov, and Igor Kinev "Methods of visual analysis in the design of the stray light protection of optical devices," Optical Engineering 62(2), 021002 (17 November 2022). https://doi.org/10.1117/1.OE.62.2.021002
Received: 27 July 2022; Accepted: 3 October 2022; Published: 17 November 2022