Ultra-fast aerial image simulation algorithm using wavelength scaling and fast Fourier transformation to speed up calculation by more than three orders of magnitude
Abstract

An ultra-fast image simulation algorithm is proposed. The new algorithm uses a full fast-Fourier-transform (FFT) scheme to calculate the aerial image intensity. The wavelength, 193 nm, was scaled to a power of 2 by scaling the mask with a scaling factor derived from the discrete Fourier transform (FT) format. The mask can then be transformed to the diffraction spectrum in terms of spatial frequency using the FFT algorithm. Similarly, this mask diffraction spectrum can be inverse transformed to the aerial image using the inverse-FFT algorithm. The image is finally scaled back to the image amplitude of the original wavelength and squared to obtain the image intensity. Compared to the original FT, there is a 4000× to 5000× computation speed improvement with only about 3% intensity deviation. This algorithm provides an efficient engine for lithography optimization.

1. Introduction

Simulations are used intensively in semiconductor lithography, both during process development and in the mass production of IC chips. Computational lithography, generally consisting of optical proximity correction (OPC),1–7 source mask optimization (SMO),8–14 inverse lithography technology (ILT),15–17 and other simulation packages,18–20 is widely and accurately applied during different phases of process development. At the path-finding stage, new types of phase-shifting masks, resist materials, and resolution-enhancement techniques should be thoroughly studied and simulated to find opportunities for pushing the patterning resolution. At the process development stage, the process windows need to be simulated to establish the design rules for each masking layer. Before that, the optimum numerical aperture (NA) and the partial coherence parameter σ of the exposure tools have to be evaluated. Moreover, modern lithography needs SMO, OPC, pattern splitting (for multiple patterning), and even ILT, if the resolution scaling coefficient k1 is beyond the reach of the conventional lithography process.21,22 Even at the mass production stage, especially for foundries with product-layout diversity, sophisticated simulations are needed to detune OPC or SMO to remove patterning hotspots and prevent degradation of wafer yield.

There are two performance indicators for the simulations widely used in lithography technology: accuracy and speed. For cutting-edge semiconductor technology, the systematic critical-dimension (CD) error tolerance is less than a few nanometers. For a 12-nm gate length, the CD error specification is just ±1.2 nm, 10% of the patterning CD. Similar but looser CD tolerances are required for the metal and contact layers. The mask CD must be located at the center of the wafer random CD-error distribution. A desirable OPC package renders the simulated mask CD accurately using models and rules derived from all the predictable patterning effects, such as optical, resist, etching, and even chemical–mechanical polishing effects.23 In addition, since there are billions of transistors in a single chip, speedy simulation is necessary. For example, the 10-nm technology node requires the simulation software to complete the simulation and correction work in less than a week. The number of CPU cores involved can exceed several tens of thousands when multiple products are handled simultaneously.

The algorithm presented in this paper speeds up the transformation between physical space and spatial-frequency space by casting typical imaging simulations, which contain the wavelength parameter, into the fast-Fourier-transform (FFT) scheme. Depending on the number of integrations needed, typical imaging simulations can be sped up by more than three orders of magnitude.

2. Theory

2.1. Basic Simulation

The basic lithographic simulation, whatever the purpose, must include aerial image simulation. In Fourier optics, this is the FT of the amplitude and phase distribution M(x) transmitted through the mask, followed by an inverse FT back to the spatial domain after multiplication with a low-pass pupil function P(k). The aerial image intensity I(x) of a point coherent light source24–26 is defined as

Eq. (1)

I(x)=\left|\int P(k)\,e^{2\pi i(k\cdot x)}\,dk\int M(x)\,e^{-2\pi i(k\cdot x)}\,dx\right|^{2},
where x is the spatial position, k is the wave vector with amplitude 1/λ, and λ is the wavelength of the imaging light. For partially coherent light with distribution σ(k'), the final aerial image intensity is the summation of the individual intensities produced by each coherent source point k' and is governed by Eq. (2), where Hopkins' approximation is used:

Eq. (2)

I(x)=\int_{\sigma}\sigma(k')\,dk'\left|\int P(k-k')\,e^{2\pi i(k\cdot x)}\,dk\int M(x)\,e^{-2\pi i(k\cdot x)}\,dx\right|^{2}.

The low-pass-filter pupil function, P(k), can be expressed as follows:

Eq. (3)

P(k)=\begin{cases}1, & (k_x^2+k_y^2)^{1/2}<\mathrm{NA}/\lambda\\ 0, & (k_x^2+k_y^2)^{1/2}>\mathrm{NA}/\lambda\end{cases},
where NA, the numerical aperture, is defined as n sinθ; n is the refractive index of the imaging medium, and sinθ is the maximum allowed directional component of k that passes through the lens pupil and contributes to the aerial image. Similarly, the illumination distribution σ(k) can be expressed as

Eq. (4)

\sigma(k)=\begin{cases}1, & (k_x^2+k_y^2)^{1/2}<\sigma/\lambda\\ 0, & (k_x^2+k_y^2)^{1/2}>\sigma/\lambda\end{cases},
where σ is the partial coherence parameter of the illumination system of the exposure tool. The illumination distribution in Eq. (4) is a conventional disk illumination. For advanced lithography, the illumination distribution is customized for lithography performance and can be an arbitrary distribution within unity, i.e., σ = n sinθi/NA < 1. Here, n sinθi is the illumination NA, and θi is the angle between the incident beam and the mask normal. The intensity obtained from Eq. (2) is based on TE polarization, linear polarization with the electric field tangential to the wafer surface, for which the diffraction amplitudes are summed as scalars. For TM polarization, Eq. (2) needs to be corrected with the vector summation of the individual amplitudes.19,24 The pupil function P(k) in Eq. (3) is an ideal pupil function. Equation (5) shows the real pupil function P'(k), which is the ideal pupil function multiplied by a wavefront error function e^{i2πW}:19,24

Eq. (5)

P'(k)=P(k)\,e^{i2\pi W},
where

Eq. (6)

W=\sum_{i=0}Z_{i}F_{i},
Zi is the Zernike coefficient and Fi is the Zernike polynomial. Among these terms, Z3 is the defocus term, which needs to be scaled along with the other terms under the new algorithm; this is discussed later. The computation of Eq. (2) is the core step of lithography simulation. Regardless of the application, be it phase-shifting masks, mask three-dimensional effects, aberration, defocus, or even ILT, a fast aerial-image calculation is needed as the base on which advanced applications for realistic semiconductor technology development are built.

2.2. Numerical Calculation

There are several ways to numerically evaluate Eq. (2). The direct calculations consist of the Abbe and the transmission cross coefficient (TCC) approaches.24,25,27 The former traces each illumination point source and incoherently adds up the intensities. The latter calculates the TCC matrix in advance; the TCC matrix is the integral of the product of the illumination distribution, the pupil function, and the complex conjugate of the pupil function shifted by the incident wave vector. The advantage of the TCC approach is that the TCC needs to be calculated only once after the illumination and the pupil function are fixed, so the method is well suited to simulations in which the mask layout changes from run to run. However, direct computation is very time consuming because of the deep loops of the FT. Another approach is to diagonalize the TCC matrix in the k space or the x space to get its eigenvalues and eigenfunctions. The final aerial image intensity is then the sum of the convolutions of the eigenfunctions with the mask, weighted by the eigenvalues. This leads to the sum of coherent systems (SOCS) method,25,27 which is the most widely adopted in commercial software packages, especially OPC software.
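
To make the cost of the direct approach concrete, the following is a minimal 1-D sketch of an Abbe-style calculation written as explicit loops over source points, pupil frequencies, and mask samples. This is an illustrative NumPy sketch, not the authors' Matlab code; the toy mask layout, grid sizes, and optical settings below are assumptions.

import numpy as np

# 1-D Abbe-style aerial image with explicit loops (slow by design, for illustration).
lam = 193.0                                   # wavelength (nm)
dx = 25.0                                     # spatial sampling (nm)
npix = 64                                     # number of samples
na, sigma = 0.7, 0.8                          # numerical aperture, partial coherence

x = np.arange(npix) * dx                      # spatial positions
mask = np.zeros(npix); mask[20:28] = 1.0      # toy clear-tone mask (assumed layout)
k = np.linspace(-2.0, 2.0, npix)              # direction components of the wave vector

intensity = np.zeros(npix)
for ks in k:                                  # trace each illumination point source
    if abs(ks) > sigma:                       # keep only points inside the source disk
        continue
    # direct (non-fast) discrete FT of the mask, one term per frequency sample
    spectrum = np.array([np.sum(mask * np.exp(-2j * np.pi * kv * x / lam)) for kv in k])
    pupil = (np.abs(k - ks) <= na).astype(float)   # low-pass pupil shifted by the source point
    field = np.zeros(npix, dtype=complex)
    for i, xv in enumerate(x):                # direct inverse transform back to space
        field[i] = np.sum(pupil * spectrum * np.exp(2j * np.pi * k * xv / lam))
    intensity += np.abs(field) ** 2           # incoherently add the intensities

intensity /= intensity.max()                  # normalized 1-D aerial image cut line

Even in 1-D, the nested loops over source points, frequencies, and image positions make the cost grow rapidly with the pixel count, which is the motivation for the FFT-based scheme developed below.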

2.3. Fourier Transform and Fast Fourier Transform

The FT and inverse FT of M and A are as follows:

Eq. (7)

A(k_x,k_y)=\iint M(x,y)\,e^{-2\pi i(k_xx+k_yy)}\,dx\,dy,\qquad M(x,y)=\iint A(k_x,k_y)\,e^{2\pi i(k_xx+k_yy)}\,dk_x\,dk_y.

The FFT is a well-developed methodology for quick evaluation of the discrete FT. Since its invention28 in 1965, the scheme has been implemented in many applications and with several representations.29 The basic and original algorithm uses radix-2. Equation (8) shows the discrete FT pair

Eq. (8)

A(p,q)=\sum_{m=0,\,n=0}^{N-1}M(m,n)\,e^{-2\pi i(pm+qn)/N},\qquad M(m,n)=\sum_{p=0,\,q=0}^{N-1}A(p,q)\,e^{2\pi i(pm+qn)/N},
where A(p,q) and M(m,n) form a discrete FT and inverse-FT pair; p, q, m, n, and N are integers. If N is a power of 2, N = 2^r, one of the series in Eq. (8) can be divided into two parts, with even- and odd-order components. We use the one-dimensional (1-D) case for convenience of illustration:

Eq. (9)

A(p)=\sum_{n=0}^{N-1}M(n)\,e^{-2\pi ipn/N}=\sum_{n=0}^{N/2-1}M(2n)\,e^{-2\pi ip(2n)/N}+\sum_{n=0}^{N/2-1}M(2n+1)\,e^{-2\pi ip(2n+1)/N}=\sum_{n=0}^{N/2-1}M(2n)\,e^{-2\pi ip(2n)/N}+e^{-2\pi ip/N}\sum_{n=0}^{N/2-1}M(2n+1)\,e^{-2\pi ip(2n)/N}=C_{N/2}(p)+e^{-2\pi ip/N}D_{N/2}(p),
where C_{N/2}(p) and D_{N/2}(p) are the two sub-series of A(p) built from the even- and odd-indexed samples. Based on Eq. (9), we can derive A(p+N/2) by substituting p with p+N/2, remembering that p and p+N/2 are both integers:

Eq. (10)

A\!\left(p+\frac{N}{2}\right)=C_{N/2}(p)-e^{-2\pi ip/N}D_{N/2}(p).

Equations (9) and (10) show that half of the computation can be saved: once A(p) is calculated, A(p+N/2) is obtained directly from Eq. (10). If N is a power of 2, the procedure can be iterated until only two terms are left. If an FT needs N×N computations, the FFT reduces the computation to N log2 N.
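
A minimal recursive implementation of this radix-2 recursion, following Eqs. (9) and (10), is sketched below for illustration; in practice a library routine such as numpy.fft.fft, or fft in Matlab, would be used.

import numpy as np

def fft_radix2(x):
    # Radix-2 decimation-in-time FFT; len(x) must be a power of 2.
    x = np.asarray(x, dtype=complex)
    n = x.size
    if n == 1:
        return x
    c = fft_radix2(x[0::2])                            # C: sub-series of even-indexed samples
    d = fft_radix2(x[1::2])                            # D: sub-series of odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(n // 2) / n)    # twiddle factors e^{-2*pi*i*p/N}
    return np.concatenate([c + w * d,                  # A(p)       = C + e^{-2*pi*i*p/N} D
                           c - w * d])                 # A(p + N/2) = C - e^{-2*pi*i*p/N} D

The result agrees with numpy.fft.fft for any power-of-2 length.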

To directly calculate the aerial image intensity distribution of Eq. (2) with 64 pixels in 1-D, it needs 64⁴ computations to calculate the two-dimensional (2-D) mask spectrum plus 64⁶ computations for the final aerial image intensity. An extra 64² computations is needed for scanning the illumination. This is an enormous task if no other algorithm is used to boost the computation. This paper uses the FFT to compute the aerial image, as an in-house engine developed for research purposes, at the expense of a slight loss of accuracy. When the wavelength is not a power of 2, it is scaled to such an integer to enable the FFT, then scaled back to the original value after all the FFT and inverse-FFT operations. This procedure induces the aforementioned slight loss of accuracy. For schools or research organizations with limited resources to purchase fast computation equipment or to access industrial simulation packages, this paper provides a powerful algorithm to convert the traditional FT to the FFT and save enormous computational effort.

2.4. Fourier Transformation in Summation Form

It is advantageous to convert the FT from the integral form to the summation form. We use the 1-D case of the mask distribution M(x) and the spatial frequency spectrum A(k) to demonstrate our methodology:

Eq. (11)

A(k)=\int M(x)\,e^{-2\pi i(k\cdot x)}\,dx,\qquad M(x)=\int A(k)\,e^{2\pi i(k\cdot x)}\,dk.

We convert Eq. (11) into the summation form with discrete sampling

Eq. (12)

A(k)=\sum_{x=-\infty}^{\infty}M(x)\,e^{-\frac{2\pi i}{\lambda}(k_xx)},\qquad M(x)=\sum_{k_x=-\infty}^{\infty}A(k)\,e^{\frac{2\pi i}{\lambda}(k_xx)}.

Equation (8) cannot be directly used to calculate Eq. (12) for two reasons. First, the summation limit of Eq. (8) is N, whereas the summation limits of Eq. (12) are infinite. Second, the Fourier pair of Eq. (8) is symmetrical, but that of Eq. (12) is not. Because clear-tone masks (transparent patterns on an opaque background) are mostly used for EUV lithography and the pupil function that cuts off high-order frequencies is multiplied into Eq. (12), the infinite summation limits in Eq. (12) can be replaced by the finite range of interest. On the other hand, the sampling of kx and x is very asymmetrical. For ArF exposure tools, the exposure wavelength λ is 193 nm. When we set the sampling pixel number to 64 and the pixel size to 25 nm, the sampling range in the spatial coordinate x is 1600 nm. In k space, the scan range of kx, the direction component of the wave vector with the wavelength λ separated out, is normally ±2, considering that the maximum NA and the illumination partial coherence parameter each contribute ±1. But the scan variables m, n and p, q in Eq. (8) are integers ranging from 0 to N−1, which are symmetrical. Moreover, the challenging part of using the FFT to boost the computation speed for this system is that λ is 193, definitely not a power of 2. However, Eq. (12) can be converted to the format of Eq. (8). Using 25 nm for the sampling interval Δx, 4/64 for Δkx, and 193 nm for λ in the first part of Eq. (12), we obtain Eq. (13) with pixel number 64:

Eq. (13)

A(r\Delta k_x)=\sum_{m}M(m\Delta x)\,e^{-\frac{2\pi i}{123.52}(rm)}.

Now, r and m are both integers ranging from 1 to 64, and Eq. (13) has the same format as Eq. (8).
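
As a quick numerical check of the constant in Eq. (13), using the sampling values quoted above,

\beta=\frac{\lambda}{\Delta x\,\Delta k_x}=\frac{193}{25\times\frac{4}{64}}=\frac{193}{1.5625}=123.52,

which is not a power of 2; the nearest power of 2, N = 128, is therefore used for the FFT, as described in Sec. 3.1.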

Consider two masks, M(x) and M'(x'), illuminated with wavelengths λ and N, respectively. If these two systems produce the same mask diffraction spectrum A(k), what is the relationship between M(x) and M'(x')? In the following, we omit the ±∞ limits of the integrals for clarity:

Eq. (14a)

A(k)=\int M(x)\,e^{-2\pi i\frac{1}{\lambda}(k_xx)}\,dx,

Eq. (14b)

=\int M'(x')\,e^{-2\pi i\frac{1}{N}(k_xx')}\,dx'.

With the inverse Fourier transform (FT), we can get M'(x') from Eq. (14b):

Eq. (15)

M'(x')=\int A(k)\,e^{2\pi i\frac{1}{N}(k_xx')}\,dk_x.

By substituting A(k) from Eq. (14a) into Eq. (15), we obtain the relationship between M(x) and M'(x'):

Eq. (16)

M'(x')=\int A(k)\,e^{2\pi i\frac{1}{N}(k_xx')}\,dk_x=\int\left(\int M(x)\,e^{-2\pi i\frac{1}{\lambda}(k_xx)}\,dx\right)e^{2\pi i\frac{1}{N}(k_xx')}\,dk_x.

After changing the order of integration over dkx and dx, Eq. (16) becomes

Eq. (17)

M'(x')=\int\left(\int e^{2\pi i\left(\frac{x'}{N}-\frac{x}{\lambda}\right)k_x}\,dk_x\right)M(x)\,dx=\int\delta\!\left(\frac{x'}{N}-\frac{x}{\lambda}\right)M(x)\,dx=C\,M\!\left(\frac{\lambda x'}{N}\right),
where δ(x) is the delta function. Equation (17) indicates that the two masks differ by a scaling factor λ/N and an intensity proportionality constant C.

The foregoing is a basic concept of diffraction physics. Consider double-slit diffraction with slit distance d, wavelength λ, and diffraction angle φ, as shown in Fig. 1; the diffraction equation is

Eq. (18)

\frac{2d}{\lambda}=\frac{n}{\sin\varphi}.

Fig. 1

The double-slit diffraction shows the scaled wavelength and slit geometry to produce the same diffraction spectrum.

JM3_22_2_023201_f001.png

The right side of Eq. (18) shows the mask diffraction spectrum. To maintain the same mask spectrum, the slit distance should be scaled with the wavelength used. Figure 1 shows the scaling of the double slit to produce an identical diffraction spectrum.

In the same way, we now examine the image formation after obtaining the mask spectrum. If the same mask spectrum A(k) and pupil function P(k) form images with different wavelengths and spatial dimensions, as shown in Eq. (19), what is the relationship between the image amplitude distributions E(x) and E'(x')?

Eq. (19)

E(x)=\int P(k)A(k)\,e^{2\pi i\frac{1}{\lambda}(k_xx)}\,dk_x,\qquad E'(x')=\int P(k)A(k)\,e^{2\pi i\frac{1}{N}(k_xx')}\,dk_x.

Inverting the second part of Eq. (19), we get

Eq. (20)

P(k)A(k)=\int E'(x')\,e^{-2\pi i\frac{1}{N}(k_xx')}\,dx'.

Substituting Eq. (20) into the first part of Eq. (19), we get

Eq. (21)

E(x)=\int\left[\int E'(x')\,e^{-2\pi i\frac{1}{N}(k_xx')}\,dx'\right]e^{2\pi i\frac{1}{\lambda}(k_xx)}\,dk_x.

Following the same procedure in the derivation of Eq. (17), we perform the integration of dkx before the integration of dx in Eq. (21), resulting in

Eq. (22)

E(x)=\int\left[\int e^{-2\pi i\left(\frac{x'}{N}-\frac{x}{\lambda}\right)k_x}\,dk_x\right]E'(x')\,dx'=\int\delta\!\left(\frac{x'}{N}-\frac{x}{\lambda}\right)E'(x')\,dx'=\frac{1}{C}E'\!\left(\frac{Nx}{\lambda}\right).

By squaring E in Eq. (22), the image intensity distribution for wavelength λ is the same as that obtained with wavelength N, except for a scaling factor that is the reciprocal of that in Eq. (17). The intensity proportionality constant C is (N/λ)², which is discussed later. In this paper, the traditional FT is converted to the discrete-FT format of Eq. (12). The mask diffraction spectrum in this discrete form is then calculated by FFT, with the mask dimensions scaled according to the wavelength ratio from Eq. (17). The spectrum obtained with the scaled mask is used to calculate the image amplitude by inverse FFT, and this amplitude is finally scaled back to the actual image amplitude according to Eq. (22).

3. Simulation

3.1. Simulation Flow

This paper uses Matlab®, a popular and commercially available platform, as the simulation development environment. The simulation flow aims to reproduce the conventional FT within the FFT format and algorithm. There are three key steps in the simulation.

  • First step: Pick N

Set the pixel number in 1-D to 64; the wavelength to 193 nm; Δx to 25 nm; and Δkx to 4/64. The number 4 is the full range of the k value: with normally incident illumination, k ranges from −1 to +1; with off-axis illumination in both directions, the full range of k is from −2 to +2. From Eq. (13), 2πi(1/λ)(rΔx)(mΔkx) = 2πi·rm/123.52 = 2πi·rm/β; pick N as the power of 2 nearest to β. In this case, set N to 128.

  • Second step: Scale the mask

From the first step, the scaling factor is ε = N/β = 128/123.52. The mask M(x) is scaled up by ε. Because the total pixel count of the mask distribution is 64×64 but N is 128, zeros are padded to extend the rows and columns of the mask matrix to 128×128. The new mask matrix is then ready for the FFT.

  • Third step: Scale the image

We use the mask spectrum A(k), obtained from the FFT of the scaled mask M'(x'), and execute the inverse FFT to get the image amplitude distribution E'(x'). The image E'(x') is then scaled by the factor 1/ε to get the final image E(x).

Figure 2 shows the complete simulation flow, including both the conventional FT flow and the wavelength-scaling FFT flow for comparison. The conventional flow follows the FT and inverse FT according to Eq. (2), and the wavelength-scaling FFT flow consists of the following:

  • (a) Scale up the original mask M(x) to the new mask M'(x') according to the first and second steps.

  • (b) Execute the FFT of M'(x') to obtain the mask spectrum A(k).

  • (c) Multiply A(k) by the pupil function P(k) and execute the inverse FFT to get the image amplitude E'(x').

  • (d) Scale the image amplitude E'(x') back to E(x) according to the third step.

Fig. 2

The simulation flow for conventional FT and wavelength-scaling FFT, with (a) the original mask, (b) the scaled mask, (c) the mask diffraction spectrum, (d) the aerial image before scaling, and (e) the final aerial image after scaling.

JM3_22_2_023201_f002.png

Although there are two more steps for the new algorithm, the computation speed is much faster than the conventional method.
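
For concreteness, the following is a minimal NumPy sketch of steps (a) to (d) for a single on-axis coherent source point. The authors' implementation uses Matlab's fft2/ifft2; the toy layout, the use of scipy.ndimage.zoom for the bilinear mask scaling, and all parameter names here are assumptions of this sketch, not the authors' code.

import numpy as np
from scipy.ndimage import zoom

lam, dx, dk, npix = 193.0, 25.0, 4.0 / 64, 64   # wavelength (nm), samplings, pixel count
na = 0.7                                        # numerical aperture
beta = lam / (dx * dk)                          # 123.52, not a power of 2
N = 128                                         # artificial wavelength: nearest power of 2
eps = N / beta                                  # mask scaling factor, about 1.0363

# (a) scale the original mask up by eps (bilinear) and zero-pad to N x N
mask = np.zeros((npix, npix)); mask[:, 28:32] = 1.0        # toy line pattern (assumed)
scaled = zoom(mask, eps, order=1)
padded = np.zeros((N, N)); padded[:scaled.shape[0], :scaled.shape[1]] = scaled
padded /= eps**2                                # flux normalization, as in Sec. 3.2

# (b) mask diffraction spectrum by FFT; index r maps to spatial frequency r*dk
spectrum = np.fft.fftshift(np.fft.fft2(padded))

# (c) multiply by the ideal pupil of Eq. (3) and inverse-FFT to the image amplitude
idx = np.arange(-N // 2, N // 2)
KX, KY = np.meshgrid(idx * dk, idx * dk)        # direction components of the wave vector
pupil = (np.sqrt(KX**2 + KY**2) < na).astype(float)
amp_scaled = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))

# (d) scale the amplitude grid back by 1/eps, crop, and square to the intensity
amp = zoom(amp_scaled.real, 1 / eps, order=1) + 1j * zoom(amp_scaled.imag, 1 / eps, order=1)
intensity = np.abs(amp[:npix, :npix])**2

# For partial coherence, steps (c) and (d) are repeated with the pupil shifted by each
# source point inside sigma, and the resulting intensities are summed, per Eq. (2).

The amplitude back-scaling in step (d) is done here by resampling the grid; the one-time intensity calibration against the conventional FT mentioned in Sec. 3.2 is not included in this sketch.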

3.2. Simulation Results

Two cases are used to verify the simulation, a line-and-space pattern and a hole pattern. The critical dimensions and simulation conditions are listed in Table 1.

Table 1

The simulation condition of (a) hole case and (b) line/space case.

(a) Hole case: layout condition: hole size 150 nm, hole pitch 425 nm [Fig. 3(a)]; optical condition: λ = 193 nm, NA = 0.5, σ = 0.8.
(b) Line/space case: layout condition: line size 100 nm, line pitch 225 nm [Fig. 3(b)]; optical condition: λ = 193 nm, NA = 0.7, σ = 0.8.

We use a 193-nm wavelength, 64 pixels in 1-D, a Δkx pixel size of 4/64, and a Δx of 25 nm. The resulting artificial wavelength for the FFT is N = 128. Figure 3 shows the simulation results, including the original mask, the mask after scaling, and a comparison of the 2-D intensity contours and the 1-D intensity cut lines.

The comparison of the simulation results, shown in Fig. 3, consists of the 2-D contours and the 1-D intensity cut lines evaluated with the conventional FT and the wavelength-scaled FFT. They are almost indistinguishable. There is only a slight deviation at the intensity peaks, which is not of the highest interest for lithography. The deviations come mostly from grid snapping, i.e., the mask resolution set by the number of pixels. The number of pixels in these simulations, including the mask, is 64×64, and the scaling factor ε is 1.0363. This resolution cannot faithfully reflect the accuracy of the scaling factor during the mask scale-up. For example, the mask width of the simulation case in Table 1(b) is 100 nm and the scaling factor ε is 1.0363, so the mask width after scaling is about 103.6 nm; but the mask pixel size is 25 nm, so the mask width on the grid can only be 100 or 125 nm. To reduce the error, the mask edge was smoothed with bilinear interpolation, so the transmission is no longer exactly 0 or 1 at the edge of a binary mask. Figure 4 shows the mask transmission distribution after scaling in contrast to the original binary mask. It is relatively easy to increase the number of pixels to enhance the resolution for the new method. However, a comparison between the FT and the wavelength-scaled FFT then becomes prohibitive, because the deep loops of the conventional FT take too long to complete, especially when the number of pixels exceeds 100 in each dimension. Table 2 lists the speed and intensity error evaluated with the conventional FT and the wavelength-scaled FFT for cases (a) and (b). The intensity error is defined by comparing the integrated intensities under the cut lines of the FT and the wavelength-scaled FFT, shown in the last part of Fig. 3.
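
A minimal way to evaluate this integrated-intensity error from the two cut lines is sketched below; the function and variable names are assumptions, with I_ft and I_fft being 1-D intensity cut lines sampled on the same grid.

import numpy as np

def integrated_intensity_error(I_ft, I_fft, dx=25.0):
    # Compare the areas under the conventional-FT and wavelength-FFT cut lines.
    area_ft = np.sum(I_ft) * dx
    area_fft = np.sum(I_fft) * dx
    return abs(area_fft - area_ft) / area_ft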

Fig. 3

The simulation results of (a) hole patterns and (b) line and space patterns. FT, conventional Fourier transform; FFT, fast Fourier transform. The intensity cut line is the horizontal line at the center of the 2-D contour.

JM3_22_2_023201_f003.png

Fig. 4

(a) The original binary mask with transmission 1 and 0. (b) The scaled mask with the transmission approximated by bilinear interpolation.

JM3_22_2_023201_f004.png

Table 2

Comparison of simulation results between the conventional FT and the scaled-FFT methods.

(a) Hole case: execution time 360 s (conventional FT), 0.077 s (wavelength-scaling FFT); intensity error 2.85%.
(b) Line and space case: execution time 690 s (conventional FT), 0.16 s (wavelength-scaling FFT); intensity error 3.2%.

For the real pupil function P', the pupil function with an aberrated wavefront, the Zernike coefficients need to be scaled in the new algorithm. Take Z3, defocus, as an example:19,24

Eq. (23)

Z_{3}=\frac{\Delta\delta\,\mathrm{NA}^{2}}{4\lambda},

Eq. (24)

F_{3}=2R^{2}-1,

Eq. (25)

R=\frac{\lambda}{\mathrm{NA}}\sqrt{k_x^{2}+k_y^{2}},
where Δδ is the focus shift. For the conventional FT, the aerial image intensity can be calculated by replacing the ideal pupil function P in Eq. (2) with the real pupil function P' of Eq. (5), with the Zernike coefficient Z3 and polynomial F3 given by Eqs. (23) and (24), respectively. For the wavelength-scaling FFT, Eq. (22) is still valid, but the Zernike coefficient Z3 needs to be scaled to Z3' = Z3/ε. With this scaling, the wavelength-FFT calculation of the defocused image intensity matches the FT result well.
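
A short sketch of how the defocus term enters the pupil and how its coefficient is rescaled for the wavelength-scaling FFT is given below, continuing the NumPy notation used earlier; the focus-shift value and grid definitions are assumptions, and only the Z3 term of Eq. (6) is included.

import numpy as np

lam, na = 193.0, 0.7
dk, N = 4.0 / 64, 128
eps = N / 123.52                               # scaling factor from Sec. 3.1
delta = 50.0                                   # focus shift (nm), assumed value

idx = np.arange(-N // 2, N // 2)
KX, KY = np.meshgrid(idx * dk, idx * dk)       # direction components of the wave vector
R = np.sqrt(KX**2 + KY**2) / na                # Eq. (25) written with direction components
inside = (R <= 1.0).astype(float)              # ideal pupil support

Z3 = delta * na**2 / (4 * lam)                 # defocus Zernike coefficient, Eq. (23)
Z3_scaled = Z3 / eps                           # rescaled coefficient Z3' = Z3 / eps
F3 = 2 * R**2 - 1                              # defocus Zernike polynomial, Eq. (24)
W = Z3_scaled * F3                             # wavefront error with only the defocus term
pupil_aberrated = inside * np.exp(1j * 2 * np.pi * W)   # real pupil function P'(k), Eq. (5)

The array pupil_aberrated would replace the ideal pupil in step (c) of the flow above.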

The computation times of the wavelength-scaling FFT method and the conventional FT are listed in Table 2. Although there are two extra steps, mask scaling and image-amplitude scaling, the speed of the new method far exceeds expectations. In theory, with 64×64 pixels for the 2-D map (mask or intensity), the conventional FT needs 64⁴ computations. Converted to the wavelength-scaled FFT, the computations become (64 log2 64)×(64 log2 64); roughly, this is about a 100× runtime saving. There are two major portions of the computation depicted in Fig. 2: steps (a) to (c) form the first portion, and steps (c) to (e) form the second. The first portion is just the FT of the mask. It takes 2.7 s with the conventional FT, whereas the wavelength FFT takes only 0.025 s, even with the additional step of scaling up the mask; this is roughly 108× faster for the new algorithm. The second portion scans the whole illumination and sums the intensities originating from the individual illumination pixels; the inverse FT is executed in this portion. In the FT method, the computation times are simply 130×2.7 s and 250×2.7 s for the two illumination settings of the cases listed in Table 1. With the wavelength-scaling FFT, on the other hand, the repeated computation during the illumination scan quickly drops from 0.025 s to 5×10⁻⁴ s, and the overall runtimes become 130×5×10⁻⁴ s and 250×5×10⁻⁴ s plus the overhead runtime. This is possibly because the new method uses very few deep-loop computations, relying instead on matrix operations and Matlab® built-in functions such as fft2, ifft2, fftshift, and ifftshift. In conclusion, the conversion from FT to FFT saves about 100× in runtime, and the coding efficiency of matrix operations saves an extra 40× to 50×; this is an additional benefit of coding with matrix operations.
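
The roughly 100× estimate quoted above follows directly from the operation counts:

\frac{64^{4}}{\left(64\log_{2}64\right)^{2}}=\frac{16{,}777{,}216}{384^{2}}=\frac{16{,}777{,}216}{147{,}456}\approx 114.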

From Table 2, the simulation speed is improved by 4000× to 5000× with the new wavelength-scaled FFT method, at the expense of an intensity deviation of the order of 3%. The proportionality constants in Eqs. (17) and (22) can be derived by requiring that the flux passing through the mask per unit area be consistent. Since the mask is scaled by ε in 1-D, the total transparent area of the mask is increased by ε². To keep the mask spectrum consistent after the mask scaling, the mask transmittance after scaling is normalized by 1/ε². In the same way, we normalize the final aerial image intensity. Once the normalization constants are fixed, the results of the wavelength-scaled FFT need to be calibrated against those of the conventional FT only once.

4. Conclusion

This paper reports a new methodology that converts the aerial image calculation from the time-consuming traditional deep-loop method to one with a 4000× to 5000× improvement in computing speed, at the expense of a 2% to 3% intensity error due to grid snapping. The performance of the new algorithm was tested with several cases, and the improvement and error are consistent across them. Coding is easy and straightforward on popular, commercially available platforms. Although there are other methodologies that greatly reduce the computation time, such as the widely used SOCS, the method reported in this paper is more straightforward and can be easily adopted. SOCS needs to set up and solve for the eigenvalues and eigenfunctions whenever a new illumination or pupil function is adopted, whereas this method only needs to set up the conversion pixel and scaling factor once and for all. It is therefore well suited to serve as the running engine of optimization problems, and its potential for lithography process development is expected to be high.

Acknowledgments

This paper was funded by the Taiwan National Science and Technology Council with Project Number NSTC 111-2622-E-007-018.

Source Code and Data Access

The source codes and simulation data of this paper can be accessed through Code Ocean with the capsule “wavelength-scaling FFT.”

References

1. 

J. P. Stirniman and M. L. Reiger, “Fast proximity correction with zone sampling,” Proc. SPIE, 2197 294 –301 https://doi.org/10.1117/12.175423 PSISDG 0277-786X (1994). Google Scholar

2. 

N. B. Cobb, “Fast optical and process proximity correction algorithm for integrated circuit manufacturing,” U. C. Berkeley, (1998). Google Scholar

3. 

M. L. Rieger et al., “Enriching design intent for optimal OPC and RET,” Proc. SPIE, 4754 132 –137 https://doi.org/10.1117/12.476939 PSISDG 0277-786X (2002). Google Scholar

4. 

N. B. Cobb and Y. Granik, “Model-based OPC using the MEEF matrix,” Proc. SPIE, 4889 1281 –1292 https://doi.org/10.1117/12.467435 PSISDG 0277-786X (2002). Google Scholar

5. 

Y. Zhang et al., “A focus exposure matrix model for full chip lithography manufacturability check and optical proximity correction,” Proc. SPIE, 6283 62830W https://doi.org/10.1117/12.681856 PSISDG 0277-786X (2006). Google Scholar

6. 

J. R. Gao et al., “MOSAIC: mask optimizing solution with process window aware inverse correction,” in 51st ACM/EDAC/IEEE Design Autom. Conf. (DAC), 1 –6 (2014). Google Scholar

7. 

H. J. Levinson, Computational Lithography for EUV, SPIE Press, Bellingham, Washington (2020). Google Scholar

8. 

A. Erdmann, Optical and EUV Lithography: A Modeling Perspective, SPIE Press, Bellingham, Washington (2021). Google Scholar

9. 

A. E. Rosenbluth et al., “Optimum mask and source patterns to print a given shape,” Proc. SPIE, 4346 486 –502 https://doi.org/10.1117/12.435748 PSISDG 0277-786X (2001). Google Scholar

10. 

A. Erdmann et al., “Toward automatic mask and source optimization for optical lithography,” Proc. SPIE, 5377 646 –657 https://doi.org/10.1117/12.533215 PSISDG 0277-786X (2004). Google Scholar

11. 

R. Socha, X. Shi and D. Lehoty, “Simultaneous source mask optimization (SMO),” Proc. SPIE, 5853 180 –193 https://doi.org/10.1117/12.617431 PSISDG 0277-786X (2005). Google Scholar

12. 

R. Socha et al., “Design compliant source mask optimization (SMO),” Proc. SPIE, 7748 77480T https://doi.org/10.1117/12.865781 PSISDG 0277-786X (2010). Google Scholar

13. 

J. Pomplun et al., “Reduced basis method for source mask optimization,” Proc. SPIE, 7823 78230E https://doi.org/10.1117/12.866101 PSISDG 0277-786X (2010). Google Scholar

14. 

N. Jia and E. Y. Lam, “Pixelated source mask optimization for process robustness in optical lithography,” Opt. Express, 19 (20), 19384 –19398 https://doi.org/10.1364/OE.19.019384 OPEXFF 1094-4087 (2011). Google Scholar

15. 

A. Poonawala and P. Milanfar, “OPC and PSM design using inverse lithography: a nonlinear optimization approach,” Proc. SPIE, 6154 61543H https://doi.org/10.1117/12.655904 PSISDG 0277-786X (2006). Google Scholar

16. 

L. Pang et al., “Inverse lithography technology (ILT): a natural solution for model-based SRAF at 45-nm and 32-nm,” Proc. SPIE, 6607 660739 https://doi.org/10.1117/12.729028 PSISDG 0277-786X (2007). Google Scholar

17. 

Y. Shen et al., “Level-set-based inverse lithography for photomask synthesis,” Opt. Express, 17 (26), 23690 –23701 https://doi.org/10.1364/OE.17.023690 OPEXFF 1094-4087 (2009). Google Scholar

18. 

C. A. Mack, “PROLITH: a comprehensive optical lithography model,” Proc. SPIE, 0538 207 –220 https://doi.org/10.1117/12.947767 PSISDG 0277-786X (1985). Google Scholar

19. 

C. A. Mack, “Advanced topics in lithography modeling,” Proc. SPIE, 0631 276 –285 https://doi.org/10.1117/12.963652 PSISDG 0277-786X (1986). Google Scholar

20. 

S. Sherwin et al., “Modeling high-efficiency extreme ultraviolet etched multilayer phase-shift masks,” J. Micro/Nanolithogr. MEMS, MOEMS, 16 (4), 041012 https://doi.org/10.1117/1.JMM.16.4.041012 (2017). Google Scholar

21. 

J. Finders et al., “Low-k1 imaging: how low can we go?,” Proc. SPIE, 4226 1 –15 https://doi.org/10.1117/12.404849 PSISDG 0277-786X (2000). Google Scholar

22. 

L. W. Liebmann, “Resolution enhancement techniques in optical lithography: it’s not just a mask problem,” Proc. SPIE, 4409 23 –32 https://doi.org/10.1117/12.438332 PSISDG 0277-786X (2010). Google Scholar

23. 

S. Y. Fang et al., “Simultaneous OPC- and CMP-aware routing based on accurate closed-form modeling,” in Proc. 2013 ACM Int. Symp. Phys. Des., 77 –84 (2013). Google Scholar

24. 

B. J. Lin, Optical Lithography: Here Is Why, 2nd ed., SPIE Press, Bellingham, Washington (2021). Google Scholar

25. 

C. Mack, Fundamental Principles of Optical Lithography: The Science of Microfabrication, John Wiley and Sons (2007). Google Scholar

26. 

J. W. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers (2005). Google Scholar

27. 

N. B. Cobb et al., “Mathematical and CAD framework for proximity correction,” Proc. SPIE, 2726 208 –222 https://doi.org/10.1117/12.240907 PSISDG 0277-786X (1996). Google Scholar

28. 

J. W. Cooley and J. W. Tukey, “An algorithm for the machine calculation of complex Fourier series,” Math. Comput., 19 297 –301 https://doi.org/10.1090/S0025-5718-1965-0178586-1 MCMPAF 0025-5718 (1965). Google Scholar

29. 

E. O. Brigham, The Fast Fourier Transform and Its Applications, Prentice Hall (1988). Google Scholar

Biography

Tsai-Sheng Gau received his PhD in physics from Taiwan Tsinghua University in 1994. He spent 2 years at the same university doing postdoctoral research on x-ray diffraction of synchrotron radiation and then joined ITRI for 0.25-μm deep-submicron semiconductor research. He joined the tsmc R&D lithography group in 2000. At tsmc, he brought 193-nm immersion technology into production and developed and delivered OPC technologies from the 28- to the 3-nm generations. He led the R&D OPC and mask groups and delivered EUV mask technology before his retirement from tsmc in 2021. He is now a distinguished professor in the College of Semiconductor Research at NTHU.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Tsai-Sheng Gau, Po-Hsiung Chen, Burn J. Lin, Fu-Hsiang Ko, Chun-Kung Chen, and Anthony Yen "Ultra-fast aerial image simulation algorithm using wavelength scaling and fast Fourier transformation to speed up calculation by more than three orders of magnitude," Journal of Micro/Nanopatterning, Materials, and Metrology 22(2), 023201 (14 June 2023). https://doi.org/10.1117/1.JMM.22.2.023201
Received: 18 March 2023; Accepted: 30 May 2023; Published: 14 June 2023
KEYWORDS: Fourier transforms, Computer simulations, Light sources and illumination, Diffraction, Lithography, Ultrafast phenomena, Matrices
