Accelerating extreme ultraviolet lithography simulation with weakly guiding approximation and source position dependent transmission cross coefficient formula
Abstract

Background

Mask three-dimensional (3D) effects distort the diffraction amplitudes from extreme ultraviolet masks. In a previous work, we developed a convolutional neural network (CNN) that rapidly predicted the distorted diffraction amplitudes from input mask patterns.

Aim

In this work, we reduce both the time for preparing the training data and the time for image intensity integration.

Approach

We reduce the time for preparing the training data by applying the weakly guiding approximation to the 3D waveguide model. The model solves Helmholtz-type coupled vector wave equations for two polarizations. The approximation decomposes the coupled vector wave equations into two scalar wave equations, reducing the computation time needed to solve them. Regarding the image intensity integration, Abbe's theory has been used in electromagnetic (EM) simulations. The transmission cross coefficient (TCC) formula is known to be faster than Abbe's theory, but it cannot be applied to source position dependent diffraction amplitudes in EM simulations. We derive a source position dependent TCC (STCC) formula, starting from Abbe's theory, to reduce the image intensity integration time.

Results

The weakly guiding approximation reduces the EM simulation time by a factor of 5, from 50 to 10 min. The STCC formula reduces the image intensity integration time by a factor of 140, from 10 to 0.07 s.

Conclusions

The total time of the image intensity prediction for a 512 nm × 512 nm area on a wafer is 0.1 s. A remaining issue is the accuracy of the CNN.

1.

Introduction

High-aspect absorbers used in extreme ultraviolet (EUV) masks induce several mask three-dimensional (3D) (M3D) effects, such as critical dimension (CD) error and edge placement error.1,2 It is necessary to include M3D effects in EUV lithography simulations. M3D effects are caused by the distorted diffraction amplitude from an EUV mask. The diffraction amplitude can be calculated rigorously by using electromagnetic (EM) simulators.3–5 However, these calculations are highly time-consuming, especially for optical proximity correction (OPC) applications.

To speed up the EM simulations, several approximation models such as the “domain decomposition method”6,7 and the “M3D filter”8,9 were proposed, which decomposed a mask pattern into two-dimensional (2D), one-dimensional (1D), and zero-dimensional (0D) patterns. In these models, the EM field of a mask pattern was calculated by superposing the EM fields of 2D, 1D, and 0D patterns. These models are currently used in many EUV lithography simulators.9–11

An implicit assumption of these models is that the mask pattern is large and isolated. However, the EM interaction is nonlocal: the diffraction amplitude is affected by the surrounding patterns. In some approximation models, the first-order crosstalk between neighboring edges is included, but the higher-order crosstalk required in the “rigorous domain decomposition method”12 is neglected. In the case of OPC masks, the pattern density is high because the main pattern is decorated with many serifs and assist features. Moreover, advanced OPC mask patterns are curvilinear. It could therefore be difficult to apply these approximation models to OPC masks.

Recently, many attempts have been made to simulate the M3D effects using deep neural networks (DNNs), such as convolutional neural networks (CNNs) or generative adversarial networks (GANs). They are classified into three types depending on the target of the DNN: the near-field amplitude at the object plane, the image intensity at the image plane, or the far-field amplitude at the pupil plane.

From the early stage of DNN adoption, many models have been developed in which the near-field amplitude is the target of the CNN.13–16 However, the near-field amplitude has local oscillations, which makes it difficult to define the loss function of the CNN. Also, because the near-field amplitude depends on the incident angle, these models need many CNNs for different source positions.

The image intensity is a natural target of a DNN, and GANs have been applied to reproduce the image intensity from the input mask pattern.17,18 However, in these models, the source shape and the aberrations, including defocus, are fixed. In OPC applications, the model needs to be reconstructed whenever the source shape is changed.

In our previous works,19–21 we developed a CNN model that predicts the far-field diffraction amplitude from the input mask pattern. Our CNN model can be applied to arbitrary mask patterns. Although the CNN prediction time is very short, preparing the training data by EM simulation takes a long time. In this work, we apply the weakly guiding approximation to the 3D waveguide model,5 one of the EM simulation models, which solves Helmholtz-type coupled vector wave equations. With the weakly guiding approximation, the coupled vector wave equations are decomposed into two scalar wave equations, reducing the computation time needed to solve them.

The diffraction amplitudes calculated by EM simulations depend on the source position. Hopkins' transmission cross coefficient (TCC) formula, which is conventionally used in optical lithography simulations, cannot handle source position-dependent diffraction amplitudes.22 Therefore, in EUV lithography simulations, Abbe's theory has been used to calculate the image intensity. However, the computation time using the TCC formula is much shorter than that using Abbe's theory because the TCC can be precalculated before the image intensity integration. Since the diffraction amplitude in our model is described in frequency space, the model can easily be combined with the TCC formula. In this work, we derive a source position-dependent TCC (STCC) formula, starting from Abbe's theory, to reduce the image intensity integration time.

In Sec. 2, we explain the architecture of our CNN. In Sec. 3, we apply the weakly guiding approximation to the 3D waveguide model. In Sec. 4, we derive the STCC formula. Section 5 is the summary.

2.

CNN for Fast EUV Simulation

In this section, we explain the architecture of the CNN used for fast EUV lithography simulation.19–21 Figure 1 is a schematic view of the diffraction amplitudes A(l,m;l_s,m_s) from an EUV mask. We show here the vector potential A. In vacuum, the vector potential is converted to the electric field E by the following equation:19

Eq. (1)

$$\mathbf{E} = ik\mathbf{A} - \frac{i}{k}\,(\mathbf{k}\cdot\mathbf{A})\,\mathbf{k},$$
where $\mathbf{k}$ and $k$ represent the wave vector and the wave number, respectively. The diffraction amplitude A(l,m;l_s,m_s) is divided into the thin mask amplitude [the Fourier transform (FT) of the mask pattern] A_FT(l,m) and the M3D amplitude A^3D(l,m;l_s,m_s):

Eq. (2)

$$A(l,m;l_s,m_s) = A_{\mathrm{FT}}(l,m) + A^{\mathrm{3D}}(l,m;l_s,m_s).$$

Fig. 1

Schematic view of light diffraction by an EUV mask. Diffraction amplitudes depend on both the diffraction order and the source position.


The M3D amplitude for each diffraction order (l,m) depends smoothly on the source position (l_s,m_s), as shown in Fig. 2. We assume a periodic boundary condition with mask period L. When the mask pattern is periodic, the spatial frequency of the far-field diffraction amplitude takes discrete values (k_x,k_y) = (2π/L)(l,m). For convenience in the numerical calculation, we also discretize the source position as (s_x,s_y) = (2π/L)(l_s,m_s).

Fig. 2

Source position dependence of the M3D amplitude. The diffraction order of the amplitude is (l,m). The center of the overlapping area between the source and the pupil is (l_s,m_s) = (−l/2,−m/2).


We assume the maximum source size σ = 1. The source area σ > 1 corresponds to dark-field illumination, but this area is not used in lithography. The source position and the diffraction order are restricted by the source shape and the pupil shape as follows:

Eq. (3)

$$\sqrt{l_s^2 + m_s^2} \le \frac{\mathrm{NA}}{4}\,\frac{L}{\lambda},$$

Eq. (4)

$$\sqrt{(l+l_s)^2 + (m+m_s)^2} \le \frac{\mathrm{NA}}{4}\,\frac{L}{\lambda},$$
where NA = 0.33 is the numerical aperture of the projection optics and λ=13.5  nm is the wavelength. The magnification of the projection optics is 1/4.

Only the overlapping area between the pupil and the source can contribute to the image intensity. The center of the overlapping area is at (l_s,m_s) = (−l/2,−m/2), as shown in Fig. 2.
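To make the discrete geometry concrete, the following Python sketch enumerates the source grid points that satisfy Eqs. (3) and (4) for a given diffraction order. The parameter values and names are illustrative assumptions, not the authors' code.

```python
# Assumed illustration values: NA, wavelength [nm], and mask clip size L [nm]
# give the source/pupil radius R in units of 2*pi/L.
NA, WAVELENGTH, L = 0.33, 13.5, 512.0
R = NA / 4.0 * L / WAVELENGTH            # right-hand side of Eqs. (3) and (4)

def overlap_points(l, m):
    """Discrete source points (ls, ms) inside both the source disk, Eq. (3),
    and the pupil disk shifted by the diffraction order (l, m), Eq. (4)."""
    n = int(R) + 1
    points = []
    for ls in range(-n, n + 1):
        for ms in range(-n, n + 1):
            if ls**2 + ms**2 <= R**2 and (l + ls)**2 + (m + ms)**2 <= R**2:
                points.append((ls, ms))
    return points

# For order (l, m) = (4, 0), the overlap is centered near (ls, ms) = (-2, 0).
print(len(overlap_points(4, 0)), "grid points in the overlapping area")
```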

We approximate the M3D amplitude by a linear function of source position as follows:

Eq. (5)

$$A_x^{\mathrm{3D}}(l,m;l_s,m_s) \approx a_0(l,m) + a_x(l,m)\,(l_s + l/2) + a_y(l,m)\,(m_s + m/2),$$
where a_0 is the average of the amplitude in the overlapping area, and a_x and a_y are the slopes of the amplitude in the x and y directions, respectively. We call these three numbers the M3D parameters. In Eq. (5), A_x^3D(l,m;l_s,m_s) is expanded about the center of the overlapping area, (−l/2,−m/2). Therefore, inside the overlapping area, a_x(l,m)(l_s + l/2) + a_y(l,m)(m_s + m/2) is small. This improves the accuracy of the STCC formula in Sec. 4.

There is another reason for using Eq. (5). The M3D parameters are derived by least-squares fitting to the amplitudes at the grid points inside the overlapping area in Fig. 2. The larger (l,m) is, the smaller the number of grid points inside the overlapping area. If the number of grid points is too small, the overlapping area degenerates into a line or a single point. In that case, we approximate the amplitude in the area by a_0(l,m) alone, taken as the average of the amplitude, and do not use a_x(l,m) and a_y(l,m). Therefore, a_0(l,m) should represent the average of the amplitude in the overlapping area.
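A minimal sketch of the fitting step described above, assuming the M3D amplitudes have already been computed at the overlap grid points. The array names and the handling of the degenerate overlap are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_m3d_parameters(points, amplitudes, l, m):
    """Fit a0, ax, ay of Eq. (5) by least squares over the overlap grid.

    points     : list of (ls, ms) grid points inside the overlapping area
    amplitudes : complex M3D amplitudes A3D(l, m; ls, ms) at those points
    """
    pts = np.asarray(points, dtype=float)
    amp = np.asarray(amplitudes, dtype=complex)
    if len(pts) < 3:
        # Degenerate overlap (a line or a point): keep only the average a0.
        return amp.mean(), 0.0, 0.0
    # Design matrix for a0 + ax*(ls + l/2) + ay*(ms + m/2).
    X = np.column_stack([np.ones(len(pts)),
                         pts[:, 0] + l / 2.0,
                         pts[:, 1] + m / 2.0]).astype(complex)
    coeffs, *_ = np.linalg.lstsq(X, amp, rcond=None)
    a0, ax, ay = coeffs
    return a0, ax, ay
```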

The M3D parameters are determined by the mask pattern. Recently, CNNs have been widely used as a pattern recognition technique. In the previous works,19–21 we constructed a CNN that predicts the M3D parameters from an input mask pattern (Fig. 3).
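The CNN architecture itself is described in Refs. 19–21 and is not reproduced here. The following PyTorch-style sketch only illustrates the mapping involved, from a rasterized mask clip to a set of M3D parameters per diffraction order; the layer sizes, the number of orders, and the real/imaginary packing are placeholders, not the authors' design.

```python
import torch
import torch.nn as nn

# Placeholder sizes: a 128x128 rasterized mask clip in, and three complex
# M3D parameters (a0, ax, ay), packed as six real numbers, for each of
# N_ORDERS diffraction orders out.  All numbers are illustrative only.
N_ORDERS = 97

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 512), nn.ReLU(),
    nn.Linear(512, N_ORDERS * 6),        # 3 complex parameters -> 6 real numbers
)

mask = torch.rand(1, 1, 128, 128)        # one mask clip (batch, channel, H, W)
m3d_parameters = model(mask).reshape(1, N_ORDERS, 6)
```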

Fig. 3

CNN that connects a mask pattern and M3D parameters.


3.

Weakly Guiding Approximation of 3D Waveguide Model

3.1.

Mask Clip Size for Highly Coherent Illumination

In the previous works,20,21 we used a periodic mask pattern covering a 256 nm × 256 nm area on the wafer. We assumed that the area was clipped from larger mask data. We should not use the edges of the clipped area, to avoid the influence of the neighboring mask pattern. According to Ref. 23, the optical interaction range R_opt is given by the following equation:

Eq. (6)

$$R_{\mathrm{opt}} = 1.12\,\frac{\lambda}{\sigma\,\mathrm{NA}},$$
where λ,σ, and NA represent the wavelength, coherence factor, and numerical aperture of the scanner, respectively. The wavelength of EUV light is 13.5 nm and the numerical aperture of the current EUV scanner is 0.33. The coherence factor depends on the illumination setting.

Equation (6) can be used for conventional illumination. Here, we confirm the validity of the equation for highly coherent illumination such as dipole illumination. Figure 4 shows the pitch dependence of the 20 nm line CD for conventional and dipole illumination. We use a simple threshold model, fixing the threshold intensity value at the 40 nm pitch. The CD varies with the pattern pitch, but it becomes stable at larger pitches, where the line is isolated from the neighboring lines. The minimum pitch at which the CD becomes stable depends on the illumination and physically corresponds to the optical interaction range. From Fig. 4, the optical interaction range for σ = 0.5 is approximately 100 nm, and that for dipole illumination (outer σ = 0.7, inner σ = 0.3, open angle 90 deg) is approximately 150 nm. The value for σ = 0.5 is close to R_opt ≈ 90 nm from Eq. (6). In the case of the dipole illumination, the size of each monopole is about σ = 0.3. From Eq. (6), R_opt for σ = 0.3 is approximately 150 nm, the same as the value derived from Fig. 4.

Fig. 4

Pitch dependence of 20 nm line CD.


Figure 5 shows the usable mask area, excluding the area influenced by the neighboring mask pattern. The mask clip size L should be larger than 2 × R_opt to obtain a usable mask area. Therefore, when we use highly coherent illumination, the mask clip size L needs to be larger than 300 nm. In the previous works, we used L = 256 nm, which was not large enough for highly coherent illumination. In this work, we enlarge the mask clip size to 512 nm to obtain a usable mask area for highly coherent illumination. The usable area on the wafer is 200 nm × 200 nm.
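As a quick check of Eq. (6) and the clip-size rule above, a short sketch with the values used in this section:

```python
# Optical interaction range, Eq. (6), and the resulting minimum mask clip size.
def interaction_range(wavelength_nm, sigma, na):
    return 1.12 * wavelength_nm / (sigma * na)

for sigma in (0.5, 0.3):
    r_opt = interaction_range(13.5, sigma, 0.33)
    print(f"sigma = {sigma}: R_opt ~ {r_opt:.0f} nm, "
          f"minimum clip size ~ {2 * r_opt:.0f} nm")
# sigma = 0.5 -> ~92 nm (clip ~183 nm); sigma = 0.3 -> ~153 nm (clip ~305 nm)
```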

Fig. 5

Mask clip and usable area.


3.2.

Weakly Guiding Approximation

When we enlarge the mask clip size, the computation time of the EM simulation increases. We use the 3D waveguide model5 to solve Maxwell's equations. The calculation time for a 256 nm × 256 nm mask clip is 146 s, and that for a 512 nm × 512 nm mask clip is 2850 s using a Core i9-10940 central processing unit. The model slices an EUV mask into multiple layers, including the absorber layers and the Mo/Si reflective layers. Inside each layer, Maxwell's equations are reduced to Helmholtz-type coupled vector wave equations as follows:

Eq. (7)

$$\Delta A_x + k^2\varepsilon A_x - \frac{\partial \log\varepsilon}{\partial x}\left(\frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y}\right) = 0,$$

Eq. (8)

$$\Delta A_y + k^2\varepsilon A_y - \frac{\partial \log\varepsilon}{\partial y}\left(\frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y}\right) = 0,$$
where A_x and A_y are the x and y components of the vector potential, and ε is the complex dielectric constant of the absorber or reflective layers. Inside each layer, ε is uniform in the z direction. In this case, the gauge transformation freedom allows A_z to be fixed to zero.24 A_z is therefore omitted from the coupled vector wave equations, Eqs. (7) and (8). Inside the absorber layers, ε is a function of x and y because the absorber is patterned. Inside the reflective layers, ε is uniform in the x and y directions, so Eqs. (7) and (8) can be solved analytically.

The two variables A_x and A_y correspond to the two polarizations. Equation (1) indicates that the electric fields E of the A_x and A_y polarizations are almost parallel to the x and y axes, respectively, because k_x, k_y ≪ k near the optical axis. Figure 6 shows an example of diffraction amplitudes calculated by solving Eqs. (7) and (8) (for details, see Ref. 19). The result shows that the polarization change between the incident wave and the outgoing wave is very small. This is because the complex dielectric constant of the EUV absorber is close to one. A similar phenomenon is known as the “weakly guiding approximation” in optical fibers,25 where the two polarizations are decoupled.

Fig. 6

Polarization dependence of the diffraction amplitudes calculated by 3D waveguide model.


We apply the weakly guiding approximation to the 3D waveguide model and decompose the coupled vector wave equations. Each becomes a scalar wave equation as follows:

Eq. (9)

$$\Delta A_x + k^2\varepsilon A_x - \frac{\partial \log\varepsilon}{\partial x}\,\frac{\partial A_x}{\partial x} = 0,$$

Eq. (10)

$$\Delta A_y + k^2\varepsilon A_y - \frac{\partial \log\varepsilon}{\partial y}\,\frac{\partial A_y}{\partial y} = 0.$$

Each equation can be solved independently, and solving one takes 289 s for a 512 nm × 512 nm mask clip. Solving the two equations takes 578 s, about 1/5 of the time for the original 3D waveguide model.
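The speedup can be understood from the problem size: one coupled system in (A_x, A_y) is replaced by two independent systems of half the size. The following rough numpy sketch illustrates only this scaling argument, using dense eigendecompositions of random matrices; it is not the waveguide-mode computation itself.

```python
import time
import numpy as np

# Dense eigendecomposition cost scales roughly as N^3, so one coupled 2N x 2N
# problem costs about 8x a single N x N problem, while two decoupled N x N
# problems cost only about 2x.  Illustration of the scaling only.
rng = np.random.default_rng(0)
N = 400

t0 = time.perf_counter()
np.linalg.eig(rng.standard_normal((2 * N, 2 * N)))      # coupled problem
t_coupled = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(2):                                       # two independent polarizations
    np.linalg.eig(rng.standard_normal((N, N)))
t_decoupled = time.perf_counter() - t0

print(f"coupled: {t_coupled:.2f} s, decoupled: {t_decoupled:.2f} s, "
      f"ratio ~ {t_coupled / t_decoupled:.1f}")
```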

We confirm the accuracy of the weakly guiding approximation. Figure 7 compares the image intensities calculated using the 3D waveguide model and the weakly guiding approximation. We assume the conventional Ta absorber, whose complex refractive index is (n,k) = √ε = (0.9567, 0.0343).26 This is close to the complex refractive index of vacuum, (1, 0). The difference between the 3D waveguide model and the weakly guiding approximation is very small, <0.1%. Polarization changes due to the EUV mask are negligible. However, there is a small difference between the A_x and A_y polarizations. The difference is expected to become larger for high-NA scanners, where the incident angles are larger. The result here shows the polarization effect at the mask. We do not include the polarization effect at the exit pupil of the projection optics, which is significant in high-NA optics.

Fig. 7

Image intensities calculated by the 3D waveguide model and the weakly guiding approximation. The dipole illumination has σ_in/σ_out = 0.55/0.9 and open angle = 90 deg. We use a 60 nm thick Ta absorber.


Figure 8 shows the results when the low-n absorber TP1 of Ref. 27 is used. The complex refractive index of the TP1 absorber is (0.91, 0.032). As shown in Fig. 8, the difference between the 3D waveguide model and the weakly guiding approximation becomes larger, at most 1.4%. The accuracy of the weakly guiding approximation deteriorates when low-n absorbers are used. Note that low-n absorbers are still under development, and the mask process has not yet been established, as discussed in Ref. 27.

Fig. 8

Image intensities calculated by the 3D waveguide model and the weakly guiding approximation. We use a 45 nm thick TP1 absorber.


4.

STCC Formula

4.1.

Thin Mask Model and Thick Mask Model

According to Abbe's theory, the image intensity of the thin mask model, I_Thin, is calculated by the following equation:

Eq. (11)

$$I_{\mathrm{Thin}}(x) = \int S(s)\left|\int E_{\mathrm{FT}}(k)\,P(k+s)\,e^{ik\cdot x}\,dk\right|^{2} ds,$$
where S is the effective source and P is the pupil function of the projection optics. P is a matrix for high-NA optics,28 but we treat it here as a scalar function. The electric field of the thin mask, E_FT, is calculated from the vector potential of the thin mask, A_FT, using Eq. (1).

Hopkins' TCC formula22 is derived by interchanging the order of integration in Eq. (11):

Eq. (12)

$$I_{\mathrm{Thin}}(x) = \iint \mathrm{TCC}(k;k')\,E_{\mathrm{FT}}(k)\,E_{\mathrm{FT}}^{*}(k')\,e^{i(k-k')\cdot x}\,dk\,dk',$$
where

Eq. (13)

$$\mathrm{TCC}(k;k') = \int S(s)\,P(k+s)\,P^{*}(k'+s)\,ds.$$

TCC does not depend on the mask pattern. The calculation time of the image intensity is reduced by precomputing and tabulating TCC.
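A naive discrete sketch of Eqs. (12) and (13), using scalar amplitudes, a circular source of σ = 1, and a circular pupil on the grid of Sec. 2. The names, grid sizes, and the brute-force double sum are illustrative assumptions, not an optimized implementation.

```python
import numpy as np

# Illustrative discrete grids: source and pupil are disks of radius R in units
# of 2*pi/L (cf. Eqs. (3) and (4)).
L = 512.0                                  # mask clip size [nm]
R = 0.33 / 4.0 * L / 13.5                  # NA/4 * L/lambda
n = int(np.ceil(R))
grid = [(l, m) for l in range(-n, n + 1) for m in range(-n, n + 1)]
source = [s for s in grid if s[0] ** 2 + s[1] ** 2 <= R ** 2]

def pupil(k):
    """Scalar pupil function: 1 inside the NA disk, 0 outside."""
    return 1.0 if k[0] ** 2 + k[1] ** 2 <= R ** 2 else 0.0

def tcc(k, kp):
    """Eq. (13): TCC(k; k') = sum_s S(s) P(k+s) P*(k'+s), with S = 1 on the disk."""
    return sum(pupil((k[0] + s[0], k[1] + s[1])) * pupil((kp[0] + s[0], kp[1] + s[1]))
               for s in source)

# Precomputed once; Eq. (12) then needs only the mask spectrum E_FT.
TCC = {(k, kp): tcc(k, kp) for k in grid for kp in grid}

def intensity_thin(E_FT, x, y):
    """Eq. (12) at the point (x, y); E_FT maps each order (l, m) on `grid`
    to a (scalar) thin mask amplitude."""
    val = 0.0j
    for k, Ek in E_FT.items():
        for kp, Ekp in E_FT.items():
            phase = np.exp(2j * np.pi * ((k[0] - kp[0]) * x + (k[1] - kp[1]) * y) / L)
            val += TCC[k, kp] * Ek * np.conj(Ekp) * phase
    return val.real
```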

Abbe's theory remains valid for the thick mask model, and the image intensity I_Thick is calculated as follows:

Eq. (14)

$$I_{\mathrm{Thick}}(x) = \int S(s)\left|\int E(k;s)\,P(k+s)\,e^{ik\cdot x}\,dk\right|^{2} ds.$$
The electric field of the thick mask, E, is calculated from the vector potential of the thick mask, A, in Eq. (2). The electric field depends on the source position s. Therefore, we cannot interchange the order of the integrations in Eq. (14), and Hopkins' TCC formula cannot be applied to the thick mask model. Figure 9 compares the image intensities of the thick mask model and the thin mask model. We use the same mask pattern, illumination, and absorber as in Fig. 7. The maximum difference between the thick mask model and the thin mask model is large, 5.2%. The shadowing effect at the edges of the absorber is clearly seen in Fig. 9.
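For comparison, a direct evaluation of Abbe's formula, Eq. (14), must loop over the source points and cannot precompute anything across masks because the amplitude changes with s. A sketch reusing the grids of the previous block (scalar amplitudes again assumed):

```python
def intensity_thick_abbe(E_of, x, y):
    """Eq. (14): Abbe's theory for the thick mask model, reusing `grid`,
    `source`, `pupil`, L, and np from the previous sketch.

    E_of(k, s) returns the (scalar) thick mask amplitude for diffraction order
    k and source point s; the inner sum must be redone for every source point.
    """
    total = 0.0
    for s in source:
        field = 0.0j
        for k in grid:
            p = pupil((k[0] + s[0], k[1] + s[1]))
            if p:
                phase = np.exp(2j * np.pi * (k[0] * x + k[1] * y) / L)
                field += E_of(k, s) * p * phase
        total += abs(field) ** 2
    return total
```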

Fig. 9

Image intensities calculated by the thick mask model and the thin mask model.


4.2.

Linear Approximation of the Thick Mask Model

According to Eqs. (2) and (5), the vector potential of the thick mask model is approximated by a linear function of the source position. The electric field of the thick mask model is also approximated by a linear function of the source position as follows:

Eq. (15)

$$E(k;s) \approx E(k) + \frac{\partial E(k)}{\partial s_x}\,(s_x + k_x/2) + \frac{\partial E(k)}{\partial s_y}\,(s_y + k_y/2).$$

Inserting Eq. (15) into Eq. (14), we obtain the image intensity I_Linear of the linear approximation of the thick mask model as follows:

Eq. (16)

$$I_{\mathrm{Linear}}(x) = \int S(s)\left|\int \left(E(k) + \frac{\partial E(k)}{\partial s_x}(s_x + k_x/2) + \frac{\partial E(k)}{\partial s_y}(s_y + k_y/2)\right) P(k+s)\,e^{ik\cdot x}\,dk\right|^{2} ds.$$

Figure 10 compares the image intensities of the thick mask model and its linear approximation. The maximum difference is 1.4%, about 1/4 of the difference between the thick mask model and the thin mask model. The linear approximation is a good starting point for including the M3D effects, but higher-order terms might be needed if higher accuracy is required.

Fig. 10

Image intensities calculated by the thick mask model and its linear approximation.


4.3.

STCC Formula

The STCC formula, which includes the source position dependence of the diffraction amplitudes, is derived by interchanging the order of integration in Eq. (16) as follows:

Eq. (17)

$$\begin{aligned}
I_{\mathrm{STCC}}(x) ={}& \iint \mathrm{TCC}(k;k')\,E(k)\cdot E^{*}(k')\,e^{i(k-k')\cdot x}\,dk\,dk' \\
&+ 2\,\mathrm{Re}\left\{\iint \mathrm{TCC}(k;k')\,E(k)\cdot\left(\frac{\partial E(k')}{\partial s_x}\,\frac{k'_x}{2} + \frac{\partial E(k')}{\partial s_y}\,\frac{k'_y}{2}\right)^{*} e^{i(k-k')\cdot x}\,dk\,dk'\right\} \\
&+ 2\,\mathrm{Re}\left\{\iint \mathrm{TCC}_x(k;k')\,E(k)\cdot\left(\frac{\partial E(k')}{\partial s_x}\right)^{*} e^{i(k-k')\cdot x}\,dk\,dk'\right\} \\
&+ 2\,\mathrm{Re}\left\{\iint \mathrm{TCC}_y(k;k')\,E(k)\cdot\left(\frac{\partial E(k')}{\partial s_y}\right)^{*} e^{i(k-k')\cdot x}\,dk\,dk'\right\},
\end{aligned}$$
where TCCx and TCCy are defined by the following equations:

Eq. (18)

$$\mathrm{TCC}_x(k;k') = \int s_x\,S(s)\,P(k+s)\,P^{*}(k'+s)\,ds,$$

Eq. (19)

$$\mathrm{TCC}_y(k;k') = \int s_y\,S(s)\,P(k+s)\,P^{*}(k'+s)\,ds.$$

In Eq. (17), we ignore the term that is second order in ∂E(k)/∂s_x (s_x + k_x/2) + ∂E(k)/∂s_y (s_y + k_y/2) because its contribution is small inside the overlapping area of the source and the pupil, as discussed in Sec. 2.
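Putting Eqs. (17)-(19) together, a direct, unoptimized sketch of the STCC evaluation that reuses the `grid`, `source`, `pupil`, `L`, and `np` of the earlier sketches. Here `E0`, `Ex`, and `Ey` are assumed dictionaries mapping each order to E(k), ∂E(k)/∂s_x, and ∂E(k)/∂s_y (scalar amplitudes, slopes per source grid unit).

```python
def tcc_weighted(k, kp, weight):
    """Eqs. (13), (18), and (19): source-weighted TCCs, with
    weight(s) = 1, s_x, or s_y under the source sum."""
    return sum(weight(s) * pupil((k[0] + s[0], k[1] + s[1]))
                         * pupil((kp[0] + s[0], kp[1] + s[1])) for s in source)

TCC0 = {(k, kp): tcc_weighted(k, kp, lambda s: 1.0) for k in grid for kp in grid}
TCCx = {(k, kp): tcc_weighted(k, kp, lambda s: float(s[0])) for k in grid for kp in grid}
TCCy = {(k, kp): tcc_weighted(k, kp, lambda s: float(s[1])) for k in grid for kp in grid}

def intensity_stcc(E0, Ex, Ey, x, y):
    """Eq. (17): the four double sums are accumulated separately and the
    2 Re{...} is taken at the end; the second-order slope term is dropped."""
    t1 = t2 = t3 = t4 = 0.0j
    for k in grid:
        for kp in grid:
            phase = np.exp(2j * np.pi * ((k[0] - kp[0]) * x + (k[1] - kp[1]) * y) / L)
            slope = np.conj(Ex[kp] * kp[0] / 2.0 + Ey[kp] * kp[1] / 2.0)
            t1 += TCC0[k, kp] * E0[k] * np.conj(E0[kp]) * phase
            t2 += TCC0[k, kp] * E0[k] * slope * phase
            t3 += TCCx[k, kp] * E0[k] * np.conj(Ex[kp]) * phase
            t4 += TCCy[k, kp] * E0[k] * np.conj(Ey[kp]) * phase
    return t1.real + 2.0 * (t2.real + t3.real + t4.real)
```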

The sum of coherent systems (SOCS) model29 is conventionally used in optical lithography simulations to speed up the image intensity integration. The SOCS model decomposes the TCC into eigenfunctions and sums only a small number of eigenmodes to calculate the image intensity. The SOCS model can also be applied to TCC_x and TCC_y because they are Hermitian matrices. The three TCCs are then written as

Eq. (20)

$$\mathrm{TCC}(k;k') = \sum_n \alpha_n\,\varphi_n(k)\,\varphi_n^{*}(k'),$$

Eq. (21)

$$\mathrm{TCC}_x(k;k') = \sum_n \beta_n\,\phi_n(k)\,\phi_n^{*}(k'),$$

Eq. (22)

$$\mathrm{TCC}_y(k;k') = \sum_n \gamma_n\,\psi_n(k)\,\psi_n^{*}(k'),$$
where α_n, β_n, and γ_n are eigenvalues and φ_n, ϕ_n, and ψ_n are eigenfunctions. The eigenvalues are real numbers.
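A sketch of the SOCS truncation for one of the TCC matrices in Eqs. (20)-(22): the matrices are Hermitian, so `numpy.linalg.eigh` applies, and only the eigenmodes with the largest eigenvalue magnitude are kept. The mode counts in the commented example follow Sec. 4.3; the assembly of the matrices themselves is assumed.

```python
import numpy as np

def socs_decompose(tcc_matrix, n_modes):
    """Truncated eigendecomposition of a Hermitian TCC matrix, Eqs. (20)-(22).

    Returns the n_modes eigenvalues of largest magnitude (they are real) and
    the corresponding eigenvectors as columns, so that
    tcc_matrix ~ sum_n alpha_n * phi_n * phi_n^H."""
    eigvals, eigvecs = np.linalg.eigh(tcc_matrix)
    keep = np.argsort(np.abs(eigvals))[::-1][:n_modes]
    return eigvals[keep], eigvecs[:, keep]

# Illustrative mode counts from Sec. 4.3 (matrices assumed already assembled):
# alphas, phis = socs_decompose(TCC_matrix, 100)
# betas,  chis = socs_decompose(TCCx_matrix, 20)
# gammas, psis = socs_decompose(TCCy_matrix, 20)
```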

Figure 11 compares the image intensities calculated by the linear approximation of the thick mask model using Abbe's theory and using the STCC formula. For the STCC formula, we use the SOCS model with 100 eigenmodes for TCC and 20 eigenmodes for TCC_x and TCC_y. The difference between the image intensities calculated by the two formulas is very small, <0.3%.

Fig. 11

Image intensities calculated by the linear approximation of the thick mask model (Abbe’s theory) and STCC formula.


The STCC formula reduces the computation time. The image intensity integration by Abbe's theory takes 10 s for 512 × 512 points, whereas the computation by the STCC formula takes only 0.07 s, excluding the time for the eigenvalue decomposition in the SOCS model.

5.

Summary

Figure 12 shows the estimated runtime for CNN data preparation, training, and prediction. The weakly guiding approximation reduces the EM simulation time by a factor of 5, from 50 to 10 min. The STCC formula reduces the image intensity integration time by a factor of 140, from 10 to 0.07 s. The total time of the image intensity prediction for a 512 nm × 512 nm area on the wafer (usable area: 200 nm × 200 nm) is approximately 0.1 s.

Fig. 12

Run time of CNN data preparation, training, and prediction for 512  nm×512  nm area.


In this work, we accelerated CNN-based EUV lithography simulation. A major remaining issue is the accuracy of the CNN.21 The accuracy depends on the quality and quantity of the training data. We hope that large-scale training mask data will improve the accuracy of the CNN. This work is based on the prior SPIE proceedings paper.30

Code and Data Availability

The data supporting the findings of this study are available within the article.

References

1. V. Philipsen, “Mask is key to unlock full EUV potential,” Proc. SPIE 11609, 1160904 (2021). https://doi.org/10.1117/12.2584583

2. A. Erdmann et al., “3D mask effects in high NA EUV imaging,” Proc. SPIE 10957, 109570Z (2019). https://doi.org/10.1117/12.2515678

3. A. Wong, “TEMPEST users’ guide” (1994).

4. M. G. Moharam and T. K. Gaylord, “Rigorous coupled-wave analysis of planar-grating diffraction,” J. Opt. Soc. Am. 71, 811 (1981). https://doi.org/10.1364/JOSA.71.000811

5. K. D. Lucas, H. Tanabe, and A. J. Strojwas, “Efficient and rigorous three-dimensional model for optical lithography simulation,” J. Opt. Soc. Am. A 13, 2187 (1996). https://doi.org/10.1364/JOSAA.13.002187

6. K. Adam and A. R. Neureuther, “Simplified models for edge transitions in rigorous mask modeling,” Proc. SPIE 4346, 331–344 (2001). https://doi.org/10.1117/12.435733

7. A. Erdmann et al., “Efficient simulation of light from 3-dimensional EUV-masks using field decomposition techniques,” Proc. SPIE 5037, 482–493 (2003). https://doi.org/10.1117/12.482744

8. P. Liu et al., “Fast and accurate 3D mask model for full-chip OPC and verification,” Proc. SPIE 6520, 65200R (2007). https://doi.org/10.1117/12.712171

9. P. Liu et al., “Fast 3D thick mask model for full-chip EUVL simulations,” Proc. SPIE 8679, 86790W (2013). https://doi.org/10.1117/12.2010818

10. J. Word et al., “OPC modeling and correction solutions for EUV lithography,” Proc. SPIE 8166, 81660Q (2011). https://doi.org/10.1117/12.899591

11. V. Domnenko et al., “EUV computational lithography using accelerated topographic mask simulation,” Proc. SPIE 10962, 109620O (2019). https://doi.org/10.1117/12.2515668

12. L. Zschiedrich et al., “Rigorous finite-element domain decomposition method for electromagnetic near field simulations,” Proc. SPIE 6924, 692450 (2008). https://doi.org/10.1117/12.771989

13. S. Lan et al., “Deep learning assisted fast mask optimization,” Proc. SPIE 10587, 105870H (2018). https://doi.org/10.1117/12.2297514

14. P. Liu, “Mask synthesis using machine learning software and hardware platforms,” Proc. SPIE 11327, 1132707 (2020). https://doi.org/10.1117/12.2551816

15. R. Pearman et al., “Fast all-angle mask 3D ILT patterning,” Proc. SPIE 11327, 113270F (2020). https://doi.org/10.1117/12.2554856

16. J. Lin et al., “Fast mask near-field calculation using fully convolution network,” in Int. Workshop Adv. Patterning Solut. (2020).

17. W. Ye et al., “TEMPO: fast mask topography effect modeling with deep learning,” in Int. Symp. Phys. Design ’20, 127 (2020).

18. A. Awad et al., “Accurate prediction of EUV lithographic images and 3D mask effects using generative networks,” J. Micro/Nanopatterning Mater. Metrol. 20, 043201 (2021). https://doi.org/10.1117/1.JMM.20.4.043201

19. H. Tanabe, S. Sato, and A. Takahashi, “Fast EUV lithography simulation using convolutional neural network,” J. Micro/Nanopatterning Mater. Metrol. 20, 041202 (2021). https://doi.org/10.1117/1.JMM.20.4.041202

20. H. Tanabe and A. Takahashi, “Data augmentation in extreme ultraviolet lithography simulation using convolutional neural network,” J. Micro/Nanopatterning Mater. Metrol. 21, 041602 (2022). https://doi.org/10.1117/1.JMM.21.4.041602

21. H. Tanabe, A. Jinguji, and A. Takahashi, “Evaluation of convolutional neural network for fast extreme ultraviolet lithography simulation using 3 nm node mask patterns,” J. Micro/Nanopatterning Mater. Metrol. 22, 024201 (2023). https://doi.org/10.1117/1.JMM.22.2.024201

22. M. Born and E. Wolf, Principles of Optics, 7th ed., Cambridge University Press (1999).

23. A. Wong, Resolution Enhancement Techniques in Optical Lithography, SPIE Press, Bellingham, Washington (2001).

24. H. Tanabe, “Modeling of optical images in resist by vector potentials,” Proc. SPIE 1674, 637 (1992). https://doi.org/10.1117/12.130360

25. D. Gloge, “Weakly guiding fibers,” Appl. Opt. 10, 2252 (1971). https://doi.org/10.1364/AO.10.002252

26. E. Gullikson, “CXRO X-ray database,” https://henke.lbl.gov/optical_constants/

27. S. Lin et al., “EUV APSM mask prospects and challenges,” Proc. SPIE 12751, 127510N (2023). https://doi.org/10.1117/12.2688111

28. M. Yeung, “Modeling high numerical aperture optical lithography,” Proc. SPIE 0922, 149 (1988). https://doi.org/10.1117/12.968409

29. N. B. Cobb, “Fast optical and process proximity correction algorithms for integrated circuit manufacturing” (1998).

30. H. Tanabe, A. Jinguji, and A. Takahashi, “Accelerating EUV lithography simulation with weakly guiding approximation and STCC formula,” Proc. SPIE 12750, 127500D (2023). https://doi.org/10.1117/12.2688029

Biography

Hiroyoshi Tanabe is a researcher at Tokyo Institute of Technology. He received his PhD in physics from the University of Tokyo in 1986. He has more than 30 years of experience in optical and EUV lithography. He is the author of more than 30 papers. He was the program committee chair of Photomask Japan in 2003 and 2004. His current research interests include EUV masks and lithography simulation. He is a member of SPIE.

Akira Jinguji is an assistant professor at Tokyo Institute of Technology. He received his PhD in computer architecture from Tokyo Institute of Technology in 2022. His research focuses on high-performance computers with dedicated architectures. In particular, he designs high-speed digital circuits for deep learning computation. He is a member of IEEE and IEICE.

Atsushi Takahashi received his BE, ME, and DE degrees in electrical and electronic engineering from Tokyo Institute of Technology, Tokyo, Japan, in 1989, 1991, and 1996, respectively. He is currently a professor in the Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology. His research interests are in VLSI layout design and combinatorial algorithms. He is a fellow of IEICE, a senior member of IEEE and IPSJ, and a member of ACM.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Hiroyoshi Tanabe, Akira Jinguji, and Atsushi Takahashi "Accelerating extreme ultraviolet lithography simulation with weakly guiding approximation and source position dependent transmission cross coefficient formula," Journal of Micro/Nanopatterning, Materials, and Metrology 23(1), 014201 (2 January 2024). https://doi.org/10.1117/1.JMM.23.1.014201
Received: 28 September 2023; Accepted: 14 December 2023; Published: 2 January 2024
KEYWORDS
3D modeling

3D mask effects

Diffraction

Extreme ultraviolet lithography

Light sources and illumination

Waveguide modes

Waveguides
