Open Access | 15 June 2023

Evaluation of convolutional neural network for fast extreme ultraviolet lithography simulation using imec 3 nm node mask patterns
Abstract

Background

Mask 3D (M3D) effects distort diffraction amplitudes from extreme ultraviolet masks. In our previous work, we developed a convolutional neural network (CNN) that very quickly predicted the distorted diffraction amplitudes from input mask patterns. The mask patterns were restricted to Manhattan patterns.

Aim

We verify the potential and the limitations of the CNN using imec 3 nm node (iN3) mask patterns.

Approach

We apply the same CNN architecture as in the previous work to mask patterns that mimic iN3 logic metal or via layers. In addition, to study more general mask patterns, we apply the architecture to iN3 metal/via patterns with optical proximity correction (OPC) and to curvilinear via patterns. In total, we train five different CNNs: metal patterns with and without OPC, via patterns with and without OPC, and curvilinear via patterns. After the training, we validate each CNN using validation data with the above five different characteristics.

Results

When the training and validation data have the same characteristics, the validation loss becomes very small. Our CNN architecture is flexible enough to be applied to iN3 metal and via layers, and it has the capability to recognize curvilinear mask patterns. On the other hand, using training and validation data with different characteristics leads to a large validation loss. The selection of training data is very important for obtaining high accuracy. We also examine the impact of M3D effects on iN3 metal layers. A large difference is observed between the tip-to-tip (T2T) critical dimensions calculated by the thin mask model and the thick mask model. This is due to the mask shadowing effect at T2T slits.

Conclusions

The selection of training data is very important for obtaining high accuracy. Our test results suggest that layer-specific CNNs could be constructed, but further development of the CNN architecture may be required.

1. Introduction

High-aspect-ratio absorbers used in extreme ultraviolet (EUV) masks induce several mask 3D (M3D) effects, such as critical dimension (CD) error and edge placement error (EPE).1,2 It is necessary to include the M3D effects in EUV lithography simulation. M3D effects can be calculated rigorously using electromagnetic (EM) simulators.3–8 However, these simulators are highly time consuming, especially for optical proximity correction (OPC) applications.

There are two types of EM simulation methods. The finite-difference time-domain (FDTD) method solves Maxwell’s equations in coordinate space, and the solution is the near-field diffraction amplitudes.3–5 The calculation times reported in the literature are 322 s for 500 nm × 500 nm (Ref. 4) and 3.64 s for 1.5 μm × 1.5 μm (Ref. 5). Because the FDTD method calculates near-field diffraction amplitudes in coordinate space, the calculation needs to be repeated for all source points, which is typically more than 100 points.

Rigorous coupled-wave analysis6 and the 3D waveguide model7 solve Maxwell’s equations in momentum (or frequency) space, and the solution is the far-field diffraction amplitudes. These models solve coupled wave equations in momentum space, and all relations between the incident momentum and the outgoing momentum are calculated simultaneously. Therefore, these models do not need to repeat the calculation for different source points. The computation time is 122 s for 256 nm × 256 nm (Ref. 9).

To speed up the EM simulations, several models that decomposed a 2D mask pattern into 1D patterns were proposed.10–12 In these models, the EM field of a 2D mask pattern was approximately calculated by superposing the EM fields of 1D patterns. These models are currently used in many EUV lithography simulators.13–15 An implicit assumption of the pattern decomposition method in these models is that the mask pattern is large and isolated. However, OPC masks are decorated with small patterns [serifs and assist features (AFs)], and the pattern densities are high. Also, advanced OPC mask patterns are curvilinear. It is not clear whether the pattern decomposition method can be applied to OPC masks.

Recently, many attempts have been made to simulate the M3D effects using deep neural networks, such as the convolutional neural network (CNN) or the generative adversarial network. They are classified into three types depending on the target: the near-field amplitude on the mask,16–19 the far-field diffraction amplitude at the pupil,20 and the image intensity on the wafer.21,22 In our model,20 a CNN is used to predict the far-field diffraction amplitude from the input mask pattern. Although the training of the CNN takes a very long time (more than 1 day), the prediction time is very short: 0.05 s for 256 nm × 256 nm (Ref. 9).

Our model is a natural extension of the optical simulation, in which the far-field diffraction amplitude [Fourier transformation (FT) of a mask pattern] is used to calculate the image intensity. As shown in Ref. 20, because our model is described in frequency space, it can easily incorporate the transmission cross coefficient method23 and the sum of coherent systems model24 conventionally used in optical simulations to speed up the image intensity integration.

In this work, we apply the CNN architecture developed in the previous work to mask patterns that mimic imec 3 nm node (iN3) logic metal or via layers.25,26 In addition, to study more general mask patterns, we train the CNN using iN3 metal/via patterns with OPC or curvilinear via patterns. In total, we develop five different CNNs using different mask patterns for the training data. We examine the potential and the limitation of these CNNs using five different mask patterns for the validation data.

As mentioned at the beginning of this section, M3D effects of EUV masks have a large influence on CD and EPE. The motivation of this work is to include M3D effects in EUV lithography simulation. We find that the iN3 metal layer is a good example to show large M3D effects. We verify the accuracy of CNN by calculating the image intensity of iN3 metal layer mask patterns.

In Sec. 2, we explain our model to calculate the diffraction amplitudes from EUV masks. In Sec. 3, we explain the architecture of our CNN. In Sec. 4, we examine the potential and limitation of CNN using iN3 mask patterns. In Sec. 5, we study M3D effects on the iN3 metal layer. Section 6 is the summary.

2. Diffraction Amplitudes from an EUV Mask

In optical lithography simulation, a thin mask model is conventionally used, and the diffraction amplitude is the FT of a mask pattern. However, an EUV absorber is thick, and the diffraction amplitudes from an EUV mask need to be calculated by rigorous EM simulations. Figure 1 shows the schematic view of the diffraction amplitudes A(l,m; l_s,m_s) from an EUV mask. We show here the vector potential A. Inside the vacuum, the vector potential is converted to the electric field E by20

Eq. (1)

$$\mathbf{E} = ik\mathbf{A} - \frac{i}{k}\,(\mathbf{k}\cdot\mathbf{A})\,\mathbf{k},$$
where k represents the wave vector. It can be easily shown that the electric field is perpendicular to the wave vector:

Eq. (2)

$$\mathbf{E}\cdot\mathbf{k} = 0.$$
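This check is not spelled out in the text, but it follows in one line from the reconstructed form of Eq. (1) above:

$$\mathbf{E}\cdot\mathbf{k} = ik\,(\mathbf{A}\cdot\mathbf{k}) - \frac{i}{k}\,(\mathbf{k}\cdot\mathbf{A})\,(\mathbf{k}\cdot\mathbf{k}) = ik\,(\mathbf{A}\cdot\mathbf{k}) - ik\,(\mathbf{k}\cdot\mathbf{A}) = 0,$$

since k·k = k² in vacuum.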

Fig. 1

Schematic view of light diffraction by an EUV mask. Diffraction amplitudes depend on both the diffraction order and the source position.


The electric field depends on both the diffraction order (l,m) and the source position (l_s,m_s). This is the basic difference from the thin mask model, in which the diffraction amplitude (the FT of a mask pattern) depends only on the diffraction order (l,m). The image intensity I on the wafer is calculated by Abbe’s theory as

Eq. (3)

$$I(x,y) = \sum_{l_s,m_s} S(l_s,m_s)\,\left|\sum_{l,m} E(l,m;l_s,m_s)\,P(l+l_s,m+m_s)\,e^{ik(lx+my)}\right|^2,$$
where S and P are the effective source and the pupil function, respectively.
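To make the bookkeeping in Eq. (3) concrete, the following sketch evaluates the Abbe summation at one wafer point. It is an illustration only; the dictionary layouts, the function name, and the unit conventions are our assumptions, not the authors' implementation.

```python
import numpy as np

def abbe_intensity(E, S, P, orders, sources, k, x, y):
    """Sketch of Eq. (3): Abbe source-point summation at one wafer point.

    E       : dict {(l, m, ls, ms): complex diffraction amplitude}
    S       : dict {(ls, ms): effective source weight}
    P       : dict {(l, m): pupil transmission (0 outside the pupil)}
    orders  : iterable of diffraction orders (l, m)
    sources : iterable of source points (ls, ms)
    x, y    : wafer coordinates (units consistent with 1/k)
    """
    I = 0.0
    for (ls, ms) in sources:
        field = 0.0 + 0.0j
        for (l, m) in orders:
            pupil = P.get((l + ls, m + ms), 0.0)   # shifted pupil sampling
            field += E[(l, m, ls, ms)] * pupil * np.exp(1j * k * (l * x + m * y))
        I += S[(ls, ms)] * abs(field) ** 2          # incoherent sum over source points
    return I
```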

We use the 3D waveguide model7 to calculate the diffraction amplitude A from an EUV mask. The model solves the following coupled wave equations for A_x and A_y:

Eq. (4)

$$\Delta A_x + k^2\epsilon A_x - \frac{\partial \log\epsilon}{\partial x}\left(\frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y}\right) = 0,$$

Eq. (5)

$$\Delta A_y + k^2\epsilon A_y - \frac{\partial \log\epsilon}{\partial y}\left(\frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y}\right) = 0,$$
where ϵ is the complex dielectric constant of the patterned absorber. Gauge transformation freedom allows A_z to be fixed at zero.8

The two variables A_x and A_y correspond to the two polarizations. Equation (1) indicates that the electric fields E of the A_x and A_y polarizations are almost parallel to the x and y axes because k_x, k_y ≪ k near the optical axis.20 Figure 2 is an example of diffraction amplitudes calculated by solving Eqs. (4) and (5) (for the details, see Ref. 20). The result shows that the polarization change between the incident wave and the outgoing wave is very small. This is because the complex dielectric constant of the EUV absorber is close to 1. A similar phenomenon is known as the “weakly guiding approximation” in optical fibers,27 in which the two polarizations are decoupled. We therefore focus on the diffraction amplitudes for which both the incident and outgoing waves have A_x polarization.

Fig. 2

Polarization dependence of the diffraction amplitude. The polarization change between the incident wave and the outgoing wave is very small.


The diffraction amplitude A_x(l,m; l_s,m_s) is divided into the thin mask amplitude A_x^FT(l,m) and the M3D amplitude A_x^3D(l,m; l_s,m_s) as

Eq. (6)

$$A_x(l,m;l_s,m_s) = A_x^{FT}(l,m) + A_x^{3D}(l,m;l_s,m_s).$$

The thin mask amplitude A_x^FT is calculated by the FT of the mask pattern using the reflection coefficients of the absorber and the multilayer. It only depends on the diffraction order (l,m). The M3D amplitude A_x^3D is defined as the difference between the thick mask amplitude A_x and the thin mask amplitude A_x^FT. The M3D amplitude depends on the source position (l_s,m_s), which causes incident-angle-dependent M3D effects.
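A minimal numerical sketch of this decomposition is given below, assuming the thick mask amplitudes have already been computed by the EM solver. The reflectance-map construction, the FFT normalization, and the argument names are illustrative assumptions; the actual thin mask coefficients come from the absorber and multilayer reflection coefficients mentioned above.

```python
import numpy as np

def thin_mask_amplitude(mask, r_multilayer, r_absorber):
    """Sketch of the thin mask amplitude A_x^FT: the FT of a reflectance map.

    mask         : 2D array, 1 where the absorber is present, 0 in open areas
    r_multilayer : complex reflection coefficient of the bare multilayer (assumed input)
    r_absorber   : complex reflection coefficient of the absorber stack (assumed input)
    """
    reflectance = r_multilayer * (1 - mask) + r_absorber * mask
    # Diffraction orders (l, m) correspond to the discrete spatial frequencies of the FFT.
    return np.fft.fftshift(np.fft.fft2(reflectance)) / mask.size

def m3d_amplitude(A_thick, A_thin):
    """M3D amplitude defined as the thick-minus-thin residual, i.e., Eq. (6) rearranged."""
    return A_thick - A_thin
```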

As shown in Fig. 3, the contribution of the thin mask amplitude is dominant. The amplitude does not depend on the source position. The contribution of the M3D amplitude is small but not negligible.

Fig. 3

Decomposition of the diffraction amplitude. Both the thick mask amplitude and M3D amplitude depend on the source position (ls,ms), but the thin mask amplitude does not.


3. CNN Architecture

The M3D amplitude changes gradually with the source position (l_s,m_s). We parametrize the M3D amplitude at each diffraction order (l,m) as a linear function of the source position (l_s,m_s) (Fig. 4) as

Eq. (7)

$$A_x^{3D}(l,m;l_s,m_s) \approx a_0(l,m) + a_x(l,m)\left(l_s + \frac{l}{2}\right) + a_y(l,m)\left(m_s + \frac{m}{2}\right),$$
where a_0 is the average of the amplitude and a_x and a_y are the slopes of the amplitude in the x and y directions, respectively. We call these three numbers the M3D parameters. In Fig. 4, we consider the area where the maximum effective source (σ = 1) and the projection pupil overlap. Only this area can contribute to the image intensity. The center of the overlapping area between the source and the pupil is (l_s,m_s) = (−l/2, −m/2).
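The paper states only the linear parametrization itself; how the three parameters are extracted per diffraction order is not specified. A simple least-squares fit over the sampled source points, as sketched below, is one natural way to obtain them (the function name and the fitting method are our assumptions).

```python
import numpy as np

def fit_m3d_parameters(ls, ms, A3d, l, m):
    """Least-squares fit of Eq. (7) for one diffraction order (l, m).

    ls, ms : 1D arrays of source coordinates inside the source/pupil overlap
    A3d    : complex M3D amplitudes A_x^3D(l, m; ls, ms) at those source points
    Returns (a0, ax, ay) for this diffraction order.
    """
    # Design matrix for a0 + ax*(ls + l/2) + ay*(ms + m/2)
    X = np.column_stack([np.ones_like(ls), ls + l / 2.0, ms + m / 2.0])
    coef, *_ = np.linalg.lstsq(X, A3d, rcond=None)
    a0, ax, ay = coef
    return a0, ax, ay
```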

Fig. 4

Source position dependence of the M3D amplitude. The diffraction order of the amplitude is (l,m). The center of the overlapping area between the source (σ = 1) and the pupil is (l_s,m_s) = (−l/2, −m/2).


The M3D parameters are determined by the mask pattern and the absorber. In this work, the absorber is assumed to be Ta with a 60 nm thickness. The input mask pattern covers a 1024 nm × 1024 nm area. We construct a set of CNNs to predict the M3D parameters from the input mask pattern. Figure 5 shows the architecture of our CNNs. Six independent CNNs, CNN1–6, are used for the real and imaginary parts of the three M3D parameters, a_0, a_x, and a_y. The 1024 × 1024 binary data are averaged to 256 × 256 float data before being input to the CNNs. We repeat convolution/max pooling/batch normalization five times. The number of free fitting parameters for each CNN is 69.6 M. The six CNNs are merged into one model after the training.
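For a concrete picture of one branch, the sketch below builds a Keras model with the stated 256 × 256 input and the five-fold convolution/max pooling/batch normalization repetition. The filter counts, kernel size, activations, and dense head are placeholders chosen only for illustration; the paper does not disclose them, so this sketch will not reproduce the reported 69.6 M parameter count.

```python
import tensorflow as tf

def build_m3d_cnn(n_orders):
    """Sketch of one of the six CNN branches, e.g., Real(a0).

    n_orders : number of diffraction orders predicted by this branch (assumed).
    """
    inp = tf.keras.Input(shape=(256, 256, 1))       # averaged (downsampled) mask pattern
    x = inp
    for filters in (16, 32, 64, 128, 256):           # five conv / pool / batch-norm blocks
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling2D(2)(x)
        x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(1024, activation="relu")(x)
    out = tf.keras.layers.Dense(n_orders)(x)          # one value per diffraction order
    return tf.keras.Model(inp, out)
```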

Fig. 5

Architecture of our CNN. Six independent CNNs, CNN1–6, are used for the real and imaginary parts of the three M3D parameters, a_0, a_x, and a_y. The six CNNs are merged into one model after the training.


One of the issues with CNNs is that they require a huge amount of training data, and preparing such data takes a long time. We applied a data augmentation technique to circumvent this issue.9 Assuming periodic boundary conditions, the diffraction amplitude of a shifted mask pattern can be obtained simply by multiplying the diffraction amplitude of the original mask pattern by a phase factor corresponding to the shift. In this way, we do not need to repeat the time-consuming EM simulation for each shifted mask pattern.
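A sketch of this shift trick is shown below for a 1024 nm period. The dictionary layout and the sign convention of the phase factor depend on the FT definition and are assumptions here; only the multiplicative phase-factor idea itself is taken from the text.

```python
import numpy as np

def shift_diffraction_amplitude(A, orders, dx, dy, period=1024.0):
    """Diffraction amplitudes of a mask shifted by (dx, dy) under periodic boundaries.

    A      : dict {(l, m): complex amplitude} of the original pattern
    orders : iterable of diffraction orders (l, m)
    dx, dy : shift in the same length unit as `period` (nm here)
    """
    return {
        (l, m): A[(l, m)] * np.exp(-2j * np.pi * (l * dx + m * dy) / period)
        for (l, m) in orders
    }
```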

Figures 6(a) and 6(b) show the loss functions of the training and validation data for Real(a_0) without and with data augmentation, respectively. We use the Manhattan mask patterns of Ref. 9. The number of original patterns for training is 2000, and the number of patterns for validation is 1000. With data augmentation, the original data are shifted in 103 nm increments in both the X and Y directions. Therefore, the number of training data after the data augmentation is multiplied by 100, to 200,000.

Fig. 6

Training and validation loss for Real(a0) (a) without and (b) with data augmentation.


Without data augmentation, the training loss decreases during the training, but the validation loss does not. This is a typical overfitting phenomenon. With data augmentation, both the training loss and the validation loss decrease during the training.

4. Potential and Limitation of CNN

In the previous work,9 the input mask patterns were Manhattan patterns, as shown in Fig. 5. In general, the accuracy of a neural network depends on its training data. A CNN trained on Manhattan patterns cannot be applied to general mask patterns. However, our CNN architecture contains 70 M parameters, and the architecture itself could be applicable to general mask patterns. In this work, we apply the same CNN architecture to mask patterns that mimic iN3 logic metal or via layers. The design rules of the iN3 metal and via layers are as follows.25,26 The minimum pitch of the metal 1 layer is 28 nm, and the minimum tip-to-tip (T2T) CD is 20 nm. The via layer has different pitches and CDs in the X and Y directions: pitch X/Y is 42/36 nm, and CD X/Y is 26/18 nm.

Figure 7(a) shows the metal pattern and the result of the CNN training. We show here the result for CNN1 in Fig. 5, but similar results are obtained for CNN2–6. We use 200,000 random mask patterns and the corresponding M3D parameters as the training dataset. Both the training loss and the validation loss decrease rapidly as the training proceeds. The maximum value of the data for each diffraction order is normalized to 1 before training. The loss is calculated by averaging the mean square errors over all diffraction orders. The validation loss after 100 epochs is 0.0006. This is a very small value, and we expect the difference between the image intensity predicted by the CNN and that of the EM simulation to be small. We confirm the accuracy of the CD calculated by the CNN in Sec. 5.
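As a concrete reading of this loss definition, the snippet below normalizes each diffraction order by its maximum value and then averages the squared errors over all orders and samples. The array layout (samples × diffraction orders) and the handling of all-zero orders are our assumptions.

```python
import numpy as np

def normalized_mse_loss(pred, target):
    """Per-order max normalization followed by MSE averaged over all orders.

    pred, target : real arrays of shape (n_samples, n_orders), e.g., Real(a0) values.
    """
    scale = np.abs(target).max(axis=0, keepdims=True)  # max per diffraction order
    scale = np.where(scale == 0, 1.0, scale)           # guard against empty orders
    err = (pred - target) / scale
    return float(np.mean(err ** 2))
```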

Fig. 7

Input mask pattern and CNN training result: (a) metal pattern and (b) metal pattern with AF and HH.


To study more general mask patterns, rule-based OPC is applied to the iN3 metal patterns. Figure 7(b) shows the metal pattern with AFs (5 nm) and hammer heads (HH, 3 nm). Both the training loss and the validation loss decrease, but slightly more slowly than in Fig. 7(a). The training speed depends on the complexity of the mask patterns. The training loss at 10 epochs is 0.0027 for Fig. 7(a) and 0.0036 for Fig. 7(b).

Figures 8(a)–8(c) show an iN3 via mask pattern, a via pattern with AFs (6 nm), and a curvilinear via pattern, respectively. In all cases, the training loss and the validation loss decrease during the training. The training speed depends on the complexity of the mask patterns. The training loss at 10 epochs is 0.0011 for Fig. 8(a), 0.0035 for Fig. 8(b), and 0.0042 for Fig. 8(c). Curvilinear via patterns can also be learned by the CNN. Our CNN architecture has the capability to recognize curvilinear mask patterns.

Fig. 8

Input mask pattern and CNN training result: (a) via pattern, (b) via pattern with AF, and (c) curvilinear via pattern.


In general, a CNN is only as good as the data it is fed. Figure 9 verifies this rule. When we use the same kind of training data and validation data, the validation loss becomes very small. Our CNN architecture is flexible enough to be applied to any iN3 metal or via layer. However, the validation loss is large when we use different kinds of training data and validation data. The selection of training data is very important for obtaining high accuracy. Real mask data, such as test element group (TEG) mask patterns, are desirable. Such data could contain a diversity of patterns not covered in this work.

Fig. 9

Validation loss using various training and validation data.


5. M3D Effect on iN3 Metal Layer

In this section, we study the impact of the M3D effects on the iN3 metal layer, especially the shadowing effect on the T2T CD. We follow the design rule of the metal 1 layer25: the line CD is 14 nm (1×), and the T2T CD is 20 nm. We perform three types of simulations: (a) the thick mask model (EM: EM simulation), (b) the CNN prediction (CNN), and (c) the thin mask model (FT: Fourier transform of the mask pattern). In the simulations, we use the following optical settings: wavelength 13.5 nm, NA 0.33, and dipole illumination D90X (opening angle 90 deg, dipole in the X direction) with σ = 0.9/0.55. We assume a Ta absorber with a 60 nm thickness. We use the image threshold model, in which the pattern contour is determined by applying a constant intensity threshold to the aerial image. The threshold intensity I_th is different for each type of simulation; it is set so that a 14 nm L/S pattern has a line width of 14 nm on the wafer.
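The threshold calibration can be pictured with the small routine below, which bisects on the threshold until a cut through the 14 nm line/space aerial image prints at 14 nm. Treating the sub-threshold (dark) region as the resist line and using bisection are illustrative choices on our part; the paper only states the calibration condition.

```python
import numpy as np

def calibrate_threshold(x, intensity, target_cd=14.0, n_iter=60):
    """Find I_th such that the printed line width equals target_cd (nm).

    x         : 1D wafer coordinates (nm) across one line of the 14 nm L/S image
    intensity : aerial image intensity sampled at x
    Assumes a single dark (line) region inside the window and that the printed
    CD grows monotonically with the threshold.
    """
    def printed_cd(threshold):
        dark = x[intensity < threshold]            # sub-threshold region = resist line
        return dark.max() - dark.min() if dark.size else 0.0

    lo, hi = float(intensity.min()), float(intensity.max())
    for _ in range(n_iter):                        # bisection on the threshold
        mid = 0.5 * (lo + hi)
        if printed_cd(mid) < target_cd:
            lo = mid                               # line too narrow: raise threshold
        else:
            hi = mid                               # line too wide: lower threshold
    return 0.5 * (lo + hi)
```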

Figures 10(a)–10(d) compare the pitch dependence of the line CD and the T2T CD on the wafer. In all cases, the line CD decreases and the T2T CD increases as the pattern pitch becomes larger. This is because the image contrast of small-pitch patterns is high when dipole illumination is used. In the case of the line CD, the difference among EM, CNN, and FT is small. However, in the case of the T2T CD, the difference among the three models is large, especially between EM (or CNN) and FT. This can be explained by the shadowing effect at the T2T slits (Fig. 11). Oblique incident light casts a shadow in the Y direction. This effect is included in EM and CNN because the M3D amplitudes, which cause the shadowing effect, are included in the diffraction amplitudes of these models, but it is not included in FT. The shadowing effect darkens the region between the two line ends and reduces the T2T CD when the image intensity threshold model is used. Therefore, the T2T CD of FT is larger than that of EM or CNN.

Fig. 10

Pitch dependence of the line CD and T2T CD on the wafer. (a) Mask pattern with a line width of 14 nm, a pattern pitch of 56 nm, and a T2T CD of 20 nm (1×). (b) Image intensity using the EM model. (c) The threshold intensity I_th is set so that a 14 nm L/S pattern has a line width of 14 nm on the wafer. (d) The threshold intensity for each model is listed in the table.


Fig. 11

Shadowing effect at the T2T slit. Oblique incident light casts a shadow at the edge of the absorber in the Y direction. The height of the absorber is 60 nm, and its width is 14 × 4 = 56 nm. The T2T CD on the mask is 20 × 4 = 80 nm.


We further study the impact of the M3D effect on iN3 metal patterns. We generate 100 random metal mask patterns as shown in Figs. 12(a) and 12(b). In each mask pattern, we put a T2T slit at the center of the mask and measure the T2T CD and line CD of the center line. In the case of the line CD, the root mean square deviation (RMSD) between the EM and FT CDs is 0.19 nm, and the RMSD between the EM and CNN CDs is 0.17 nm. Both numbers are negligibly small compared with the designed line width of 14 nm. However, in the case of the T2T CD (with a designed space width of 20 nm), the RMSD between the EM and FT CDs is 6.53 nm, whereas the RMSD between the EM and CNN CDs is 0.96 nm. A large deviation is observed between EM and FT CDs, which suggests the influence of the shadowing effect at the T2T slit.
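For completeness, the RMSD figure of merit quoted here is simply the root mean square of the CD differences over the 100 patterns, as in the short helper below (the function name is ours).

```python
import numpy as np

def rmsd(cd_ref, cd_model):
    """Root mean square deviation between two CD lists, e.g., EM vs. FT or EM vs. CNN."""
    cd_ref = np.asarray(cd_ref, dtype=float)
    cd_model = np.asarray(cd_model, dtype=float)
    return float(np.sqrt(np.mean((cd_ref - cd_model) ** 2)))
```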

Fig. 12

(a) Line CDs and (b) T2T CDs for 100 random metal patterns.


Figures 13(a) and 13(b) show the results for iN3 metal patterns with AFs and HHs. Similar results are obtained even when OPC masks are used.

Fig. 13

(a) Line CDs and (b) T2T CDs for 100 random metal patterns with AFs and HHs.


6. Summary

We apply the CNN architecture developed for Manhattan patterns to mask patterns that mimic iN3 logic metal or via layers. In addition, we train CNNs using iN3 metal/via patterns with OPC and curvilinear via patterns. In all cases, the validation loss becomes very small. Our CNN architecture is flexible enough to be applied to any iN3 metal or via layer. Even curvilinear mask patterns can be learned by the CNN; our architecture has the capability to recognize curvilinear mask patterns.

When we use training and validation data with the same characteristics, the validation loss becomes very small. On the other hand, using training and validation data with different characteristics leads to a large validation loss. The selection of training data is very important for obtaining high accuracy. Real mask data, such as TEG mask patterns, are desirable, but they are difficult for us to obtain.

Mask pattern recognition by CNN is the key to fast EUV lithography simulation. The accuracy of a CNN depends on the quantity and quality of its training data. Our test results suggest that layer-specific CNNs could be constructed, but further development of the CNN architecture might be required because our architecture is a simple repetition of convolution/max pooling/batch normalization. Building a universal CNN for general mask patterns remains a big challenge.

This work is based on a prior SPIE proceedings paper.28 The data supporting the findings of this study are available within the paper.

References

1. V. Philipsen, “Mask is key to unlock full EUV potential,” Proc. SPIE 11609, 1160904 (2021). https://doi.org/10.1117/12.2584583
2. A. Erdmann et al., “3D mask effects in high NA EUV imaging,” Proc. SPIE 10957, 109570Z (2019). https://doi.org/10.1117/12.2515678
3. A. Wong, “TEMPEST users’ guide” (1994).
4. “Panoramic v7 TRIG: rigorous 3D Maxwell solver for EUV,” https://www.panoramictech.com/
5. M. Yeung and E. Barouch, “Development of fast rigorous simulator for large-area lithography simulation,” Proc. SPIE 10957, 109571D (2019). https://doi.org/10.1117/12.2515079
6. M. G. Moharam and T. K. Gaylord, “Rigorous coupled-wave analysis of planar-grating diffraction,” J. Opt. Soc. Am. 71, 811 (1981). https://doi.org/10.1364/JOSA.71.000811
7. K. D. Lucas, H. Tanabe, and A. J. Strojwas, “Efficient and rigorous three-dimensional model for optical lithography simulation,” J. Opt. Soc. Am. A 13, 2187 (1996). https://doi.org/10.1364/JOSAA.13.002187
8. H. Tanabe, “Modeling of optical images in resist by vector potentials,” Proc. SPIE 1674, 637 (1992). https://doi.org/10.1117/12.130360
9. H. Tanabe and A. Takahashi, “Data augmentation in extreme ultraviolet lithography simulation using convolutional neural network,” J. Micro/Nanopattern. Mater. Metrol. 21(4), 041602 (2022). https://doi.org/10.1117/1.JMM.21.4.041602
10. K. Adam and A. R. Neureuther, “Simplified models for edge transitions in rigorous mask modeling,” Proc. SPIE 4346, 331 (2001). https://doi.org/10.1117/12.435733
11. A. Erdmann et al., “Efficient simulation of light from 3-dimensional EUV-masks using field decomposition techniques,” Proc. SPIE 5037, 482 (2003). https://doi.org/10.1117/12.482744
12. P. Liu et al., “Fast and accurate 3D mask model for full-chip OPC and verification,” Proc. SPIE 6520, 65200R (2007). https://doi.org/10.1117/12.712171
13. J. Word et al., “OPC modeling and correction solutions for EUV lithography,” Proc. SPIE 8166, 81660Q (2011). https://doi.org/10.1117/12.899591
14. V. Domnenko et al., “EUV computational lithography using accelerated topographic mask simulation,” Proc. SPIE 10962, 109620O (2019). https://doi.org/10.1117/12.2515668
15. P. Liu et al., “Fast 3D thick mask model for full-chip EUVL simulations,” Proc. SPIE 8679, 86790W (2013). https://doi.org/10.1117/12.2010818
16. S. Lan et al., “Deep learning assisted fast mask optimization,” Proc. SPIE 10587, 105870H (2018). https://doi.org/10.1117/12.2297514
17. P. Liu, “Mask synthesis using machine learning software and hardware platforms,” Proc. SPIE 11327, 1132707 (2020). https://doi.org/10.1117/12.2551816
18. R. Pearman et al., “Fast all-angle mask 3D ILT patterning,” Proc. SPIE 11327, 113270F (2020). https://doi.org/10.1117/12.2554856
19. J. Lin et al., “Fast mask near-field calculation using fully convolution network,” in Int. Workshop on Advanced Patterning Solutions (IWAPS) (2020). https://doi.org/10.1109/IWAPS51164.2020.9286805
20. H. Tanabe, S. Sato, and A. Takahashi, “Fast EUV lithography simulation using convolutional neural network,” J. Micro/Nanopattern. Mater. Metrol. 20(4), 041202 (2021). https://doi.org/10.1117/1.JMM.20.4.041202
21. W. Ye et al., “TEMPO: fast mask topography effect modeling with deep learning,” in Proc. Int. Symp. on Physical Design (ISPD ’20), pp. 127-134 (2020). https://doi.org/10.1145/3372780.3375565
22. A. Awad et al., “Accurate prediction of EUV lithographic images and 3D mask effects using generative networks,” J. Micro/Nanopattern. Mater. Metrol. 20(4), 043201 (2021). https://doi.org/10.1117/1.JMM.20.4.043201
23. M. Born and E. Wolf, Principles of Optics, 7th ed., Cambridge University Press (1999).
24. N. B. Cobb, “Fast optical and process proximity correction algorithms for integrated circuit manufacturing” (1998).
25. D. Xu et al., “Investigation of low-n mask in 0.33 NA EUV single patterning at pitch 28 nm metal design,” Proc. SPIE 12051, 120510H (2022). https://doi.org/10.1117/12.2614197
26. L. E. Tan et al., “EUV low-n attenuated phase-shift mask on random logic via single patterning at pitch 36 nm,” Proc. SPIE 12051, 120510P (2022). https://doi.org/10.1117/12.2614000
27. D. Gloge, “Weakly guiding fibers,” Appl. Opt. 10, 2252 (1971). https://doi.org/10.1364/AO.10.002252
28. H. Tanabe, A. Jinguji, and A. Takahashi, “Evaluation of CNN for fast EUV lithography simulation using iN3 mask patterns,” Proc. SPIE 12495, 124951J (2023). https://doi.org/10.1117/12.2659063

Biography

Hiroyoshi Tanabe received his PhD in physics from the University of Tokyo in 1986. He is a researcher at Tokyo Institute of Technology. He has more than 30 years of experience in optical and EUV lithography. He is the author of more than 30 papers. He was the program committee chair of Photomask Japan in 2003 and 2004. His current research interests include EUV masks and lithography simulation. He is a member of SPIE.

Akira Jinguji received his PhD in computer architecture from Tokyo Institute of Technology in 2022. He is an assistant professor at Tokyo Institute of Technology. His research focuses on high-performance computers with dedicated architectures. In particular, he designs high-speed digital circuits for deep learning computation. He is a member of IEEE and IEICE.

Atsushi Takahashi received his BE, ME, and DE degrees in electrical and electronic engineering from Tokyo Institute of Technology, Tokyo, Japan, in 1989, 1991, and 1996, respectively. He is currently a professor in the Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology. His research interests include VLSI layout design and combinational algorithms. He is a fellow of IEICE, a senior member of IEEE and IPSJ, and a member of ACM.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Hiroyoshi Tanabe, Akira Jinguji, and Atsushi Takahashi "Evaluation of convolutional neural network for fast extreme ultraviolet lithography simulation using imec 3 nm node mask patterns," Journal of Micro/Nanopatterning, Materials, and Metrology 22(2), 024201 (15 June 2023). https://doi.org/10.1117/1.JMM.22.2.024201
Received: 12 February 2023; Accepted: 2 June 2023; Published: 15 June 2023
KEYWORDS: Education and training; Metals; Diffraction; 3D mask effects; Fourier transforms; Optical proximity correction; Convolutional neural networks
