Deriving models for lithographic masks, either from first principles or from empirical fits, is becoming increasingly challenging as complex effects once relegated to the noise level become more relevant. Deep learning offers an alternative that can overcome the shortcomings of these previous approaches, but it requires input data with enough diversity to train neural networks effectively. The solution for mask lithography modeling presented in this paper uses carefully calibrated SEM images to extract the information required to train and test a deep convolutional neural network that achieves accuracy beyond what metrology-based methods can deliver. We demonstrate how the input data is calibrated for this flow and present examples of its predictive power, which can, for instance, detect the location and shape of hotspots in the layout. A significant additional advantage is the ease and speed of model building compared with previous solutions; the approach dovetails well with regular production flows and can be adapted to dynamic changes in the mask process.
Low-dose scanning electron microscope (SEM) images are an attractive option for estimating the roughness of nanostructures. We recently proposed two deep convolutional neural network (CNN) architectures, named "LineNet," that simultaneously perform denoising and edge estimation on rough-line SEM images. In this paper we consider multiple visualization tools to improve our understanding of LineNet1; one of these techniques is new to the visualization of denoising CNNs. We use the resulting insights from these visualizations to motivate a study of two variations of LineNet1 with fewer neural network layers. Furthermore, although edge detection in classification CNNs is commonly believed to happen early in the network, the visualization techniques suggest that important aspects of edge detection in LineNet1 occur late in the network.
KEYWORDS: Scanning electron microscopy, Neural networks, Line width roughness, Denoising, Line edge roughness, Machine learning, Monte Carlo methods, Computer simulations, Convolutional neural networks, Wavelets
We propose the use of deep supervised learning for the estimation of line edge roughness (LER) and line width roughness (LWR) in low-dose scanning electron microscope (SEM) images. We simulate a supervised learning dataset of 100,800 SEM rough line images constructed by means of the Thorsos method and the ARTIMAGEN library developed by the National Institute of Standards and Technology. We also devise two separate deep convolutional neural networks called SEMNet and EDGENet, each of which has 17 convolutional layers, 16 batch normalization layers, and 16 dropout layers. SEMNet performs the Poisson denoising of SEM images, and it is trained with a dataset of simulated noisy-original SEM image pairs. EDGENet directly estimates the edge geometries from noisy SEM images, and it is trained with a dataset of simulated noisy SEM image-edge array pairs. SEMNet achieved considerable improvements in peak signal-to-noise ratio as well as the best LER/LWR estimation accuracy compared with standard image denoisers. EDGENet offers excellent LER and LWR estimation as well as roughness spectrum estimation.
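As background for the metrics above, LER is conventionally reported as three times the standard deviation of edge-position residuals about a straight-line fit, and LWR as three times the standard deviation of the local linewidth. A minimal numpy sketch of these definitions (the synthetic edge arrays here are illustrative stand-ins, not outputs of SEMNet or EDGENet):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512  # number of measurement rows along the line

# Synthetic left/right edge positions (pixels): straight edges plus roughness.
left = 10.0 + rng.normal(0.0, 0.8, n)
right = 30.0 + rng.normal(0.0, 0.8, n)

def ler_3sigma(edge):
    """LER = 3 * std of residuals about a straight-line fit to the edge."""
    y = np.arange(len(edge))
    slope, intercept = np.polyfit(y, edge, 1)
    residuals = edge - (slope * y + intercept)
    return 3.0 * residuals.std()

def lwr_3sigma(left_edge, right_edge):
    """LWR = 3 * std of the local linewidth."""
    return 3.0 * (right_edge - left_edge).std()

print(ler_3sigma(left), lwr_3sigma(left, right))
```

With independent roughness of equal magnitude on both edges, the LWR comes out roughly a factor of sqrt(2) larger than the LER of either edge, as the sketch illustrates.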
We propose a deep convolutional neural network named EDGENet to estimate rough line edge positions in low-dose scanning electron microscope (SEM) images corrupted by Poisson noise, Gaussian blur, edge effects, and other instrument errors, and we apply our approach to the estimation of line edge roughness (LER) and line width roughness (LWR). Our method uses a supervised learning dataset of 100,800 input-output pairs of simulated noisy SEM rough line images with true edge positions. The edges were constructed by the Thorsos method and have an underlying Palasantzas spectral model. The simulated SEM images were created using the ARTIMAGEN library developed at the National Institute of Standards and Technology. EDGENet consists of 17 convolutional layers, 16 batch-normalization layers, and 16 dropout layers and offers excellent LER and LWR estimation as well as roughness spectrum estimation.
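The Thorsos method named above generates a rough edge by shaping complex Gaussian noise with the square root of a target power spectrum and inverse-transforming. The sketch below uses one common form of the Palasantzas-type 1-D spectrum; the parameter values and the final rescaling to the target rms roughness are illustrative assumptions, not the paper's exact normalization:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dy = 1024, 1.0                 # samples along the line, sampling step (nm)
sigma, xi, H = 1.5, 20.0, 0.75    # rms roughness, correlation length, Hurst exponent (assumed)

# Palasantzas-type 1-D power spectral density (one common parameterization).
f = np.fft.fftfreq(n, d=dy)
psd = sigma**2 * 2.0 * xi / (1.0 + (2.0 * np.pi * f * xi)**2) ** (H + 0.5)

# Thorsos method: shape white complex Gaussian noise by sqrt(PSD), inverse FFT.
noise = rng.normal(size=n) + 1j * rng.normal(size=n)
edge = np.fft.ifft(noise * np.sqrt(psd)).real

# Simplification: rescale so the realized edge has exactly the target rms.
edge *= sigma / edge.std()

print(edge.shape, edge.std())
```

The resulting `edge` array plays the role of a ground-truth edge position sequence from which simulated SEM images and training labels can be derived.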
We use deep supervised learning for the Poisson denoising of low-dose scanning electron microscope (SEM) images as a step in the estimation of line edge roughness (LER) and line width roughness (LWR). Our denoising algorithm applies a deep convolutional neural network called SEMNet, with 17 convolutional layers, 16 batch-normalization layers, and 16 dropout layers, to noisy images. We trained and tested SEMNet with a dataset of 100,800 simulated SEM rough line images constructed by means of the Thorsos method and the ARTIMAGEN library developed by the National Institute of Standards and Technology. SEMNet achieved considerable improvements in peak signal-to-noise ratio (PSNR) as well as the best LER/LWR estimation accuracy compared with standard image denoisers.
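For reference, the Poisson corruption model for a low-dose image and the PSNR metric used to score denoisers can be sketched as follows; the 3x3 mean filter is only a crude stand-in for a learned denoiser such as SEMNet, and the synthetic line image is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Clean synthetic "SEM" image: a bright line on a dark background (counts/pixel).
clean = np.full((64, 64), 5.0)
clean[:, 24:40] = 40.0

# Low-dose acquisition: each pixel is a Poisson draw around the clean signal.
noisy = rng.poisson(clean).astype(float)

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB, using the reference maximum as peak."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# Crude denoiser: 3x3 mean filter (stand-in for a learned denoiser).
padded = np.pad(noisy, 1, mode="edge")
denoised = sum(padded[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0

print(psnr(clean, noisy), psnr(clean, denoised))
```

Even this naive filter raises the PSNR on flat regions, but it blurs the line edges, which is exactly the trade-off that motivates a learned, edge-preserving denoiser for LER/LWR work.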
KEYWORDS: Photomasks, Clocks, Data communications, Logic, Data compression, Telecommunications, Digital electronics, Image compression, Computer architecture, Video compression
Multibeam electron beam systems will be used in the future for mask writing and for complementary lithography. The major challenges of multibeam systems are in meeting throughput requirements and in handling the large data volumes associated with writing grayscale data on the wafer. In terms of future communications and computational requirements, Amdahl's law suggests that a simple increase of computation power and parallelism may not be a sustainable solution. We propose a parallel data compression algorithm to exploit the sparsity of mask data and a grayscale video-like representation of data. To improve the communication and computational efficiency of these systems at write time, we propose an alternate datapath architecture partly motivated by multibeam direct-write lithography and partly motivated by the circuit testing literature, where parallel decompression reduces clock cycles. We explain a deflection plate architecture inspired by NuFlare Technology's multibeam mask writing system and how our datapath architecture can be easily added to it to improve performance.
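The sparsity argument above can be illustrated with a toy run-length scheme: a grayscale mask row is mostly background, so encoding it as (value, run-length) pairs shrinks the stream dramatically, and independent rows can be handed to parallel decompressors. This sketch is illustrative only and is not the paper's compression algorithm:

```python
import numpy as np

def rle_encode(row):
    """Run-length encode a 1-D grayscale array into (value, run_length) pairs."""
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((int(row[start]), i - start))
            start = i
    return runs

def rle_decode(runs):
    """Invert rle_encode back into the original pixel row."""
    return np.concatenate([np.full(n, v, dtype=np.int64) for v, n in runs])

# Sparse grayscale mask row: background 0 with one exposed feature and an
# intermediate-dose edge pixel (grayscale writing).
row = np.zeros(4096, dtype=np.int64)
row[100:140] = 15   # fully exposed feature
row[140] = 7        # grayscale edge pixel

runs = rle_encode(row)
assert np.array_equal(rle_decode(runs), row)
print(len(runs), "runs vs", len(row), "pixels")
```

Here 4096 pixels collapse to 4 runs; in a parallel datapath each decompressor would expand its own slice of runs back to pixel doses locally, which is the clock-cycle saving the abstract alludes to.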
The pattern requirements for mask writers have steadily been growing, and there is considerable interest in multibeam mask writers to handle the throughput and resolution challenges associated with the needs of sub-10 nm technology nodes. The mask writer of the future will process terabits of information per second and deal with petabytes of data. In this paper, we investigate lossless data compression and system parallelism together to address part of the data transfer problem. We explore simple compression algorithms and the effect of parallelism on the total compressed data in a multibeam system architecture motivated by the IMS Nanofabrication multibeam mask writer series eMET. We model the shot assignment problem and beam shot overlap by means of two-dimensional linear spatial filtering on an image. We describe a fast scanning strategy and investigate data volumes for a family of beam arrays with 2^N × (2^N − 1) beams, where N is an odd integer.
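Modeling beam-shot overlap as two-dimensional linear spatial filtering, as described above, amounts to convolving the shot dose map with a beam point-spread function. A numpy sketch with an assumed Gaussian beam kernel (the feature geometry and beam width are illustrative, not from the paper):

```python
import numpy as np

# Shot map: requested dose at each grid position (a single exposed square here).
shots = np.zeros((128, 128))
shots[56:72, 56:72] = 1.0

# Assumed Gaussian beam point-spread function, normalized to unit total dose.
x = np.arange(-8, 9)
g = np.exp(-x**2 / (2.0 * 2.5**2))
kernel = np.outer(g, g)
kernel /= kernel.sum()

# FFT-based (circular) convolution: overlapping beam tails blur the dose map.
pad = np.zeros_like(shots)
pad[:17, :17] = kernel
pad = np.roll(pad, (-8, -8), axis=(0, 1))   # center the kernel at (0, 0)
dose = np.fft.irfft2(np.fft.rfft2(shots) * np.fft.rfft2(pad), s=shots.shape)

# A normalized kernel conserves total dose; feature edges are softened.
print(dose.sum(), shots.sum(), dose.max())
```

Because the kernel integrates to one, the delivered dose budget equals the requested one while the sharp shot boundaries are rounded off, which is the overlap effect the filtering model captures.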