We present a contour metrology-based process matching flow with machine learning-based site selection for best coverage, contour comparisons, and scoring to quantify process differences. This method can significantly improve the efficiency of process technology transfers between fabs. The key technologies include: 1) high-performance ML clustering on a full-chip product with hundreds of millions of anchoring points, 2) process-matching-oriented custom feature engineering that drives quantitative understanding of each SEM image, and 3) stable and reliable contour extraction from large numbers of CD-SEM images.
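As an illustration of the clustering step described above, the following is a minimal sketch, not the authors' implementation: it clusters per-point feature vectors with scikit-learn's MiniBatchKMeans, which scales to very large point counts via mini-batches, and picks the point nearest each centroid as a measurement site. The feature dimensionality, cluster count, and data are illustrative assumptions.

```python
# A minimal sketch, not the paper's implementation: cluster per-point
# feature vectors and keep one representative metrology site per cluster.
# Point count, dimensionality, and cluster count are illustrative.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(100_000, 16))  # stand-in for per-anchor features

km = MiniBatchKMeans(n_clusters=50, batch_size=4_096, random_state=0)
labels = km.fit_predict(features)

# Pick the point nearest each centroid as that cluster's SEM site.
sites = []
for c in range(km.n_clusters):
    idx = np.flatnonzero(labels == c)
    if idx.size == 0:
        continue  # skip empty clusters
    d = np.linalg.norm(features[idx] - km.cluster_centers_[c], axis=1)
    sites.append(idx[d.argmin()])
print(len(sites), "sites selected")
```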
Each day, semiconductor manufacturing companies (fabs) run distributed, compute-intensive post-tape-out flow (PTOF) jobs to apply various resolution enhancement technology (RET) techniques to incoming designs and convert these designs into photomask data that is ready for manufacturing. This process is performed on large compute clusters managed by a job scheduler. To minimize the compute cost of each PTOF job, various manual techniques are used to choose the compute setup that produces optimum hardware utilization and efficient runtime for that job. We introduce a machine learning (ML) solution that provides CPU time predictions for these PTOF jobs, which can be used to estimate compute cost, recommend resources, and feed scheduling models. ML training is based on job-specific features extracted from production data, such as layout size, hierarchy, and operations, as well as metadata like job type, technology node, and layer. The list of input features correlated with the prediction was evaluated, along with several ML techniques, across a wide variety of jobs. Integrating an ML-based CPU runtime prediction module into the production flow provides data that can be used to improve job priority decisions, raise runtime warnings due to hardware or other issues, and estimate the compute cost of each job (which is especially useful in a cloud environment). Given the wide variation of expected runtimes for different types of jobs, replacing manual monitoring of jobs in tape-out operations with an integrated ML-based solution can improve both the productivity and efficiency of the PTOF process.
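As a hedged sketch of the kind of regression this abstract describes, the snippet below trains a gradient-boosting model on a toy table of job features to predict CPU hours. The feature names, values, and model choice are assumptions, not details of the production flow; categorical metadata such as job type or layer would additionally need encoding.

```python
# A hedged sketch of the regression step: feature names, values, and the
# gradient-boosting choice are illustrative assumptions, not production data.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

jobs = pd.DataFrame({
    "layout_gb": [1.2, 8.5, 0.4, 22.0, 3.1, 15.0, 0.9, 6.7],
    "n_ops":     [120, 340, 80, 510, 150, 420, 95, 260],
    "node_nm":   [28, 7, 65, 5, 14, 7, 45, 10],
    "cpu_hours": [35.0, 910.0, 6.0, 4200.0, 120.0, 2100.0, 11.0, 480.0],
})
X, y = jobs.drop(columns="cpu_hours"), jobs["cpu_hours"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(model.predict(X_te))  # estimated CPU hours for unseen jobs
```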
The bulk of photomask demand is in technology nodes ≥65nm, using equipment, processes, and materials developed more than two decades ago [1]. Despite mature processes and tools, mask makers are challenged to meet continuing demand. The challenge comes not only in the form of increased demand, but also because much of the equipment is approaching the end of its viable lifetime to support and maintain due to parts or expertise availability [2]. Mask writers in particular are problematic from a technical and financial perspective. Modern equipment and processes can be “too good” to simply use as a direct substitute when original equipment or processes become unavailable. During initial lithography and device integration, device manufacturers tailored Optical Proximity Correction (OPC) and other wafer processing conditions to the original mask signature for multiple mask layers. Changing to state-of-the-art mask fidelity would actually represent a liability, as the altered mask character could result in device shifts, yield reduction, or even unanticipated reliability failures. To account for the improved fidelity, re-optimization of the synergistic patterning between mask, wafer lithography, and etch is required. Even on mature technologies, reintegration can require costly, difficult, and time-consuming requalification. While this path has often been pursued when manufacturers declare end-of-life (EOL) of tools, we propose instead to contain the change in the mask shop by using Mask Process Correction (MPC) [3]. Instead of using MPC to maximize mask fidelity, as is done in advanced nodes, we use MPC to replicate the original mask non-idealities on a new mask process.
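The matching idea lends itself to a small worked example. The sketch below is an assumption-laden illustration, not the MPC flow itself: it derives a per-feature bias table from the legacy mask's measured CD signature and interpolates it for arbitrary feature sizes; applying such a bias in the new process's MPC would reproduce the original non-idealities rather than remove them. All values are invented for illustration.

```python
# Hedged sketch: turn the original mask's CD error signature into a bias
# table that MPC on the new process applies, so the new mask reproduces
# the legacy non-idealities. All data values are illustrative.
import numpy as np

design_cd   = np.array([200.0, 400.0, 800.0, 1600.0])  # nm, drawn CD
legacy_cd   = np.array([212.0, 408.0, 804.0, 1602.0])  # measured on old process
legacy_bias = legacy_cd - design_cd                     # signature to replicate

def matching_bias(cd_nm: float) -> float:
    """Interpolate the legacy bias for an arbitrary feature size."""
    return float(np.interp(cd_nm, design_cd, legacy_bias))

# The new, more faithful process would otherwise print close to design_cd;
# applying this bias in MPC reproduces the legacy signature instead.
print(matching_bias(600.0))
```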
A scanning electron microscope (SEM) is the metrology tool used to accurately characterize very fine structures on wafers, usually by extracting one critical dimension (CD) per SEM image. This approach to optical proximity correction (OPC) modeling requires many measurements, resulting in a lengthy cycle time for data collection, review, and cleaning, and faces reliability issues when dealing with critical two-dimensional (2-D) structures. An alternative to CD-based metrology is to use SEM image contours for OPC modeling. To calibrate OPC models with contours, reliable contours matched to traditional CD-SEM measurements are required, along with a method to choose structures and sites (number, type, and image space coverage) specific to a contour-based OPC model calibration. The potential of SEM contour model-based calibration is illustrated by comparing two contour-based models to reference models: one empirical model and a second, rigorous simulation-based model. The contour-based models are as good as or better than a CD-based model, with a significant advantage in the prediction of complex 2-D configurations and a reduced metrology workload.
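To make the contour-extraction step concrete, here is a minimal sketch using scikit-image's find_contours on a synthetic SEM-like image, then deriving a CD-style gauge from the contour so it can be matched against a traditional CD-SEM measurement. The image, threshold, and gauge location are illustrative assumptions; production contour extraction works on real, noisy SEM images.

```python
# A minimal sketch: extract a sub-pixel contour from a synthetic SEM-like
# image with scikit-image, then derive a CD-style gauge from it. Image,
# threshold, and gauge row are illustrative assumptions.
import numpy as np
from skimage import measure

img = np.zeros((128, 128))
img[20:108, 54:74] = 1.0  # a bright line feature on a dark background

contours = measure.find_contours(img, level=0.5)  # sub-pixel iso-contours
line = max(contours, key=len)

# CD gauge: feature width where the contour crosses image row 64.
cols = line[np.isclose(line[:, 0], 64.0), 1]
print("CD (pixels):", cols.max() - cols.min())
```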
To ensure high patterning quality, etch effects have to be corrected within the OPC recipe in addition to the traditional lithographic effects. This requires the calibration of an accurate etch model and optimization of its implementation in the OPC flow. Using SEM contours is a promising approach to obtain numerous and highly reliable measurements, especially for 2D structures, for etch model calibration. A 28nm active layer was selected to calibrate and verify an etch model with 50 structures in total. We optimized the selection of the calibration structures as well as the model density. Implementing the etch model to adjust the litho target layer allows a significant reduction of weak points. We also demonstrate that the etch model, incorporated into the ORC recipe and run on a large design, can predict many hotspots.
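A minimal sketch of the density-driven etch-bias idea follows, assuming a simple one-term model of the form bias = a0 + a1 · density fitted by least squares. Production etch models use several kernels and optimized interaction ranges, so this only illustrates the calibration step; all values are invented.

```python
# Hedged sketch: fit a simple density-driven etch-bias model,
# bias = a0 + a1 * local_density, from (density, measured bias) pairs.
# A production etch model uses several kernels; this shows the idea only.
import numpy as np

density = np.array([0.10, 0.25, 0.40, 0.55, 0.70])  # local pattern density
bias_nm = np.array([-3.8, -2.9, -2.1, -1.2, -0.3])  # litho-to-etch CD delta

a1, a0 = np.polyfit(density, bias_nm, deg=1)
print(f"etch bias ≈ {a0:.2f} + {a1:.2f} * density (nm)")
```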
KEYWORDS: Optical proximity correction, Photomasks, Manufacturing, Data processing, Front end of line, Back end of line, Visualization, Design for manufacturability, Integrated circuits, Semiconductors
Delivering mask-ready OPC-corrected data to the mask shop on time is critical for a foundry to meet the cycle time commitment for a new product. With current OPC compute resource sharing technology, different job scheduling algorithms are possible, such as priority-based resource allocation and fair-share resource allocation. In order to maximize compute cluster efficiency, minimize the cost of data processing, and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each of the scheduling algorithms will be tested in a production tape-out environment. Each scheduling algorithm will be judged on its ability to deliver data on schedule, and the trade-offs associated with each method will be analyzed. It is now possible to introduce advanced scheduling algorithms to the OPC data processing environment to meet the goals of on-time delivery of mask-ready OPC data while maximizing efficiency and reducing cost.
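To make the two policies concrete, the sketch below allocates a fixed pool of CPU slots to pending jobs under priority-based and fair-share rules. The job mix, pool size, and tie-breaking are illustrative assumptions, not the production scheduler's behavior.

```python
# Hedged sketch: allocate a fixed pool of CPU slots to pending OPC jobs
# under the two policies the abstract contrasts. Job data is illustrative.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    owner: str
    priority: int  # higher = more urgent
    demand: int    # CPU slots requested

jobs = [Job("maskA", "fab1", 9, 600), Job("maskB", "fab2", 5, 600),
        Job("maskC", "fab1", 5, 300)]
POOL = 1000

def priority_based(jobs, pool):
    grants = {}
    for j in sorted(jobs, key=lambda j: -j.priority):
        grants[j.name] = min(j.demand, pool)
        pool -= grants[j.name]
    return grants

def fair_share(jobs, pool):
    owners = {j.owner for j in jobs}
    per_owner = pool // len(owners)
    grants = {}
    for o in owners:
        mine = [j for j in jobs if j.owner == o]
        each = per_owner // len(mine)
        for j in mine:
            grants[j.name] = min(j.demand, each)
    return grants

print(priority_based(jobs, POOL))  # the urgent job is served first
print(fair_share(jobs, POOL))      # slots split evenly across owners
```

Under the priority policy the urgent job can starve the others; under fair share every owner keeps making progress but urgent deliveries may slip, which is exactly the trade-off the abstract sets out to measure.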
Calibrating an accurate OPC model usually requires a large number of one-dimensional CD-SEM measurements. A promising alternative is to use a SEM image contour approach, but many challenges remain to implement this technique in production. In this work, a specific flow is presented to obtain good, reliable contours well matched with traditional CD-SEM measurements. Furthermore, this work investigates the importance of site selection (number, type, image space coverage) for a successful contour-based OPC model. Finally, the comparison of conventional and contour-based models takes into account the calibration and verification performance of both models, with a possible cross-verification between model data sets. Specific advantages of the contour-based model are also discussed.
KEYWORDS: 3D modeling, Data modeling, Photomasks, Calibration, Performance modeling, Semiconducting wafers, Optical proximity correction, Systems modeling, Panoramic photography, System on a chip
As mask feature sizes have shrunk well below the exposure wavelength, the thin-mask Kirchhoff approximation breaks down and 3D mask effects contribute significantly to the through-focus CD behavior of specific features. While full-chip rigorous 3D mask modeling is not computationally feasible, approximate simulation methods do enable the 3D mask effects to be represented, and the use of such approximations improves model prediction capability. This paper examines 28nm darkfield and brightfield layer datasets that were calibrated with a Kirchhoff model and with two different 3D-EMF models. Both model calibration accuracy and verification fitness improvements are realized with the use of 3D models.
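For reference, here is a short worked formulation of the approximation being dropped; this is standard lithography background, not a formula taken from the paper.

```latex
% Thin-mask (Kirchhoff) approximation: the mask acts as an ideal 2-D
% screen, so the near field is the incident field scaled by a
% transmission map t(x,y).
\[
  E_{\mathrm{near}}(x,y) \approx t(x,y)\, E_{\mathrm{inc}}(x,y),
  \qquad
  t(x,y) =
  \begin{cases}
    1, & \text{clear regions},\\
    0, & \text{under the absorber}.
  \end{cases}
\]
% Rigorous 3D models drop this assumption: the near field then depends
% on mask topography, illumination angle, and polarization, which the
% approximate 3D-EMF models emulate at tractable cost.
```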
With each new process technology node, chip designs increase in complexity and size, and mask data prep flows require more compute resources to maintain the desired turnaround time (TAT). In addition to maintaining TAT, mask data prep centers are trying to lower costs. Securing highly scalable processing for each element of the flow - geometry processing, resolution enhancements and optical process correction, verification, and fracture - has so far been the focal point of efforts to lower TAT. Processing utilization for different flow elements depends on the operation, the data hierarchy, and the device type. In this paper we pursue the introduction of a dynamic, utilization-driven compute resource control system applied to a large-scale parallel computation environment. The paper will explain the performance challenges in optimizing a mask data prep flow for TAT and cost while designing a compute resource system and its framework. In addition, the paper will analyze the performance metrics TAT and throughput of a production system and discuss trade-offs of different parallelization approaches in data processing in interaction with dynamic resource control. The study focuses on the 65nm and 45nm process nodes.
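A hedged sketch of such a utilization-driven control loop follows: it periodically shifts workers from under-utilized flow stages to saturated ones. Stage names, thresholds, and step size are illustrative assumptions, not the system described in the paper.

```python
# Hedged sketch of utilization-driven resource control: periodically move
# CPU workers from under-utilized flow stages to saturated ones. Thresholds
# and stage names are illustrative assumptions.
def rebalance(alloc: dict, util: dict, low=0.5, high=0.9, step=8):
    """alloc: stage -> workers; util: stage -> CPU utilization in [0, 1]."""
    donors = [s for s in alloc if util[s] < low and alloc[s] > step]
    takers = [s for s in alloc if util[s] > high]
    for donor, taker in zip(donors, takers):
        alloc[donor] -= step
        alloc[taker] += step
    return alloc

alloc = {"geometry": 64, "opc": 256, "verify": 64, "fracture": 32}
util  = {"geometry": 0.35, "opc": 0.97, "verify": 0.60, "fracture": 0.92}
print(rebalance(alloc, util))  # geometry donates workers to opc
```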
With each new process technology node, chip designs increase in complexity and size, and mask data prep flows require more compute resources to maintain the desired turnaround time (TAT) at a low cost. Securing highly scalable processing for each element of the flow - geometry processing, resolution enhancements and optical process correction, verification, and fracture - has so far been the focal point. The utilization of different flow elements depends on the operation, the data hierarchy, and the device type. This paper introduces a dynamic, utilization-driven compute resource control system applied to a large-scale parallel computation environment. The paper will analyze the performance metrics TAT and throughput for a production system and discuss trade-offs of different parallelization approaches in data processing regarding interaction with dynamic resource control. The study focuses on 65nm and 45nm designs.
As tolerance requirements for the lithography process continue to shrink, the complexity of optical proximity correction is growing. Smaller correction grids, smaller fragment lengths, and the introduction of pixel-based simulation lead to highly fragmented data, fueling the trend of larger file sizes as well as increasing the writing times of the vector shaped beam systems commonly used for making advanced photomasks. This paper introduces an approach of layout modifications that simplify the data, considering both fracturing and mask writing constraints, to make it more suitable for these processes. The trade-offs between these simplifications and OPC accuracy will be investigated. A data processing methodology that preserves the OPC accuracy and the modifications all the way to mask manufacturing will also be described. This study focuses on 65nm and 45nm designs.
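As a hedged illustration of the simplification trade-off, the sketch below merges consecutive edge fragments whose OPC biases differ by less than a tolerance, reducing fragment count at a small accuracy cost. The 1-D fragment representation and the tolerance are assumptions chosen for clarity, not the paper's algorithm.

```python
# Hedged sketch: merge consecutive collinear fragments of an OPC'd edge
# when their biases differ by less than a tolerance, trading a little
# correction accuracy for simpler, faster-to-write mask data. Geometry is
# 1-D for clarity: each fragment is (length_nm, bias_nm).
def simplify(fragments, tol_nm=0.5):
    merged = [list(fragments[0])]
    for length, bias in fragments[1:]:
        if abs(bias - merged[-1][1]) <= tol_nm:
            # Merge: keep a length-weighted average bias.
            L = merged[-1][0]
            merged[-1][1] = (merged[-1][1] * L + bias * length) / (L + length)
            merged[-1][0] = L + length
        else:
            merged.append([length, bias])
    return merged

edge = [(40, 2.0), (40, 2.3), (40, 2.4), (40, 5.0), (40, 5.2)]
print(simplify(edge))  # 5 fragments collapse to 2
```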
This paper investigates the implementation of sub-resolution assist features (SRAFs) in high-performance logic designs for the poly-gate conductor level. We will discuss the concepts used for SRAF rule generation, SRAF data preparation, and what we term "binary" optical proximity correction (OPC) to prevent catastrophic line-width problems. Lithographic process window (PW) data obtained with SRAFs will be compared to PW data obtained without SRAFs. SRAM cells are shown printed with annular illumination and SRAFs for both the 130 nm and 100 nm logic nodes as defined by the International Technology Roadmap for Semiconductors (ITRS). This study includes a comparison of the experimental results of SRAMs printed from designs corrected with rule-based OPC to those printed from designs corrected with model-based OPC.
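A minimal sketch of rule-based SRAF generation in one dimension follows; the widths, spacings, and pitch are illustrative assumptions rather than ITRS or process rules, and real SRAF rules are 2-D and layer-specific.

```python
# Hedged sketch of rule-based SRAF generation in 1-D: for each gap between
# main features, insert as many sub-resolution assist bars as the rules
# allow. All distances and widths are illustrative assumptions.
SRAF_WIDTH   = 40    # nm, below the printing threshold
MAIN_TO_SRAF = 120   # nm, spacing from main-feature edge to SRAF
SRAF_PITCH   = 200   # nm, spacing between neighbouring SRAFs

def srafs_in_gap(gap_nm: float) -> int:
    """How many assist bars fit between two main features."""
    room = gap_nm - 2 * MAIN_TO_SRAF
    if room < SRAF_WIDTH:
        return 0
    return 1 + int((room - SRAF_WIDTH) // SRAF_PITCH)

for gap in (200, 400, 700, 1200):
    print(gap, "nm gap ->", srafs_in_gap(gap), "SRAF(s)")
```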
Low dielectric constant materials in the back-end-of-line process are needed to reduce resistive-capacitive delays due to continually shrinking interconnect dimensions. Several organic dielectrics which have etch rates similar to photoresists, such as benzocyclobutene and diamond-like carbon, have been explored for compatibility with lithographic processes. In this paper we discuss integration issues from a lithographic perspective, including low-k materials selection and properties, integration sequences, use of hard masks and the effects on reflectivity, resist process compatibility and focus effects using an advanced DUV scanning system.