Basic image intensity parameters, such as maximum and minimum intensity values (Imax and Imin), image log slope (ILS), normalized image log slope (NILS) and mask error enhancement factor (MEEF), are well known indices of photolithography imaging quality. For full-chip verification, hotspot detection is typically based on threshold values for line pinching or bridging. For image intensity parameters it is generally harder to quantify an absolute value that defines where the process limit will occur, and at which process stage: lithography, etch or post-CMP. However, it is reasonable to conclude that hotspots captured by image intensity parameters are more susceptible to process variation and very likely to impact yield. In addition, these image intensity hotspots can be missed by resist model verification, because the resist model is normally calibrated to wafer data on a single resist plane and is an empirical model that fits the resist critical dimension with a mathematical algorithm combined with optical calculation. At the resolution enhancement technology (RET) development stage, a full-chip imaging quality check is also a method to qualify the RET solution, such as optical proximity correction (OPC) performance. Adding full-chip verification using image intensity parameters is also less costly than adding one more resist model simulation. From a foundry yield improvement and cost saving perspective, it is valuable to quantify imaging quality to find design hotspots and correctly define the inline process control margin. This paper studies the correlation between image intensity parameters and process weakness or catastrophic hard failures at different process stages. It also demonstrates how an OPC solution can improve full-chip image intensity parameters. Rigorous 3D resist profile simulation across the full height of the resist stack was also performed to identify a correlation with the image intensity parameters.
A methodology of post-OPC full-chip verification is proposed for improving OPC quality at the RET development stage and for inline process control and yield improvement at the production stage.
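As a concrete illustration of the image-log-slope metrics discussed above, here is a minimal sketch of extracting ILS and NILS at a feature edge. The logistic edge profile, grid spacing, threshold and CD values are all hypothetical stand-ins for a real simulated aerial image:

```python
# Sketch: computing ILS and NILS from a sampled aerial-image intensity
# profile. The logistic edge profile below is an illustrative stand-in
# for a real simulated image; all numbers are hypothetical.
import math

def ils_nils(intensity, x_step_nm, threshold, cd_nm):
    """Return (ILS, NILS) at the feature edge.

    ILS  = (1/I) * dI/dx evaluated where I crosses the print threshold
    NILS = ILS * CD (dimensionless; larger means a steeper, more
           process-tolerant image)
    """
    for i in range(len(intensity) - 1):
        lo, hi = intensity[i], intensity[i + 1]
        if (lo - threshold) * (hi - threshold) <= 0 and lo != hi:
            # finite-difference slope at the threshold crossing
            slope = (hi - lo) / x_step_nm
            ils = abs(slope) / threshold          # units: 1/nm
            return ils, ils * cd_nm
    raise ValueError("profile never crosses the threshold")

# Illustrative edge profile: intensity ramps from dark line to bright space.
xs = [i * 2.0 for i in range(50)]                 # 2 nm grid
profile = [1.0 / (1.0 + math.exp(-(x - 50.0) / 8.0)) for x in xs]
ils, nils = ils_nils(profile, 2.0, 0.3, 90.0)
```

An often-quoted rule of thumb treats NILS below roughly 2 as marginal imaging, which is one way such a metric can feed a full-chip hotspot threshold.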
This paper provides DFM solutions on yield improvement based on a foundry’s perspective. We have created a
novel work flow for efficient yield enhancement at different stages throughout the process of design-to-silicon. In
the design environment, in addition to conforming to the conventional design rule manual, we may guide the designer
to employ well-characterized regular logic bricks that are built from process-validated hotspots. Later, after
design sign-off, layout manipulation or layout retargeting are implemented during the mask preparation stage to
enlarge the process window when faced with a diversity of layout patterns in the design. At the same time, two
crucial methods, namely layout analysis and layout comparison, are used to capture all layout related detractors. The
first method can identify the process sensitive hotspots, which will be highlighted and anchored as process limiters
during the patterning process. Layout comparison can be an efficient way to narrow down the yield roadblocks by
debugging the yield loss on similar process and design styles. Another smart solution is creating customized process
control monitoring structures (PCM), which are extracted from previous yield ramping lessons and process hotspots.
These PCMs will be dropped into the scribe lanes of production tapeouts and serve as pioneer test keys for the initial
production ramp up.
As the semiconductor industry scales down to 90nm and below, Model-Based OPC has become a standard practice to
compensate for optical proximity effects and process variations occurring when printing features below the exposure
wavelength. For parametric OPC models, it is assumed that the empirical data are accurate and the model parameter
space is sufficiently well sampled. In spite of advanced metrology tools, the measurement uncertainty for 1D small
critical dimensions and 2D patterns remains a challenge. Traditionally, the weighting of SEM measurement points is
based either on statistical methods such as standard deviations or on engineers' judgment, which is either time consuming or
individual-dependent. In this paper, the slope-integrated OPC model calibration methodology is proposed, which takes
into account the slope as a weighting indicator. The additional measurement objects per calibration structure are
economically feasible, as most metrology tool time is spent on addressing and auto-focusing. When we consider one
measurement point with both CD and slope measurements, the slightly increased time is tolerable for a fab, which
requires a short turnaround time (TAT). By this approach, we can distinguish measurement points with low confidence
from the accurate ones. Furthermore, we check the fitting differences among equal-weighted data sheets, empirically
weighted data sheets and slope-weighted data sheets, using the same variable-threshold model form. From the edge
placement error (EPE) of fitting results and the overlap between simulated contours and SEM images, it is found that the
proposed slope-integrated methodology results in a more accurate and stable model.
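A toy illustration of the weighting idea, under the assumption named above that a steeper measured edge slope indicates a more trustworthy SEM CD. The linear model and the data tuples are hypothetical stand-ins for a variable-threshold resist model and real calibration structures:

```python
# Sketch of slope-weighted model fitting: each SEM CD measurement is
# weighted by its measured edge slope, so shallow-slope (low-confidence)
# points pull less on the fit. Model and data are illustrative only.

def weighted_linfit(xs, ys, weights):
    """Closed-form weighted least squares for y = a*x + b."""
    sw = sum(weights)
    mx = sum(w * x for w, x in zip(weights, xs)) / sw
    my = sum(w * y for w, y in zip(weights, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(weights, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(weights, xs))
    a = num / den
    return a, my - a * mx

# (pitch_nm, measured_cd_nm, measured_edge_slope) -- the last point is a
# noisy outlier with a shallow slope, i.e. a low-confidence measurement.
data = [(200, 101, 2.0), (300, 152, 2.2), (400, 199, 1.9), (500, 230, 0.3)]
xs = [d[0] for d in data]
ys = [d[1] for d in data]

a_eq, b_eq = weighted_linfit(xs, ys, [1.0] * len(data))      # equal weights
a_sl, b_sl = weighted_linfit(xs, ys, [d[2] for d in data])   # slope weights
```

With the underlying trend near 0.5 nm of CD per nm of pitch, the slope-weighted fit sits closer to the trend than the equal-weighted fit because the low-confidence outlier is down-weighted.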
The efficacy of the OPC model depends greatly on test pattern data calibration that accurately captures mask and wafer
processing characteristics. The CD deviation caused by an off-center mask process can easily consume the majority of
the lithography process CD budget. Mask manufacturing variables such as write tools' resolution, etch process effects,
and pre-bias of the fractured data have great impacts on OPC model performance. As a result, wafer performance using
masks from different mask shops varies due to variations in the mask manufacturing process, even if the masks are
written with the same data set and use the same manufacturing specifications. A methodology for mask manufacturing
calibration is proposed in order to make an OPC model consistent between two mask manufacturing processes. The
methodology consists of two parts: mask manufacturing calibration and wafer-level OPC accuracy verification. The
mask manufacturing process and metrology are calibrated separately. The OPC model is built based on the database of
the first-party mask shop, and OPC verification is carried out by wafer data using the newly calibrated mask from the
second-party mask shop. By checking wafer performance of both OPC model matrix items and complicated 2D
structures, the conclusion can be drawn that different mask shops can share the same OPC model with rigorous mask
calibration. This methodology leads to lower engineering costs, shorter turnaround time (TAT) and robust OPC
performance.
The number of tunable parameters increases dramatically as we push forward to the next node of hyper-NA immersion
lithography. It is very important to keep the lithographic process model calibration time under control, and its end result
insensitive to either the starting point in the parameter space or the noise in the measurement data. For minimizing the
least-squares error of a multivariate non-linear system, the industry standard is the Levenberg-Marquardt algorithm. We
describe a distributed computing technique that is natural to the algorithm, and easy to implement in a cluster of
computers. Applying this technique to calibrating a lithographic process model, we can achieve robust optimization results
in nearly constant calibration time.
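A rough sketch of why Levenberg-Marquardt distributes naturally: the finite-difference Jacobian columns are mutually independent, so each column (one perturbed model evaluation) can be farmed out to a separate cluster node. The two-parameter exponential model below is a hypothetical stand-in for a lithographic process model, and a real implementation would also adapt the damping factor `lam`:

```python
# Sketch of Levenberg-Marquardt in which the finite-difference Jacobian
# columns are independent (the loop marked below is embarrassingly
# parallel). The model and data are illustrative, not a real process model.
import math

def residuals(p, xs, ys, model):
    return [model(p, x) - y for x, y in zip(xs, ys)]

def jacobian(p, xs, ys, model, eps=1e-6):
    r0 = residuals(p, xs, ys, model)
    cols = []
    for j in range(len(p)):           # <- each iteration is independent
        pj = list(p)
        pj[j] += eps
        rj = residuals(pj, xs, ys, model)
        cols.append([(b - a) / eps for a, b in zip(r0, rj)])
    return cols, r0

def lm_fit(p, xs, ys, model, lam=1e-3, iters=100):
    for _ in range(iters):
        J, r = jacobian(p, xs, ys, model)
        m, n = len(r), len(p)
        # normal equations (J^T J + lam*I) dp = -J^T r, solved for n = 2
        A = [[sum(J[i][k] * J[j][k] for k in range(m))
              + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
        g = [-sum(J[i][k] * r[k] for k in range(m)) for i in range(n)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        dp = [(g[0] * A[1][1] - g[1] * A[0][1]) / det,
              (g[1] * A[0][0] - g[0] * A[1][0]) / det]
        p = [a + b for a, b in zip(p, dp)]
    return p

model = lambda p, x: p[0] * math.exp(-p[1] * x)
xs = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(-1.5 * x) for x in xs]      # noiseless synthetic data
p_fit = lm_fit([1.0, 1.0], xs, ys, model)
```

Because the per-column model evaluations dominate the cost, distributing them keeps wall-clock calibration time roughly constant as the parameter count grows, which is the behavior the abstract describes.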
Model-based Optical Proximity Correction (MBOPC) is used to make systematic modifications that transfer a pattern's design intent from a drawn database to a wafer. This is accomplished by manipulating the shape of mask features to generate the desired pattern (design intent) on the wafer. MBOPC accomplishes this task by dividing drawn patterns into segments, then using a process model to manipulate these segments to achieve the design intent on the wafer. The generation of an accurate process model is very important to the MBOPC process because it contains the process information used to manipulate correction segments. When corrected data are written on a reticle, faithful and well-controlled reproduction of the data on the mask is critical to realizing the desired lithographic performance. This paper will explore methodologies to improve model accuracy using mask fabrication data and process test patterns. Model accuracy improvement will be accomplished using intelligent sampling plans and representative mask structures. The sampling plan needs to identify critical device and process features. The test mask used to generate the process model needs test structures to gather process data. The test mask must also have test structures that can evaluate model quality by testing the extrapolation and interpolation of the model against data that were not used to generate it. These methodologies will be shown to improve final mask pattern quality.
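The segment-manipulation loop described above can be caricatured in a few lines: simulate the printed contour, measure the edge placement error per segment, move each segment against its error, and repeat. The linear `process_model` here is a hypothetical toy, not a calibrated optical/resist model:

```python
# Sketch of the MBOPC feedback loop: each drawn edge segment is moved
# opposite to its simulated edge-placement error until the predicted
# wafer contour lands on the design intent. All numbers are illustrative.

def process_model(mask_edges):
    """Toy model: each printed edge is pulled toward its neighbors."""
    printed = []
    for i, e in enumerate(mask_edges):
        left = mask_edges[i - 1] if i > 0 else e
        right = mask_edges[i + 1] if i < len(mask_edges) - 1 else e
        printed.append(0.8 * e + 0.1 * left + 0.1 * right)
    return printed

def opc_correct(targets, iterations=30, damping=0.8):
    mask = list(targets)                       # start from the drawn layout
    for _ in range(iterations):
        epe = [p - t for p, t in zip(process_model(mask), targets)]
        mask = [m - damping * e for m, e in zip(mask, epe)]
    return mask

targets = [0.0, 100.0, 250.0, 400.0]           # design-intent edge positions
mask = opc_correct(targets)
printed = process_model(mask)
max_epe = max(abs(p - t) for p, t in zip(printed, targets))
```

The corrected mask edges end up deliberately offset from the design intent so that the simulated wafer contour, not the mask, matches the drawn pattern.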
Lithography Rule Check (LRC) has become a necessary post-OPC procedure for 0.15μm LV and below technologies in order to guarantee mask layout correctness. LRC uses a process model to simulate the mask pattern and compare its performance to the desired layout. When the results are outside specified tolerances, LRC generates error flags marking weak points that trigger further checks. This paper introduces LRC to detect weak points even in circuit layouts without OPC, such as 0.18μm to 0.15μm processes. LRC is even more important for a semiconductor foundry, since diverse design layouts and shrinks are in production. This diversity leads to the possibility of problematic structures reaching the reticle. In this work, LRC is added as a necessary step in the tape-out procedure for sub-0.18μm process nodes. LRC detected weak points such as low- or excessive-contrast sites, high-MEEF areas and small-process-window features, and the layout was then modified according to the check results. Our work showed that some potential mask-related problems can be avoided by LRC even in processes without model-based OPC, thereby helping to guarantee improved product yield.
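A minimal sketch of an LRC-style pass over candidate sites, assuming hypothetical site records and threshold values; a real flow would derive the simulated CD, contrast and MEEF from a process model rather than from hard-coded tuples:

```python
# Sketch of an LRC-style check: simulated CDs at candidate sites are
# compared against the drawn target, and sites outside tolerance or with
# excessive MEEF are flagged as weak points. Thresholds are illustrative.

CD_TOL_PCT = 10.0      # allowed |simulated - target| as % of target
MEEF_LIMIT = 4.0       # flag strong mask-error amplification

def lrc_check(sites):
    flags = []
    for name, target_cd, sim_cd, meef in sites:
        dev_pct = abs(sim_cd - target_cd) / target_cd * 100.0
        if dev_pct > CD_TOL_PCT:
            flags.append((name, "CD deviation %.1f%%" % dev_pct))
        if meef > MEEF_LIMIT:
            flags.append((name, "high MEEF %.1f" % meef))
    return flags

# Hypothetical sites: (name, target CD nm, simulated CD nm, MEEF)
sites = [
    ("line_A", 180.0, 176.0, 2.1),    # passes both checks
    ("line_B", 180.0, 150.0, 2.5),    # pinching risk: CD well below target
    ("space_C", 200.0, 196.0, 5.2),   # high-MEEF area
]
weak_points = lrc_check(sites)
```

Each flagged site would then be reviewed and the layout locally modified, mirroring the check-then-fix loop the abstract describes.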
Model-based Optical Proximity Correction has become an indispensable tool for achieving wafer-pattern-to-design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction into a more process-robust configuration. The study will first examine and validate the process of generating an off-target model, then examine the quality of the off-target model. Once the off-target model is proven, it will be used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.
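One way to picture how off-target models change a correction: instead of zeroing the edge placement error under the nominal model alone, choose the correction that minimizes the worst-case error across the nominal and off-target models. The three linear (gain, offset) models and all numbers below are hypothetical stand-ins for calibrated nominal and off-target process models:

```python
# Sketch of process-robust correction with off-target models: a mask CD
# bias is chosen to minimize the worst-case edge-placement error across
# the nominal model and two off-target (defocus/dose-like) models.

def printed_cd(mask_cd, model):
    gain, offset = model
    return gain * mask_cd + offset

NOMINAL = (1.00, -4.0)       # prints 4 nm narrow at nominal (illustrative)
DEFOCUS = (0.95, -2.0)       # hypothetical off-target model A
OVERDOSE = (1.03, -6.0)      # hypothetical off-target model B

def robust_bias(target_cd, models, biases):
    """Pick the bias whose worst-case |EPE| over all models is smallest."""
    def worst(b):
        return max(abs(printed_cd(target_cd + b, m) - target_cd)
                   for m in models)
    return min(biases, key=worst)

biases = [i * 0.5 for i in range(-10, 21)]       # candidate biases, nm
b_nom = robust_bias(130.0, [NOMINAL], biases)
b_rob = robust_bias(130.0, [NOMINAL, DEFOCUS, OVERDOSE], biases)
```

The nominal-only choice drives the nominal error to zero, while the robust choice accepts a small nominal error in exchange for a smaller worst-case error across the process window, which is the trade-off off-target models make visible.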