In advanced nodes, the extension of DUV lithography deep into sub-wavelength dimensions has led to the exploration of many new Resolution Enhancement Techniques (RET). Generally speaking, these RET have enabled higher resolution at the same exposure wavelength, but at the cost of an increasingly complex mask optimization process. One such technique applied to perform Optical Proximity Correction (OPC) is the Inverse Lithography Technique (ILT). It promises the best theoretically possible mask design by solving the inverse problem, where the optical transform from mask to wafer image is solved in reverse using a rigorous mathematical approach [1]. Although the benefits and potential of ILT in producing a single-exposure mask are well documented [2], its implementation in multiple-patterning OPC (MP-OPC) is less explored. In this paper, ILT mask optimization is applied to a metal layer patterned with three exposures in a litho-etch x3 (LELELE) process flow, demonstrating both multi-exposure and etch awareness within the ILT mask correction scheme. This is accomplished by including inter-layer constraints on the resist and post-etch contours in the objective function of the ILT optimization. The ability to reduce potential inter-exposure failure modes, as well as the associated increase in computational resources, will be assessed. Additionally, the results will be compared against a conventional model-based OPC with similar multi-exposure and etch awareness.
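As a rough illustration of what such an objective can look like, the sketch below is a toy model only: a Gaussian blur stands in for the rigorous imaging model, a sigmoid for the resist response, and a uniform shrink for the etch, and all function names are hypothetical. It sums per-exposure resist fidelity terms with an inter-exposure post-etch overlap penalty of the kind described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # crude stand-in for the optical/etch transforms

def resist_image(mask, blur=2.0, threshold=0.5, k=20.0):
    """Aerial-image proxy (Gaussian blur) followed by a sigmoid resist model."""
    aerial = gaussian_filter(np.asarray(mask, float), blur)
    return 1.0 / (1.0 + np.exp(-k * (aerial - threshold)))

def etch_contour(resist, bias=1.0, threshold=0.5, k=20.0):
    """Proximity-independent etch proxy; a calibrated etch model would replace this."""
    return 1.0 / (1.0 + np.exp(-k * (gaussian_filter(resist, bias) - threshold)))

def ilt_cost(masks, targets, w_resist=1.0, w_interlayer=0.5):
    """Per-exposure resist fidelity terms plus an inter-exposure post-etch overlap
    penalty (a proxy for color-to-color space violations after etch)."""
    resists = [resist_image(m) for m in masks]
    etches = [etch_contour(r) for r in resists]
    cost = sum(w_resist * np.sum((r - t) ** 2) for r, t in zip(resists, targets))
    for i in range(len(etches)):
        for j in range(i + 1, len(etches)):
            cost += w_interlayer * np.sum(etches[i] * etches[j])
    return cost
```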
State-of-the-art OPC recipes for production semiconductor manufacturing are fine-tuned, often artfully crafted parameter sets designed to achieve design fidelity and maximum process window across the enormous variety of patterns in a given design level. In the typical technology lifecycle, the process for creating a recipe is iterative. In the initial stages, little to no “real” design content is available for testing. Therefore, an engineer may start with the recipe from a previous node, adjust it based on known ground rules and a few test patterns and/or scaled designs, and then refine it based on hardware results. As the technology matures, more design content becomes available to refine the recipe, but it becomes more difficult to make major changes without significantly impacting the overall technology scope and schedule. The dearth of early design information is a major risk factor: unforeseen patterning difficulties (e.g., due to holes in design rules) are costly when caught late.
To mitigate this risk, we propose an automated flow that is capable of producing large-scale realistic design content, and then optimizing the OPC recipe parameters to maximize the process window for this layout. The flow was tested with a triple-patterned 10nm node 1X metal level. First, design-rule clean layouts were produced with a tool called Layout Schema Generator (LSG). Next, the OPC recipe was optimized on these layouts, with a resulting reduction in the number of hotspots. For experimental validation, the layouts were placed on a test mask, and the predicted hotspots were compared with hardware data.
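The recipe-tuning loop can be pictured roughly as below. This is a hedged sketch: generate_layouts, run_opc, and count_hotspots are placeholders for the LSG, OPC, and verification steps, and the parameter names in the example grid are illustrative rather than actual recipe knobs.

```python
from itertools import product

def tune_recipe(generate_layouts, run_opc, count_hotspots, param_grid):
    """Grid-search the OPC recipe parameters that minimize predicted hotspots
    on LSG-generated, design-rule-clean layouts."""
    layouts = generate_layouts()
    best_params, best_count = None, float("inf")
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        hotspots = sum(count_hotspots(run_opc(layout, params)) for layout in layouts)
        if hotspots < best_count:
            best_params, best_count = params, hotspots
    return best_params, best_count

# Illustrative (hypothetical) parameter grid:
grid = {"iterations": [6, 8, 10], "fragment_len_nm": [40, 60], "retarget_bias_nm": [0, 1, 2]}
```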
As technology advances into deep-submicron nodes, mask manufacturing process accuracy becomes more important. Mask Process Correction (MPC) has been transitioning from a rules-based to a model-based correction mode. MPC is a subsequent step to OPC, in which additional perturbation is applied to the mask shapes to correct for the mask manufacturing process. The shift toward fully model-based MPC is driven mainly by the accuracy requirements of advanced technology nodes, for both DUV and EUV processes.
In the current state of the art, MPC is completely decoupled from OPC, with each assuming that the other executes perfectly. However, this decoupling is no longer adequate given the limited tolerance in the mask CDU budget and the increased wafer CDU requirements placed on OPC. It is becoming more important to reduce any systematic mask errors, especially where they matter most. In this work, we present a new combined-verification methodology that tests the combined effect of the mask process and the lithography process together and judges the final wafer patterning quality. This has the potential to intercept risks due to the superposition of OPC and MPC correction residuals, capturing and correcting a previously hidden source of patterning degradation.
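A minimal sketch of the combined-verification idea, assuming placeholder callables for the calibrated mask-process and litho models and for the per-site target edges: the post-correction mask is first passed through the mask model, then through the litho model, and wafer EPE is checked against spec.

```python
def combined_verification(post_mpc_mask, target_edges, mask_model, litho_model, epe_spec):
    """Flag sites whose wafer EPE exceeds spec when the mask-process model and the
    litho model are applied in sequence (all models are placeholder callables)."""
    printed_mask = mask_model(post_mpc_mask)    # systematic mask-process distortion
    wafer_edges = litho_model(printed_mask)     # simulated wafer edge position per site
    return [site for site in target_edges
            if abs(wafer_edges[site] - target_edges[site]) > epe_spec]
```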
As technology development advances into deep sub-wavelength nodes, multiple patterning is becoming essential to achieve the technology shrink requirements. Recently, Optical Proximity Correction (OPC) technology has proposed simultaneous correction of multiple mask patterns to enable multiple-patterning awareness during OPC correction. This is essential to prevent inter-layer hot-spots during the final pattern transfer. In the state-of-the-art literature, multi-layer awareness is achieved using simultaneous resist-contour simulations to predict and correct hot-spots during mask generation. However, this approach assumes a uniform etch shrink response for all patterns, independent of their proximity, which is not sufficient to fully prevent inter-exposure hot-spots such as post-etch space violations between different colors or post-etch via coverage/enclosure failures.
In this paper, we explain the need to include the etch component during multiple-patterning OPC. We also introduce a novel approach for etch-aware simultaneous multiple-patterning OPC, in which we calibrate and verify a lumped model that includes the combined resist and etch responses. Adding this extra simulation condition during OPC is suitable for full-chip processing from a computational-intensity point of view. Using this model during OPC to predict and correct inter-exposure hot-spots is similar to previously proposed multiple-patterning OPC, yet the proposed approach also corrects post-etch defects more accurately.
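As a hedged illustration of the lumped-model idea (the actual model form and calibration procedure are not specified here), the sketch below fits a single compact function mapping simulated resist CD and a local-density term directly to measured post-etch CD, so one extra simulation condition per exposure captures resist and etch together; a linear least-squares fit stands in for the real calibrated model.

```python
import numpy as np

def fit_lumped_model(resist_cd, local_density, measured_etch_cd):
    """Least-squares fit of etch_cd ~ a*resist_cd + b*density + c (toy model form)."""
    resist_cd = np.asarray(resist_cd, float)
    A = np.column_stack([resist_cd, np.asarray(local_density, float),
                         np.ones_like(resist_cd)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(measured_etch_cd, float), rcond=None)
    return coeffs

def predict_etch_cd(coeffs, resist_cd, local_density):
    a, b, c = coeffs
    return a * np.asarray(resist_cd, float) + b * np.asarray(local_density, float) + c
```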
Early in a semiconductor node’s process development cycle, the technology definition is locked down using somewhat risky assumptions about what the process can deliver once it matures. In this early phase, detailed design rules start to be codified while the wafer patterning process is still being fine-tuned. As the process moves along the development cycle and wafer processes are dialed in, key yield improvement efforts focus on variability reduction. Design retargeting definitions are tweaked and finalized, and finely tuned etch models are applied to compensate for process bias and to accurately capture the more mature wafer process. The resulting mature patterning process is quite different from the one developed during the early stages of the technology definition. In this paper we describe an approach and flow to drive continuous improvement in the mask solution (OPC and MBSRAF) later in the process development and production-readiness cycle. First, we establish the process window entitlement within the design space by utilizing advanced mask optimization (MO) combined with the baseline process (i.e., model, etch compensation, and design retargeting). Second, gaps to the entitlement are used to identify and target issues with the existing OPC recipe and to drive continuous improvements that close these performance gaps across the critical design rules. We demonstrate this flow on a 20 nm contact layer.
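The gap-to-entitlement step can be sketched as follows, assuming placeholder callables that return a process-window figure (e.g., DOF) for the production recipe and for the mask-optimized entitlement on the same pattern set; patterns whose gap exceeds a threshold are the ones fed back into recipe improvement.

```python
def entitlement_gaps(patterns, baseline_window, entitlement_window, gap_threshold):
    """Rank patterns by the process-window gap between the baseline OPC recipe and
    the mask-optimization entitlement (both callables are placeholders)."""
    gaps = {}
    for pattern in patterns:
        delta = entitlement_window(pattern) - baseline_window(pattern)
        if delta > gap_threshold:
            gaps[pattern] = delta
    return dict(sorted(gaps.items(), key=lambda kv: -kv[1]))  # worst gaps first
```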
As technology development advances into deep submicron nodes, it is very important not to ignore any systematic effect that can impact CD uniformity and the final parametric yield. One important challenge for OPC is in choosing the proper etch process correction flow to compensate for design-to-design etch shrink variations. Although model-based etch compensation tools have been commercially available for a few years now, rules-based etch compensation tables have been the standard practice for several nodes. In our work, we study the limitations of the rules-based etch compensation versus model-based etch compensation. We study a 10nm process and provide the details of why using Model-Based Etch Process Correction can achieve up to 15% improvement in final CD uniformity. We also provide a systematic methodology for identifying the proper etch correction technique for a given etch process and assessing the potential accuracy gain when switching to the model-based etch correction.
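One way to picture the comparison (a toy sketch, not the actual methodology): look up the fixed bias a rules table would apply for each gauge's width/space bin, predict the bias with an etch model, and compare the spread of the residuals against the measured etch bias. The table format and the etch_model callable are assumptions.

```python
import numpy as np

def rule_bias(width, space, table):
    """Nearest-bin lookup in a rules table of the form {(width_nm, space_nm): bias_nm}."""
    key = min(table, key=lambda k: abs(k[0] - width) + abs(k[1] - space))
    return table[key]

def compare_residuals(gauges, measured_bias, table, etch_model):
    """Return the residual sigma of rule-based vs model-based etch compensation."""
    rule_res = [measured_bias[g] - rule_bias(*g, table) for g in gauges]
    model_res = [measured_bias[g] - etch_model(*g) for g in gauges]
    return np.std(rule_res), np.std(model_res)
```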
Dummy fill insertion is a necessary step in modern semiconductor technologies to achieve homogeneous pattern density per layer. This benefits several fabrication process steps, including but not limited to Chemical Mechanical Polishing (CMP), etching, and packaging. As the technology keeps shrinking, fill shapes become more challenging to pattern and require aggressive model-based optical proximity correction (MBOPC) to achieve better design fidelity. MBOPC on fill is a challenge for mask data prep runtime and final mask shot count, which affects the total turnaround time (TAT) and mask cost. In our work, we introduce a novel flow that achieves a robust and computationally efficient fill-handling methodology during mask data prep, keeping both runtime and shot count within acceptable levels. In this flow, fill shapes undergo a smart MBOPC step that improves the final wafer printing quality and topography uniformity without degrading the final shot count or the OPC cycle runtime. This flow is tested on both front-end-of-line (FEOL) and back-end-of-line (BEOL) layers and results in improved final printing of the fill patterns while consuming less than 2% of the full MBOPC flow runtime.
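A hypothetical outline of such a fill-handling step (the callables, option names, and settings are illustrative only, not actual tool options): fill shapes receive a lightweight, shot-count-aware MBOPC pass with coarse fragmentation and grid-snapped edges, while main features keep the full recipe, and the two results are merged for mask data prep.

```python
def opc_with_smart_fill(layout, run_opc, grid_nm=2):
    """Run full OPC on main features and a cheap, shot-count-friendly pass on fill."""
    main_out = run_opc(layout["main"], iterations=10, fragment_len_nm=40)
    fill_out = run_opc(layout["fill"], iterations=2, fragment_len_nm=120)
    fill_out = [snap_to_grid(shape, grid_nm) for shape in fill_out]
    return main_out + fill_out

def snap_to_grid(polygon, grid_nm):
    """Snap vertices to a coarse grid so fill edges fracture into very few shots."""
    return [(round(x / grid_nm) * grid_nm, round(y / grid_nm) * grid_nm)
            for x, y in polygon]
```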
Sub-wavelength photolithography depends heavily on OPC (optical proximity correction); pattern fidelity and CD uniformity cannot be achieved without good OPC. The OPC runtime and resource requirements have been increasing exponentially with every node and, with the 20nm node approaching production, are reaching a problematic level in terms of both runtime and cost. A sizable portion of the OPC computation is spent on small iterative mask perturbations that move the mask toward a state printing closer to the OPC target, followed by a final few iterations aiming to accurately achieve printability on target with nearly zero EPE (edge placement error). In our work, we propose replacing the first few iterations of OPC with a single fast multi-model iteration that perturbs the OPC mask into a shape very close to its final state. This approach is shown to reduce OPC runtime by an average of 28% without degrading the final mask quality.
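The idea can be sketched as a warm start (the names and the simple averaging rule are illustrative assumptions, not the actual multi-model formulation): several cheap models each predict a fragment's final displacement, the predictions are combined in one pass, and the regular, accurate OPC loop then runs for only a few remaining iterations.

```python
def warm_start_opc(fragments, fast_models, full_opc, remaining_iters=4):
    """Replace the first OPC iterations with one multi-model displacement estimate."""
    for frag in fragments:
        predictions = [model(frag) for model in fast_models]  # cheap per-fragment estimates
        frag["offset_nm"] = sum(predictions) / len(predictions)
    return full_opc(fragments, iterations=remaining_iters)    # finish with the accurate model
```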
Printing small vias with tight pitches is becoming very challenging and consequently, different techniques are explored to achieve a robust and stable process. These techniques include reverse tone imaging (RTI) process, source optimization, mask transmission (attenuated Phase Shift Masks (attnPSM) versus binary thin OMOG masks), three-dimensional mask effects models, and SRAF printing models. Simulations of NILS, MEEF, DoF and process variability (PV) band width across a wide range of patterns are used to compare these different techniques in addition to the experimental process window. The results show that the most significant benefits can be gained by using attnPSM masks in conjunction with source optimization and RTI process. However, this improvement alone is not enough; every facet of the computational lithography and process must be finely tuned to produce sufficient imaging quality. As technology continues to shrink, Electromagnetic Field (EMF)-induced errors limit the scalability of this process and we will discuss the need for advanced techniques to suppress and correct for them.
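For reference, two of the metrics named above can be computed from simulated 1-D intensity cutlines roughly as sketched below (a hedged illustration: measure_wafer_cd is a placeholder for the imaging/resist model, and MEEF is expressed in 1x wafer dimensions).

```python
import numpy as np

def nils(x_nm, intensity, threshold, cd_nm):
    """Normalized image log-slope: |d(ln I)/dx| at the printing threshold, times CD."""
    edge = np.argmin(np.abs(intensity - threshold))   # sample nearest the contour
    ils = np.gradient(np.log(intensity), x_nm)[edge]
    return abs(ils) * cd_nm

def meef(measure_wafer_cd, mask_cd_nm, delta_nm=1.0):
    """Mask error enhancement factor via a central difference on the mask CD."""
    return (measure_wafer_cd(mask_cd_nm + delta_nm)
            - measure_wafer_cd(mask_cd_nm - delta_nm)) / (2 * delta_nm)
```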
In this work, we present a new technique to detect non-litho-friendly design areas based on their aerial image signature. The aerial image is calculated for the litho target (pre-OPC). This is followed by fixing (retargeting) the design to achieve a litho-friendly OPC target. The technique is applied and tested on a 28 nm metal layer and shows a significant improvement in process window performance. The optimized Aerial-Image-Retargeting (AIR) recipe is very computationally efficient, consuming no more than 1% of the OPC flow runtime.
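A minimal sketch of the AIR idea, with a Gaussian blur standing in for the actual aerial-image model and the thresholds chosen arbitrarily: pixels of the pre-OPC target whose image intensity is too weak are flagged as non-litho-friendly and locally widened before OPC.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation

def air_retarget(target, blur=2.0, min_intensity=0.35, grow=1):
    """target: boolean array of the pre-OPC litho target; returns (retargeted, flagged)."""
    aerial = gaussian_filter(target.astype(float), blur)   # cheap aerial-image proxy
    weak = target & (aerial < min_intensity)               # target areas that image poorly
    widened = binary_dilation(target, iterations=grow)     # crude local widening (retarget)
    return np.where(weak, widened, target), weak
```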
Sub-resolution Assist Feature (SRAF) insertion is one of the most important Resolution Enhancement Techniques (RET) for the 65 nm and 45 nm nodes and beyond. In this paper, we propose a novel approach for the optimum placement of 2D SRAF structures using the state-of-the-art Calibre RET flow. In this approach, the optimal SRAF shapes are achieved simultaneously during the OPC step. The SRAF and main features are optimized to account for their edge placement and process window metrics (aerial image slope/contrast, off-focus/off-dose EPE, etc.). The resulting mask shapes deliver some of the properties obtainable with Inverse Lithography Techniques (ILT), such as excellent process window performance, with almost no impact on runtime. The implemented model-based optimization flow remains compatible with current OPC production flows.
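Conceptually, the co-optimization can be pictured as below. This is a speculative sketch, not the Calibre implementation: in each shared simulation pass, main fragments move on EPE while SRAF fragments move on an image-log-slope target and shrink if they come close to printing; the simulate callable and its result attributes are assumptions.

```python
def co_optimize(main_frags, sraf_frags, simulate, iterations=8,
                feedback=0.6, ils_target=2.0, print_margin=0.15):
    """Toy loop optimizing main features on EPE and SRAFs on process-window metrics."""
    for _ in range(iterations):
        result = simulate(main_frags + sraf_frags)          # one shared simulation per pass
        for frag in main_frags:
            frag["offset_nm"] -= feedback * result.epe(frag)
        for sraf in sraf_frags:
            sraf["offset_nm"] += feedback * (ils_target - result.ils(sraf))
            if result.peak_intensity(sraf) > print_margin:  # SRAF about to print: shrink it
                sraf["width_nm"] -= 1.0
    return main_frags, sraf_frags
```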
To maximize the process window and CD control of main features, sizing and placement rules for sub-resolution assist
features (SRAF) need to be optimized, subject to the constraint that the SRAFs not print through the process window.
With continuously shrinking target dimensions, generation of traditional rule-based SRAFs is becoming an expensive
process in terms of time, cost and complexity. This has created an interest in other rule optimization methodologies, such
as image contrast and other edge- and image-based objective functions.
In this paper, we propose using an automated model-based flow to obtain the optimal SRAF insertion rules for a design
and reduce the time and effort required to define the best rules. In this automated flow, SRAF placement is optimized by
iteratively generating the space-width rules and assessing their performance against process variability metrics. Multiple
metrics are used in the flow. Process variability (PV) band thickness is a good indicator of the process window
enhancement. Depth of focus (DOF), the total range of focus that can be tolerated, is also a highly descriptive metric for
the effectiveness of the sizing and placement rules generated. Finally, scatter bar (SB) printing margin calculations
assess the allowed exposure range that prevents scatter bars from printing on the wafer.
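The rule-search loop can be sketched as follows (a hedged outline in which place_srafs and evaluate are placeholders for the model-based placement and the PV-band/DOF/printing analysis, and the combined score is only one possible choice): each candidate width/gap pair is evaluated, rules that let scatter bars print are rejected, and the best survivor is returned.

```python
from itertools import product

def optimize_sraf_rules(layout, place_srafs, evaluate, widths_nm, gaps_nm):
    """Search space-width SRAF rules against process-variability metrics."""
    best_rule, best_score = None, float("-inf")
    for width, gap in product(widths_nm, gaps_nm):
        trial = place_srafs(layout, sraf_width=width, main_to_sraf_gap=gap)
        metrics = evaluate(trial)            # e.g. {"pv_band": ..., "dof": ..., "sb_print_margin": ...}
        if metrics["sb_print_margin"] <= 0:  # scatter bars print somewhere: reject rule
            continue
        score = metrics["dof"] - metrics["pv_band"]   # simple combined objective
        if score > best_score:
            best_rule, best_score = (width, gap), score
    return best_rule, best_score
```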
In this work, the reduction of the shot count of the mask data is studied. The shot count reduction is achieved by reducing the number of jogs resulting from the model-based optical proximity correction (MBOPC) stage. To reduce the number of OPC jogs, we study the impact of aligning very small jogs on the shot count as well as their effect on the residual edge placement error (EPE). The jog-alignment phase is performed during OPC rather than after it, so that the post-alignment OPC iterations are responsible for correcting any residual average EPE resulting from the jog alignment. The results of this approach show an 18% reduction of the total shot count in the mask fabrication stage, while the EPE distribution remains almost the same as that of the standard OPC approach. This makes the OPC flow more fracture-friendly and is expected to decrease the fractured data size, the mask writing time, and the mask cost.
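A toy illustration of the in-loop jog alignment (the fragment representation and tolerance are assumptions): after an OPC iteration, adjacent fragments on the same edge whose corrections differ by less than a small tolerance are snapped to a common value, and the remaining iterations absorb the residual EPE.

```python
def align_small_jogs(edge_fragments, tol_nm=1.0):
    """edge_fragments: list of dicts with an 'offset_nm' correction, in edge order."""
    aligned = [dict(edge_fragments[0])]
    for frag in edge_fragments[1:]:
        frag = dict(frag)
        if abs(frag["offset_nm"] - aligned[-1]["offset_nm"]) < tol_nm:
            frag["offset_nm"] = aligned[-1]["offset_nm"]   # remove the sub-tolerance jog
        aligned.append(frag)
    return aligned

# e.g. offsets [3.2, 3.6, 3.5, 7.0] with tol_nm=1.0 become [3.2, 3.2, 3.2, 7.0]:
# three collinear fragments merge into a single shot-friendly edge segment.
```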
Sub-resolution assist features (SRAFs) or scatter bars (SBs) have steadily proliferated through IC
manufacturer data preparation flows as k1 is pushed lower with each technology node. The use of this
technology is quite common for gate layer at 130 nm and below, with increasingly complex geometric rules
being utilized to govern the placement of SBs in proximity to target layer features. Recently, model-based approaches for placement of SBs have emerged. In this work, a variety of rule-based and model-based SB options is explored for the gate layer using new characterization and optimization functions available in the latest generation of correction and OPC verification tools. These include the ability to quantify
across chip CD control with statistics on a per gate basis. The analysis includes the effects of defocus,
exposure, and misalignment, and it is shown that significant improvements to CD control through the full
manufacturing variability window can be realized.
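The per-gate statistics described above can be sketched as below, assuming a placeholder simulate_cd callable that returns the gate CD for a given (defocus, dose, misalignment) corner.

```python
import statistics

def per_gate_cd_stats(gates, corners, simulate_cd):
    """corners: iterable of (defocus_nm, dose_pct, misalign_nm) process-window corners."""
    stats = {}
    for gate in gates:
        cds = [simulate_cd(gate, *corner) for corner in corners]
        stats[gate] = {"mean": statistics.mean(cds),
                       "range": max(cds) - min(cds),
                       "stdev": statistics.pstdev(cds)}
    return stats
```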
Current state-of-the-art OPC (optical proximity correction) for 2-dimensional features consists of optimized
fragmentation followed by site simulation and subsequent iterations to adjust fragment locations and
minimize edge placement error (EPE). Internal and external constraints have historically been available in
production quality code to limit the movement of certain fragments, and this provides additional control for
OPC. Values for these constraints are left to engineering judgment, and can be based on lithography
process limitations, mask house process limitations, or mask house inspection limitations. Often times
mask house inspection limitations are used to define these constraints. However, these inspection
restrictions are generally more complex than the 2 degrees of freedom provided in existing standard OPC
software. Ideally, the most accurate and robust OPC software would match the movement constraints to
the defect inspection requirements, as this prevents over-constraining the OPC solution.
This work demonstrates significantly improved 2-D OPC correction results based on matching movement
constraints to inspection limitations. Improvements are demonstrated on a constructed array of 2D designs as well as on critical-level chip designs used in 45nm technology. Enhancements to OPC efficacy are proven for
several types of features. Improvements in overall EPE (edge placement error) are demonstrated for
several different types of structures, including mushroom type landing pads, iso crosses, and H-bar
structures. Reductions in corner rounding are evident for several 2-dimensional structures, and are shown
with dense print image simulations. Dense arrays (SRAM) processed with the new constraints receive
better overall corrections and convergence. Furthermore, OPC and ORC (optical rules checking)
simulations on full chip test sites with the advanced constraints have resulted in tighter EPE distributions,
and overall improved printing to target.
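One way to picture matching movement constraints to inspection limits (a speculative sketch, not the production implementation): rather than one global internal/external limit, each fragment's allowed move window is supplied by a callable encoding the inspection rules for its local geometry, and the OPC move is clamped to that window.

```python
def apply_inspection_constraints(fragments, inspection_limit):
    """Clamp each fragment's OPC move to the window allowed by the inspection rules."""
    for i, frag in enumerate(fragments):
        neighbors = fragments[max(0, i - 1):i] + fragments[i + 1:i + 2]
        low, high = inspection_limit(frag, neighbors)   # per-fragment move window (nm)
        frag["offset_nm"] = min(max(frag["offset_nm"], low), high)
    return fragments
```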
We propose a new approach for the design of phased array (PHASAR) demultiplexers. The approach is based on cascading multimode interference (MMI) PHASAR structures with different frequency responses to obtain an optimized overall performance. We show that with this approach we can improve the uniformity of the overall demultiplexer (DMUX), especially when a large number of output ports is required. The design of a 64-output DMUX with only 0.2-dB uniformity is demonstrated, compared with the 7-dB uniformity of a two-cascaded eight-channel demultiplexer also shown in this work. This approach allows the designer to optimize the structure design, considering both the insertion loss and the uniformity.
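The cascading principle can be illustrated numerically with the toy calculation below (the coefficients are invented for illustration and are not taken from the reported device): per-channel losses add in dB, so a second stage whose response is shaped to complement the first stage's roll-off flattens the loss across output ports.

```python
import numpy as np

ch = np.arange(64)
stage1_db = -7.0 * ((ch - 31.5) / 31.5) ** 2   # first stage: roll-off toward the band edges
stage2_db = -7.0 - 0.97 * stage1_db            # second stage shaped to complement stage 1
total_db = stage1_db + stage2_db               # cascade: per-channel losses add in dB
print("single-stage uniformity (dB):", round(float(stage1_db.ptp()), 2))
print("cascaded uniformity (dB):   ", round(float(total_db.ptp()), 2))
```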