More aggressive feature scaling and increasingly complex transistor structures are driving a steady rise in process complexity, raising the risk that a specified pattern may not be manufacturable with acceptable yield.
A single layer now requires more process steps, and each of those entails more tunable parameters than ever before. To help manage design risk, foundries provide detailed design for manufacturing (DFM) rules that tell designers which patterns to avoid. However, the sheer number of DFM rules for a leading-edge process can overwhelm conventional design methods.
A design that meets all foundry requirements simultaneously may not exist, in which case the design model will fail to converge. Even if a solution does exist, finding it may require unrealistic amounts of computation time.
Kostas Adam, vice president of engineering at Synopsys and head of the company’s mask solutions team, explained that machine learning techniques based on neural networks can augment or replace conventional compact models. They can help identify patterns that pose significant design risk, find cause-and-effect relationships, and modify designs to limit those risks.
Among other things, DFM constraints affect the power, performance, and area (PPA) tradeoffs for the design. The simplest solution to a DFM violation is often to move the affected devices farther apart, but that is rarely the solution with the best PPA result. Conversely, achieving aggressive PPA targets may require accepting more design risk, and potentially lower yield. Evaluating these interactions can be extremely difficult, though, especially in the early stages of process development, when few actual designs are available. Without realistic designs to test against, design rules are hard to refine.
Finding and fixing hot spots
Geng Han, research staff member at IBM Research, noted in a presentation at this year’s SPIE Advanced Lithography and Patterning conference that the root causes of defect hot spots lie in interactions across layers and between process steps. Process development test vehicles are necessarily limited and time-consuming, and they don’t always identify potential issues. So Han proposed using synthetic layout generation to augment traditional test vehicles.
A guided machine learning model generated test patterns adhering to proposed design rules. The synthetic layouts do not need to be electrically functional, and they can target specific layout densities, specific kinds of line ends, and so on. These patterns can be simulated by a lithography model, rendered in silicon, or both, to identify the patterns most likely to produce defects. This information can be used to help refine the process and update the design rules. [1]
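As a rough illustration of rule-constrained pattern synthesis, the sketch below randomly places line segments on a one-dimensional track, rejecting any placement that violates minimum width and space rules, until a target density is reached. It is not IBM’s generator; the grid model, rule values, and uniform (rather than model-guided) sampling are all simplifying assumptions.

```python
# A toy, rule-constrained pattern generator. Not IBM's tool: rule values,
# the 1-D grid model, and uniform sampling are all simplifying assumptions.
import random

MIN_WIDTH = 3   # hypothetical minimum feature width, in grid units
MIN_SPACE = 2   # hypothetical minimum space between features
TRACK = 100     # length of a single routing track, in grid units

def generate_track(target_density: float, rng: random.Random,
                   max_attempts: int = 10_000) -> list[tuple[int, int]]:
    """Place (start, end) segments until the target density is reached,
    rejecting any placement that violates the width/space rules."""
    segments: list[tuple[int, int]] = []
    filled = 0
    for _ in range(max_attempts):
        if filled / TRACK >= target_density:
            break
        width = rng.randint(MIN_WIDTH, 4 * MIN_WIDTH)
        start = rng.randint(0, TRACK - width)
        end = start + width
        # Design-rule check: keep MIN_SPACE clearance from existing segments.
        if all(end + MIN_SPACE <= s or start >= e + MIN_SPACE
               for s, e in segments):
            segments.append((start, end))
            filled += width
    return sorted(segments)

print(generate_track(target_density=0.4, rng=random.Random(0)))
```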
Even so, Jinah Kim, process integration engineer at Samsung, explained that optimizing a design’s PPA is an iterative process. The designer must manually adjust the design parameters, run the place-and-route (P&R) process, and evaluate the result. For an 8,000-square-micron block, evaluating 50 different design conditions would require about 1,200 CPU hours. Kim’s group therefore proposed a methodology that considers PPA and yield impacts in parallel.
Samsung’s method depends on Cadence’s Cerebrus, an AI-driven chip design automation tool that lets the designer specify optimization primitives, such as switching power and leakage power, according to the design goals. Training a machine learning model using only those primitives should ensure that the resulting designs meet PPA targets. Cerebrus then generates design scenarios by applying proposed DFM rules to the model created in the first phase. Because the proposed designs already meet PPA targets, this approach reduced the evaluation time for the same 50 design conditions on an 8,000-square-micron block to only 90 CPU hours, a 13-fold improvement. [2]
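Cerebrus’s internals are proprietary, but the two-phase idea can be sketched with generic tools: fit a surrogate model that predicts a PPA metric from design primitives, then use it to screen DFM-rule scenarios before any expensive place-and-route runs. The feature names, target values, and random forest choice below are all hypothetical.

```python
# Sketch of the two-phase flow only; Cerebrus itself is proprietary.
# Phase 1 fits a surrogate predicting a PPA metric (here, a hypothetical
# power figure) from design primitives. Phase 2 screens candidate DFM
# scenarios so only those predicted to meet the target are fully evaluated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Phase 1: synthetic stand-in for previously evaluated design runs.
# Columns: hypothetical primitives (switching power, leakage power).
X_train = rng.uniform(size=(200, 2))
y_power = 0.6 * X_train[:, 0] + 0.4 * X_train[:, 1] + rng.normal(0, 0.02, 200)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_power)

# Phase 2: candidate scenarios generated under proposed DFM rules.
candidates = rng.uniform(size=(50, 2))
predicted = surrogate.predict(candidates)

# Only candidates predicted to meet the (hypothetical) power target move on
# to expensive place-and-route evaluation, cutting total CPU hours.
POWER_TARGET = 0.5
shortlist = candidates[predicted <= POWER_TARGET]
print(f"{len(shortlist)} of {len(candidates)} scenarios pass the screen")
```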
In contrast, using the P&R tool directly to fix DFM rule violations is ineffective because there are simply too many violations. Instead, Lynn Wang, principal member of the technical staff at GlobalFoundries, and her colleagues built a pattern library, using a machine learning tool to group patterns with similar geometric features from reference designs and then pairing these “problem” patterns with human-designed solutions. With this library and its associated model added to the P&R hotspot repair tool, the tool automatically fixed 81% of DFM violations. In their tests, this approach was 50X faster than rerouting the layer. [3]
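A minimal sketch of the pattern-library idea follows, assuming each layout clip has already been reduced to a geometric feature vector. K-means stands in for whatever grouping method GlobalFoundries actually used, and the fix recipes are hypothetical placeholders.

```python
# Sketch only: cluster geometrically similar "problem" patterns and map each
# cluster to a human-authored fix recipe. Features and recipe names are
# hypothetical; k-means is an assumption, not necessarily the paper's method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
reference_features = rng.uniform(size=(500, 4))  # clips from reference designs

# Group patterns with similar geometric features.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=1)
kmeans.fit(reference_features)

# Pair each cluster with a human-designed fix recipe (hypothetical names).
fixes = {i: f"fix_recipe_{i}" for i in range(8)}

def suggest_fix(violation_features: np.ndarray) -> str:
    """Look up the repair recipe for the cluster nearest a new violation."""
    cluster = int(kmeans.predict(violation_features.reshape(1, -1))[0])
    return fixes[cluster]

print(suggest_fix(rng.uniform(size=4)))
```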
Predicting defect hot spots from the design polygons alone can lead to high numbers of false positives, noted Jonathan Ho, senior member of the technical staff at AMD. Simulating the likely wafer features from the process parameters is challenging because of the sheer number of parameters, not all of which affect the pattern on the wafer. Instead, Ho observed that post-etch feature shapes are the cumulative result of every step, from the initial resist coating and exposure through development and etch. As physical features, post-etch patterns are also relatively easy to identify and measure. Accordingly, the AMD group, in collaboration with Siemens EDA, used a reinforcement learning model to identify potential hotspots based on similarities between design patterns and patterns known from silicon to be defect-prone. [4]
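The AMD/Siemens work used reinforcement learning; as a simpler stand-in for the matching step, the sketch below flags a design clip as a potential hotspot when its feature vector lies close to a pattern already confirmed defect-prone on silicon. The feature vectors and distance threshold are invented for illustration.

```python
# Similarity-based hotspot flagging: a simplified stand-in for the matching
# step, not the reinforcement learning model the paper describes. Feature
# vectors and the distance threshold are hypothetical.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Measured features of patterns confirmed defect-prone on wafer.
known_hotspots = rng.uniform(size=(50, 6))
index = NearestNeighbors(n_neighbors=1).fit(known_hotspots)

def hotspot_risk(clip_features: np.ndarray, threshold: float = 0.4) -> bool:
    """Flag a clip as a potential hotspot if it lies close to a known one."""
    dist, _ = index.kneighbors(clip_features.reshape(1, -1))
    return float(dist[0, 0]) < threshold

new_clips = rng.uniform(size=(5, 6))
print([hotspot_risk(clip) for clip in new_clips])
```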
Yield today, yield next year
Once the device moves from design to production, the next challenge is to actually achieve the yield promised by design risk estimates. Both the process and the defects associated with it evolve over time. Equipment behavior can degrade and process chemicals can age. Continuous improvement efforts can shift the process window.
Taekwon Jee, CEO of SemiAI, attempted to capture both common issues and historical fixes in a structured way. His team used PRISM, a proprietary digital twin tool, to collect raw process sensor data and tag it with tool, process, layer, and other contextual information. A hybrid neural network-transformer model then extracted causal relationships between variables. Meanwhile, the company’s INFER semantic reasoning engine used a large language model (LLM) trained on structured issue reports, including root causes and corrective actions. By linking this data to the PRISM database, Jee was able to connect past issues to their associated sensor data. The next step, automated failure prediction, flagged emerging anomalies in current data and matched them to historical fixes for similar issues. [5]
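PRISM and INFER are proprietary, so the sketch below only illustrates the data plumbing the approach implies: sensor records tagged with tool, process, and layer context; issue reports tied to the same tags; and a lookup that links a new anomaly to historical corrective actions. All field names and values are hypothetical.

```python
# Illustrative data plumbing only; PRISM and INFER are proprietary.
# Sensor records and issue reports share a context tag, so a new anomaly
# can be matched to corrective actions from past issues with that context.
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    tool: str
    process_step: str
    layer: str

@dataclass
class SensorRecord:
    context: Context
    readings: dict[str, float]

@dataclass
class IssueReport:
    context: Context
    root_cause: str
    corrective_action: str

# Hypothetical historical issue, already tagged with its context.
history = [
    IssueReport(Context("scanner_07", "expose", "M2"),
                "stage drift", "recalibrate stage interferometer"),
]

def suggest_fixes(anomaly: SensorRecord) -> list[str]:
    """Return corrective actions from past issues with matching context."""
    return [r.corrective_action for r in history
            if r.context == anomaly.context]

anomaly = SensorRecord(Context("scanner_07", "expose", "M2"),
                       {"overlay_x_nm": 4.2})
print(suggest_fixes(anomaly))  # -> ['recalibrate stage interferometer']
```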
At this stage, Jee said, human oversight was critical. While many suggested fixes did indeed resolve the problem, about 5% did not, particularly in ambiguous situations. That 5% could easily lead to lost production, or even make the underlying issue worse.
Artificial intelligence without (as much) hype
In the last few years, AI has vaulted into the public consciousness, thanks to the rise of ChatGPT and other LLM-based tools. Opinions run the gamut from what OpenAI CEO Sam Altman called “the most powerful technology humans have ever created,” to an environment-destroying plagiarism machine.
But far from the mass media spotlight, machine learning tools are assisting humans by doing what computers do best — collecting, managing, and analyzing the enormous amounts of data that modern industrial processes generate. So while AI is stirring up controversy, machine learning is rapidly becoming an essential part of the designer’s tool kit.
References
[1] Geng Han, et al., “Guided Random Synthetic Layout Generation and Machine-Learning Based Defect Prediction for Leading Edge Technology Node Development,” DTCO and Computational Patterning IV, edited by Neal V. Lafferty and Harsha Grunes, Proc. of SPIE Vol. 13425, 1342509, doi: 10.1117/12.3051134
[2] Jinah Kim, “New DTCO Framework Leveraging Machine Learning for Comprehensive PPA-Y Optimization,” DTCO and Computational Patterning IV, edited by Neal V. Lafferty and Harsha Grunes, Proc. of SPIE Vol. 13425, 134250K, doi: 10.1117/12.3051595
[3] Lynn T.-N. Wang, et al., “Machine learning-assisted pattern optimizations for fixing Design for Manufacturability (DFM) rule check violations,” DTCO and Computational Patterning IV, edited by Neal V. Lafferty and Harsha Grunes, Proc. of SPIE Vol. 13425, 134250A, doi: 10.1117/12.3051488
[4] Jonathan Ho, et al., “Enhancing Multilayer Process Defect Prediction Accuracy on an Artificial Intelligence/Machine Learning (AI/ML) Platform,” DTCO and Computational Patterning IV, edited by Neal V. Lafferty and Harsha Grunes, Proc. of SPIE Vol. 13425, 134250N, doi: 10.1117/12.3051562
[5] Taekwon Jee, “LLM-Based Overlay Issue Classification and Solution Optimization in Semiconductor Manufacturing,” DTCO and Computational Patterning IV, edited by Neal V. Lafferty and Harsha Grunes, Proc. of SPIE Vol. 13425, 134251D, doi: 10.1117/12.3050976