{"id":331353,"date":"2026-03-10T08:26:07","date_gmt":"2026-03-10T08:26:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/331353\/"},"modified":"2026-03-10T08:26:07","modified_gmt":"2026-03-10T08:26:07","slug":"tool-matching-getting-tougher-across-test-metrology","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/331353\/","title":{"rendered":"Tool Matching Getting Tougher Across Test &#038; Metrology"},"content":{"rendered":"<p>Key Takeaways<\/p>\n<p>Engineers leverage both device-specific and tool-level data to identify a process \u201csweet spot.\u201d<br \/>\nTight, frequent tool-to-tool matching enables greater yield and fab flexibility.<br \/>\nMachine learning helps capture the nuances of a tool\u2019s signature.<\/p>\n<p>Many people outside of the semiconductor industry wonder how humans can fabricate transistors with tens of nanometer scale dimensions on a consistent basis, day in and day out, from one process tool to another, one line to another, and one fab site to another. One way this is achieved is through tool-to-tool matching (TTTM). But TTTM is getting much more difficult as fabs produce increasingly complex chips with smaller features and process windows.<\/p>\n<p>Wafers can undergo 600 to 800 process steps over the course of 3 months, so the tools must produce consistent results. The systems that check these results in metrology and test must meet some of the highest standards.<\/p>\n<p>\u201cThe latest tech nodes require hundreds of tightly interdependent process steps in multi-patterning, high-k\/metal gate, complex etch chemistries, selective deposition, buried power rails, etc. Across the entire fabrication process, every tiny process imperfection may accumulate as a compound effect that affects the yield,\u201d said PeiFen Teh, director of applications engineering at <a href=\"https:\/\/semiengineering.com\/entities\/onto-innovation\/\" rel=\"nofollow noopener\" target=\"_blank\">Onto Innovation<\/a>. 
\u201cThus, TTTM spec at every critical process step is essential to ensure process stability throughout the process line.\u201d<\/p>\n<p>Shorter product lifecycles, the need for faster yield ramps, and a diverse supply chain also challenge tool matching operations. \u201cIt\u2019s getting more critical and harder to perform because we have a distributed supply chain with a higher product mix, and we need to produce identical test outcomes,\u201d said Eli Roth, product manager of SMART manufacturing at <a href=\"https:\/\/semiengineering.com\/entities\/teradyne-corporation\/\" rel=\"nofollow noopener\" target=\"_blank\">Teradyne<\/a>. \u201cYou\u2019re looking for transparency on more complex devices. Guard bands are constantly being tightened, advanced packaging is putting more dies together, so you\u2019re looking for more repeatability on the device, and that\u2019s applying more pressure on your test to reduce error contributors as much as possible. And the ramps are faster, so you have less time to stabilize your NPI base before you\u2019re in production.\u201d<\/p>\n<p>Tool matching (aka chamber matching) ensures consistent outputs, for instance, from one ATE tool to another of the same model. There are various ways this is accomplished, but it starts with a NIST-traceable standard wafer that checks the accuracy of different measurements, such as critical dimensions (CDs). Then, tools are matched by adjusting hardware settings until critical outputs match. For advanced nodes, data-driven machine learning models simulate the complex, nonlinear bias between tools. The fab then repeats these steps for other tools in the fleet.<\/p>\n<p>Sometimes a best-performing or average tool is employed as the reference. \u201cGolden tools or test vehicles are widely used. We like to characterize a reference with a known-good vehicle, and then statistically align the rest of the fleet against that behavior,\u201d said Roth. 
It is also important to quantify the amount of variation associated with the measurement system itself.<\/p>\n<p>Tool matching is not a \u201cone and done\u201d step. In fact, the more leading-edge the process, the more frequently the tools are likely to be matched. Still, there are clear times when tool-to-tool matching is required:<\/p>\n<p>At tool installation\/qualification;<br \/>\nWhen new products or new processes are introduced;<br \/>\nAfter a corrective-maintenance or preventive-maintenance routine;<br \/>\nAfter an instrument or component set is replaced; and<br \/>\nAt regular intervals such as once a day, once a shift, or once a lot (advanced nodes).<\/p>\n<p>More data sharing is needed to meet leading device makers\u2019 needs. \u201cWhile baseline tool matching using manufacturer-provided data is expected, device makers are now demanding deeper alignment at critical process steps to ensure consistent device performance. Achieving this level of matching requires access to fab-level device data, such as metrology results and functional test outcomes,\u201d said Melvin Lee Wei Heng, director of applications engineering at Onto Innovation. \u201cLeveraging this device-specific information in combination with tool-level data is now essential to confirm that tools are operating within the process \u2018sweet spot\u2019 and delivering uniform performance across the manufacturing line.\u201d<\/p>\n<p>\u201cWe use a lot of VLSI NIST-traceable standards for step heights and linewidth measurements. 
But beyond just the calibration of a system, we also match the optics to make sure that when a recipe is transferred from one tool to another, there\u2019s no change in the illumination settings, the optics are the same and the illumination of the systems is the same,\u201d said Andrew Lopez, application engineer at <a href=\"https:\/\/semiengineering.com\/entities\/bruker\/\" rel=\"nofollow noopener\" target=\"_blank\">Bruker<\/a>. For example, using standard wafers, the engineer can adjust tools such as calipers or sensors to within strict tolerances. \u201cWe look at the linearity of multiple different step heights and multiple different linewidths to make sure that the system is sensitive enough to detect variation coming in from the process.\u201d<\/p>\n<p>While they are related, tool matching is not the same as tool fingerprinting or capturing a tool\u2019s \u201csignature.\u201d Every tool in a fab \u2014 a scanner, etcher, cleaner, tester, optical inspection system, etc. \u2014 has its own microscopic irregularities in machined parts or wear-and-tear artifacts. As a result, identical systems behave slightly differently even when executing the same recipe. By capturing and analyzing this signature, engineers can align performance tool-to-tool. [Editor\u2019s Note: A future story will address matching of process tools.]<\/p>\n<p>Fingerprinting may or may not benefit from the introduction of machine learning models. \u201cTraditional fingerprinting methods depend on engineered features, control charts, and threshold-based comparisons. These approaches work well when variation is low-dimensional and predictable,\u201d said Vincent Chu, senior consulting manager for <a href=\"https:\/\/semiengineering.com\/entities\/advantest-corporation\/\" rel=\"nofollow noopener\" target=\"_blank\">Advantest<\/a> Cloud Solutions. 
\u201cHowever, today\u2019s testers collect far richer data \u2014 high-resolution parametrics, waveform signatures, timing measurements, and continuous telemetry. In these higher-dimensional spaces, ML models can capture subtle, non-linear behaviors that define a tool\u2019s true operating \u2018signature.\u2019 This enables a more accurate and scalable representation of a tester\u2019s behavioral baseline without relying entirely on predefined metrics.\u201d<\/p>\n<p>In metrology, as in testing, both precision and accuracy are important metrics. Accuracy is how close a measurement is to its true value. It can be achieved by comparing measurements to those of a known standard, such as a standard wafer with multiple features, but it is difficult to attain.<\/p>\n<p>\u201cWe would love to make sure every metrology output can be labeled as accurate, but that is almost never the case. We almost always settle for precision, and then when we have a certain level of experience over time of hitting that target, we get a good yield at the end,\u201d said Chris Mack, co-founder and CTO of Fractilia. \u201cSo we\u2019ll call that \u2018accurate,\u2019 but it\u2019s not really a number that is accurate in the sense of a NIST standard metrology result. Still, precision continues to be the more important characteristic of a metrology tool that we pay the most attention to.\u201d<\/p>\n<p>Precision is determined by taking a measurement of the same feature multiple times and recording the variability around a central value.<\/p>\n<p>How tool matching works<br \/>The metrics that are matched depend on the tool. For example, in acoustic microscope imaging, metrics include image intensity, signal amplitude, depth response, and defect detectability. \u201cWe utilize LTSM (Long-Term Stability Monitoring) or Global Tool Matching. 
This uses a known or reference sample and software algorithm to compensate for any system-to-system variation by normalizing the acoustic image response, so inspection results remain consistent from tool to tool and site to site,\u201d said Bryan Schackmuth, product line manager of AMI at <a href=\"https:\/\/semiengineering.com\/entities\/cyberoptics\/\" rel=\"nofollow noopener\" target=\"_blank\">Nordson<\/a>. \u201cThe LTSM allows for image normalization without requiring manual operator adjustments to achieve matched images. This global matching procedure is typically run any time a change is made to the operating frequency (for example, when changing the transducers), or before the start of each shift or each day.\u201d<\/p>\n<p>Increasingly, metrology measurements are correlated with electrical test results. \u201cTools are typically matched based on a hierarchy of steps, and each fab or OEM may approach it slightly differently,\u201d said Joe Fillion, director of product management at Onto Innovation. \u201cIt starts with a fingerprint or configuration comparison. Tools need to be as closely matched from a software and hardware standpoint \u2014 same software version, lenses, apertures, light source, MFCs, etc. Once there is a reasonable match, tools will often perform standard auto-test or calibration routines to ensure behavior is consistent across tools. If results are consistent and meet the intended specification, standard qualification runs are performed to measure actual performance on the wafer. These results will have a target value with an upper and lower limit to ensure operation within an acceptable range.\u201d<\/p>\n<p>Onto\u2019s Teh provided a step-by-step guide to tool matching. 
\u201cWe align the performance of each tool component first and monitor the fleet tool matching performance,\u201d he said, including:<\/p>\n<p>Component-level calibration: Monitoring system health check parameters and applying calibration when out of specifications;<br \/>\nSystem-level calibration: Examining the spectral response of a fleet of tools measured on a standard wafer;<br \/>\nSpectral calibration: Used to improve the fleet matching level; and<br \/>\nParametric results monitoring: This is done using a standard wafer (measuring CD, thickness, or material constants). Recalibration may be applied to optimize the tool matching level on each parameter.<\/p>\n<p>On testers, engineers need to keep a handle on component drift. \u201cThermal sensors drift over time,\u201d said Teradyne\u2019s Roth. \u201cThere\u2019s timing skew. We generally manage drift through periodic calibration and reference checks, and we\u2019re continuously checking our equipment against our reference scanner so we know how far off that can be and when we have to do periodic calibration. SPC monitoring and big data monitoring are other ways to test that. Like the hammer that\u2019s always looking for the nail, we\u2019re probably looking at periodic calculations.\u201d<\/p>\n<p>Depending on a tool\u2019s configuration, sometimes tool-level calibration can be built in. \u201cOur tester is based on a high-precision resistor, so it uses a self-verification method to ensure that each measurement is correct. That\u2019s how we affirm that each tool is calibrated and provides consistent measurements across testers,\u201d said Jesse Ko, COO of <a href=\"https:\/\/semiengineering.com\/entities\/modus-test\/\" rel=\"nofollow noopener\" target=\"_blank\">Modus Test<\/a>.<\/p>\n<p>Electrical test and metrology often work hand in hand, as well. 
\u201cFabs have incorporated in-line electrical\/functional tests to ensure a tool is performing to the level where it has no impact on the device,\u201d said Onto\u2019s Heng. \u201cIn certain critical process steps, cross-sectional analysis is done to ensure the profile formed is per-device specs for sensitive layers where conventional metrology measurements are inadequate.\u201d<\/p>\n<p>A different way of looking at the problem is to start from outcomes, rather than tools, and work backward from there. This is essentially what Intel did when its \u201cCopy Exactly!\u201d strategy \u2014 replicating everything in a fab, including equipment, methodologies, and processes \u2014 still produced different results. The company ultimately narrowed the cause down to environmental conditions such as humidity. Calibrating equipment may just be the first step in a complex investigation.<\/p>\n<p>\u201cIt\u2019s the same model, the same calibration,\u201d said Jon Holt, worldwide fab applications solutions manager at <a href=\"https:\/\/semiengineering.com\/entities\/pdf-solutions\/\" rel=\"nofollow noopener\" target=\"_blank\">PDF Solutions<\/a>. \u201cYou\u2019re making sure the measurements are accurate, because that\u2019s another potential source of variability between two sites. Either measuring it in the same location or using the same tool is one way. But then you start looking even broader and holistically at environmental variables. Is your cooling water, your gas supply, your gas distribution set up the same, bringing all that information that\u2019s needed into play? And then the final real challenge is functionality. Is that component functioning the way it\u2019s supposed to be functioning? Does that device have the currents or breakdown voltages, or the gains or speed expected? It\u2019s not like I can stick an FEC (forward error correction) tool in there and match all the sensor outputs and the chambers are matched. 
I wish it was that easy.\u201d<\/p>\n<p>\u201cLights out\u201d fab<br \/>As the industry moves toward fully automated operations, tool-to-tool matching is likely to become interwoven with production. \u201cThis will likely shift from periodic calibration-driven activity to a continuous data-driven monitored system,\u201d said Roth. \u201cRather than roll your reference card up and check and recheck, we\u2019re going to have continuous automated monitoring with flags and alarms \u2014 a more sophisticated version of what we do today.\u201d<\/p>\n<p>It\u2019s interesting to note that not too long ago, individual CD-SEMs were not matched in the field. \u201cWe didn\u2019t plan to introduce a product to improve tool-to-tool matching,\u201d said Fractilia\u2019s Mack. \u201cBut we found that our strategy of measuring the errors on CD-SEMs and removing them from the metrology results, as a way of getting more accurate metrology results, just naturally produces better tool-to-tool matching. We\u2019ve seen a 10x improvement in tool-to-tool matching using our technology on top of the CD-SEM.\u201d<\/p>\n<p>The next step for CD-SEMs is getting a handle on stochastics. \u201cTool-to-tool matching between CD-SEMs is a very difficult thing because of the shrinking tolerances of all the CDs. And then you add on top of that this new need to do tool-to-tool matching of stochastics like line-width roughness, line-edge roughness, or CD uniformity. This is something we\u2019ve never done. So we\u2019re kind of inventing it.\u201d<\/p>\n<p>Because signal-to-noise is getting harder to maintain, metrology is turning to machine learning. \u201cAs features shrink, it becomes harder to measure what needs to be matched,\u201d explained Onto\u2019s Teh. \u201cWe anticipate the sub-1nm parameters will have the spectral sensitivity level close to the noise floor of metrology tools. Some parameters that are very small in dimensions are buried under the shadow of more sensitive parameters. 
In such scenarios, ML models may be utilized to amplify critical signals.\u201d<\/p>\n<p>In addition to amplifying signals, ML models can be instrumental in managing tool fingerprints. They can effectively log and identify changes that have been made to a tool. This allows for correlations between those changes and tool performance (hardware, software, and wafer results), enabling greater insight into cause and effect. Once a level of trust is built up, the next step would be more automated decision-making.<\/p>\n<p>\u201cML evolves tool fingerprinting from manually defined statistics into a learned behavioral representation, which is particularly useful in the context of advanced test systems generating huge amounts of data in high-volume fleet operations,\u201d said Advantest\u2019s Chu. \u201cML also enhances anomaly detection, which is critical in production test. By learning the normal behavioral pattern of a specific test cell, models can identify early deviations caused by calibration shifts, component aging, environmental changes, or loadboard effects \u2014 often earlier and more reliably than static thresholds. In multi-tool fleets, ML can highlight cross-tester differences that may affect binning or correlation. That being said, ML complements rather than replaces classical statistical approaches.\u201d<\/p>\n<p>Conclusion<br \/>Tool-to-tool matching is not a new process for fabs and testing facilities, but it has gotten significantly more challenging with device scaling, increasing device complexity, shrinking process windows, and tighter tolerances. At the 2nm node, metrology systems are operating at the very limits of what is possible, making any improvement in signal-to-noise ratio a welcome development.<\/p>\n<p>Measuring a 3nm feature, for instance, requires overlay of less than 0.3nm. 
For this to become feasible, engineers now need to model the stochastic effects of line-edge roughness, line-width roughness, and CD uniformity in order to match one CD-SEM tool with another.<\/p>\n<p>Engineers typically begin by comparing tool signatures, where tools are matched down to the component level. From components to systems to parametric calibration, matching has become more sophisticated and more automated with the help of machine learning. To achieve even finer levels of tool-to-tool matching in metrology, engineers need access to the fab\u2019s electrical test data. Tool-to-tool matching plays a pivotal role in yielding the most advanced devices.<\/p>\n<p>Related Reading<br \/><a href=\"https:\/\/semiengineering.com\/high-data-quality-needed-to-better-utilize-fab-data-streams\/\" rel=\"nofollow noopener\" target=\"_blank\">High-Quality Data Needed To Better Utilize Fab Data Streams<\/a><br \/>Engineers require timely and aligned data with just the right level of granularity.<\/p>\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"Key Takeaways Engineers leverage both device-specific and tool-level data to identify a process \u201csweet spot.\u201d Tight, frequent 
tool-to-tool&hellip;\n","protected":false},"author":2,"featured_media":331354,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[163330,145526,163331,85,46,163332,163333,145533,163334,163335,125,163336,163337,163338],"class_list":{"0":"post-331353","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technology","8":"tag-advantest","9":"tag-bruker","10":"tag-fractilia","11":"tag-il","12":"tag-israel","13":"tag-modus-test","14":"tag-nordson","15":"tag-onto-innovation","16":"tag-pdf-solutions","17":"tag-semiconductor-equipment","18":"tag-technology","19":"tag-teradyne","20":"tag-tool-matching","21":"tag-tttm"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/331353","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=331353"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/331353\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/331354"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=331353"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=331353"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=331353"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}