Key Takeaways

Engineers leverage both device-specific and tool-level data to identify a process “sweet spot.”
Tight, frequent tool-to-tool matching enables greater yield and fab flexibility.
Machine learning helps capture the nuances of a tool’s signature.

Many people outside of the semiconductor industry wonder how humans can fabricate transistors with features measuring tens of nanometers on a consistent basis, day in and day out, from one process tool to another, one line to another, and one fab site to another. One way this is achieved is through tool-to-tool matching (TTTM). But TTTM is getting much more difficult as fabs produce increasingly complex chips with smaller features and process windows.

Wafers can undergo 600 to 800 process steps over the course of three months, so the tools must produce consistent results. The metrology and test systems that verify those results must meet some of the industry's highest standards.

“The latest tech nodes require hundreds of tightly interdependent process steps in multi-patterning, high-k/metal gate, complex etch chemistries, selective deposition, buried power rails, etc. Across the entire fabrication process, every tiny process imperfection may accumulate as a compound effect that affects the yield,” said PeiFen Teh, director of applications engineering at Onto Innovation. “Thus, TTTM spec at every critical process step is essential to ensure process stability throughout the process line.”

Shorter product lifecycles, the need for faster yield ramps, and a diverse supply chain also challenge tool matching operations. “It’s getting more critical and harder to perform because we have a distributed supply chain with a higher product mix, and we need to produce identical test outcomes,” said Eli Roth, product manager of SMART manufacturing at Teradyne. “You’re looking for transparency on more complex devices. Guard bands are constantly being tightened, advanced packaging is putting more dies together, so you’re looking for more repeatability on the device, and that’s applying more pressure on your test to reduce error contributors as much as possible. And the ramps are faster, so you have less time to stabilize your NPI base before you’re in production.”

Tool matching (aka chamber matching) ensures consistent outputs, for instance, from one ATE tool to another of the same model. There are various ways this is accomplished, but it starts with a NIST-traceable standard wafer that verifies the accuracy of measurements such as critical dimensions (CDs). Then, tools are matched by adjusting hardware settings until critical outputs match. For advanced nodes, data-driven machine learning models capture the complex, nonlinear bias between tools. The fab then repeats these steps for other tools in the fleet.
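The bias-correction step described above can be sketched as follows. This is a deliberately simplified illustration, assuming a linear tool-to-tool bias; the site names and CD values are made up, and production systems use far richer nonlinear models.

```python
# Hypothetical sketch: matching a candidate tool to a reference tool by
# fitting a simple linear bias model over shared standard-wafer sites.
# CD values (nm) are illustrative, not real fab data.

def fit_linear_bias(ref, cand):
    """Least-squares fit of ref ~ a*cand + b over paired measurements."""
    n = len(ref)
    mx = sum(cand) / n
    my = sum(ref) / n
    sxx = sum((x - mx) ** 2 for x in cand)
    sxy = sum((x - mx) * (y - my) for x, y in zip(cand, ref))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# CD readings at the same sites on a NIST-traceable standard wafer.
reference_tool = [20.1, 24.9, 30.2, 35.0, 39.8]
candidate_tool = [20.6, 25.5, 30.9, 35.8, 40.5]  # reads systematically high

a, b = fit_linear_bias(reference_tool, candidate_tool)
matched = [a * x + b for x in candidate_tool]

# After correction, the candidate's outputs track the reference closely.
residual = max(abs(m - r) for m, r in zip(matched, reference_tool))
print(f"gain={a:.3f} offset={b:.3f} max residual={residual:.3f} nm")
```

In practice the same fit-and-correct loop is repeated for every tool in the fleet against the chosen reference.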

Sometimes a best-performing or fleet-average tool is employed as the reference. “Golden tools or test vehicles are widely used. We like to characterize a reference with a known-good vehicle, and then statistically align the rest of the fleet against that behavior,” said Roth. It is also important to quantify the amount of variation associated with the measurement system itself.

Tool matching is not a “one and done” step. In fact, the more leading-edge the process, the more frequently the tools are likely to be matched. Still, there are clear times when tool-to-tool matching is required:

At tool installation/qualification;
When new products or new processes are introduced;
After a corrective-maintenance or preventive-maintenance routine;
After an instrument or component set is replaced; and
At regular intervals such as once a day, once a shift, or once a lot (advanced nodes).

More data sharing is needed to meet leading device makers’ needs. “While baseline tool matching using manufacturer-provided data is expected, device makers are now demanding deeper alignment at critical process steps to ensure consistent device performance. Achieving this level of matching requires access to fab-level device data, such as metrology results and functional test outcomes,” said Melvin Lee Wei Heng, director of applications engineering at Onto Innovation. “Leveraging this device-specific information in combination with tool-level data is now essential to confirm that tools are operating within the process ‘sweet spot’ and delivering uniform performance across the manufacturing line.”

“We use a lot of VLSI NIST traceable standards for step heights and linewidth measurements. But beyond just the calibration of a system, we also match the optics to make sure that when a recipe is transferred from one tool to another, there’s no change in the illumination settings, the optics are the same and the illumination of the systems is the same,” said Andrew Lopez, application engineer at Bruker. For example, using standard wafers, engineers can adjust tools such as calipers or sensors to within strict tolerances. “We look at the linearity of multiple different step heights and multiple different linewidths to make sure that the system is sensitive enough to detect variation coming in from the process.”

While they are related, tool matching is not the same as tool fingerprinting or capturing a tool’s “signature.” Every tool in a fab — a scanner, etcher, cleaner, tester, optical inspection system, etc. — has its own microscopic irregularities in machined parts or wear-and-tear artifacts. As a result, identical systems behave slightly differently even when executing the same recipe. By capturing and analyzing this signature, engineers can align performance tool-to-tool. [Editor’s Note: A future story will address matching of process tools.]

Fingerprinting may or may not benefit from the introduction of machine learning models. “Traditional fingerprinting methods depend on engineered features, control charts, and threshold-based comparisons. These approaches work well when variation is low-dimensional and predictable,” said Vincent Chu, senior consulting manager for Advantest Cloud Solutions. “However, today’s testers collect far richer data — high-resolution parametrics, waveform signatures, timing measurements, and continuous telemetry. In these higher-dimensional spaces, ML models can capture subtle, non-linear behaviors that define a tool’s true operating ‘signature.’ This enables a more accurate and scalable representation of a tester’s behavioral baseline without relying entirely on predefined metrics.”

In metrology, as in testing, both precision and accuracy are important metrics. Accuracy is how close a measurement is to its true value. It is assessed by comparing measurements against a known standard, such as a standard wafer with multiple features, but true accuracy is difficult to attain.

“We would love to make sure every metrology output can be labeled as accurate, but that is almost never the case. We almost always settle for precision, and then when we have a certain level of experience over time of hitting that target, we get a good yield at the end,” said Chris Mack, co-founder and CTO of Fractilia. “So we’ll call that ‘accurate,’ but it’s not really a number that is accurate in the sense of a NIST standard metrology result. Still, precision continues to be the more important characteristic of a metrology tool that we pay the most attention to.”

Precision is determined by measuring the same feature multiple times and recording the variability around a central value.
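The distinction between the two metrics can be illustrated with a short sketch. This is a generic textbook-style calculation, not any vendor's qualification procedure, and the certified value and repeat readings are invented for illustration.

```python
# Illustrative sketch: estimating accuracy (bias versus a certified
# standard) and precision (repeatability) from repeated measurements
# of one feature. All values are made up.
from statistics import mean, stdev

certified_cd = 32.00          # nm, certified value on a standard wafer
repeats = [32.11, 32.08, 32.14, 32.05, 32.10, 32.09, 32.12, 32.07]

bias = mean(repeats) - certified_cd   # accuracy: offset from the true value
precision_3s = 3 * stdev(repeats)     # precision: 3-sigma repeatability

print(f"bias={bias:.3f} nm, 3-sigma precision={precision_3s:.3f} nm")
```

A tool can be precise (tight spread) while still inaccurate (large bias), which is exactly the situation Mack describes: fabs often settle for precision and treat a stable, repeatable target as "accurate."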

How tool matching works
The metrics that are matched depend on the tool. For example, in acoustic microscope imaging, metrics include image intensity, signal amplitude, depth response, and defect detectability. “We utilize LTSM (Long-Term Stability Monitoring) or Global Tool Matching. This uses a known or reference sample and software algorithm to compensate for any system-to-system variation by normalizing the acoustic image response, so inspection results remain consistent from tool to tool and site to site,” said Bryan Schackmuth, product line manager of AMI at Nordson. “The LTSM allows for image normalization without requiring manual operator adjustments to achieve matched images. This global matching procedure is typically run any time a change is made to the operating frequency (for example, when changing the transducers), or before the start of each shift or each day.”

Increasingly, metrology measurements are correlated with electrical test results. “Tools are typically matched based on a hierarchy of steps, and each fab or OEM may approach it slightly differently,” said Joe Fillion, director of product management at Onto Innovation. “It starts with a fingerprint or configuration comparison. Tools need to be as closely matched from a software and hardware standpoint — same software version, lenses, apertures, light source, MFCs (mass flow controllers), etc. Once there is a reasonable match, tools will often perform standard auto-test or calibration routines to ensure behavior is consistent across tools. If results are consistent and meet the intended specification, standard qualification runs are performed to measure actual performance on the wafer. These results will have a target value with an upper and lower limit to ensure operation within an acceptable range.”

Onto’s Teh provided a step-by-step guide to tool matching. “We align the performance of each tool component first and monitor the fleet tool matching performance,” he said, including:

Component-level calibration: Monitoring system health check parameters and applying calibration when out of specifications;
System-level calibration: Examining the spectral response of a fleet of tools measured on a standard wafer;
Spectral calibration: Used to improve the fleet matching level, and
Parametric results monitoring: This is done using a standard wafer (measuring CD, thickness, or material constants). Recalibration may be applied to optimize the tool matching level on each parameter.
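The fleet-monitoring portion of these steps can be sketched with a simple check against the fleet median. This is an illustrative sketch only; the spectra, tolerance, and flagging rule are invented for the example and are not Onto's actual criteria.

```python
# Illustrative sketch: compare each tool's spectral response on a
# standard wafer to the fleet median, and flag tools whose deviation
# exceeds a matching tolerance. All numbers are made up.
from statistics import median

fleet_spectra = {
    "tool_1": [0.82, 0.75, 0.61, 0.54],
    "tool_2": [0.81, 0.76, 0.60, 0.55],
    "tool_3": [0.79, 0.70, 0.55, 0.48],  # drifted: needs recalibration
}
TOLERANCE = 0.03  # max allowed deviation from fleet median, per wavelength

n_points = len(next(iter(fleet_spectra.values())))
fleet_median = [median(s[i] for s in fleet_spectra.values())
                for i in range(n_points)]

flagged = [tool for tool, spectrum in fleet_spectra.items()
           if max(abs(a - b) for a, b in zip(spectrum, fleet_median)) > TOLERANCE]
print("tools flagged for spectral recalibration:", flagged)
```

A flagged tool would then go back through the spectral and parametric recalibration steps listed above until its response rejoins the fleet.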

On testers, engineers need to keep a handle on component drift. “Thermal sensors drift over time,” said Teradyne’s Roth. “There’s timing skew. We generally manage drift through periodic calibration and reference checks, and we’re continuously checking our equipment to our reference scanner so we know how far off that can be and when we have to do periodic calibration. SPC monitoring and big data monitoring are other ways to test that. Like the hammer that’s always looking for the nail, we’re probably looking at periodic calculations.”
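The SPC-style monitoring Roth mentions can be sketched as a basic control chart on reference checks. This is a minimal, generic illustration with made-up readings and 3-sigma limits, not Teradyne's actual monitoring scheme.

```python
# Illustrative sketch: daily reference checks plotted against control
# limits derived from a calibration baseline; readings outside the
# limits trigger recalibration. All values are made up.
from statistics import mean, stdev

baseline = [1.000, 1.002, 0.999, 1.001, 0.998, 1.000, 1.001, 0.999]
center = mean(baseline)
sigma = stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # 3-sigma control limits

daily_checks = [1.001, 1.000, 1.002, 1.005, 1.007]  # slow drift sets in
needs_recal = [i for i, x in enumerate(daily_checks) if not (lcl <= x <= ucl)]
print(f"limits=({lcl:.4f}, {ucl:.4f}), out-of-control checks: {needs_recal}")
```

In a continuous-monitoring setup, the same comparison simply runs on every check rather than once per shift.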

Depending on a tool’s configuration, sometimes tool-level calibration can be built in. “Our tester is based on a high-precision resistor, so it uses a self-verification method to ensure that each measurement is correct. That’s how we affirm that each tool is calibrated and provides consistent measurements across testers,” said Jesse Ko, COO of Modus Test.
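A resistor-based self-check like the one Ko describes might look like the following. This is a hypothetical sketch; the reference value, tolerance, and pass/fail rule are assumptions for illustration, not Modus Test's actual method.

```python
# Hypothetical sketch: before use, the tester measures its built-in
# precision reference resistor and confirms the reading falls within a
# tolerance. Values are illustrative only.

REFERENCE_OHMS = 1000.000   # certified value of the built-in resistor
TOLERANCE_PPM = 50          # allowed deviation, parts per million

def self_verify(measured_ohms):
    """Return True if the measurement is within tolerance of the reference."""
    error_ppm = abs(measured_ohms - REFERENCE_OHMS) / REFERENCE_OHMS * 1e6
    return error_ppm <= TOLERANCE_PPM

print(self_verify(1000.02))   # 20 ppm off: passes
print(self_verify(1000.30))   # 300 ppm off: fails, needs recalibration
```

Because every tester carries the same physical reference, a passing self-check implies consistent measurements across the fleet without a separate matching run.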

Electrical test and metrology often work hand in hand, as well. “Fabs have incorporated in-line electrical/functional tests to ensure a tool is performing to the level where it has no impact on the device,” said Onto’s Heng. “In certain critical process steps, cross-sectional analysis is done to ensure the profile formed is per-device specs for sensitive layers where conventional metrology measurements are inadequate.”

A different way of looking at the problem is to start from outcomes rather than tools, and work backward from there. This is essentially what Intel did when its “Copy Exactly!” strategy, which replicated everything in a fab exactly — equipment, methodologies, and processes — produced different results. The company ultimately narrowed the cause down to environmental conditions such as humidity. Calibrating equipment may be just the first step in a complex investigation.

“It’s the same model, the same calibration,” said Jon Holt, worldwide fab applications solutions manager at PDF Solutions. “You’re making sure the measurements are accurate, because that’s another potential source of variability between two sites. Either measuring it in the same location or using the same tool is one way. But then you start looking even broader and holistically at environmental variables. Is your cooling water, your gas supply, your gas distribution set up the same and bringing all that information that’s needed to play. And then the final real challenge is functionality. Is that component functioning the way it’s supposed to be functioning? Does that device have the currents or breakdown voltages, or the gains or speed expected? It’s not like I can stick an FDC (fault detection and classification) tool in there and match all the sensor outputs and the chambers are matched. I wish it was that easy.”

“Lights out” fab
As the industry moves toward fully automated operations, tool-to-tool matching is likely to become interwoven with production. “This will likely shift from periodic calibration-driven activity to a continuous data-driven monitored system,” said Roth. “Rather than roll your reference card up and check and recheck, we’re going to have continuous automated monitoring with flags and alarms — a more sophisticated version of what we do today.”

It’s interesting to note that not too long ago, individual CD-SEMs were not matched in the field. “We didn’t plan to introduce a product to improve tool-to-tool matching,” said Fractilia’s Mack. “But we found that our strategy of measuring the errors on CD-SEMs and removing them from the metrology results, as a way of getting more accurate metrology results, just naturally produces better tool-to-tool matching. We’ve seen a 10x improvement in tool-to-tool matching using our technology on top of the CD-SEM.”

The next step for CD-SEMs is getting a handle on stochastics. “Tool-to-tool matching between CD-SEMs is a very difficult thing because of the shrinking tolerances of all the CDs,” said Mack. “And then you add on top of that this new need-to-do, tool-to-tool matching of stochastics like line-width roughness, line-edge roughness, or CD uniformity. This is something we’ve never done. So we’re kind of inventing it.”

Because signal-to-noise is getting harder to maintain, metrology is turning to machine learning. “As features shrink, it becomes harder to measure what needs to be matched,” explained Onto’s Teh. “We anticipate the sub-1nm parameters will have the spectral sensitivity level close to the noise floor of metrology tools. Some parameters that are very small in dimensions are buried under the shadow of more sensitive parameters. In such scenarios, ML models may be utilized to amplify critical signals.”

In addition to amplifying signals, ML models can be instrumental in managing tool fingerprints. They can effectively log and identify changes that have been made to a tool. Such actions will allow for correlations between these changes and tool performance (hardware, software, and wafer results), to enable greater insights into cause and effect. Once a level of trust is built up, the next step would be more automated decision-making.

“ML evolves tool fingerprinting from manually defined statistics into a learned behavioral representation, which is particularly useful in the context of advanced test systems generating huge amounts of data in high-volume fleet operations,” said Advantest’s Chu. “ML also enhances anomaly detection, which is critical in production test. By learning the normal behavioral pattern of a specific test cell, models can identify early deviations caused by calibration shifts, component aging, environmental changes, or loadboard effects — often earlier and more reliably than static thresholds. In multi-tool fleets, ML can highlight cross-tester differences that may affect binning or correlation. That being said, ML complements rather than replaces classical statistical approaches.”
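The baseline-learning idea Chu describes can be sketched with a deliberately simple detector. This is an illustrative sketch only: a per-channel z-score check standing in for the richer learned models he refers to, with invented telemetry channels and thresholds.

```python
# Illustrative sketch: learn a test cell's normal telemetry baseline,
# then flag new readings whose z-scores exceed a threshold. A real
# system would use richer ML models; channels and values are made up.
from statistics import mean, stdev

# Baseline telemetry from a healthy test cell (per-channel history).
baseline = {
    "contact_resistance": [1.00, 1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97],
    "board_temp_c":       [34.0, 34.5, 33.8, 34.2, 34.1, 33.9, 34.3, 34.2],
}
model = {ch: (mean(v), stdev(v)) for ch, v in baseline.items()}

def anomalous(reading, threshold=4.0):
    """Return channels whose z-score against the baseline exceeds threshold."""
    return [ch for ch, x in reading.items()
            if abs(x - model[ch][0]) / model[ch][1] > threshold]

print(anomalous({"contact_resistance": 1.01, "board_temp_c": 34.1}))  # normal
print(anomalous({"contact_resistance": 1.25, "board_temp_c": 34.0}))  # drifted
```

The advantage of the learned-baseline approach over a fixed spec limit is that each test cell is judged against its own history, so calibration shifts and component aging surface before they would cross a static threshold.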

Conclusion
Tool-to-tool matching is not a new process for fabs and testing facilities, but it has gotten significantly more challenging with device scaling, increasing device complexity, shrinking process windows, and tighter tolerances. At the 2nm node, metrology systems are operating at the very limits of what is possible, making any improvements in signal-to-noise ratio a welcome sight.

When measuring a 3nm feature, for instance, overlay must be controlled to less than 0.3nm. To make this feasible, engineers now need to model the stochastic effects of line-edge roughness, line-width roughness, and CD uniformity in order to match one CD-SEM with another.

Engineers typically begin by comparing tool signatures, where tools are matched down to the component level. From components to systems to parametric calibration, matching has become more sophisticated and more automated with the help of machine learning. To achieve even finer levels of tool-to-tool matching in metrology, engineers need access to the fab’s electrical test data. Tool-to-tool matching plays a pivotal role in yielding the most advanced devices.

Related Reading
High-Quality Data Needed To Better Utilize Fab Data Streams
Engineers require timely and aligned data with just the right level of granularity.