OPERATIONS & MANAGEMENT
GETTING STARTED IN GAGE VARIATION ANALYSIS AND TROUBLESHOOTING
By Ray Harkins
During my 12 years managing quality for a North American forging facility, one of my most persistent challenges was verifying that the gages we were using – from hand tools like calipers and depth gages to complex systems like coordinate measuring machines (CMMs) – were delivering accurate results. After all, few events are more troubling for a manufacturer than shipping a batch of parts believed to be acceptable based on their measurement results, only to discover later that the parts are defective.

To help ensure the accuracy of measurement systems, quality engineers and managers can draw from a variety of strategies such as gage selection practices, calibration methods, verification methods, Measurement Systems Analysis (MSA) techniques and more. MSA refers to a collection of experimental and statistical methods designed to evaluate the error introduced by a measurement system, and the resulting usefulness of that system for a particular application.

It is not immediately obvious to newer quality professionals and those outside of quality that gages lie. Even the most sophisticated, multi-sensor measurement platforms lie. The important questions are:

• How big of a lie is the gage telling?
• Does that lie significantly affect the specific application?

Measuring and identifying the sources of those lies – or more technically, errors – is the domain of MSA. Measurement error is the difference between the value displayed by the gage and the true dimension of the unit under test. Error arises from many sources, including wear inside the instrument, operator technique, environmental conditions, and the inherent limits of the measurement technology. In MSA, we quantify these errors so we can understand how much trust to place in the measurement results.

The idea that all gages fail to deliver perfect results to their users is best expressed by an equation fundamental to measurement science:

Y = T + e

where:

• Y = the resulting value of the measurement process (what the gage reads)
• T = the true, often unknown, measurement of the object under evaluation
• e = the error introduced by the measurement system

Measurement error can take many forms. Some, like instability (a slow drift over time), emerge gradually. Others, like nonlinearity (inconsistency across a gage’s range), create systematic bias. But two particular types of error, repeatability and reproducibility, are the most discussed among measurement specialists because of their applicability to a range of commonly used tools like calipers, depth gages, and similar shop floor gages. In fact, these two errors are often evaluated together in a specialized MSA study called Gage R&R Analysis.
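To make the Y = T + e model concrete, the short Python sketch below simulates readings from an imperfect gage. The bias and repeatability values are hypothetical, chosen only for illustration; a real gage's error structure must be estimated from study data.

import random

def measure(true_value, bias=0.002, repeatability_sd=0.001):
    """Simulate one reading from an imperfect gage: Y = T + e.

    Here e is modeled as a fixed systematic bias plus random
    repeatability error. All values are hypothetical, in inches.
    """
    error = bias + random.gauss(0, repeatability_sd)
    return true_value + error

# A part whose true thickness is 0.500 in. -- unknowable in practice
true_thickness = 0.500

# Ten repeated readings of the same part with the same simulated gage
readings = [measure(true_thickness) for _ in range(10)]
for y in readings:
    print(f"{y:.4f}")

Running the sketch produces readings scattered around 0.502 in. rather than the true 0.500 in. – the random scatter around the center is repeatability error, and the 0.002 in. offset is systematic bias.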
Repeatability and Reproducibility

Repeatability error, also known as Equipment Variation (EV), refers to the variation in measurements when the same operator uses the same gage under the same conditions. Many manufacturing professionals have had the following experience: You measure an important part feature like flash thickness using a common hand tool like a digital micrometer. You record the measurement result and note that it is very close to the upper specification limit. So just to “double-check,” you remeasure the same flash in the same spot with the same micrometer. And you get a different result. Maybe a result above the upper specification limit. This is how we experience EV on the shop floor.

Repeatability error is caused by issues such as friction-generating debris within the gage mechanisms, minor design flaws, wear or corrosion. These sources of variation result in the random dispersion of values around the true measurement of the part. And for the statistically inclined, EV is an estimate of the standard deviation of that dispersion.

Reproducibility error, also known as Appraiser Variation (AV), refers to the variation in measurements when different operators use the same measurement system under similar conditions. In other words, where EV highlights differences within a single operator, AV highlights differences between operators. And like EV, AV is an estimate of the standard deviation of the reproducibility error. AV is experienced in production when, after getting different results in subsequent measurements of the same part, you invite a coworker to measure the part and they get yet another different value.

Consider the digital calipers shown in Figure 1. Because of their low cost and ease of use, calipers like these are commonly used in forging, ring rolling, machining and numerous other processes. They are typically used on the shop floor or in the inspection lab to measure diameters, widths, and thicknesses.
Figure 1. Digital calipers measuring the width of a forged gear blank
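Calipers like those in Figure 1 are exactly the kind of gage evaluated in a Gage R&R study. As a minimal sketch of how EV and AV can be estimated, the Python example below applies the widely used AIAG average-and-range method to hypothetical caliper data. The constants K1 and K2 shown correspond to three trials and two appraisers, and all readings are invented for illustration.

import math

# Hypothetical caliper readings (mm): 5 parts x 2 operators x 3 trials.
# Layout: data[operator][part] = [trial1, trial2, trial3]
data = {
    "A": [[10.02, 10.01, 10.03], [9.98, 9.99, 9.97],
          [10.05, 10.04, 10.06], [10.00, 10.01, 10.00],
          [9.96, 9.95, 9.97]],
    "B": [[10.04, 10.03, 10.05], [10.00, 9.99, 10.01],
          [10.07, 10.06, 10.08], [10.02, 10.03, 10.02],
          [9.98, 9.97, 9.99]],
}

n_parts, n_trials = 5, 3
K1 = 0.5908  # AIAG constant for 3 trials
K2 = 0.7071  # AIAG constant for 2 appraisers

# Average range across all operator/part cells
ranges = [max(t) - min(t) for op in data.values() for t in op]
r_bar = sum(ranges) / len(ranges)

# Repeatability: Equipment Variation (EV)
EV = r_bar * K1

# Difference between the operators' grand averages
op_means = [sum(sum(t) for t in op) / (n_parts * n_trials)
            for op in data.values()]
x_diff = max(op_means) - min(op_means)

# Reproducibility: Appraiser Variation (AV), clamped at zero
AV = math.sqrt(max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0.0))

print(f"EV (repeatability)   = {EV:.4f} mm")
print(f"AV (reproducibility) = {AV:.4f} mm")

In this method, EV scales the average within-cell range, while AV starts from the spread between operator averages and subtracts out the repeatability contribution; if the quantity under the square root comes out negative, AV is taken as zero.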