What is distortion calibration?
Every time we use an optical system, i.e. a lens and matching camera, we must face the issue of distortion. The optical distortion of the system can be defined as a bias that causes a set of points to be imaged in relative positions that differ from the real ones.
A typical example is a straight line that is imaged as curved because of the lens distortion. Fig. 1 shows the effect of distortion on a calibration pattern.
The mathematical transformation connecting the original undistorted field of view to the distorted image can be very hard to model, especially since it can change considerably across the field of view itself.
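As a purely illustrative sketch (not the calibration method described here), one common family of distortion models is the radial polynomial: points are displaced along the radius from the image center by a factor that depends on their distance from it. The coefficients k1 and k2 below are hypothetical values chosen only to show the effect.

```python
import numpy as np

def apply_radial_distortion(points, k1, k2, center=(0.0, 0.0)):
    """Map undistorted (x, y) points to their distorted image positions
    using a simple radial polynomial model (illustrative only)."""
    pts = np.asarray(points, dtype=float) - center
    r2 = np.sum(pts**2, axis=1, keepdims=True)    # squared distance from center
    factor = 1.0 + k1 * r2 + k2 * r2**2           # radial scaling factor
    return pts * factor + center

# A straight horizontal line of points is imaged as a curve once distorted.
line = np.stack([np.linspace(-1, 1, 5), np.full(5, 0.5)], axis=1)
print(apply_radial_distortion(line, k1=-0.1, k2=0.02))
```

Real lenses often need more terms (tangential components, higher orders), which is one reason the full transformation is hard to model analytically.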
The first effect of distortion on metrology is the loss of repeatability of the measurements: since distortion makes an object feature "look" slightly different depending on where the object is located in the FoV, the value of a measurement on that feature is likely to change every time the object is removed and put back again.
Fig. 2: Gaussian distribution of repeated measures. The blue, red and orange distributions represent the same result (μ = 0) with different repeatability (best for blue). The green bell curve represents a wrong (but repeatable) result, e.g. one biased by a fixed offset.
If we measure a through-hole diameter 100 times, the distribution of the results can be approximated by a Gaussian curve: results close to the average are very frequent, whereas very different results are unlikely.
The repeatability of the measurement is related to the width of the bell: the narrower the bell, the less likely we are to find a result far from the average. In other words, a given feature (e.g. a length) will measure "almost the same, almost every time". On the other hand, a wide bell represents the situation in which we can't tell whether a measurement actually differs from the expected value (e.g. because the part is defective) or is a statistically expected outlier caused by the low repeatability of our measurement system.
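To make this concrete, here is a minimal sketch that simulates measuring the same hole diameter 100 times with two hypothetical systems of different repeatability and compares the spread (sigma) of their results; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_diameter = 10.0  # mm, the "real" feature size (assumed for the example)

# Two systems measuring the same feature: one repeatable, one not.
good_system = rng.normal(true_diameter, 0.01, size=100)   # sigma = 0.01 mm
poor_system = rng.normal(true_diameter, 0.10, size=100)   # sigma = 0.10 mm

for name, samples in [("good", good_system), ("poor", poor_system)]:
    print(f"{name}: mean = {samples.mean():.3f} mm, "
          f"sigma = {samples.std(ddof=1):.3f} mm")
```

Both systems return the same average, but only the first one lets us distinguish a genuinely out-of-spec part from ordinary measurement scatter.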
The typical width used is the standard deviation, sigma; another common width is the full width at half maximum (FWHM), which for a Gaussian is roughly 2.35 sigma. Both are directly related to the repeatability.
We can thus establish a direct way to compare accuracy requirements: if the tolerance on a measurement is expressed as a multiple of its sigma, we are in effect stating how likely an out-of-tolerance part is to appear. A two-sigma-compliant object will be within tolerance 95% of the time. A three-sigma object has a 99.7% confidence level, rising to about 99.99994% at 5 sigma.
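These percentages follow directly from the normal distribution: the two-sided probability of falling within ±k·sigma of the mean. A quick check using only the Python standard library:

```python
from math import erf, sqrt

def coverage(k):
    """Two-sided probability that a normal variate falls within ±k sigma."""
    return erf(k / sqrt(2.0))

for k in (1, 2, 3, 5):
    print(f"{k} sigma: {coverage(k) * 100:.5f}%")
# 2 sigma ≈ 95.45%, 3 sigma ≈ 99.73%, 5 sigma ≈ 99.99994%
```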
Suppose your distribution has an average value of 150 mm and sigma = 1 mm. The associated error depends on the confidence level your application requires. We can state in the feature specs that its length is 150 mm +/- 3 mm, and this will be true 99.7% of the time. On the other hand, if we want +/- 1 mm to be a 3-sigma tolerance, we must improve our measurement process until sigma = 0.33 mm.
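The same arithmetic as a short sketch, using the numbers from the example above:

```python
sigma = 1.0                    # mm, current repeatability of the system
tolerance_3sigma = 3 * sigma   # a ±3 mm band holds ~99.7% of the results
print(f"150 mm +/- {tolerance_3sigma:.1f} mm at 3 sigma")

target_tolerance = 1.0         # mm, the tolerance we actually need
required_sigma = target_tolerance / 3
print(f"required sigma for a 3-sigma +/- 1 mm tolerance: {required_sigma:.2f} mm")
```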