The invention relates to the inspection of components, in particular optical components, to assess imperfections.
In 1963, the US Department of Defense defined a performance specification for inspecting optical components either in reflection or transmission, which was assigned the code MIL-O-13830A. The reflection variant is widely used to this day; the transmission variant is not. The originally envisaged optical components were lenses, prisms, mirrors, reticles, windows and wedges. In 1997 the performance specification was revised into the current version, MIL-PRF-13830B. The contents of both of these performance specification documents are incorporated herein by reference in their entirety.
Manufacturers of optical components are expected by customers to test their product against MIL-PRF-13830 and supply a specification sheet listing the imperfections that are present on each component and assigning a grade to each. The performance specification considers two types of imperfection: scratches and digs. A dig is a pit. The performance specification is based on a visual inspection carried out by a human operator, referred to as an inspector, who compares imperfections in the component under inspection against reference imperfections present in one or more reference components. The performance specification grades scratches by their visibility. It grades digs by their diameter, i.e. by a geometric parameter, in ten micrometre increments. However, since dig diameter is determined by visual inspection by an inspector, digs are also in effect graded by visibility. A set of reference components is provided, wherein the set collectively provides all the different grades of imperfection. In the art, the reference components are sometimes referred to as "comparator plates". The scratch grade is a subjective measure of how visible the imperfection appears to the inspector. The scratch grades are 10, 20, 40, 60 and 80, where a higher number relates to a more visible imperfection. The dig grades are in terms of the dig diameter, which for irregularly shaped digs is defined as the average diameter. Other equivalent terms in use for the scratch grade are scratch weighting and scratch brightness.
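The dig-grading rule just described can be expressed in code. The following is a minimal sketch only: it assumes the average diameter of an irregular dig is the mean of its length and width, and that a diameter falling between two increments takes the larger (worse) increment; the function name and the choice of micrometres as the working unit are illustrative, not taken from the specification.

```python
import math

def dig_grade_um(length_um, width_um, step_um=10):
    """Grade a dig by its average diameter, rounded up to the next
    ten-micrometre increment.

    For an irregular dig the average diameter is taken here as the mean
    of its length and width (an assumption made for illustration).
    Returns the graded diameter in micrometres.
    """
    average_diameter = (length_um + width_um) / 2.0
    # Round up: a diameter between two increments takes the worse grade.
    return int(math.ceil(average_diameter / step_um) * step_um)
```

For example, an irregular dig measuring 33 µm by 30 µm has an average diameter of 31.5 µm, which rounds up to the 40 µm increment under these assumptions.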
A set of reference components from a given provider might have one reference component for each scratch grade and another for each dig grade. Other manufacturers might put all grades of scratch on a single reference component and all grades of dig on another single reference component. In principle any mapping between reference imperfections and reference components carrying them is possible.
The inspector has a set of reference components which collectively provide scratches and digs of all the visibility grades, i.e. 10, 20, 40, 60 and 80 for scratches. The inspector holds the test component and one of the reference components alongside each other in his hands in a light box where they are, or at least should be, illuminated under the same lighting conditions. Both components are tilted and the inspector makes a subjective assessment, based on how visible the imperfection appears, of whether a test component imperfection looks brighter or darker than reference imperfections of different grades. Supposing there is a separate reference component for each grade, if the inspector decides the test component imperfection is worse (or better) than that of the reference component, he exchanges the reference component for another from the master set with a higher (or lower) imperfection number and repeats the comparison until he is satisfied he can grade the test component accurately. For example, if a scratch looks worse, i.e. brighter, than the 40 reference scratch, but better, i.e. dimmer, than the 60 reference scratch, then the scratch is graded as a 60 scratch. In other words, an imperfection with a visibility or brightness that lies between two adjacent reference imperfections is graded at the worse of the two grades. This procedure is repeated by the inspector for each visible imperfection on the test component, so in the end a number of imperfections are identified, their locations on the test component and their physical extents are recorded, and their grades specified.
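The bracketing procedure above amounts to a simple search over the reference grades. The following sketch models the inspector's subjective judgement as a callback, `is_brighter_than`, which is a hypothetical stand-in for the visual comparison (True if the test scratch looks brighter than the reference scratch of the given grade); everything else follows the rule that an imperfection between two adjacent references takes the worse grade.

```python
# Scratch grades in ascending order of visibility, per the specification.
REFERENCE_GRADES = [10, 20, 40, 60, 80]

def grade_scratch(is_brighter_than):
    """Return the scratch grade under the bracketing rule.

    is_brighter_than(grade) stands in for the inspector's subjective
    comparison against the reference scratch of that grade.
    Returns the lowest grade whose reference the test scratch does not
    exceed in brightness, or None if it exceeds even the 80 reference.
    """
    for grade in REFERENCE_GRADES:
        if not is_brighter_than(grade):
            # Not brighter than this reference, so this grade applies.
            return grade
    return None  # Brighter than every reference: beyond the scale.
```

For instance, if the inspector's comparisons are modelled by a numeric visibility score of 50 against references scored at their grade numbers, the scratch is brighter than the 40 reference but not the 60 reference, so it is graded 60.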
The test method described in MIL-PRF-13830B requires a specific optical configuration including a range of surface illumination angles and the capture of scattered light from the optical surface with the inspector's (human) eye positioned at a defined distance from the test and reference components. In practice, a dark-field microscope is used to provide the specified configuration.
The performance specification MIL-PRF-13830 is based on an original master set of reference components, which is retained by the Department of Defense. Third party manufacturers of derivative master sets exist, so that optical component manufacturers can purchase their own sets for use in their post-production testing.
One well known and obvious shortcoming of MIL-PRF-13830 is that it relies on the inspector's subjective impression of how visible or bright one imperfection appears to be on one component compared with another imperfection on another component. For example, two inspectors may each grade the same imperfection consistently, yet differently from one another. This problem is discussed in the article: "The Truth About Scratch and Dig", David Aikens 2010: (https://www.lambdaphoto.co.uk/pdfs/Savvy/The%20Truth%20About%20Scratch%20and%20Dig.pdf).
However, there is another, more subtle problem which we believe is not known, and which we only discovered by applying our invention as described elsewhere in this document. This other problem relates to the manufacture of derivative master sets by third party manufacturers. The derivative master sets are not precisely manufactured copies of the original master set, but rather are normally manufactured optical components, each of which is graded by the same subjective visibility assessment against an existing derivative master set. That is, a third party manufacturer of derivative master sets has its own derivative master set, which ideally has been referenced to the original master set held by the Department of Defense, and which it uses to bin individual optical components according to imperfection grade; it is these binned optical components that are then collected together into sets which are sold as master sets to optical component manufacturers. Consequently, these third party manufacturers are selling derivative master sets which suffer from the same, well-recognized shortcomings of the inspection method, namely inconsistent grading as a result of the subjective nature of the inspection. The net effect is that what might be defined as a grade 40 scratch in a reference component of a derivative master set is actually a grade 20 scratch according to the original master set, so an optical component manufacturer may be assigning incorrect imperfection numbers to all the optical components which are graded against the incorrectly-graded reference imperfection. Large numbers of derivative master sets are therefore in circulation, all of which are different, thereby compounding the subjectivity problem.
Because the drawbacks of the MIL-PRF-13830B performance specification were well known, a non-subjective standard, ISO10110-7, was defined in 2008 with the idea of replacing MIL-PRF-13830B. Because ISO10110-7 has no subjective element, it is amenable to automation.
DE 10 2015 201 823 A1 describes an automated test apparatus for inspecting according to ISO10110-7.
It is also known to assess imperfections using a dark-field microscope inspection, in which an imperfection is automatically graded by integrating the total amount of light scattered from the imperfection (https://www.crystran.co.uk/metrology).
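The integration step underlying such automated dark-field grading can be sketched as follows. This is a minimal illustration, not the cited system's implementation: it assumes the imperfection has already been localized to a rectangular region of interest, and that the dark-field background can be approximated by a single scalar level; the function name and region layout are hypothetical.

```python
import numpy as np

def integrated_scatter(image, background, roi):
    """Sum the background-subtracted intensity scattered from an
    imperfection in a dark-field image.

    image      -- 2-D array of dark-field pixel intensities.
    background -- scalar estimate of the dark-field background level
                  (an assumption; real systems may model it per pixel).
    roi        -- (row_slice, col_slice) bounding the imperfection.
    """
    patch = image[roi].astype(float) - background
    # Negative residuals are noise below the background estimate; clip
    # them so only genuinely scattered light contributes to the total.
    return float(np.clip(patch, 0.0, None).sum())
```

The returned total can then be mapped onto a grading scale, with a brighter (more strongly scattering) imperfection yielding a larger integral.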
Nevertheless, despite its shortcomings, the reflection variant of MIL-PRF-13830B remains a test that is often requested by customers.
The aim of the invention is to provide a computer-automated test apparatus and associated method that replicates the subjective human visual inspection of an imperfection's visibility, and therefore performs as well as an expert human inspector using the original master set. We therefore wish to provide an apparatus and method which provide results which are consistent, reproducible and aligned to the average results obtained from a range of expert inspectors in a range of test facilities across a range of samples.