Independence is a prominent theme in the field of Quality. At its heart is the separation of doing and checking.
The example of the Hubble Space Telescope
Why is independence important? Consider the Hubble Space Telescope, a major achievement of science. But the first images it returned were blurry! After a costly servicing mission, during which astronauts installed corrective optics on a spacewalk, the problem was fixed. A fascinating 1990 NASA report pointed to why:
“The Perkin-Elmer plan for fabricating the primary mirror placed complete reliance on the reflective null corrector as the only test to be used in both manufacturing and verifying the mirror’s surface with the required precision.”
In short, the instrument used to test the mirror was the same one used to manufacture it. Cruder (but more independent) tests had in fact detected the error, but their results were disregarded.
The report clearly implicates the lack of independent checking. The null corrector, although exquisitely precise, contributed to this costly error precisely because it served as both the manufacturing guide and the verification tool.
The report touches on other Quality concepts as it goes on to say, “the engineering unit responsible for the mirror was insulated from review or technical supervision and were unaware that discrepant data existed, and were subject to great concern about cost and schedule, which further inhibited consideration of independent tests.”
Reasons for independent review
Building a space telescope can seem simple compared to the dynamics of a typical workplace. The following are some more “human-factors” reasons for independent review.
An independent reviewer can catch dumb mistakes. In drafting a scientific report, for example, the writer can become blind to errors that are readily apparent to a reviewer. Each draft differs so subtly from the previous one that the final draft may have drifted: it may have become too wordy, or been dragged so far along with the author’s interpretation that the protocol and predefined endpoints are underemphasized or not addressed at all.
The reviewer will also notice more prosaic omissions right before finalization, such as a missing company watermark, a missing part of the header, or incorrect formatting.
Reviewers in scientific and technical fields can have varying levels of independence from the author. Some may be within the same company but in a different department. Some may be contracted. Some may be peers with the same title as the author but a different specialty.
In each of these arrangements, the reviewer has the advantage of looking at the report or product from a perspective that is closer to that of the customer. The reviewer will question confusing technical jargon. The reviewer will avoid “filling in the gaps” with their own knowledge.
Most importantly, an independent reviewer will apply a standard (such as a checklist) during their review instead of relying on their subject-matter expertise, however extensive it may be. If this standard is valid (see below) and matches the customer’s requirements and any other pertinent requirements, then problems will be identified well before the product is released to the customer.
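The difference between reviewing against a standard and reviewing from expertise can be sketched in a few lines of code: the checklist is data, and the review simply compares the report against it. This is a minimal illustration; the checklist items and function name below are hypothetical, not drawn from any particular standard.

```python
# A review checklist encoded as data: the reviewer checks the report
# against predefined items rather than relying on personal expertise.
CHECKLIST = [
    "protocol cited",
    "predefined endpoints addressed",
    "company watermark",
    "header complete",
]

def review_against_checklist(report_items, checklist=CHECKLIST):
    """Return the checklist items the report fails to satisfy."""
    return [item for item in checklist if item not in report_items]

draft = {"protocol cited", "company watermark"}
gaps = review_against_checklist(draft)  # items still missing from the draft
```

Because the checklist is explicit and written down, the originator can run the same check before submitting, which is exactly how problems get fixed upstream.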
The independent reviewer will be more familiar with the requirements
Understanding the customer’s requirements
Members of a well-organized Quality program will regularly review client feedback, client audits, and returned product. Ideally they will also be involved in requirements elicitation, working alongside the sales and design people who define exactly what the customer wants. These customer requirements are then translated into the standard that the product is checked against during review.
Understanding business requirements
Ideally, scientists and technical personnel involved in testing and interpretation of results will be insulated from customer pressures. This should apply to the reviewer as well. But the reviewer may be better versed in the business environment in which the research, testing, or manufacturing is occurring.
Understanding other requirements such as regulations and standards
Although scientists are usually enthusiastic about learning the best techniques and methods, practice can drift far from best practice. To obtain consistent results in a particular assay, an entire lab may rely on a technique developed 30 or 40 years earlier and defined in a single published paper. Without pressure from a Quality group or other reviewers, they may never undertake updating their methods to current standards.
In fact, in many organizations the Quality program includes a regulatory affairs unit. This is because Quality reviewers check the product against the standard. The regulatory affairs unit keeps the correct and current standards on file and available to the scientists. When there is a gap between the current methods and the standards, members of the Quality group document this and ask for justification. If the gap is too large, the physical products, validations, protocols, standard operating procedures, and final reports may be rejected, and not accepted until those gaps are closed.
Applying a validated standard
This idea is crucial and warrants its own article. Suffice it to say that an independent reviewer, because they are approaching someone else’s work, is more likely than the originator/author/producer to use a standard that has been demonstrated to match requirements.
I will expand on the role of standards in Quality later.
An independent reviewer is also insulated from customer pressures.
This is not to say they will impose undue delay; rather, they are less likely to be influenced by pressures outside the predetermined standards they use during review.
You will find something similar in the editorial policy of a news organization: the sales team, which sells advertising space to businesses such as car dealerships, will not have extensive involvement with the editorial team, which may be very vocal about air pollution, traffic deaths and climate change.
If the independent reviewer adheres to clear, written requirements, and the originator is familiar with these requirements, then problems are more likely to be fixed upstream, in advance of the review. Even problems that the originator could have easily hidden from the customer will have been addressed, because of the likelihood that they will be discovered during review.
Suggestions for your organization
Independence is difficult to define and implement. Here are some open-ended questions to ask of your organization.
Is your Quality unit too involved in the nitty-gritty of things?
Do you remain the final check on most processes before release or finalization? When you review something, are things mostly polished and complete? Or does your Quality role involve identifying minute corrections, which then require rework? Keep in mind: the more errors there are in the product submitted to the Quality group, the more rework and review handoffs there are likely to be.
To fix errors further upstream, at lower cost, consider introducing a technical data review unit that performs these checks. QA can then implement spot checks and other verifications that the technical data reviewers’ processes are in place and that the defined reviews were done. Consider doing this for the most error-prone processes first. For other processes that are found to be running smoothly, the number of checks and reviews can be reduced.
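As a rough sketch of how QA spot-check selection might work in practice, consider random sampling of completed reviews. The function name, sampling fraction, and report identifiers here are illustrative assumptions, not a prescribed method; the point is that the sampling rate is a dial QA can turn per process.

```python
import random

def select_spot_checks(completed_reviews, fraction, seed=None):
    """Randomly pick a fraction of completed technical reviews for QA
    spot checks. Error-prone processes warrant a larger fraction;
    smoothly running ones can be sampled more lightly."""
    rng = random.Random(seed)
    k = max(1, round(len(completed_reviews) * fraction))
    return rng.sample(completed_reviews, k)

reviews = [f"report-{i:03d}" for i in range(1, 51)]  # 50 completed reviews
spot_checks = select_spot_checks(reviews, fraction=0.10, seed=1)
```

A 10% sample of 50 reviews yields 5 spot checks; the `max(1, ...)` floor ensures even a rarely exercised process still gets at least one check.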
Are you poised to adapt, wherever the company goes in terms of growth?
You are responsible for timely reviews, so you do not want to be a bottleneck at the end of the process. If the Quality review often triggers investigations, rework, and holds that delay the release of the product, then it is time to add a technical review upstream.
You want to be able to grow with the organization. If your business expands 10% more than predicted in one year, the Quality group should be able to absorb that, and decide later on whether it should expand too. An independent approach to review allows management to control how much the reviewers’ work will grow and change with the growth of testing or production.
Do your Quality people have routine, repetitive work that can be delegated to a technician instead?
Examples include temperature and humidity monitoring, or sampling for bacteria in the water system. If these tasks are routine and repetitive, a technician can quickly be trained to do them. QA personnel can then sign off on the checks and provide management with assurance that the process is under control. While reviewing these logs and reports, the Quality group may find they have time to define new processes for better trending and reporting as well.
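For illustration, the kind of trending QA might sign off on can be sketched as a simple out-of-range check over a technician’s log. The log format and temperature limits below are hypothetical; a real process would define them in the applicable procedure.

```python
def flag_excursions(readings, low, high):
    """Return the (timestamp, value) pairs that fall outside the
    acceptable range, for QA sign-off and trending."""
    return [(t, v) for t, v in readings if not (low <= v <= high)]

# A hypothetical temperature log recorded by a technician (degrees C).
log = [("08:00", 21.3), ("12:00", 26.1), ("16:00", 22.0)]
excursions = flag_excursions(log, low=18.0, high=25.0)
```

The technician records the readings; QA reviews the flagged excursions and the trend over time, which is the assurance management actually needs.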
Do you have a way to address drift?
Drift must be addressed periodically. There is no tried-and-true way to do this, but a few things help: periodic reviews, regular looks at industry best practices, and the normal churn that results from new people being hired from other companies and longtime employees departing.
That last one – churn – is important. Often a longstanding but flawed practice goes unquestioned until a new employee says, “You know, at my old job we did it this way…”
When someone says this, listen!
Some further reading
A costly error partly caused by checking with the same instrument that was used for doing:
The Hubble Space Telescope Optical Systems Failure Report