Henrik S. Nielsen:
Experience Gained from Offering Accredited 3rd Party Proficiency Testing

As presented at the 2004
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

HN Proficiency Testing has been offering third-party proficiency testing for about three years and has been accredited for the last two. This paper discusses some of the experience gained and the lessons learned in the process. Accreditation bodies generally require accredited laboratories to participate in proficiency testing. Since most laboratories see this as nothing but an inconvenience and an added expense of maintaining accreditation, very few non-accredited laboratories participate. Consequently, the opportunity to analyze third-party proficiency test results provides unique insight, particularly into the state of the accredited laboratories that constitute the backbone of the US metrology infrastructure. As it turns out, technical insight into the measurement processes analyzed is at least as important for the proficiency testing provider as a thorough understanding of the statistics behind the common proficiency testing metrics. The paper discusses some of the general trends that can be identified from this vantage point, as well as some specific examples where proficiency testing turned out to be more than just an expensive inconvenience for the participating laboratory. In accordance with the rules for accredited proficiency testing providers, the anonymity of all participating laboratories, innocent or otherwise, will be protected throughout the paper.

Henrik S. Nielsen:
Determining Consensus Values in Interlaboratory Comparisons and Proficiency Testing

Winner of "Best Paper on Theoretical Metrology" at the 2003
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

An important part of interlaboratory comparisons and proficiency testing is the determination of the reference value of the measurand and the associated uncertainty. It is desirable to have reference values with low uncertainty, but it is crucial that these values are reliable, i.e. correct within their stated uncertainty. In some cases it is possible to obtain reference values from laboratories that can reliably produce values with significantly lower uncertainty than the proficiency testing participants, but in many cases this is not possible for economic or practical reasons. In these cases a consensus value can be used as the best estimate of the measurand. A consensus value has the advantage that it often has a lower uncertainty than the value reported by a single reference laboratory. There are well-known and statistically sound methods available for combining results with different uncertainties, but these methods assume that the stated uncertainty of each result is correct, which is not a given. In fact, the very purpose of proficiency testing is to establish whether the participants can measure within their claimed uncertainty. The paper explores a number of methods for determining preliminary consensus values, which are used to decide which participant values should be deemed reliable and therefore included in the calculation of the final consensus value and its uncertainty. Some methods are based on impressive equations and others have curious names. The relative merits of these methods in various scenarios are discussed.
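
The standard method for combining results with different stated uncertainties is the uncertainty-weighted mean, in which each result is weighted by the inverse square of its uncertainty. A minimal sketch in Python (the function name and example numbers are illustrative, not taken from the paper):

```python
import math

def weighted_consensus(values, uncertainties):
    """Uncertainty-weighted mean and its standard uncertainty.

    Each result x_i is weighted by 1/u_i**2, so results with smaller
    stated uncertainty pull the consensus value harder. The uncertainty
    of the consensus is 1/sqrt(sum of weights).
    """
    weights = [1.0 / u**2 for u in uncertainties]
    consensus = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    u_consensus = 1.0 / math.sqrt(sum(weights))
    return consensus, u_consensus

# Two results: 10.00 mm with u = 0.10 mm, and 10.40 mm with u = 0.20 mm
x_bar, u_bar = weighted_consensus([10.0, 10.4], [0.1, 0.2])
```

Note how this formula trusts every stated u_i at face value, which is exactly the weakness the abstract points out: a preliminary consensus value is needed first to screen out participants whose stated uncertainties are not credible.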

Henrik S. Nielsen:
Can Proficiency Testing Add Value?

As presented at the 2002
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

Accreditation bodies are increasingly using proficiency testing as a tool to ensure the credibility of their accreditation programs by requiring the laboratories they accredit to demonstrate in interlaboratory comparisons that they can live up to their uncertainty claims. Accredited laboratories mostly see proficiency testing as an added expense they are forced to incur and that adds little or no value. However, when used appropriately, proficiency testing can reduce a laboratory’s risk of producing incorrect measurement results. Focusing on the En (normalized error) approach, the paper explores the underlying assumptions and associated limitations of the various reporting methods traditionally used in proficiency testing. It discusses the steps that are necessary to ensure that correct conclusions are drawn from a proficiency test, and the exposure and potentially unnecessary cost to which participating laboratories are subject if these steps are not taken. Additionally, the paper recounts personal experiences in which the author gained valuable knowledge of measurement processes and their limitations as a participant in interlaboratory comparisons.
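
The normalized error mentioned above is conventionally computed as the participant's deviation from the reference value divided by the combined expanded uncertainties, with |En| <= 1 taken as agreement within the claimed uncertainties. A short sketch of that calculation (function and variable names are illustrative):

```python
import math

def en_score(x_lab, U_lab, x_ref, U_ref):
    """Normalized error En.

    x_lab, U_lab: participant's result and expanded uncertainty.
    x_ref, U_ref: reference value and its expanded uncertainty.
    |En| <= 1 indicates the result agrees with the reference value
    within the claimed uncertainties; |En| > 1 indicates it does not.
    """
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Participant reports 10.05 mm, U = 0.08 mm, against a reference
# value of 10.00 mm with U = 0.02 mm.
en = en_score(10.05, 0.08, 10.00, 0.02)
passed = abs(en) <= 1.0
```

The assumptions the paper probes live in the denominator: the score is only as meaningful as the expanded uncertainties fed into it, and an inflated U_lab can mask a genuinely poor result.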

Henrik S. Nielsen: CMMs and Proficiency Testing.

As presented at the International Dimensional Workshop 2002

Abstract:

Many factors contribute to the variation in CMM measurements. Some originate in the machine itself or the environment. Some come into play in every measurement, while others depend on the probe configuration used, including probe articulations or changes. Others depend on the part being measured, its rigidity and thermal properties, and still others depend on the measurement strategy and point distribution chosen by the operator. Since geometrical requirements, whether specified using ANSI/ASME Y14.5 or ISO 1101, apply to a continuous surface, it is impossible to measure GD&T “in accordance with the standard” on a CMM. CMM measurements of geometry therefore become a question of what constitutes an acceptable approximation. For these reasons, and because there is a lack of formalized ways of estimating the uncertainty of CMM measurements, proficiency testing can be a valuable reality check on how well one can measure with a CMM.