I recently encountered an "issue" with a supplier. A 3D surfaced feature with a profile tolerance is being checked on one of two CMMs, depending on which is available. I should note that the entire sample population from each batch is checked on the same CMM (i.e., both CMMs are never used to inspect the same feature within a batch).

One CMM reports the "deviation" as 2x the maximum absolute deviation from the basic surface and compares it against the total profile tolerance. The other CMM reports the "deviation" as the maximum deviation if all points in the cloud fall on the same side of the surface or, if some points are inside and some are outside the basic surface, as the sum of the two extremes (e.g., one point at -.001 and another at +.002 gives a deviation of .003). In the latter case, the deviation is compared to half the total profile tolerance. I've become used to the former method of comparing 2x the maximum absolute deviation to the total tolerance.

My biggest issue is that, regardless of the methodology, the supplier transfers only the "deviation" to their inspection form, and therefore, without the CMM printout available, I can't blindly compare the results of multiple lots to calculate capability indices, etc., since the deviations may have been calculated differently.
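To make the two conventions concrete, here is a rough Python sketch of how I understand each CMM is computing its reported "deviation" and making its pass/fail call. The function names are mine, and I've assumed the second CMM also compares the one-sided case against half the tolerance (the report only spells that out for the straddling case), so treat this as illustration rather than what either machine's software actually does.

def deviation_method_a(deviations, total_tol):
    # First CMM: report 2x the maximum absolute deviation,
    # compare against the total profile tolerance.
    reported = 2 * max(abs(d) for d in deviations)
    return reported, reported <= total_tol

def deviation_method_b(deviations, total_tol):
    # Second CMM: if all points fall on one side of the basic surface,
    # report the maximum deviation; if they straddle it, report the
    # additive spread (e.g., -0.001 and +0.002 -> 0.003).
    hi, lo = max(deviations), min(deviations)
    if lo >= 0 or hi <= 0:
        reported = max(abs(hi), abs(lo))   # all points on one side
    else:
        reported = hi - lo                 # additive spread across the surface
    # Comparison to half the total tolerance; applying it to the
    # one-sided branch as well is my assumption.
    return reported, reported <= total_tol / 2

# Example using the deviations from the post, with an assumed .006 total tolerance:
points = [-0.001, 0.0005, 0.002]
print(deviation_method_a(points, 0.006))   # (0.004, True)
print(deviation_method_b(points, 0.006))   # (0.003, True)

As the example shows, the same point cloud yields a different number on the inspection form depending on which machine ran the part, which is exactly why the recorded values can't be pooled blindly.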
The CMM operator maintains that the latter method is correct based on ASME Y14.5.1M-1994, and that comparing twice the maximum absolute deviation to the total tolerance is therefore incorrect. I've read the applicable section 6.5 of the standard, but all it does is make my head hurt. Does anyone better versed in the mathematical definition standard have an opinion?