Isn't there a different statistical test to answer "what is the range of true states given one (or several) measurements of the property?"
>> There is no statistical "test" other than collecting and histogramming the data. A non-Gaussian distribution, such as a Poisson, is not necessarily that distinguishable from a normal (Gaussian) distribution unless you collect gobs of data.
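As a rough sketch of the "collect and histogram" approach, here is a minimal Python example. The readings are simulated stand-ins for repeated measurements of a nominally constant pressure; the mean and noise level are made-up numbers for illustration only.

import numpy as np
import matplotlib.pyplot as plt

# Simulated stand-in for repeated readings of a nominally constant pressure.
# The true value (99) and noise level (1.0) are hypothetical.
rng = np.random.default_rng(0)
readings = rng.normal(loc=99.0, scale=1.0, size=500)   # 500 repeated measurements

mean = readings.mean()
sigma = readings.std(ddof=1)    # sample standard deviation
print(f"mean = {mean:.3f}, sigma = {sigma:.3f}")

# Histogram the data and overlay the Gaussian implied by the sample statistics.
counts, bins, _ = plt.hist(readings, bins=30, density=True, alpha=0.5)
x = np.linspace(bins[0], bins[-1], 200)
gauss = np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
plt.plot(x, gauss)
plt.xlabel("reading")
plt.ylabel("relative frequency")
plt.show()

With only a handful of points the histogram tells you very little; only with lots of data does any departure from the Gaussian shape become visible.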
I mean, there is a true pressure being read; does it have to fall within the error band of a measurement? (I don't think so.)
>> What value you measure depends on a number of error sources, only one of which is the instrument itself. Even the actual pressure, which is the result of billions of atoms colliding with each other and with the instrument's sensor element, follows a Poisson distribution; this is the "shot" noise, and its standard deviation is numerically equal to the square root of the number of individual atomic collisions detected by the sensor. In most cases this gives a signal-to-noise ratio in the millions, so the measurand can be treated as a constant; but for other quantities, and even for lower-pressure sensing, the statistical distribution of the actual measurand is significant. Where the measurand's noise is significant, the standard errors need to be appropriately combined (root sum of squares) to get the effective standard error. Given that, a measured value should fall within ±2σ of the true value essentially 95% of the time. That is not to say there might not be a measurement that is, say, 4σ from the mean. Of course, the mean value itself has a standard error as well, because it is the average of a bunch of noisy measurements; that standard error is the root sum of squares of the individual standard errors divided by the number of measurements (σ/√n when the readings are equally noisy).
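A short sketch of the arithmetic above, with purely hypothetical numbers for the collision count and the two noise sources:

import math

# Shot noise: for a Poisson process, sigma = sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
n_collisions = 1.0e14                 # made-up count of detected atomic collisions
shot_sigma = math.sqrt(n_collisions)
snr = n_collisions / shot_sigma
print(f"shot-noise SNR ~ {snr:.3g}")  # ~1e7, i.e. the measurand looks like a constant

# Combining independent error sources: root sum of squares (RSS).
instrument_sigma = 1.0                # instrument noise, psi (assumed)
measurand_sigma = 0.3                 # measurand's own fluctuation, psi (assumed)
effective_sigma = math.sqrt(instrument_sigma**2 + measurand_sigma**2)
print(f"effective sigma = {effective_sigma:.3f} psi")

# A single reading should land within +/- 2*effective_sigma of the true value ~95% of the time.
print(f"95% band ~ +/- {2 * effective_sigma:.3f} psi")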
If you read 99 and have an error of ±2, then you are confident that the value is between 97 and 101. How confident? This would help set up the normal distribution (because if you had 90% confidence, wouldn't this mean 90% of the population is within these bounds? And so you could determine the standard deviation from normal population distribution statistics).
>> That appears to be a circular rationale. If you have "an error of ±2," then you, or someone, has already done the analysis, or fudged it. Verification of a measurement, which is essentially what a calibration lab does, simply means comparing your instrument with a "golden" instrument whose noise is substantially lower than yours. Assuming that everyone upstream has done their job, there is confidence that, given an input, the measurement is within ±2σ of the true value 95% of the time. However, given a single measurement data point, you cannot be absolutely sure that this single value is within ±2σ of the true value. Only by taking multiple measurements can you gain confidence in the measured value.
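For what it's worth, here is a small sketch of the two calculations implied above: backing a standard deviation out of a stated confidence band (the questioner's 99 ± 2 example), and how averaging multiple readings shrinks the standard error of the mean. The reading, band, and per-reading sigma are assumptions for illustration.

from statistics import NormalDist

# Hypothetical example from the question: reading of 99 with a stated band of +/-2.
reading = 99.0
half_width = 2.0

# If the +/-2 band were a 90% confidence interval, back out the implied standard deviation:
# half_width = z * sigma, where z is the two-sided 90% quantile of the standard normal.
z_90 = NormalDist().inv_cdf(0.95)          # ~1.645 (two-sided 90% -> 95th percentile)
print(f"implied sigma for a 90% band: {half_width / z_90:.3f}")

# If instead it were a 95% (roughly 2-sigma) band:
z_95 = NormalDist().inv_cdf(0.975)         # ~1.960
print(f"implied sigma for a 95% band: {half_width / z_95:.3f}")

# Averaging n readings shrinks the standard error of the mean as 1/sqrt(n).
sigma = 1.0                                # assumed per-reading standard deviation
for n in (1, 4, 16, 64):
    print(f"n = {n:3d}: standard error of mean = {sigma / n**0.5:.3f}")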
Much of this is highly dependent on how tight the requirement is and how much test accuracy ratio (TAR) you have. The same ±2 applied to a 1000 psi reading automatically confers a certain level of confidence that your measurement is reasonably good (±2 is 0.2% of the reading), compared with a reading of 5 psi, where the same ±2 amounts to 40%.
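A quick illustration of that relative-uncertainty point, plus a TAR check; all numbers are made up:

# Why the same +/-2 band matters less on a larger reading.
def relative_uncertainty(reading, half_width):
    return half_width / reading

for reading in (1000.0, 5.0):
    print(f"{reading:7.1f} psi: +/-2 is {100 * relative_uncertainty(reading, 2.0):.1f}% of the reading")

# TAR: ratio of the tolerance being verified to the uncertainty of the reference instrument.
tolerance = 2.0           # requirement on the unit under test, psi (assumed)
reference_uncert = 0.5    # uncertainty of the reference ("golden") instrument, psi (assumed)
print(f"TAR = {tolerance / reference_uncert:.1f}:1")   # 4:1 is a commonly quoted minimum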
TTFN