Note that 30 years, even assuming 100% operation during that time, only amounts to about 263 khr, so a well-designed system might have lots of components that will not have failed in that time. Moreover, while we typically assign failure rates to mechanical components, many such components don't really have "random" failures; rather, they have wearout failures, which constant-failure-rate approaches do not actually predict.
Failure rates typically only apply realistically to electronics, and even there, good design and a benign environment can push out the random failures.
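To put rough numbers on this, here's a minimal Python sketch contrasting a constant "random" failure rate with a Weibull wearout model over that 263 khr of service. All rates and parameters below are made-up illustrative values, not handbook data:

```python
import math

HOURS_PER_YEAR = 8760
service_hours = 30 * HOURS_PER_YEAR      # 262,800 h, i.e., ~263 khr

# "Random" failures: constant hazard (exponential model).
# Assumed lambda of 1 failure per 1e6 hours:
lam = 1.0e-6
p_survive_random = math.exp(-lam * service_hours)
print(f"Exponential survival over 30 y: {p_survive_random:.3f}")   # ~0.77

# Wearout: Weibull with shape beta > 1, so the hazard climbs with age and a
# single constant "failure rate" understates late-life risk.
beta, eta = 3.0, 400_000.0               # assumed shape / characteristic life (h)
p_survive_wearout = math.exp(-((service_hours / eta) ** beta))
print(f"Weibull survival over 30 y:     {p_survive_wearout:.3f}")
```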
Again, the failure rate to be used is the PREDICTED rate, not the actual one, since it's strongly environment-dependent. One might argue that the original duty cycle for GB vs. AUC was underestimated and should be changed. Note, however, that these analyses are intended for the WORST case, not the nominal or best case. The idea is to be pessimistic, because if the predicted failure rate actually turns out to be the real failure rate, someone messed up. Things being the way they are, all that pessimism is intended to protect the user and the manufacturer from the corner-cutting that invariably occurs.
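Reading GB/AUC as the MIL-HDBK-217 environment categories (Ground Benign vs. Airborne, Uninhabited, Cargo), here's an illustrative sketch of why the duty-cycle split drives the predicted rate; the per-environment rates are assumed placeholders, not handbook lookups:

```python
# Assumed per-environment rates for the same part, in FIT (failures per 1e9 h):
LAM_GB  = 50.0     # Ground, Benign (placeholder value)
LAM_AUC = 450.0    # Airborne, Uninhabited, Cargo (placeholder value)

def composite_fit(duty_auc: float) -> float:
    """Duty-cycle-weighted predicted failure rate, in FIT."""
    return (1.0 - duty_auc) * LAM_GB + duty_auc * LAM_AUC

print(composite_fit(0.10))  # assumed original split, 10% AUC -> 90.0 FIT
print(composite_fit(0.30))  # if AUC time was underestimated -> 170.0 FIT
```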
The failure probability is that of that particular failure mode, which could be astronomically low; if the consequence is particularly bad, then the lower the probability, the better. Otherwise, you, as the manufacturer, must specify and implement mitigations to minimize the effects of the critical failures, which could cost serious money or force a redesign.
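A hypothetical sketch of that probability-vs-consequence screen; the failure modes, rates, mission time, and acceptance limits are all invented for illustration:

```python
import math

MISSION_HOURS = 10_000.0   # assumed exposure time

def mode_probability(lam: float, t: float = MISSION_HOURS) -> float:
    """P(mode occurs at least once in t hours), constant-rate model."""
    return 1.0 - math.exp(-lam * t)

# (failure mode, rate in failures/hour, severity class) -- all invented
modes = [
    ("relay sticks open",  1.0e-7, "minor"),
    ("valve fails closed", 1.0e-8, "catastrophic"),
]

LIMITS = {"minor": 1e-3, "catastrophic": 1e-5}   # illustrative acceptance limits

for name, lam, sev in modes:
    p = mode_probability(lam)
    verdict = "acceptable" if p <= LIMITS[sev] else "needs mitigation"
    print(f"{name}: P = {p:.2e} ({sev}) -> {verdict}")
```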
TTFN