FacEngr, and all,
Determining Rational Design Criteria
Granted, large hurricanes have occurred in the past, and these latest ones may not have been the worst, but there are two other things you must watch out for: changes in frequency and trends. Are the averages, or better, the "moving averages", of either of those changing over your time history? Either could indicate that some causal factor is at work, or maybe not. For example, a rising moving average of wave height, max wave height, max wind speed, or max flood-stage river level can cause us trouble, and changes in any of them may affect our design conditions. A rising moving average of max wave height would suggest we think about increasing the design criterion for the minimum clearance between the lower deck and the highest mean sea level. An increasing frequency of high waves suggests that our present 100 yr and 1000 yr design wave heights might be too small.
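A minimal sketch of that moving-average check, for anyone who wants it in code rather than a spreadsheet. The wave record here is invented for illustration (a small upward trend plus noise), and the 10 yr window and the least-squares slope test are my own assumptions, not any standard's:

```python
# Sketch: flag a trend in annual-max wave heights with a moving average.
# The data below is made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1975, 2025)
# Hypothetical annual max wave heights (m) with a small upward trend
h_max = 12.0 + 0.03 * (years - years[0]) + rng.normal(0, 1.5, years.size)

window = 10  # 10 yr moving average (assumed window length)
moving_avg = np.convolve(h_max, np.ones(window) / window, mode="valid")

# A simple least-squares slope on the moving average flags a trend;
# a persistently positive slope is the warning sign discussed above.
slope = np.polyfit(years[window - 1:], moving_avg, 1)[0]
print(f"10 yr moving average slope: {slope:.3f} m/yr")
```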
Say we have an offshore platform with the following design criteria:
Wave Criterion #1: Any offshore platform should be built such that the lower deck remains above a wave height with a return period of 100 yrs or less, without experiencing any significant damage.
Wave Criterion #2: A platform should survive a wave height with a return period of 1000 yrs or less.

If the observed frequency of either the presently defined 100 yr or 1000 yr wave height is increasing, the probability of its occurrence is increasing, meaning that its return period is decreasing. 100 yr waves might now have 86 yr return periods; 1000 yr waves might have 737 yr return periods. Decreasing return periods of existing criteria imply that the criteria's present values are too low and must be revised upwards to new wave heights that again have the intended return periods of 100 and 1000 yrs.
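The arithmetic behind that statement is just the reciprocal relationship between return period and annual exceedance probability. A sketch, with the new probabilities picked purely to reproduce the 86 yr and 737 yr figures above:

```python
# The return period T of a given wave height is the reciprocal of its
# annual exceedance probability p: T = 1/p.
def return_period(p_annual):
    return 1.0 / p_annual

# By definition, the 100 yr and 1000 yr waves:
print(return_period(0.010))   # 100.0 yr
print(return_period(0.001))   # 1000.0 yr

# If new data suggests those same heights are exceeded more often
# (illustrative probabilities chosen to match the numbers above):
print(round(return_period(0.0116)))    # ~86 yr for the old "100 yr" wave
print(round(return_period(0.001357)))  # ~737 yr for the old "1000 yr" wave
```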
How do we rationally decide whether or not to revise our design criteria? We get out our old wave data and look at all the waves ever recorded. We calculate the probability of each wave height in our old storm data and plot those probabilities against wave height. We calculate the expected return periods for all the waves in our old data and plot return periods against wave heights.
Now we do the same thing, but this time we include the new data recorded over, say, the last 10 years, which includes some very large storms that we had never seen before. We plot the new curves and see that all our old wave heights now have a higher probability of occurring, and that the same return periods now correspond to greater wave heights.
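Here is a minimal sketch of that bookkeeping. The wave heights are invented, and I've used the common Weibull plotting position p = rank/(n+1) as one reasonable way to assign empirical probabilities; a real analysis would go on to fit an extreme-value distribution and extrapolate to the 100 yr and 1000 yr values:

```python
# Sketch: empirical exceedance probabilities and return periods from a
# record of annual max wave heights, before and after adding new storms.
import numpy as np

def exceedance_curve(annual_max_heights):
    h = np.sort(np.asarray(annual_max_heights))[::-1]  # largest first
    ranks = np.arange(1, h.size + 1)
    p = ranks / (h.size + 1.0)   # Weibull plotting position, per yr
    T = 1.0 / p                  # return period, yrs
    return h, p, T

old = [11.2, 12.8, 10.5, 13.1, 11.9, 12.2, 10.9, 13.6, 11.4, 12.5]  # invented, m
new = old + [15.4, 16.1]  # last decade adds two storms we had never seen

for label, data in (("old data", old), ("old+new", new)):
    h, p, T = exceedance_curve(data)
    i = np.argmax(h <= 13.6)  # first entry at or below a fixed 13.6 m height
    print(f"{label}: 13.6 m wave -> p = {p[i]:.3f}/yr, T = {T[i]:.1f} yr")
```

Running it shows exactly the shift described above: the same 13.6 m wave goes from roughly an 11 yr event to roughly a 4 yr event once the new storms are counted.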
Yes, perhaps there were even bigger storms in the past that we were not aware of. As the saying goes, "what you don't know can't hurt you", and that was fine until now. But now we do have an indication that some really big waves might have occurred in the past and that we simply failed, for some reason, to record them in our old database. We still do not know whether they actually occurred, but everything points to their probable existence. It follows that, if they probably existed in past storms, they probably exist today, and we should think about including those waves in a revised design criterion based on all the new data, along with whatever foreseeable implications that may have for our designs. If we don't, some good lawyer with a consulting PhD in statistics at his side will surely beat us to pieces if it turns out that some foreseeable event capsizes our platform. So we ask ourselves how much risk we want to avert and set our design criteria accordingly, or API RP 2A adopts the revision and we build the next platform according to that new recommended practice.
The additional problem that trends in the data can present is the possibility that causation of some kind is taking place, or not. If we think there might be a cause, we might be able to foresee it and add criteria to account for trends in our design, or we can accept some risk by doing nothing, while realising that those trends might continue and quickly surpass even our new criteria. How much risk will we accept? We could also account for any trending in the latest data by working with moving averages, which reduce the effects of old data while strengthening the effects of newer data. That could go as far as simply discarding the first 50 yrs of a 100 yr record, if we thought that including all the data would unduly bias our design to the low side and result in under-designing our work in the face of conditions we view as more prevalent in today's environment.
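Two ways to do that de-emphasizing of old data, sketched below. The record is again invented, and the exponential decay rate is an assumed tuning knob, not a recommended value; the point is only that both approaches pull the estimate toward recent conditions:

```python
# Sketch: de-emphasize old data when estimating the mean annual max
# wave height. Heights are invented (100 yrs with an upward drift).
import numpy as np

rng = np.random.default_rng(1)
h_max = 12.0 + 0.04 * np.arange(100) + rng.normal(0, 1.0, 100)

# Option 1: simply discard the first 50 yrs of the 100 yr record
recent_mean = h_max[50:].mean()

# Option 2: exponentially weight the record so newer years dominate
lam = 0.05                               # decay rate per yr (assumed)
w = np.exp(lam * np.arange(h_max.size))  # newest years get most weight
weighted_mean = np.average(h_max, weights=w)

print(f"all 100 yrs: {h_max.mean():.2f} m")
print(f"last 50 yrs: {recent_mean:.2f} m")
print(f"exp-weighted: {weighted_mean:.2f} m")
```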
All of the questions raised above are answered not by believing, or not believing, in climate change or in what causes it. It just boils down to the day-to-day practicality of quantifying risk as best you can and finding out how much risk your CEO is willing to take, or not. At home, I can take whatever risk I think is appropriate for me.
I included the IExplorer vs Murders graph to make it obvious that you must be aware of what your data is, or is not, going to tell you. I'm happy that the humor was appreciated.
Apologies to all those I bored by writing stuff below their pay grade.
I have a spreadsheet showing an example of how a typical design wave height criterion is developed and revised according to the method I outlined above, should anyone be interested in the math.