Eng-Tips is the largest engineering community on the Internet

a silly question

Status
Not open for further replies.

ginsoakedboy

Mechanical
Oct 14, 2004
157
Say we have modeled a transient heat transfer problem (cooldown of fluid within an insulated pipe) using two different numerical techniques: FEA and CFD.

Next, we conduct an experiment to obtain some data for benchmarking.

Now, when we compare the temperature results from the numerical techniques against the experimental data, is it justified to use a percentage basis for evaluating the accuracy of the simulations? Note that a percentage change in temperature is inherently meaningless because of the arbitrary nature of temperature units.

Is there a construct that will allow a relative measure of accuracy for the two different techniques? Thank you in advance.
 

I would venture YES, but due consideration must be given to the question "a percentage of what".

For example: I would say that comparing the ratio of final temperatures (actual / calculated) is quite misleading. The ratio of the actual / calculated heat transfer coefficient (or the associated R value) would seem to be a reasonable way of evaluating the accuracy of the simulation.
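To make the "percentage of what" point concrete, here is a small sketch (all numbers are invented for illustration): the actual/calculated ratio of final temperatures changes when you switch temperature units, while a ratio of heat transfer coefficients does not.

```python
# Sketch: the actual/calculated ratio of final temperatures depends on
# the temperature scale; a ratio of heat transfer coefficients does not.
# All values below are invented for illustration.

def f_to_c(t_f):
    """Convert an absolute temperature from deg F to deg C."""
    return (t_f - 32.0) * 5.0 / 9.0

t_actual_f, t_model_f = 120.0, 110.0              # final temperatures, deg F

ratio_f = t_actual_f / t_model_f                  # ratio in Fahrenheit
ratio_c = f_to_c(t_actual_f) / f_to_c(t_model_f)  # same data in Celsius: a different number

u_actual, u_model = 5.2, 4.8                      # heat transfer coefficients, W/m^2-K
ratio_u = u_actual / u_model                      # independent of the temperature unit

print(f"T ratio (F) = {ratio_f:.4f}, T ratio (C) = {ratio_c:.4f}, U ratio = {ratio_u:.4f}")
```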
 
If I understand your question, I would say +/- 10% of the data would be a good correlation. Since the data is the undeniable truth, and FEA and CFD are really just math in code form, you have to compare your analysis to the data.

Tobalcane
"If you avoid failure, you also avoid success."
 
Or to say it another way, models never prove anything. A model can be very effective at predicting future behaviour and it is a really good thing to calibrate a model with real data (i.e., to "predict" past behaviour).

The big question in my mind is when you say "is it justified to use a percentage basis for evaluating accuracy of the simulations?" Are you adjusting the actuals to a percentage of the model results (BAD technique), or adjusting the model to better match the observed data (GOOD technique)?

David
 
Bear in mind that a "model" is never more than an abstraction of reality. The "model" eliminates details that complicate the math or the processing time, whether by accident or by the conscious choice of the analyst.

TTFN

FAQ731-376
 
If you did the experiment once, you "know" what the "real" value(s) is/are for what you are measuring. Then you can compare the models (which are, at best, merely theoretical calculations applied to theoretical conditions).

Then, when you run the experiment again, you can BEGIN comparing the accuracy of the assumed conditions to the "real world" variations in "real world" experimental conditions: somebody leaving a fan on, or turning the building AC/heat on/off, or leaving a door open, or bumping the test pipe or test thermometer. Loose insulation? Overly tight insulation? All sorts of things can change. Do change.

So, if you run the experiment the same way three times, you can judge how close the "real world" conditions each time are to EACH OTHER - which lets you judge how close the theory (the models) is to the actual cases. Theoretically, in a "proper" statistically valid "design of experiments" you could calculate how much variation you expect to get, what sigmas (normal variation) you can accept, etc. - but frankly, how many people can afford lots of repeated test runs?

Then - only then - can you "adjust" the modeled results for future model runs up or down (by some percent) so the final calculated answer comes out right.
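The repeated-runs idea above can be sketched numerically. This is only an illustration with invented data: compare a model's deviation from the run average against the run-to-run scatter of the experiment itself.

```python
# Sketch with invented data: is a model's deviation from the experiment
# larger than the experiment's own run-to-run scatter?
import statistics

runs = [72.4, 73.1, 71.8]            # measured final temps, deg F, three runs (hypothetical)
mean_run = statistics.mean(runs)
sigma_run = statistics.stdev(runs)   # sample standard deviation = run-to-run scatter

model_pred = 74.0                    # a model's predicted final temp (hypothetical)
deviation = abs(model_pred - mean_run)

# If the deviation sits within the experimental scatter (say, 2 sigma),
# the model cannot be distinguished from measurement noise.
print(f"run scatter = {sigma_run:.2f}, model deviation = {deviation:.2f}")
```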
 
Temperature ratios can be scientifically correct if you are comparing temperature differences (delta T) or using an absolute temperature scale. I cringe when I hear someone say the temperature rose 10%, from 50F to 55F. That is complete nonsense, and every engineer should recognize it at once. However, it is okay to say that the delta T increased 10%, from 50F to 55F.
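To see the difference numerically, here is the 50F/55F example from the post worked both ways: a percentage change in absolute temperature depends on the scale, while a percentage change in delta T does not.

```python
# A percentage change in absolute temperature is scale-dependent;
# a percentage change in a temperature *difference* (delta T) is not.

def f_to_c(t):        # absolute temperature conversion (has an offset)
    return (t - 32.0) * 5.0 / 9.0

def df_to_dc(dt):     # temperature-difference conversion (no offset)
    return dt * 5.0 / 9.0

# "Temperature rose from 50F to 55F": +10% in F, but not in C.
pct_f = (55.0 - 50.0) / 50.0 * 100.0
pct_c = (f_to_c(55.0) - f_to_c(50.0)) / f_to_c(50.0) * 100.0

# "Delta T increased from 50F to 55F": +10% in any difference scale.
pct_dT = (df_to_dc(55.0) - df_to_dc(50.0)) / df_to_dc(50.0) * 100.0

print(f"absolute: {pct_f:.1f}% in F, {pct_c:.1f}% in C; delta T: {pct_dT:.1f}% in both")
```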
 
Thank you, everyone.

I share the oft-stated aversion to taking ratios of temperature values. So, is there any alternative way to "non-dimensionalize" the temperature data for making comparisons between the two competing methods?

PS. The idea is to see which model yields results closer to the experimental data. I'm not trying to calibrate the models; assuming the models are "ok", I'm looking for the better model. I hope my intent is clearer now.
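One common way to non-dimensionalize cooldown data (a suggestion on my part, not something established in this thread) is the excess-temperature ratio theta = (T - T_inf) / (T0 - T_inf) from transient conduction, where T_inf is the ambient temperature and T0 the initial fluid temperature. Because theta is a ratio of temperature differences, it is invariant under any linear change of temperature unit, so theta-errors from the two methods can be compared directly. All numbers below are hypothetical.

```python
# Dimensionless excess temperature for cooldown data (hypothetical numbers):
# theta = (T - T_inf) / (T0 - T_inf), running from 1 (start) to 0 (fully cooled).

def theta(T, T0, T_inf):
    """Excess-temperature ratio; invariant to linear unit changes."""
    return (T - T_inf) / (T0 - T_inf)

T0, T_inf = 90.0, 20.0                   # initial and ambient temps, deg C (invented)
T_exp, T_fea, T_cfd = 55.0, 57.0, 53.5   # readings at one time point (invented)

err_fea = abs(theta(T_fea, T0, T_inf) - theta(T_exp, T0, T_inf))
err_cfd = abs(theta(T_cfd, T0, T_inf) - theta(T_exp, T0, T_inf))

print(f"FEA theta-error = {err_fea:.4f}, CFD theta-error = {err_cfd:.4f}")
```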
 
So if what you are interested in is error or accuracy, why not look at error?

Presumably you can get temperature information out of the model for each of your experimental measurement points at different times. Just look at the difference at each time.
 
It's cooling down, so look at the Actual Change in Temperature vs. the Predicted Change in Temperature. That will give the same relation independent of the temperature scale.
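A minimal sketch of that comparison, with invented data: compute each model's mismatch in predicted temperature change at every sample time and summarize it as an RMS value per model; the smaller RMS identifies the better model.

```python
# RMS error of predicted vs. measured temperature *change* over time.
# All data invented for illustration.
import math

t_initial = 90.0                              # starting temperature, deg C
measured  = [90.0, 78.0, 69.5, 63.0, 58.0]    # experiment at successive times
model_a   = [90.0, 76.5, 68.0, 62.0, 57.5]    # e.g. the FEA result (hypothetical)
model_b   = [90.0, 80.0, 72.0, 66.0, 61.0]    # e.g. the CFD result (hypothetical)

def rms_delta_error(model, data, t0):
    """RMS mismatch between predicted and measured temperature drop."""
    errs = [(t0 - m) - (t0 - d) for m, d in zip(model, data)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

print(f"model A RMS = {rms_delta_error(model_a, measured, t_initial):.3f}")
print(f"model B RMS = {rms_delta_error(model_b, measured, t_initial):.3f}")
```

Because each error term is a difference of temperature drops, the ranking of the two models is unchanged by any linear change of temperature unit.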
 