a silly question
(OP)
Say we have modeled a transient heat transfer problem (cooldown of fluid within an insulated pipe) using two different numerical techniques: FEA and CFD.
Next, we conduct an experiment to obtain some data for benchmarking.
Now, when we compare the temperature results from the numerical techniques against the experimental data, is it justified to use a percentage basis for evaluating the accuracy of the simulations? Note that a percentage change in temperature is inherently meaningless because of the arbitrary nature of temperature units.
Is there a construct that will allow a relative measure of accuracy for the two techniques? Thank you in advance.
RE: a silly question
For example, I would say that comparing the ratio of final temperatures (actual / calculated) is quite misleading. The ratio of actual / calculated heat transfer coefficient (or the associated R value) would seem to be a reasonable way of evaluating the accuracy of the simulation.
RE: a silly question
Tobalcane
"If you avoid failure, you also avoid success."
RE: a silly question
The big question in my mind is this: are you adjusting the actuals to a percentage of the model results (BAD technique), or adjusting the model to better match the observed data (GOOD technique)?
David
RE: a silly question
TTFN
FAQ731-376: Eng-Tips.com Forum Policies
RE: a silly question
Then, when you run the experiment again, you can BEGIN comparing the accuracy of the assumed conditions to the "real world" variations in "real world" experimental conditions: somebody leaving a fan on, or turning the building AC/heat on/off, or leaving a door open, or bumping the test pipe or test thermometer. Loose insulation? Overly tight insulation? All sorts of things can change. Do change.
So, if you run the experiment the same way three times, you can judge how close the "real world" conditions are to EACH OTHER each time - which lets you judge how close the theory (models) is to the actual cases. Theoretically, in a "proper" statistically valid "design of experiments" you could calculate by sigmas how much variation you expect to get, what sigmas (normal variation) you can accept, etc. - but frankly, how many people can afford lots of repeated test runs?
Then - only then - can you "adjust" the modeled results for future model runs up or down (by some percent) so that the final calculated answer comes out right.
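To make the repeatability point concrete, here is a minimal sketch (with made-up numbers, not real data) of how you might compare a model's deviation against the run-to-run scatter of the experiment itself:

```python
import statistics

# Hypothetical final temperatures (deg C) from three nominally identical
# experimental runs, plus one model's prediction for the same point.
runs = [42.1, 41.4, 42.9]   # assumed measured values
model = 41.0                # assumed model prediction

run_mean = statistics.mean(runs)
run_sd = statistics.stdev(runs)   # run-to-run "real world" scatter

# The model is only meaningfully "off" if its deviation from the mean
# clearly exceeds the scatter of the experiment itself.
deviation = abs(model - run_mean)
print(f"experiment: {run_mean:.2f} +/- {run_sd:.2f} deg C")
print(f"model deviation: {deviation:.2f} deg C "
      f"({deviation / run_sd:.1f} sigma)")
```

If the model sits within a sigma or so of the experimental mean, you can't really distinguish model error from experimental noise.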
RE: a silly question
I share the oft-stated aversion to taking ratios of temperature values. So, is there any alternative way to "non-dimensionalize" the temperature data for making comparisons between the two competing methods?
PS. The idea is to see which model yields results closer to the experimental data. I am not trying to calibrate the models; I am assuming the models are "ok" and looking for the better model. I hope my intent is clearer now.
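One standard way to non-dimensionalize a cooldown is the excess-temperature ratio θ = (T − T_ambient) / (T_initial − T_ambient). Because it is a ratio of temperature *differences*, it is invariant under any affine change of scale (°C vs. °F), which avoids the arbitrary-units problem with ratios of raw temperatures. A small sketch with assumed illustrative values:

```python
def theta(T, T_amb, T0):
    """Dimensionless excess temperature: 1 at the start of cooldown, 0 at ambient.

    Ratios of temperature differences are invariant under affine scale
    changes (deg C <-> deg F), unlike ratios of raw temperatures.
    """
    return (T - T_amb) / (T0 - T_amb)

# Assumed values: 80 deg C initial, 20 deg C ambient, 50 deg C reading.
print(theta(50.0, 20.0, 80.0))    # 0.5
# The same physical state expressed in Fahrenheit gives the same theta:
print(theta(122.0, 68.0, 176.0))  # 0.5
```

Comparing each model's θ(t) curve against the experimental θ(t) curve gives a unit-independent basis for deciding which model is closer.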
RE: a silly question
Presumably you can get temperature information out of the model for each of your experimental measurement points at different times. Just look at the difference at each time.
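A minimal sketch of that idea (the data below is invented for illustration): take the pointwise differences between each model and the experiment at matching times, and summarize them with a root-mean-square error. Differences in kelvin/°C are physically meaningful even though ratios of raw temperatures are not.

```python
import math

# Assumed sample data: experimental temperatures (deg C) at a few times,
# with hypothetical FEA and CFD predictions at the same points.
t_exp = [80.0, 65.2, 54.1, 46.0, 40.3]
t_fea = [80.0, 66.0, 55.5, 47.2, 41.0]
t_cfd = [80.0, 64.0, 52.8, 44.9, 39.5]

def rmse(pred, meas):
    """Root-mean-square of the pointwise temperature differences (in K)."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, meas)) / len(meas))

print(f"FEA RMSE: {rmse(t_fea, t_exp):.2f} K")
print(f"CFD RMSE: {rmse(t_cfd, t_exp):.2f} K")
```

Whichever model has the smaller RMSE is, by this metric, the closer one - and the result does not depend on where the temperature scale puts its zero.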