
Neural Network Validity Range

(OP)
Hello All,

I work with a multilayer artificial neural network (ANN) used for nonlinear regression, i.e., as a universal interpolator.
It is a "classical" ANN whose topology is briefly described as:

- Input layer (n neurons) -> inputs X1, X2, ..., Xn
- Single hidden layer
- Output layer (1 neuron) -> output Y

Training Phase:
With reference to the training data set, and knowing the min/max values for each input series, the training data are scaled to fit into [-1, 1]. The ANN is then trained.

Exploitation Phase:
The ANN is operated on real data for prediction purposes.
Each input is scaled, and it is ensured that it does not exceed [-1, 1]; in other words, the real data fed to the ANN must be bounded by the min/max values of the training data set. Quite standard.
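The per-input scaling and range check described above can be sketched as follows (a minimal sketch; the training matrix and values are illustrative only):

```python
import numpy as np

# Illustrative training matrix: rows = samples, columns = inputs X1..Xn
X_train = np.array([[0.2, 10.0],
                    [0.8, 30.0],
                    [0.5, 20.0]])

x_min = X_train.min(axis=0)   # per-input minima from the training set
x_max = X_train.max(axis=0)   # per-input maxima from the training set

def scale(x):
    """Map each input linearly from [min, max] to [-1, +1]."""
    return 2.0 * (np.asarray(x, dtype=float) - x_min) / (x_max - x_min) - 1.0

# A new input passes the per-input check only if every scaled
# component stays inside [-1, +1]:
x_new = scale([0.5, 25.0])
in_range = bool(np.all((x_new >= -1.0) & (x_new <= 1.0)))
```

Note that this checks each input individually, which is exactly the limitation discussed below.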

Now my question:
What about the case where, for example, input X1 is within [X1min, X1max] of the training range and X2 is within [X2min, X2max] of the training range, but the pair (X1, X2) has never been "seen" by the ANN in that combination during training? Is there a cross-checking procedure that parses all input ranges and maps them into a verification function f(X1, X2, ..., Xn) = 0 or 1 (pass/fail)? If the function is not satisfied, what the ANN would be doing in reality is extrapolating.
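One candidate for such a pass/fail function f is membership in the convex hull of the training inputs: a point can satisfy every per-input range and still fall outside the hull, i.e., require extrapolation. A minimal sketch, assuming SciPy is available and using an illustrative toy training set:

```python
import numpy as np
from scipy.spatial import Delaunay  # assumes SciPy is available

def make_validity_check(X_train):
    """Return f(x) -> 1 if x lies inside the convex hull of the
    training inputs (interpolation), 0 otherwise (extrapolation)."""
    hull = Delaunay(X_train)
    def f(x):
        # find_simplex returns -1 for points outside the triangulation
        return 1 if hull.find_simplex(np.asarray(x, dtype=float)) >= 0 else 0
    return f

# Toy 2-input training set: a triangle inside the [-1, 1] x [-1, 1] box
X_train = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
f = make_validity_check(X_train)
# (0.9, 0.9) passes both per-input range checks [-1, 1] but lies
# outside the hull, so f flags it as extrapolation.
```

In higher dimensions the hull becomes expensive to build and increasingly empty relative to the bounding box, so distance- or density-based checks (discussed later in the thread) are often more practical.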

Thanks in advance

RE: Neural Network Validity Range

Finally, a question worthy of the CE forum... and I have no clue what the answer might be sad

Dan - Owner
http://www.Hi-TecDesigns.com

RE: Neural Network Validity Range

"Is there a cross-checking procedure that parses all input ranges and map them out into a verification function f(X1, X2,....Xn) = 0 or 1 (pass, fail)? should the function not be satisified, what the ANN would be doing in reality is extrapolating. "

Typically, the answer would be NO, as what you describe is what a computer programmer ought to do. Since the ANN isn't trained to respond to the new input, the resultant response could be anything. A human would respond with "Huh???" or "WTF?" but an ANN doesn't have that option, unless it's trained to do so. To that degree, an ANN is worse than a child.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Neural Network Validity Range

(OP)
IRstuff,

Thanks for the response; so such a function most probably does not exist.
Now my next questions:
- Is the type of cross-checking I referred to actually required? I mean, is this a relevant problem?
- If yes, how does one draw that sharp line, i.e., perform a cross-check in the sense I described? I suspect there is no easy answer to this one, but I would appreciate some direction/guidance.

In the system/program I work on, a form shows the user the min/max bounds for each input, which are not to be exceeded. But these inputs are checked individually. This is useful in that it gives landmarks, but I realize it may not be sufficient; for now I have added a remark that says "for reference only". Quite possibly this lacks completeness, and no guarantee (in the mathematical sense) is provided that what the network is producing has been "seen before".

RE: Neural Network Validity Range

That's a bit of a philosophical question, it seems to me; if your training set is bounded, then what is the meaning of a new, unbounded input? Is it valid or not? If it's valid, then the training set was insufficient. And there's no deterministic answer to what the ANN will do. If it's not valid, then it needs to be truncated or thrown out. An ANN isn't really about calculations, per se, it's about creating classifications of the input data, i.e., the "X" corresponds to this "Y", etc.


RE: Neural Network Validity Range

(OP)
Yes, it is a bit philosophical, I agree. But is it not the case that philosophy is the mother of all sciences? :)
Anyway, an attempt from my side: I think we need to define/agree on what the term "bounded" means as far as ANNs are concerned (can this be done without agonizing mathematical pain?). If a definition/criterion is agreed upon, we could use it to assess whether the training set is bounded or not. Only then would we treat unbounded input exactly according to your observations. Do you agree?

Maybe also some complementary considerations based on the article quoted above:

Quote:
"The simplest treatment of extrapolation involves placing individual upper and lower bounds on the permissible range of each independent variable. This approach will frequently overestimate the region of validity of the model since it assumes the independent variables are uncorrelated..."

PS: I skipped the rest of the paragraph in the article because I was quickly out of my depth with all the mathematical concepts it recalled...
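The article's point about correlated inputs can be illustrated with a Mahalanobis-distance check, which replaces the axis-aligned box with an ellipsoid aligned to the training data's correlation. This is a sketch with synthetic illustrative data; the threshold of 3.0 is an arbitrary choice, not a standard:

```python
import numpy as np

def mahalanobis_check(X_train, x, threshold=3.0):
    """Flag x as inside (True) or outside (False) the correlated
    training region, using the Mahalanobis distance to the training
    mean. The threshold is an illustrative choice."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    inv_cov = np.linalg.inv(cov)
    d = np.asarray(x, dtype=float) - mu
    dist = float(np.sqrt(d @ inv_cov @ d))
    return dist <= threshold

# Strongly correlated synthetic training inputs: x2 ~ x1
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)
X_train = np.column_stack([x1, x1 + rng.normal(0, 0.05, 200)])

# (0.5, 0.5) follows the correlation; (0.5, -0.5) passes the
# individual [-1, 1] bounds but violates the correlation structure.
```

For strongly nonlinear "banana-shaped" training clouds even the ellipsoid over-estimates the valid region, which is where local density checks come in.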

RE: Neural Network Validity Range

Perhaps I mis-spoke, because, technically, all training sets are bounded. What I intended to convey was more about how well the training set corresponds to all valid inputs. You mentioned that the training set was limited to ±1, but that you were concerned about inputs that might be more like ±2. So, the question is whether that case is realistic and probable, i.e., is your training set simply missing valid data, or what? If the former, then you'll need to expand your training set to account for all those extra valid inputs. If the latter, then you still need to expand your training set and teach the ANN that those inputs are not valid, and give it an output such as "NaN" or "Bad Input" that it can use to tell you that the input was invalid.


RE: Neural Network Validity Range

(OP)
IRstuff,

"Perhaps I mis-spoke, because, technically, all training sets are bounded. What I intended to convey was more about how well the training set corresponds to all valid inputs."

Agreed on the first sentence. If the inputs are valid, then there is correspondence de facto, or am I wrong here? So the whole question is about validity.

"You mentioned that the training set was limited to ±1, but that you were concerned about inputs that might be more like ±2."

I need to insist on this one; that is not exactly what I meant. I did not convey the idea correctly, sorry. Let me try to make it clearer.
What I am concerned about is this:

Let's consider an ANN with 2 inputs with data sets X1 and X2. We have a training set and a prediction set, so I will use subscript "t" for training input data and "p" for prediction input data. I will denote any element of the sets X1, X2 by x1, x2 respectively.
We have this situation:

TRAINING:
x1t ∈ [-1,+1]; x2t ∈ [-1,+1] (as you mentioned, all training sets are bounded)

PREDICTION:
Say the ANN is fed these inputs: x1p = 0.75 and x2p = 0.4.
As you can see, the condition x1p ∈ [-1,+1]; x2p ∈ [-1,+1] is satisfied.
But the only pair the ANN has seen during training that is as close as possible to (x1p, x2p) is this:
x1t = 0.70 and x2t = 0.4 (just for example's sake)

Question: is the ANN input vector x1p = 0.75, x2p = 0.4 to be considered a valid input?
In other words, is this a bounded or an unbounded input?
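One practical way to answer "has this pair been seen?" is to measure the distance, in scaled input space, from the query to the nearest training point and compare it against a tolerance the modeller must choose. A sketch using the numbers from the example above (the extra training pairs are made up for illustration):

```python
import numpy as np

def nearest_training_distance(X_train, x):
    """Euclidean distance (in scaled input space) from query x to the
    closest training sample; a small distance means the query is near
    data the ANN has actually seen."""
    diffs = X_train - np.asarray(x, dtype=float)
    return float(np.sqrt((diffs ** 2).sum(axis=1)).min())

# Closest training pair from the example is (0.70, 0.4); the other
# rows are hypothetical filler samples.
X_train = np.array([[0.70, 0.4], [-0.2, 0.1], [0.9, -0.8]])
d = nearest_training_distance(X_train, [0.75, 0.4])
# Whether d counts as "seen" depends on a tolerance the modeller
# must choose, e.g. accept if d <= 0.1.
```

This turns the binary bounded/unbounded question into a continuous one, which matches the intuition that 0.75 is "close enough" to 0.70 while some other in-range pairs are not.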

RE: Neural Network Validity Range

OK, that's a radically different question, and depends on the training set size and the differentiability of the classes. An ANN is more like a correlator, and in your specific example of linear regression, each input in the training set falls into some classification (dependent on the hidden layer) that activates some number of inputs to the output layer which essentially aggregates them into a single value. If you only have that one input as a training set, then your example production input would likely correlate well enough to trigger a similar response as the training input would.


RE: Neural Network Validity Range

(OP)
OK, thanks for your response. What if neither of the two inputs appears in the training set?
Even worse, what if I have six or eight ANN inputs and none of them appears in the training set (while certainly all of them are bounded to [-1,+1])?
How do you measure how well the correlator is correlating?

I think this measure is not to be confused with training/cross-validation indicators (RMSE, cross-entropy, or the like). This is a different beast, even though the two concepts would aggregate in the end to determine the reliability of the results.

Practically speaking, I am not chasing precision, but I would like to know if the form I have that shows min/max bounds to the user (so that these limits, applied to each input individually, are not exceeded) can be improved. We know that exceeding the min/max is a big no-no. We are still left with combinations of inputs that may lead to good or poor correlations, and apparently no rule of thumb to make an upfront verification.

I did an experiment with my program. I know that when two of my inputs approach 1 (for example, 0.90), the result is not good, for many reasons, including scarcity of training data combined with ANN saturation (which can be improved, but that is another topic). So I tried values where I know the training data for each input are dense; for example, I set both inputs to 0.5. I know that 0.5 is not a "difficult" value for either the first or the second input, yet what I observe is that the ANN output exhibits an awkward/unwanted shape. I inspected the data more closely and realized that the probability of the combination (0.5, 0.5) occurring in the training set is quite low, even though the probability of the ANN encountering the value 0.5 during training is high for each input individually. In fact, even considering a window 0.5 ± delta, 0.5 ± delta, points in that window occur only rarely.

If I make a variation and use 0.5 for the first input and -0.5 for the second, where I have a good density of points both individually and in combination, the output is good (the output function's shape corresponds to physical expectations).
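The density experiment described above can be reproduced with a simple window count: the number of training samples whose every input lies within ±delta of the query. Illustrative sketch with synthetic data that is dense near (0.5, -0.5) and empty near (0.5, 0.5):

```python
import numpy as np

def window_count(X_train, x, delta=0.05):
    """Number of training samples with every input within +/- delta
    of the query -- a crude local-density measure."""
    inside = np.all(np.abs(X_train - np.asarray(x, dtype=float)) <= delta,
                    axis=1)
    return int(inside.sum())

# Synthetic training set: x1 dense around 0.5, x2 dense around -0.5,
# so the combination (0.5, 0.5) never occurs even though 0.5 is a
# common value for the first input individually.
rng = np.random.default_rng(1)
X_train = np.column_stack([rng.uniform(0.3, 0.7, 300),
                           rng.uniform(-0.7, -0.3, 300)])

n_good = window_count(X_train, [0.5, -0.5])   # dense combination
n_bad = window_count(X_train, [0.5, 0.5])     # never seen together
```

A zero or near-zero count at the query is a cheap upfront warning that the ANN is about to extrapolate, even when every input passes its individual min/max check.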
