## Neural Network Validity Range

(OP)

Hello All,

I work with a multilayered artificial neural network (ANN) for the purpose of doing nonlinear regression / universal interpolation.

It is a "classical" ANN whose topology is, briefly:

-Input Layer (n neurons) -> X1, X2, ...., Xn Inputs

-Single Hidden Layer

-Output Layer (1 neuron) -> Y Output

**Training Phase:** With reference to the training data set, knowing the min/max values for each series of inputs, the training data are scaled to fit into [-1, 1]. Then the ANN is trained.
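For concreteness, the scaling step is the usual min/max mapping; a minimal Python sketch (the function name is just illustrative):

```python
def scale_to_unit(x, x_min, x_max):
    """Map a raw input from its training range [x_min, x_max] onto [-1, +1]."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# Example: an input trained on the range [10, 30]
scaled = scale_to_unit(20.0, 10.0, 30.0)  # midpoint of the range maps to 0.0
```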

**Exploitation Phase:** The ANN is operated on real data for prediction purposes.

Each input is scaled, and it is ensured that it does not exceed [-1, 1]; in other words, real data fed to the ANN must stay within the min/max values of the training data set. Quite standard.

**Now my question:** What about the case where, for example, input X1 is bounded within [X1min, X1max] of the training range, and X2 is bounded within [X2min, X2max] of the training range, but the pair (X1, X2) has never been "seen" by the ANN in that combination during training? Is there a cross-checking procedure that parses all input ranges and maps them into a verification function f(X1, X2, ..., Xn) = 0 or 1 (pass/fail)? If the function is not satisfied, what the ANN would actually be doing is extrapolating.
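To show the kind of verification function I have in mind, a minimal sketch: a per-input bounds check combined with a nearest-training-point check (the distance threshold `radius` is purely an assumption, not something I have derived):

```python
import math

def within_bounds(x, bounds):
    """Per-input range check: each x_i must lie inside its training [min, max]."""
    return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, bounds))

def near_training_data(x, training_points, radius):
    """Combination check: at least one training vector must lie within
    `radius` (Euclidean distance, in scaled units) of the query vector."""
    return any(math.dist(x, t) <= radius for t in training_points)

def valid_input(x, bounds, training_points, radius=0.1):
    """f(X1, ..., Xn) -> 1 (pass: interpolation) or 0 (fail: extrapolation)."""
    return int(within_bounds(x, bounds)
               and near_training_data(x, training_points, radius))
```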

Thanks in advance


## RE: Neural Network Validity Range

Typically, the answer would be no, as what you describe is something a computer programmer ought to implement. Since the ANN isn't trained to respond to the new input, the resulting response could be anything. A human would respond with "Huh???" or "WTF?", but an ANN doesn't have that option unless it's trained to do so. In that respect, an ANN is worse than a child.

TTFN (ta ta for now)

I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg

FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

## RE: Neural Network Validity Range

Thanks for the response; so such a function most probably does not exist.

Now my next questions...:

- Is the type of cross-checking I referred to actually required? I mean, is this a relevant problem?

- If yes, how does one draw that sharp line, i.e., perform a cross-check in the sense I described? I suspect there is no easy answer to this one, but I would appreciate some direction/guidance.

In the system/program I work on, there is a form that indicates to the User the min/max bounds for each input, not to be exceeded. But these inputs are checked individually. This could be useful, as it at least gives landmarks, but I realize it may not be sufficient; for now I have added a remark that says "for reference only". Quite possibly this lacks completeness, and no guarantee (in the mathematical sense) is provided that what the network is producing has been "seen before".

## RE: Neural Network Validity Range

A Neural Network Architecture that Computes Its Own Reliability

## RE: Neural Network Validity Range

Anyway, an attempt from my side: I think we need to define/agree on what the term "bounded" means as far as ANNs are concerned (can this be done without agonizing over the maths?). If a definition/criterion is agreed upon, we could use it to assess whether the training set is bounded or not. Only then would we treat unbounded input exactly according to your observations. Do you agree?

Maybe also some complementary considerations based on the article quoted above:

Quoted

The simplest treatment of extrapolation involves placing individual upper and lower bounds on the permissible range of each independent variable. This approach will frequently overestimate the region of validity of the model, since it assumes the independent variables are uncorrelated...

Unquoted

PS/ I skipped the rest of the paragraph in the article because I really felt out of my depth with all the mathematical concepts recalled...
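To illustrate the quoted point numerically with made-up data: if the two inputs are correlated (say the training pairs lie along the diagonal x2 ≈ x1), the per-axis bounding box accepts points that no training pair comes close to. A sketch, with an assumed distance threshold:

```python
import math

# Illustrative training data concentrated along the diagonal x2 = x1.
training = [(t / 10.0, t / 10.0) for t in range(-10, 11)]

def box_check(x):
    """The 'simplest treatment' from the quote: per-axis bounds only."""
    return all(-1.0 <= xi <= 1.0 for xi in x)

def distance_check(x, radius=0.15):
    """A combination-aware check: is any training vector nearby?"""
    return min(math.dist(x, t) for t in training) <= radius
```

The query (0.9, -0.9) passes the box check yet is far from every training pair, which is exactly the overestimation the article warns about.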

## RE: Neural Network Validity Range

Quoted

Perhaps I mis-spoke, because, technically, all training sets are bounded. What I intended to convey was more about how well the training set corresponds to all valid inputs.

Unquoted

Agreed on the first sentence. If inputs are valid, then there is correspondence de facto, or am I wrong here? So the whole question is about validity.

Quoted

You mentioned that the training set was limited to ±1, but that you were concerned about inputs that might be more like ±2.

Unquoted

I need to insist on this one. That is not exactly what I meant; I did not convey the idea correctly, sorry. Let me try to make it clearer.

What I am concerned about is this:

Let's consider an ANN with 2 inputs with data sets X1 and X2. We have a training set and a prediction set, so I will use the subscript "t" for training input data and "p" for prediction input data. I will denote any elements of the sets X1, X2 by x1, x2 respectively.

We have this situation:

TRAINING:

x1t ∈ [-1,+1] ; x2t ∈ [-1,+1] (as you mentioned, all training sets are bounded)

PREDICTION:

Say the ANN is fed these inputs: x1p = 0.75 and x2p = 0.4.

As you can see here the condition x1p ∈ [-1,+1] ; x2p ∈ [-1,+1] is satisfied.

But the closest pair to (x1p, x2p) that the ANN has seen during training is this:

x1t=0.70 and x2t=0.4 (just for example sake)

Question: is the ANN input vector x1p = 0.75, x2p = 0.4 to be considered a valid input?

In other words, is this a bounded or an unbounded input?
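One way to turn this question into a number is to measure the gap between the query and the nearest training pair, using the exact values above. A sketch; the acceptance threshold is an assumption on my part, to be tuned per data set:

```python
import math

training = [(0.70, 0.40)]          # the nearest pair the ANN saw, per the example
query = (0.75, 0.40)

# Gap to the nearest training vector, in scaled units.
gap = min(math.dist(query, t) for t in training)

# Pragmatic rule: accept if the gap is small relative to the typical
# spacing of the training data; 0.1 here is only an assumed threshold.
accept = gap <= 0.1
```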

## RE: Neural Network Validity Range

Even worse, what if I have six or eight ANN inputs and none of their combinations appear in the training set (while certainly each of them is bounded to [-1,+1])?

How do you measure how good the correlator is correlating?

I think this measure is not to be confused with training / cross-validation indicators (RMSE, cross-entropy, or the like). This is a different beast, even though the two concepts would aggregate at the end to determine the reliability of the results.

Practically speaking, I am not chasing precision, but I would like to know whether the form I have that shows the User the min/max (so that these limits, applied to each input individually, are not exceeded) can be improved. We know that exceeding the min/max is a big no-no. We are still left with combinations of inputs which may lead to good or poor correlations, and apparently no rule of thumb to make an upfront verification.

I did an experiment with my program. I know that when two of my inputs approach 1 (for example 0.90) the results are not good, for many reasons, including scarcity of training data combined with ANN saturation, which can be improved but that is another topic. So I try values where I know the training data for each input are dense; say, for example, I set the two inputs to 0.5. So I know that for the first and second inputs, 0.5 is not a "difficult" value, yet what I observe is that the ANN output exhibits an awkward/unwanted shape. I inspect the data more closely and realize that the probability of the combination (0.5, 0.5) being encountered in the training set is quite low, even though the probability of the ANN encountering the value 0.5 during training is high for each input individually. In fact, even considering a window of 0.5 ± delta on each input, points in that window occur only rarely.

If I make a variation and use 0.5 for the first input and -0.5 for the second, where I have a good density of points both individually and in combination, the output is good (the output function shape corresponds to physical expectations).
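The density inspection I describe can be sketched as a simple window count; the data and the delta below are illustrative only, mirroring the sparse (0.5, 0.5) versus dense (0.5, -0.5) situation:

```python
def local_density(query, training_points, delta=0.05):
    """Count training vectors falling inside the window query_i ± delta
    on every input axis (the ±delta window described above)."""
    return sum(
        all(abs(qi - ti) <= delta for qi, ti in zip(query, t))
        for t in training_points
    )

# Illustrative training data: dense around (0.5, -0.5), sparse around (0.5, 0.5).
train = [(0.5, -0.5 + k / 100.0) for k in range(-3, 4)] + [(0.5, 0.44)]

local_density((0.5, 0.5), train)    # 0 -> expect a poorly supported prediction
local_density((0.5, -0.5), train)   # 7 -> well supported
```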