# XOR Java Neural Network

I am attempting an XOR neural network in Java, but the network always predicts the output of the last pattern it was trained on.

Here is my code:

```java
// Weight is a double[9]: [0..3] input->hidden, [4..5] hidden->output, [6..8] biases
for( int i = 0; i < 4; i++ ) {
    diff = 1;
    while( diff > 0.01 ) {
        SumError = 0;

        // Forward pass
        Y1 = ( InputOne[i] * Weight[0] ) + ( InputTwo[i] * Weight[1] ) + Weight[6];
        Y1 = 1 / ( 1 + Math.exp( -Y1 ) );
        Node1[i] = Y1;

        Y2 = ( InputOne[i] * Weight[2] ) + ( InputTwo[i] * Weight[3] ) + Weight[7];
        Y2 = 1 / ( 1 + Math.exp( -Y2 ) );
        Node2[i] = Y2;

        Y3 = ( Y1 * Weight[4] ) + ( Y2 * Weight[5] ) + Weight[8];
        Y3 = 1 / ( 1 + Math.exp( -Y3 ) );
        Node3[i] = Y3;

        diff = Math.abs( Result[i] - Y3 );
        System.out.println( i + " " + Result[i] + " " + Y3 + " " + diff );

        // Error signals
        Delta3[i] = Y3 * ( 1 - Y3 ) * ( Result[i] - Y3 );
        Delta2[i] = Node2[i] * ( 1 - Node2[i] ) * ( Delta3[i] * Weight[5] );
        Delta1[i] = Node1[i] * ( 1 - Node1[i] ) * ( Delta3[i] * Weight[4] );

        // Update weights
        Weight[2] = Weight[2] + ( ( WeightChange[2] * alpha ) + ( eta * Delta2[i] * InputOne[i] ) );
        Weight[3] = Weight[3] + ( ( WeightChange[3] * alpha ) + ( eta * Delta2[i] * InputTwo[i] ) );
        Weight[0] = Weight[0] + ( ( WeightChange[0] * alpha ) + ( eta * Delta1[i] * InputOne[i] ) );
        Weight[1] = Weight[1] + ( ( WeightChange[1] * alpha ) + ( eta * Delta1[i] * InputTwo[i] ) );
        Weight[4] = Weight[4] + ( ( WeightChange[4] * alpha ) + ( eta * Delta3[i] * Y1 ) );
        Weight[5] = Weight[5] + ( ( WeightChange[5] * alpha ) + ( eta * Delta3[i] * Y2 ) );
        Weight[6] = Weight[6] + ( ( WeightChange[6] * alpha ) + ( eta * Delta1[i] ) );
        Weight[7] = Weight[7] + ( ( WeightChange[7] * alpha ) + ( eta * Delta2[i] ) );
        Weight[8] = Weight[8] + ( ( WeightChange[8] * alpha ) + ( eta * Delta3[i] ) );

        for( int k = 0; k < 9; k++ ) {
            WeightChange[k] = OldWeight[k] - Weight[k];
            OldWeight[k] = Weight[k];
        }

        // Global error
        for( int j = 0; j < 4; j++ ) {
            Y1 = ( InputOne[j] * Weight[0] ) + ( InputTwo[j] * Weight[1] ) + Weight[6];
            Y1 = 1 / ( 1 + Math.exp( -Y1 ) );
            Y2 = ( InputOne[j] * Weight[2] ) + ( InputTwo[j] * Weight[3] ) + Weight[7];
            Y2 = 1 / ( 1 + Math.exp( -Y2 ) );
            Y3 = ( Y1 * Weight[4] ) + ( Y2 * Weight[5] ) + Weight[8];
            Y3 = 1 / ( 1 + Math.exp( -Y3 ) );
            //System.out.println( Y3 + " " + Math.abs( Result[j] - Y3 ) );
            SumError = SumError + Math.pow( ( Result[j] - Y3 ) , 2 );
        }
        SumError = SumError * 0.5;
    }
    Count = Count + 1;
}
```

Here `InputOne`, `InputTwo`, and `Result` hold the XOR truth-table entries, `Weight` is an array of nine randomly initialised weights, and `WeightChange` is the momentum term.
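For context, this is roughly the setup I am working with: the four XOR truth-table rows and the nine weights (six connections plus three biases for the 2-2-1 network). The class name and the initialisation range here are illustrative, not taken from my actual code:

```java
import java.util.Random;

// Hypothetical setup sketch: XOR truth table plus random weight initialisation.
public class XorSetup {
    public static void main( String[] args ) {
        double[] InputOne = { 0, 0, 1, 1 };
        double[] InputTwo = { 0, 1, 0, 1 };
        double[] Result   = { 0, 1, 1, 0 }; // Result[i] = InputOne[i] XOR InputTwo[i]

        // Nine weights: four input->hidden, two hidden->output, three biases.
        Random rand = new Random();
        double[] Weight = new double[9];
        for( int k = 0; k < Weight.length; k++ ) {
            // Small symmetric range keeps the sigmoid away from saturation at the start.
            Weight[k] = rand.nextDouble() - 0.5;
        }

        for( int i = 0; i < 4; i++ ) {
            System.out.println( InputOne[i] + " XOR " + InputTwo[i] + " = " + Result[i] );
        }
    }
}
```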

When I then feed the truth table back through the network, every output is more or less the same as the output for the last pattern it trained on.

Does anybody have any ideas?