Neural Networks 1

Luxan
Posts: 273
Joined: Feb 18, 2009 12:47
Location: New Zealand

Re: Neural Networks 1

Post by Luxan »

As usual, once I get a flood of feedback I start coding.

I might also examine this for 8-bit signed integers; anyway, here's the code.

Code: Select all

'
' ----------------------------------------------------------------------
' int_bit3.bas
'
' signed integer to binary representation 
'               and 
' binary representation to signed integer
'
' (c) 2025 sciwiseg@gmail.com   , luxan 
'
' --------------------------- Declarations -----------------------------
'
declare function bit2int(a1() as ushort,isize as integer) as integer
declare sub int2bit(a as integer,a1() as ushort,isize as integer) 
'
' ------------------------- Variables & arrays -------------------------
'
dim as integer a,b
dim as ushort a1(0 to 16),b1(0 to 16),i,j
'
' ----------------------------------------------------------------------
'
'     Int 2 Bit
a=-15
print "a =";a
print " int 2 bit, from routine "
int2bit(a ,a1(), 16) 
'
' ----------------------------- bit 2 int ------------------------------
'
b=bit2int(a1() ,16) 
print " bit 2 int, from routine "
print "b =";b
'
' ------------------------ Stress testing routines ---------------------
'
'           In the realm of signed 16-bit integers, the range extends 
'                            from -32,768 to 32,767.
'
for a=-32768 to 32767
    int2bit(a ,a1(), 16)
    b=bit2int(a1() ,16)     
if a <> b then 
   print " err @ ";a
   exit for 
end if    
next a

print
print " Done "

end
'
' ======================================================================
'
'
sub int2bit(a as integer,a1() as ushort,isize as integer) 
'
'   Convert a signed integer to its binary (bit-array) representation .
'
dim as integer i,int_size
dim as ushort b

int_size=sizeof(a)*8-1
'print " size ";int_size
if isize<int_size then int_size=isize end if
'print " size' ";int_size
for i=0 to int_size-1
    b=0
    b=-Bit(a,i) 
    a1(i)=b
next i
' 
'
end sub
'
' ----------------------------------------------------------------------
'
function bit2int(a1() as ushort,isize as integer) as integer
'
'   Convert bits to signed integer representation .
'
dim as integer b,i,int_size
dim as ushort a

int_size=sizeof(b)*8-1
'print " size ";int_size
if isize<int_size then int_size=isize end if
'print " size' ";int_size
b=0
for i=0 to int_size-1
 a=a1(i)
 if a=1 then  b=b+2^i end if
next i

' Check for a negative value in two's complement representation
If (a1(int_size-1) = 1) Then
' If highest bit is set, it's a negative number in two's complement
   b = b - (1 Shl int_size) ' Convert to negative
End If
' 
  return b
'
end function
'
' ----------------------------------------------------------------------
'
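'
' Worked example of the sign correction in bit2int:
'   a = -15 as a 16-bit two's complement pattern is 1111 1111 1111 0001.
'   Summing the set bits as unsigned gives 65521; because bit 15 is set,
'   the routine subtracts 1 Shl 16 = 65536, and 65521 - 65536 = -15.
'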


Luxan
Posts: 273
Joined: Feb 18, 2009 12:47
Location: New Zealand

Re: Neural Networks 1

Post by Luxan »

The previous code is also valid for 4-bit and 8-bit signed integers.
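For an 8-bit check only the size argument and the stress-test range need to change; here is a quick, untested sketch reusing int2bit and bit2int from the code above.

Code: Select all

' 8-bit check, reusing int2bit / bit2int from int_bit3.bas
dim as integer a,b
dim as ushort a1(0 to 8)
'
for a=-128 to 127            ' range of signed 8-bit integers
    int2bit(a ,a1(), 8)
    b=bit2int(a1() ,8)
    if a <> b then
       print " err @ ";a
       exit for
    end if
next a
print
print " 8-bit check done "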
Luxan
Posts: 273
Joined: Feb 18, 2009 12:47
Location: New Zealand

Re: Neural Networks 1

Post by Luxan »

This is the present status of the NN for logic
emulation.

The use of randomised shuffling of the indexes for the input and
corresponding training data is recommended.
The Sigmoid activation function may not be optimal.
Temporarily storing the weights and biases in auxiliary
arrays, then copying them back whenever the loss
increases, may improve training; see the sketch below.
A different NN structure may return better results.
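
A minimal sketch of that copy-back idea, assuming the NeuralNetwork and Matrix types and the total_loss accumulator from the code further down; best_weights, best_biases and best_loss are hypothetical names:

Code: Select all

' Sketch only: keep a best-so-far copy of the weights and biases,
' restore it whenever an epoch's total loss gets worse.
Dim best_weights(nn.num_layers - 2) As Matrix
Dim best_biases(nn.num_layers - 2) As Matrix
Dim As Double best_loss = 1e30

' ... inside the epoch loop, after total_loss has been summed:
If total_loss < best_loss Then
    best_loss = total_loss
    For w As Integer = 0 To nn.num_layers - 2
        best_weights(w) = nn.weights(w)   ' remember this epoch's parameters
        best_biases(w)  = nn.biases(w)
    Next w
Else
    For w As Integer = 0 To nn.num_layers - 2
        nn.weights(w) = best_weights(w)   ' roll back to the best epoch so far
        nn.biases(w)  = best_biases(w)
    Next w
End If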

Upon a successful training and testing episode, the
weights and biases should be saved to a file for
future inference.
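
A minimal save sketch, assuming the NeuralNetwork and Matrix types from the code further down and the plain-text "Layer n Weights:" / "Layer n Biases:" layout that the reader program later in this thread expects; NNSave is a hypothetical helper name:

Code: Select all

Sub NNSave(nn As NeuralNetwork, fileName As String)
    Dim As Integer f = FreeFile()
    Open fileName For Output As #f
    For i As Integer = 0 To nn.num_layers - 2
        ' One comma-separated row per line of the weight matrix
        Print #f, "Layer"; i; " Weights:"
        For j As Integer = 0 To nn.weights(i).rows - 1
            For k As Integer = 0 To nn.weights(i).cols - 1
                Print #f, nn.weights(i).data1(j, k);
                If k < nn.weights(i).cols - 1 Then Print #f, ",";
            Next k
            Print #f, ""
        Next j
        ' One comma-separated line for the bias column vector
        Print #f, "Layer"; i; " Biases:"
        For j As Integer = 0 To nn.biases(i).rows - 1
            Print #f, nn.biases(i).data1(j, 0);
            If j < nn.biases(i).rows - 1 Then Print #f, ",";
        Next j
        Print #f, ""
    Next i
    Close #f
End Sub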

At the moment there are errors arising for some inputs
when we test the correctness of the inference.

Code: Select all


/' 

    DS_NN2b2.bas

    (c) Copyright 2025, sciwiseg@gmail.com

    Extended FreeBasic Neural Network with Multiple Hidden Layers
    Includes suggestions for loss calculation, matrix transpose, and inference testing.
'/

' Define matrix operations
Type Matrix
    rows As Integer
    cols As Integer
    ReDim data1(1,1) As Double
End Type

' Initialize a matrix
Sub MatrixInit(m As Matrix, rows As Integer, cols As Integer)
    m.rows = rows
    m.cols = cols
    ReDim m.data1(rows - 1, cols - 1)
End Sub

' Multiply two matrices
Function MatrixMultiply(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, b.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To b.cols - 1
            result.data1(i, j) = 0
            For k As Integer = 0 To a.cols - 1
                result.data1(i, j) += a.data1(i, k) * b.data1(k, j)
            Next k
        Next j
    Next i
    
    Return result
End Function

' Add two matrices
Function MatrixAdd(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, a.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To a.cols - 1
            result.data1(i, j) = a.data1(i, j) + b.data1(i, j)
        Next j
    Next i
    
    Return result
End Function

' Subtract two matrices
Function MatrixSubtract(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, a.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To a.cols - 1
            result.data1(i, j) = a.data1(i, j) - b.data1(i, j)
        Next j
    Next i
    
    Return result
End Function

' Transpose a matrix
Function MatrixTranspose(m As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, m.cols, m.rows)
    
    For i As Integer = 0 To m.rows - 1
        For j As Integer = 0 To m.cols - 1
            result.data1(j, i) = m.data1(i, j)
        Next j
    Next i
    
    Return result
End Function

' Apply a function (e.g., sigmoid) to a matrix
Sub MatrixApplyFunc(m As Matrix, func As Function (x As Double) As Double)
    For i As Integer = 0 To m.rows - 1
        For j As Integer = 0 To m.cols - 1
            m.data1(i, j) = func(m.data1(i, j))
        Next j
    Next i
End Sub

' Sigmoid activation function
Function Sigmoid(x As Double) As Double
    Return 1 / (1 + Exp(-x))
End Function

' Derivative of sigmoid
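' Note: x here is already the sigmoid output (it is applied to activated layer
' outputs in NNTrain), so the derivative s*(1-s) is simply x*(1-x).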
Function SigmoidDerivative(x As Double) As Double
    Return x * (1 - x)
End Function

' Threshold Matrix to 0 or 1
Sub Threshold(predicted As Matrix)
    Dim dat1 As Double = 0
    Dim dat2 As Double = 0
    For i As Integer = 0 To predicted.rows - 1
        For j As Integer = 0 To predicted.cols - 1
            dat2 = 0
            dat1 = predicted.data1(i, j)
            If dat1 > 0.5 Then dat2 = 1
            predicted.data1(i, j) = dat2
        Next j
    Next i
End Sub

' Mean Squared Error (MSE) loss function
Function MeanSquaredError(predicted As Matrix, target As Matrix) As Double
    Dim error1 As Double = 0
    For i As Integer = 0 To predicted.rows - 1
        For j As Integer = 0 To predicted.cols - 1
            error1 += (predicted.data1(i, j) - target.data1(i, j)) ^ 2
        Next j
    Next i
    Return error1 / (predicted.rows * predicted.cols)
End Function

' Neural Network Type
Type NeuralNetwork
    num_layers As Integer
    redim layer_sizes(0) As Integer
    redim weights(0) As Matrix
    redim biases(0) As Matrix
End Type

' Initialize the neural network
Sub NNInit(nn As NeuralNetwork, layer_sizes() As Integer)
        dim as integer ls_ub
        dim as single n_in, n_out
        ls_ub=ubound(layer_sizes)
        n_in = layer_sizes(0)        
        n_out = layer_sizes(ls_ub)
        Dim As Single scale = Sqr(6.0f / (n_in + n_out)) ' for sigmoid activation function.        
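        ' Xavier/Glorot-style uniform scale, computed here from the network's input and output layer sizes.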
    nn.num_layers = UBound(layer_sizes) + 1
    Print " nn.num_layers "; nn.num_layers
    
    ReDim nn.layer_sizes(nn.num_layers - 1)
    For i As Integer = 0 To nn.num_layers - 1
        nn.layer_sizes(i) = layer_sizes(i)
    Next i
    
    ReDim nn.weights(nn.num_layers - 2)
    ReDim nn.biases(nn.num_layers - 2)
 
 ' Use the system timer to seed the random number generator
    Randomize Timer
    For i As Integer = 0 To nn.num_layers - 2
        MatrixInit(nn.weights(i), layer_sizes(i + 1), layer_sizes(i))
        MatrixInit(nn.biases(i), layer_sizes(i + 1), 1)
        
        ' Randomize weights and biases
        For j As Integer = 0 To layer_sizes(i + 1) - 1
            For k As Integer = 0 To layer_sizes(i) - 1
                nn.weights(i).data1(j, k) = (Rnd*2.0f - 1.0f)*scale '*scale ' Range: -1 to 1
            Next k
            
            nn.biases(i).data1(j, 0) = 0.001f*(Rnd*2.0f - 1.0f)*scale
        Next j
    Next i


End Sub

' Feedforward pass
Function NNFeedforward(nn As NeuralNetwork, input1 As Matrix) As Matrix
    Dim layer_output As Matrix = input1
    
    For i As Integer = 0 To nn.num_layers - 2
        layer_output = MatrixMultiply(nn.weights(i), layer_output)
        layer_output = MatrixAdd(layer_output, nn.biases(i))
        MatrixApplyFunc(layer_output, @Sigmoid)
    Next i
    
    Return layer_output
End Function
'
Sub NNTrain(nn As NeuralNetwork, input1 As Matrix, target As Matrix, learning_rate As Double)
    ' Feedforward
    Dim layer_outputs(nn.num_layers - 1) As Matrix
    layer_outputs(0) = input1
    
    For i As Integer = 0 To nn.num_layers - 2
        layer_outputs(i + 1) = MatrixMultiply(nn.weights(i), layer_outputs(i))
        layer_outputs(i + 1) = MatrixAdd(layer_outputs(i + 1), nn.biases(i))
        MatrixApplyFunc(layer_outputs(i + 1), @Sigmoid)  ' Apply the Sigmoid function
    Next i
    
    ' Backpropagation
    Dim errors(nn.num_layers - 1) As Matrix
    errors(nn.num_layers - 1) = MatrixSubtract(target, layer_outputs(nn.num_layers - 1))
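    ' Walk backwards through the layers: each layer's gradient is its sigmoid
    ' derivative scaled by that layer's error and the learning rate; the weight
    ' deltas are that gradient times the transpose of the previous layer's output.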
    
    For i As Integer = nn.num_layers - 2 To 0 Step -1
        Dim gradients As Matrix = layer_outputs(i + 1)
        MatrixApplyFunc(gradients, @SigmoidDerivative)  ' Apply the Sigmoid Derivative
        For j As Integer = 0 To gradients.rows - 1
            gradients.data1(j, 0) *= errors(i + 1).data1(j, 0) * learning_rate
        Next j
        
        Dim layer_outputs_T As Matrix = MatrixTranspose(layer_outputs(i))
        Dim weights_deltas As Matrix = MatrixMultiply(gradients, layer_outputs_T)
        nn.weights(i) = MatrixAdd(nn.weights(i), weights_deltas)
        nn.biases(i) = MatrixAdd(nn.biases(i), gradients)
        
        If i > 0 Then
            Dim weights_T As Matrix = MatrixTranspose(nn.weights(i))
            errors(i) = MatrixMultiply(weights_T, errors(i + 1))
        End If
    Next i
End Sub

Sub ConvertToBinary(output_data() As Double, binary_output() As Integer)
    Dim i As Integer
    For i = 0 To UBound(output_data)
        If output_data(i) > 0.5 Then
            binary_output(i) = 1
        Else
            binary_output(i) = 0
        End If
    Next i
End Sub

' ----------------------------- Main -----------------------------------
' Example usage
Dim nn As NeuralNetwork

' Define layer sizes, for two signed 4-bit integer inputs:
' 8 input nodes, 32 nodes in the first hidden layer,
' 16 nodes in the second hidden layer, 8 nodes in the third hidden layer,
' and 8 output nodes.
'
dim as integer i,a,b,c,j,k
Dim layer_sizes(4) As Integer = {8, 32, 16, 8, 8}

' Convert an integer to a binary array
Sub int2bit(value As Integer, binaryArray() As Integer, numBits As Integer)
    Dim i As Integer
    For i = 0 To numBits - 1
        binaryArray(i) = (value Shr i) And 1
    Next i
End Sub

Dim as integer numPairs = 256
Dim as integer numBits = 4
Dim inputs(numPairs-1) As Matrix
Dim targets(numPairs-1) As Matrix

' Define binary arrays
Dim a1(numBits-1) As Integer
Dim b1(numBits-1) As Integer
Dim c1(2 * numBits - 1) As Integer

' Calculate range for signed numBits integers
Dim minValue As Integer = -2 ^ (numBits - 1)
Dim maxValue As Integer = 2 ^ (numBits - 1) - 1

' Loop through the range of signed numBits integers
i = 0
For b = minValue To maxValue
    For a = minValue To maxValue
        int2bit(a, a1(), numBits)
        int2bit(b, b1(), numBits)
        c = a * b
        int2bit(c, c1(), 2 * numBits)
        
        ' Initialize matrices and populate them
        MatrixInit(inputs(i), 2 * numBits, 1)
        For k = 0 To numBits - 1
            inputs(i).data1(k, 0) = a1(k)
            inputs(i).data1(k + numBits, 0) = b1(k)
        Next k

        MatrixInit(targets(i), 2 * numBits, 1)
        For k = 0 To 2 * numBits - 1
            targets(i).data1(k, 0) = c1(k)
        Next k

        i += 1
    Next a
Next b
print
print " inputs and targets set "
print

' Initialize the neural network
NNInit(nn, layer_sizes())


' Train the network with all input-output pairs
Dim learningRate As Double = 0.01 ' Adjust learning rate
ReDim output1(numPairs-1) As Matrix

' Training loop with shuffling

Dim total_loss As Double
Dim indices(numPairs-1) As Integer

' Initialize indices array
For j = 0 To numPairs-1
    indices(j) = j
Next j
Dim As Integer r
Dim idx As Integer


For i = 1 To 20000 ' Increase number of iterations
    Randomize Timer
    
    total_loss = 0

    ' Shuffle indices
    For j = 0 To numPairs-2
        r = Int(Rnd * (numPairs - j)) + j
        Swap indices(j), indices(r)
    Next j

    ' Training with shuffled indices
    For j = 0 To numPairs-1
        idx  = indices(j)
        NNTrain(nn, inputs(idx), targets(idx), learningRate)
        output1(idx) = NNFeedforward(nn, inputs(idx))
        total_loss += MeanSquaredError(output1(idx), targets(idx))
    Next j

    ' Print progress every 1000 iterations
    If i Mod 1000 = 0 Then
        Print "Iteration: "; i
        Print "Average Loss: "; total_loss / numPairs
        Print "-------------------------"
    End If
Next i
'
'  Convert to valid binary representation
'

        For j = 0 To 3 ' Print a sample of the first 4 pairs for demonstration
            Print "Input: [";
            For k = 0 To 2 * numBits - 1
                Print inputs(j).data1(k, 0);
                If k < 2 * numBits - 1 Then Print ", ";
            Next k
            Print "] -> Output: [";
            For k = 0 To 2 * numBits - 1
                ' Threshold the raw output to 0 or 1 for display
                If output1(j).data1(k, 0) > 0.5 Then
                    Print 1;
                Else
                    Print 0;
                End If
                If k < 2 * numBits - 1 Then Print ", ";
            Next k
            Print "]";
            print "_(";
         For k = 0 To 2 * numBits - 1
           print targets(j).data1(k, 0);
           If k < 2 * numBits - 1 Then Print ", ";
         Next k           
            Print ")" 
         Next j

' <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

' Test the trained network with input values of either 0 or 1
Print
Print "Testing Trained Network with Inputs used for training:"

total_loss = 0 ' Reset the loss accumulator before testing the trained network
ReDim output1(numPairs - 1) As Matrix

For j As Integer = 0 To numPairs - 1
    ' Perform feedforward
    output1(j) = NNFeedforward(nn, inputs(j))
    
    ' Apply threshold to convert to binary
    Threshold(output1(j))
    
    ' Calculate the mean squared error
    total_loss += MeanSquaredError(output1(j), targets(j))
Next j

' Print the total error
Print "Total error: "; total_loss

end


Luxan
Posts: 273
Joined: Feb 18, 2009 12:47
Location: New Zealand

Re: Neural Networks 1

Post by Luxan »

Using the structure below for the NN, the loss progresses
like this.

Upon testing the NN, the total error of 0.01267549334836566
is quite good; however, it needs to be 0.

More training epochs and a further tweak to the NN structure
might get me there.

Dim layer_sizes(4) As Integer = {8, 64, 16, 16, 8}

inputs and targets set

nn.num_layers 5

Iteration: 1000
Average Loss: 0.003938545973649954
-------------------------
Iteration: 2000
Average Loss: 0.0005728368007013788
-------------------------
Iteration: 3000
Average Loss: 0.0003542277281416577
-------------------------
Iteration: 4000
Average Loss: 0.0001764990238313419
-------------------------
Iteration: 5000
Average Loss: 0.000118669819950343
-------------------------
Iteration: 6000
Average Loss: 0.0001062033765087364
-------------------------
Iteration: 7000
Average Loss: 7.596880390132646e-05
-------------------------
Iteration: 8000
Average Loss: 6.451114874791795e-05
-------------------------
Iteration: 9000
Average Loss: 5.587898324739697e-05
-------------------------
Iteration: 10000
Average Loss: 4.951364589205335e-05
-------------------------

Input: [ 0, 0, 0, 1, 0, 0, 0, 1]
Output: [ 0, 0, 0, 0, 0, 0, 1, 0]
Expected:( 0, 0, 0, 0, 0, 0, 1, 0)

Input: [ 1, 0, 0, 1, 0, 0, 0, 1]
Output: [ 0, 0, 0, 1, 1, 1, 0, 0]
Expected:( 0, 0, 0, 1, 1, 1, 0, 0)

Input: [ 0, 1, 0, 1, 0, 0, 0, 1]
Output: [ 0, 0, 0, 0, 1, 1, 0, 0]
Expected:( 0, 0, 0, 0, 1, 1, 0, 0)

Input: [ 1, 1, 0, 1, 0, 0, 0, 1]
Output: [ 0, 0, 0, 1, 0, 1, 0, 0]
Expected:( 0, 0, 0, 1, 0, 1, 0, 0)

Testing Trained Network with Inputs used for training:
Total error: 0.01267549334836566
Luxan
Posts: 273
Joined: Feb 18, 2009 12:47
Location: New Zealand

Re: Neural Networks 1

Post by Luxan »

Part of the larger project, this reads the Neural Network weights and biases from a text file.

I've used a PyTorch equivalent of the FreeBASIC Neural Network code, with some interesting results.

From there I saved the weights and biases as a text file; the same should also be possible from
the previous FreeBASIC NN code, which is worthwhile if you have a lot of training epochs.
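
The parser below only needs lines tagged "Layer ... Weights:" or "Layer ... Biases:", each followed by comma-separated numbers, so the saved file looks roughly like this (the values shown are purely illustrative):

Code: Select all

Layer 0 Weights:
 0.1273, -0.5521,  0.0042,  0.3318
-0.2210,  0.4475, -0.0931,  0.1187
Layer 0 Biases:
 0.0113, -0.0077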

Code: Select all

'
'  NN_WB3f.bas
'
'      (c) 2025 sciwiseg@gmail.com 
'
'   As this is mostly my work, I've included the above.
'
'   I was wrestling with A.I. agents attempting to construct
' something sensible; eventually I did a major rewrite.
'
'  If I wasn't attempting to tackle so many tasks at this time,
' I'd use my established methods for constructing code.
'
Function comma_cnt(line2 As String) As Integer
    ' Count the number of commas in a string
    Dim As Integer i, n
    Dim As String*1 c

    n = 0
    For i = 1 To Len(line2)
        c = Mid(line2, i, 1)
        If c = "," Then n += 1
    Next i

    Return n
End Function

' --------------------------------- main -------------------------------
'
Dim As Integer fileNumber = FreeFile()
Dim As String fileName = "model_weights_biases.txt" ' Replace with your file name
Dim As String line1
Dim As Integer lineIndex = 0, currentLayer = -1, tn, n
Dim As Boolean isWeights = FALSE, isBiases = FALSE

' Arrays to store the comma counts for weights and biases
Dim As Integer weightsCounts(100), biasesCounts(100) ' Adjust size as needed

' Arrays to store the extracted numerical values
ReDim As Double WeightsData(100, 5000) ' Adjust size for maximum weights per layer
ReDim As Double BiasesData(100, 1000) ' Adjust size for maximum biases per layer

' ......................................

' Determine the number of commas (counts) for each layer
Dim As Integer totalLayers = -1
'
Open fileName For Input As #fileNumber
While Not Eof(fileNumber)
    Line Input #fileNumber, line1

    ' Identify layer start and count total layers
    If InStr(line1, "Layer") And InStr(line1, "Weights:") Then
        totalLayers += 1
        isWeights = TRUE
        isBiases = FALSE
        weightsCounts(totalLayers) = 0
        biasesCounts(totalLayers) = 0
        Continue While
    End If

    If InStr(line1, "Layer") And InStr(line1, "Biases:") Then
        isWeights = FALSE
        isBiases = TRUE
        Continue While
    End If

    ' Count commas for weights
    If isWeights Then
        weightsCounts(totalLayers) += comma_cnt(line1) + 1
    End If

    ' Count commas for biases
    If isBiases Then
        biasesCounts(totalLayers) += comma_cnt(line1) + 1
    End If
Wend
Close #fileNumber
totalLayers += 1 ' Correct total layers
'
' ......................................................
'
'   Store layer tag info into an array.
dim as string ltag(2*totalLayers)
dim as integer k
Open fileName For Input As #fileNumber
k=0
While Not Eof(fileNumber)
    Line Input #fileNumber, line1
    ' Identify layer start and count total layers
    If InStr(line1, "Layer")  Then
        if k<2*totalLayers then ltag(k)=line1 end if
        k =k + 1
        Continue While
    End If
Wend
Close #fileNumber
'
' ......................................


' Dynamically store numerical values in arrays
Dim As Integer weightIndex, biasIndex, Ub
'
Open fileName For Input As #fileNumber
currentLayer = -1
isWeights = FALSE
isBiases = FALSE
weightIndex = 0

While Not Eof(fileNumber)
    Line Input #fileNumber, line1

    ' Handle weights tag
    If InStr(line1, "Layer") And InStr(line1, "Weights:") Then
        currentLayer += 1
        isWeights = TRUE
        isBiases = FALSE
        weightIndex = 0
        Continue While
    End If

    ' Handle biases tag
    If InStr(line1, "Layer") And InStr(line1, "Biases:") Then
        isWeights = FALSE
        isBiases = TRUE
        biasIndex = 0
        Continue While
    End If

    ' Extract weights data
    If isWeights Then
        Dim As Integer pos1 = 1
        Dim As Integer commaPos
        
        'print " currentLayer:   ";currentLayer
        While pos1 <= Len(line1)
            commaPos = InStr(pos1, line1, ",")
            If commaPos = 0 Then commaPos = Len(line1) + 1

            WeightsData(currentLayer, weightIndex) = Val(Mid$(line1, pos1, commaPos - pos1))
            weightIndex += 1
            pos1 = commaPos + 1
        Wend
    End If

    ' Extract biases data
    If isBiases Then
        Dim As Integer pos1 = 1
        Dim As Integer commaPos
        While pos1 <= Len(line1)
            commaPos = InStr(pos1, line1, ",")
            If commaPos = 0 Then commaPos = Len(line1) + 1
            BiasesData(currentLayer, biasIndex) = Val(Mid$(line1, pos1, commaPos - pos1))
            biasIndex += 1
            pos1 = commaPos + 1
        Wend
    End If
Wend
Close #fileNumber

' ......................................

' Output the stored numerical values
Print "Stored Weights and Biases:"
For i As Integer = 0 To totalLayers - 1
    Print
    Print
    Print "Layer "; i
    Print "  Weights: ";
    For j As Integer = 0 To weightsCounts(i) - 1
          Print WeightsData(i, j); 
       if j< weightsCounts(i) - 1 then
          Print " , ";
       end if
    Next j
    Print
    Print
    Print "  Biases: ";
    For j As Integer = 0 To biasesCounts(i) - 1
        Print BiasesData(i, j); 
       if j< biasesCounts(i) - 1 then
          Print " , ";
       end if  
    Next j
    Print
Next
'
' ...................................
'
print
print " .............................."
for k=0 to 2*totalLayers-1
   print ltag(k)
next k   

end
'
' ======================================================================
'


Luxan
Posts: 273
Joined: Feb 18, 2009 12:47
Location: New Zealand

Re: Neural Networks 1

Post by Luxan »

An interesting result when training, then testing,
via python3 with torch.

It indicates that once the average loss reaches a
sufficiently small value, the NN will faithfully perform
4-bit signed multiplication. We're also applying
thresholding to the NN output.

The NN structure is 8, 128, 16, 8 and the activation function
is mostly Sigmoid; there are other routines involved that are
difficult to find in the torch library.
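
In terms of the FreeBASIC code earlier in the thread, that structure would roughly correspond to the line below (the run itself used torch, not this code):

Code: Select all

Dim layer_sizes(3) As Integer = {8, 128, 16, 8}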

Code: Select all


Iteration: 1000, Average Loss: 2.380320e-03
Iteration: 2000, Average Loss: 3.297255e-04
Iteration: 3000, Average Loss: 1.268513e-04
Iteration: 4000, Average Loss: 6.106124e-05
Iteration: 5000, Average Loss: 3.276418e-05
Iteration: 6000, Average Loss: 1.819211e-05
Iteration: 7000, Average Loss: 1.037651e-05
Iteration: 8000, Average Loss: 6.146425e-06
Iteration: 9000, Average Loss: 3.707119e-06
Iteration: 10000, Average Loss: 2.247131e-06
Iteration: 11000, Average Loss: 1.357829e-06
Iteration: 12000, Average Loss: 8.113111e-07
Iteration: 13000, Average Loss: 4.909950e-07
Iteration: 14000, Average Loss: 3.013166e-07
Iteration: 15000, Average Loss: 1.858727e-07
Iteration: 16000, Average Loss: 1.115494e-07
Iteration: 17000, Average Loss: 6.457653e-08
Iteration: 18000, Average Loss: 3.934541e-08
Iteration: 19000, Average Loss: 2.501945e-08
Iteration: 20000, Average Loss: 1.653625e-08

Testing Trained Network:
Total Error: 0.00000000e+00

Sample Results:
Input:    [1. 0. 0. 0. 1. 0. 0. 0.]
  
Output:   [0. 1. 0. 0. 0. 0. 0. 0.]
Expected: [0. 1. 0. 0. 0. 0. 0. 0.]


Input:    [1. 0. 0. 0. 1. 0. 0. 1.]
  
Output:   [0. 0. 1. 1. 1. 0. 0. 0.]
Expected: [0. 0. 1. 1. 1. 0. 0. 0.]


Input:    [1. 0. 0. 0. 1. 0. 1. 0.]
  
Output:   [0. 0. 1. 1. 0. 0. 0. 0.]
Expected: [0. 0. 1. 1. 0. 0. 0. 0.]


Input:    [1. 0. 0. 0. 1. 0. 1. 1.]
  
Output:   [0. 0. 1. 0. 1. 0. 0. 0.]
Expected: [0. 0. 1. 0. 1. 0. 0. 0.]


