Neural Networks 1

Luxan
Posts: 273
Joined: Feb 18, 2009 12:47
Location: New Zealand

Neural Networks 1

Post by Luxan »

To be updated from time to time.

Re: Neural Networks 1

Post by Luxan »

Refer to Beginners, Easy Matrix for other routines, include files, and library code.


As an introduction I'd suggest you look at this series
of videos.


https://www.youtube.com/watch?v=aircAruvnKk

To a fair extent I'm modelling my NN programs on
that information.
The MNIST handwritten-numerals data mentioned there
is something I've already trialled successfully using
Python; eventually I might use it with the BASIC
program.
In the meantime, though, I intend to use a much smaller
data set to check the validity of my code: it will
consist of a binary representation of unsigned integers.
The target values will likewise be a binary
representation of integers.
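As a sketch of that data set (in Python here, mirroring the FreeBASIC int2bit/ubxlb routines below; the Python helper names are my own), each input is the 2*la-bit pattern of i, and each target is the pattern of the product of i's upper and lower halves:

```python
def int_to_bits(x, nb):
    """Little-endian list of the nb low bits of x (bit 0 first)."""
    return [(x >> j) & 1 for j in range(nb)]

def upper_times_lower(x, lb):
    """Split x into its lb low bits and the remaining upper bits,
    then multiply the two halves (as ubxlb does)."""
    b = x >> lb               # upper bits
    a = x & ((1 << lb) - 1)   # lower lb bits
    return a * b

la = 2
for i in range(2 ** (2 * la)):
    inp = int_to_bits(i, 2 * la)
    tgt = int_to_bits(upper_times_lower(i, la), 2 * la)
    print(inp, "->", tgt)
```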

I don't claim that the Matrix routines in use are perfect.
However, much of the code I build on top of them has some worth,
even if only instructional.

Code:

'
'  bit_rep2.bas
'
'   Numerical representation of bits.
'
'             and
'
'   Binary representation of numbers.
'
'
#cmdline "-exx"
#include once "easy_matx.bi"
'
' next include, not required for this frame work .
'
' #include once "easy_maty.bi"
'
declare sub int2bit(bsq as Matrix, x as uinteger, nb as integer)
declare function ubxlb(x as uinteger, lb as integer) as uinteger

declare function bits2int(bsq as Matrix, nb as integer) as uinteger
'
' ----------------------------------------------------------------------
'
dim as uinteger v(0 to 3), w(0 to 3), x(0 to 15), y(0 to 15), i, k, m
'
dim as integer la, pwr
la = 2
'pwr = (2^(la)-1)^2
pwr = 2^(2*la)-1
dim as Matrix ab = Matrix(1, la*2)
'
'             Sequence through bit representations .
'
for i=0 to pwr 
    ' print " ";i;
     int2bit(ab , i, la ) ' input
     prt_m(ab)
     print "  x  "
     m = ubxlb(i , la)
     int2bit(ab , m, la ) ' target, convert
     prt_m(ab)    
     '
     '  train , retain previous weights, to adjust then save for 
     ' next [ input , target ] pair .
     '  output , convert, check .
     ' 
  next i

print
print " ---------------------------------------------------------- "

m = bits2int(ab, la) 
print " m = ";m
'
'  For all valid input data samples .
'  test data -> pretrained NN -> output , convert, compare expected.


end
'
' ======================================================================
'
'
'
'   |0|1||2|3|
'    1 2  4 8
'
'
' ====================================================================== 
'
'
function bits2int(bsq as Matrix, nb as integer) as uinteger
'
'               Matrix bits to uinteger .  
'
dim as uinteger x
dim as single bt
dim as integer i, j, nx, ny ' , lb, ub
'
nx = ubound(bsq.m, 1)
ny = ubound(bsq.m, 2)
'
for j = 0 to ny
 for i=0 to nx 
     bt = bsq.m(i,j)
     bt = int(bt + 0.5)
     bt = bt*(2^j)
     x = x + bt 
 next i
next j 
' 
                 return x
'
end function

' ----------------------------------------------------------------------
'
sub int2bit(bsq as Matrix, x as uinteger, nb as integer)
'
'         Convert an integer to bits of length lb
'     , assign to matrix elements .
'
dim as integer i, j, nx, ny ' , lb, ub
'
nx = ubound(bsq.m, 1)
ny = ubound(bsq.m, 2)
'
for j = 0 to nb-1
 for i=0 to nx 
    bsq.m(i,j) = -Bit(x,j) 
    bsq.m(i,j+nb) = -Bit(x,j+nb) 
 next i
next j 
'
'
end sub
'
' ----------------------------------------------------------------------
'
function ubxlb(x as uinteger, lb as integer) as uinteger
'
'   Upper bits x Lower bits , multiplication.
'
dim as integer pwr
dim as uinteger a, b, c
'
           pwr = 2^lb
'
             b = int(x/pwr)
             a = x - pwr*b
             c = a * b
          '   print " ";a;" , ";b;" , ";c
'
    return c
'
'
end function


Also from this video :

https://www.youtube.com/watch?v=Ilg3gGewQ5U


Others have been suggested by gunslinger.

https://www.youtube.com/watch?v=hfMk-kjRv4c

Re: Neural Networks 1

Post by Luxan »

I used the previous code to train, then test, my
existing NN.
It tends to remember the last training epoch,
using those weights as the default irrespective of the input.
Something is amiss.

If this is overfitting, then there are ways to
compensate for it; ways I'm not yet familiar
with.
Early stopping looks like one possibility.
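Early stopping is simple enough to sketch. Here is a hypothetical Python illustration (the loss curve is made up): training halts once the validation loss has failed to improve for a set number of epochs, and the best epoch's weights would be the ones kept.

```python
def early_stopping(losses, patience=3):
    """Return the index of the best epoch: scanning the validation-loss
    curve, stop once the loss has failed to improve `patience` times
    in a row, and report the epoch with the lowest loss seen."""
    best, best_epoch, bad = float("inf"), 0, 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, best_epoch, bad = loss, epoch, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_epoch

# Hypothetical validation-loss curve: improves, then starts rising.
curve = [0.9, 0.5, 0.3, 0.25, 0.26, 0.27, 0.30]
print(early_stopping(curve))  # prints 3, the epoch with loss 0.25
```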

Time to examine NN code in different languages and replicate
the previous training and test arrangement within those.

With FreeBASIC, I only need to read about a method to
eventually visualize what needs to be done.

Re: Neural Networks 1

Post by Luxan »

I've been searching for a free coding A.I. that automatically checks the code it generates
from a user (or other) prompt. There doesn't appear to be such a beast.

This is different from what I usually do: a mostly A.I.-generated piece of code.
On this occasion, from DeepSeek, I didn't ask what the sources were; maybe next time.

Other coders may want to test this, examine how back propagation is being performed and use
the knowledge to produce their own unique code.

I was about to purchase a $200 book that explained back propagation; now that's not so necessary.

Code:


/'

    DS_NN2a.bas

    (c) Copyright 2025, sciwiseg@gmail.com
    
    Generated from DeepSeek , edited and tested by myself.
    Some similarities to code I've written and uploaded.

'/

' Define matrix operations
Type Matrix
    rows As Integer
    cols As Integer
    ReDim data1(1,1) As Double
End Type

' Initialize a matrix
Sub MatrixInit(m As Matrix, rows As Integer, cols As Integer)
    m.rows = rows
    m.cols = cols
    ReDim m.data1(rows - 1, cols - 1)
End Sub

' Multiply two matrices
Function MatrixMultiply(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, b.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To b.cols - 1
            result.data1(i, j) = 0
            For k As Integer = 0 To a.cols - 1
                result.data1(i, j) += a.data1(i, k) * b.data1(k, j)
            Next k
        Next j
    Next i
    
    Return result
End Function

' Add two matrices
Function MatrixAdd(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, a.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To a.cols - 1
            result.data1(i, j) = a.data1(i, j) + b.data1(i, j)
        Next j
    Next i
    
    Return result
End Function

' Subtract two matrices
Function MatrixSubtract(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, a.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To a.cols - 1
            result.data1(i, j) = a.data1(i, j) - b.data1(i, j)
        Next j
    Next i
    
    Return result
End Function



' Apply a function (e.g., sigmoid) to a matrix
Sub MatrixApplyFunc(m As Matrix, func As Function (x As Double) As Double)
    For i As Integer = 0 To m.rows - 1
        For j As Integer = 0 To m.cols - 1
            m.data1(i, j) = func(m.data1(i, j))
        Next j
    Next i
End Sub

' Sigmoid activation function
Function Sigmoid(x As Double) As Double
    Return 1 / (1 + Exp(-x))
End Function

' Derivative of sigmoid (expects the already-activated output s = Sigmoid(x),
' since Sigmoid'(x) = s * (1 - s))
Function SigmoidDerivative(x As Double) As Double
    Return x * (1 - x)
End Function

' -------------------------------- new ---------------------------------


Type NeuralNetwork
    num_layers As Integer
    redim layer_sizes(0) As Integer
    redim weights(0) As Matrix
    redim biases(0) As Matrix
End Type

Sub NNInit(nn As NeuralNetwork, layer_sizes() As Integer)
    nn.num_layers = UBound(layer_sizes) + 1
    
    
    redim nn.layer_sizes(nn.num_layers)
    dim i as integer
    for i=0 to nn.num_layers
    nn.layer_sizes(i) = layer_sizes(i)
    next i
    
    ReDim nn.weights(nn.num_layers - 2)
    ReDim nn.biases(nn.num_layers - 2)
    
    For i As Integer = 0 To nn.num_layers - 2
        MatrixInit(nn.weights(i), layer_sizes(i + 1), layer_sizes(i))
        MatrixInit(nn.biases(i), layer_sizes(i + 1), 1)
        
        ' Randomize weights and biases
        For j As Integer = 0 To layer_sizes(i + 1) - 1
            For k As Integer = 0 To layer_sizes(i) - 1
                nn.weights(i).data1(j, k) = Rnd * 2 - 1 ' Range: -1 to 1
            Next k
            nn.biases(i).data1(j, 0) = Rnd * 2 - 1
        Next j
    Next i
End Sub

Function NNFeedforward(nn As NeuralNetwork, input1 As Matrix) As Matrix
    Dim layer_output As Matrix = input1
    
    For i As Integer = 0 To nn.num_layers - 2
        layer_output = MatrixMultiply(nn.weights(i), layer_output)
        layer_output = MatrixAdd(layer_output, nn.biases(i))
        MatrixApplyFunc(layer_output, @Sigmoid)
    Next i
    
    Return layer_output
End Function

Sub NNTrain(nn As NeuralNetwork, input1 As Matrix, target As Matrix, learning_rate As Double)
    ' Feedforward
    Dim layer_outputs(nn.num_layers - 1) As Matrix
    layer_outputs(0) = input1
    
    For i As Integer = 0 To nn.num_layers - 2
        layer_outputs(i + 1) = MatrixMultiply(nn.weights(i), layer_outputs(i))
        layer_outputs(i + 1) = MatrixAdd(layer_outputs(i + 1), nn.biases(i))
        MatrixApplyFunc(layer_outputs(i + 1), @Sigmoid)
    Next i
    
    ' Backpropagation
    Dim errors(nn.num_layers - 1) As Matrix
    errors(nn.num_layers - 1) = MatrixSubtract(target, layer_outputs(nn.num_layers - 1))
    
    For i As Integer = nn.num_layers - 2 To 0 Step -1
        Dim gradients As Matrix = layer_outputs(i + 1)
        MatrixApplyFunc(gradients, @SigmoidDerivative)
        For j As Integer = 0 To gradients.rows - 1
            gradients.data1(j, 0) *= errors(i + 1).data1(j, 0) * learning_rate
        Next j
        
        Dim layer_outputs_T As Matrix
        MatrixInit(layer_outputs_T, layer_outputs(i).cols, layer_outputs(i).rows)
        For j As Integer = 0 To layer_outputs(i).rows - 1
            For k As Integer = 0 To layer_outputs(i).cols - 1
                layer_outputs_T.data1(k, j) = layer_outputs(i).data1(j, k)
            Next k
        Next j
        
        Dim weights_deltas As Matrix = MatrixMultiply(gradients, layer_outputs_T)
        nn.weights(i) = MatrixAdd(nn.weights(i), weights_deltas)
        nn.biases(i) = MatrixAdd(nn.biases(i), gradients)
        
        If i > 0 Then
            Dim weights_T As Matrix
            MatrixInit(weights_T, nn.weights(i).cols, nn.weights(i).rows)
            For j As Integer = 0 To nn.weights(i).rows - 1
                For k As Integer = 0 To nn.weights(i).cols - 1
                    weights_T.data1(k, j) = nn.weights(i).data1(j, k)
                Next k
            Next j
            
            errors(i) = MatrixMultiply(weights_T, errors(i + 1))
        End If
    Next i
End Sub


' Extended FreeBasic Neural Network with Multiple Hidden Layers


' ----------------------------- Main -----------------------------------
' Example usage
Dim nn As NeuralNetwork

' Define layer sizes: 2 input nodes, 4 nodes in the first hidden layer,
' 3 nodes in the second hidden layer, 5 nodes in the third hidden layer,
' and 1 output node
Dim layer_sizes(4) As Integer = {2, 4, 3, 5, 1}

' Define all XOR input-output pairs
Dim inputs(3) As Matrix
Dim targets(3) As Matrix

' [1, 0] -> 1
MatrixInit(inputs(0), 2, 1)
inputs(0).data1(0, 0) = 1
inputs(0).data1(1, 0) = 0
MatrixInit(targets(0), 1, 1)
targets(0).data1(0, 0) = 1

' [0, 1] -> 1
MatrixInit(inputs(1), 2, 1)
inputs(1).data1(0, 0) = 0
inputs(1).data1(1, 0) = 1
MatrixInit(targets(1), 1, 1)
targets(1).data1(0, 0) = 1

' [1, 1] -> 0
MatrixInit(inputs(2), 2, 1)
inputs(2).data1(0, 0) = 1
inputs(2).data1(1, 0) = 1
MatrixInit(targets(2), 1, 1)
targets(2).data1(0, 0) = 0

' [0, 0] -> 0
MatrixInit(inputs(3), 2, 1)
inputs(3).data1(0, 0) = 0
inputs(3).data1(1, 0) = 0
MatrixInit(targets(3), 1, 1)
targets(3).data1(0, 0) = 0

' Initialize the neural network
NNInit(nn, layer_sizes())


' Train the network with all XOR pairs
Dim as integer i,j
ReDim output1(3) As Matrix

For i  = 1 To 10000
    For j  = 0 To 3
        NNTrain(nn, inputs(j), targets(j), 0.1)
    Next j
    
    ' Print progress every 1000 iterations
    If i Mod 1000 = 0 Then
        Print "Iteration: "; i
        For j  = 0 To 3
            output1(j) = NNFeedforward(nn, inputs(j))
            Print "Input: ["; inputs(j).data1(0, 0); ", "; inputs(j).data1(1, 0); "] -> Output: "; output1(j).data1(0, 0)
        Next j
        Print "-------------------------"
    End If
    
Next i


end

'
' ======================================================================
'
/'

Iteration: 1000
Input: [1, 0] -> Output: 0.85
Input: [0, 1] -> Output: 0.84
Input: [1, 1] -> Output: 0.12
Input: [0, 0] -> Output: 0.11
-------------------------
Iteration: 2000
Input: [1, 0] -> Output: 0.92
Input: [0, 1] -> Output: 0.91
Input: [1, 1] -> Output: 0.08
Input: [0, 0] -> Output: 0.07
-------------------------
...
Final Test Results:
Input: [1, 0] -> Output: 0.98
Input: [0, 1] -> Output: 0.97
Input: [1, 1] -> Output: 0.02
Input: [0, 0] -> Output: 0.01



Conclusion
   The code supports any number of layers, limited only by memory and system constraints. 
   
   You can experiment with different architectures by modifying the layer_sizes array.
    
   For most practical purposes, networks with 2 to 10 layers are common, but 
deeper networks can also be implemented if needed.

'/
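For cross-checking the backpropagation against another implementation, here is a minimal NumPy sketch of the same update rule NNTrain uses (sigmoid layers; per-layer gradient output*(1-output)*error*learning_rate, applied via the transposed previous activation). It is a sketch under those assumptions, not a drop-in replacement:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(weights, biases, x, target, lr=0.1):
    """One NNTrain-style step: feedforward, then layer-by-layer update."""
    outs = [x]
    for W, b in zip(weights, biases):
        outs.append(sigmoid(W @ outs[-1] + b))
    error = target - outs[-1]
    for i in reversed(range(len(weights))):
        # delta = sigmoid'(activation) * error * learning rate
        grad = outs[i + 1] * (1 - outs[i + 1]) * error * lr
        weights[i] += grad @ outs[i].T   # weights_deltas = gradients * prev^T
        biases[i] += grad
        error = weights[i].T @ error     # propagate error back a layer
    return outs[-1]

rng = np.random.default_rng(0)
sizes = [2, 4, 1]
W = [rng.uniform(-1, 1, (sizes[k + 1], sizes[k])) for k in range(len(sizes) - 1)]
b = [rng.uniform(-1, 1, (sizes[k + 1], 1)) for k in range(len(sizes) - 1)]
for _ in range(10000):
    for xv, t in [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]:
        out = train_step(W, b, np.array(xv, float).reshape(2, 1),
                         np.array([[float(t)]]))
print("last output:", float(out))
```

Note that the sketch keeps one quirk of the forum code: the error is propagated through the already-updated weights rather than the pre-update ones.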


Löwenherz
Posts: 281
Joined: Aug 27, 2008 6:26
Location: Bad Sooden-Allendorf, Germany

Re: Neural Networks 1

Post by Löwenherz »

Nice example, Luxan. But the console output is too fast for detailed checking; perhaps some Sleep commands are missing? I have tried, but without much success.
fxm
Moderator
Posts: 12577
Joined: Apr 22, 2009 12:46
Location: Paris suburbs, FRANCE

Re: Neural Networks 1

Post by fxm »

Aborting due to runtime error 6 (out of bounds array access) at line 110 of C:\...\FBIde0.4.6r4_fbc1.20.0\FBIDETEMP.bas::NNINIT(),
'LAYER_SIZES' accessed with invalid index = 5, must be between 0 and 4

Re: Neural Networks 1

Post by Luxan »

Feel free to put Sleep commands wherever you want.


That's a piece of code I edited without too much scrutiny;
I don't want to be uploading faulty code.

The error you detected wasn't found when I ran via Geany; perhaps
I should use FBIde also.

Here's a possible correction; this didn't generate an error on my setup.

Code:


Sub NNInit(nn As NeuralNetwork, layer_sizes() As Integer)
    nn.num_layers = UBound(layer_sizes) + 1
    
    
    redim nn.layer_sizes(nn.num_layers-1) ' <<<<<<
    dim i as integer
    for i=0 to nn.num_layers-1   ' <<<<<<
    nn.layer_sizes(i) = layer_sizes(i)
    next i
   


Re: Neural Networks 1

Post by Löwenherz »

I am using FbEdit too, and the WinFBE editor. Your example doesn't work and process properly; it's kind of aborting after running the iterations.

Re: Neural Networks 1

Post by fxm »

Run-time error detected by compiling with the '-exx' option.
Your above correction fixes the bug.

Re: Neural Networks 1

Post by Luxan »

It's good to have others examine one's code.

I finally used a few more debug options.

So, within the Geany IDE, I set the build compile command to fbc -w all -exx -g "%f".
Using that, no errors were indicated.

Then, from the terminal, I ran fbc -w all -exx -g DS_NN2a.bas.
After that I ran gdb ./DS_NN2a, then r; with debug info enabled, the output looked like this:

Iteration: 10000
Input: [ 1, 0] -> Output: 0.9903832225133128
Input: [ 0, 1] -> Output: 0.990302918114894
Input: [ 1, 1] -> Output: 0.01882283141163267
Input: [ 0, 0] -> Output: 0.01882289205749268
-------------------------
[Inferior 1 (process 9780) exited normally]
(gdb)


Apparently, this means that it ran successfully and without error.



A few more tests of the Neural Network are appropriate, to determine how well it
deals with unexpected inputs.

Re: Neural Networks 1

Post by Luxan »

It being a hot evening at my location, my brain isn't ready for too much coding.
So I asked DeepSeek to improve upon the existing code; this is the result. You'll note
that the inference stage uses inputs ranging from 0 to 2.

Code:

/' 
    DS_NN2b.bas

    (c) Copyright 2025, sciwiseg@gmail.com

    Extended FreeBasic Neural Network with Multiple Hidden Layers
    Includes suggestions for loss calculation, matrix transpose, and inference testing.
'/

' Define matrix operations
Type Matrix
    rows As Integer
    cols As Integer
    ReDim data1(1,1) As Double
End Type

' Initialize a matrix
Sub MatrixInit(m As Matrix, rows As Integer, cols As Integer)
    m.rows = rows
    m.cols = cols
    ReDim m.data1(rows - 1, cols - 1)
End Sub

' Multiply two matrices
Function MatrixMultiply(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, b.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To b.cols - 1
            result.data1(i, j) = 0
            For k As Integer = 0 To a.cols - 1
                result.data1(i, j) += a.data1(i, k) * b.data1(k, j)
            Next k
        Next j
    Next i
    
    Return result
End Function

' Add two matrices
Function MatrixAdd(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, a.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To a.cols - 1
            result.data1(i, j) = a.data1(i, j) + b.data1(i, j)
        Next j
    Next i
    
    Return result
End Function

' Subtract two matrices
Function MatrixSubtract(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, a.cols)
    
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To a.cols - 1
            result.data1(i, j) = a.data1(i, j) - b.data1(i, j)
        Next j
    Next i
    
    Return result
End Function

' Transpose a matrix
Function MatrixTranspose(m As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, m.cols, m.rows)
    
    For i As Integer = 0 To m.rows - 1
        For j As Integer = 0 To m.cols - 1
            result.data1(j, i) = m.data1(i, j)
        Next j
    Next i
    
    Return result
End Function

' Apply a function (e.g., sigmoid) to a matrix
Sub MatrixApplyFunc(m As Matrix, func As Function (x As Double) As Double)
    For i As Integer = 0 To m.rows - 1
        For j As Integer = 0 To m.cols - 1
            m.data1(i, j) = func(m.data1(i, j))
        Next j
    Next i
End Sub

' Sigmoid activation function
Function Sigmoid(x As Double) As Double
    Return 1 / (1 + Exp(-x))
End Function

' Derivative of sigmoid (expects the already-activated output s = Sigmoid(x),
' since Sigmoid'(x) = s * (1 - s))
Function SigmoidDerivative(x As Double) As Double
    Return x * (1 - x)
End Function

' Mean Squared Error (MSE) loss function
Function MeanSquaredError(predicted As Matrix, target As Matrix) As Double
    Dim error1 As Double = 0
    For i As Integer = 0 To predicted.rows - 1
        For j As Integer = 0 To predicted.cols - 1
            error1 += (predicted.data1(i, j) - target.data1(i, j)) ^ 2
        Next j
    Next i
    Return error1 / (predicted.rows * predicted.cols)
End Function

' Neural Network Type
Type NeuralNetwork
    num_layers As Integer
    redim layer_sizes(0) As Integer
    redim weights(0) As Matrix
    redim biases(0) As Matrix
End Type

' Initialize the neural network
Sub NNInit(nn As NeuralNetwork, layer_sizes() As Integer)
    nn.num_layers = UBound(layer_sizes) + 1
    Print " nn.num_layers "; nn.num_layers
    
    ReDim nn.layer_sizes(nn.num_layers - 1)
    For i As Integer = 0 To nn.num_layers - 1
        nn.layer_sizes(i) = layer_sizes(i)
    Next i
    
    ReDim nn.weights(nn.num_layers - 2)
    ReDim nn.biases(nn.num_layers - 2)
    
    For i As Integer = 0 To nn.num_layers - 2
        MatrixInit(nn.weights(i), layer_sizes(i + 1), layer_sizes(i))
        MatrixInit(nn.biases(i), layer_sizes(i + 1), 1)
        
        ' Randomize weights and biases
        For j As Integer = 0 To layer_sizes(i + 1) - 1
            For k As Integer = 0 To layer_sizes(i) - 1
                nn.weights(i).data1(j, k) = Rnd * 2 - 1 ' Range: -1 to 1
            Next k
            nn.biases(i).data1(j, 0) = Rnd * 2 - 1
        Next j
    Next i
End Sub

' Feedforward pass
Function NNFeedforward(nn As NeuralNetwork, input1 As Matrix) As Matrix
    Dim layer_output As Matrix = input1
    
    For i As Integer = 0 To nn.num_layers - 2
        layer_output = MatrixMultiply(nn.weights(i), layer_output)
        layer_output = MatrixAdd(layer_output, nn.biases(i))
        MatrixApplyFunc(layer_output, @Sigmoid)
    Next i
    
    Return layer_output
End Function

' Train the neural network
Sub NNTrain(nn As NeuralNetwork, input1 As Matrix, target As Matrix, learning_rate As Double)
    ' Feedforward
    Dim layer_outputs(nn.num_layers - 1) As Matrix
    layer_outputs(0) = input1
    
    For i As Integer = 0 To nn.num_layers - 2
        layer_outputs(i + 1) = MatrixMultiply(nn.weights(i), layer_outputs(i))
        layer_outputs(i + 1) = MatrixAdd(layer_outputs(i + 1), nn.biases(i))
        MatrixApplyFunc(layer_outputs(i + 1), @Sigmoid)
    Next i
    
    ' Backpropagation
    Dim errors(nn.num_layers - 1) As Matrix
    errors(nn.num_layers - 1) = MatrixSubtract(target, layer_outputs(nn.num_layers - 1))
    
    For i As Integer = nn.num_layers - 2 To 0 Step -1
        Dim gradients As Matrix = layer_outputs(i + 1)
        MatrixApplyFunc(gradients, @SigmoidDerivative)
        For j As Integer = 0 To gradients.rows - 1
            gradients.data1(j, 0) *= errors(i + 1).data1(j, 0) * learning_rate
        Next j
        
        Dim layer_outputs_T As Matrix = MatrixTranspose(layer_outputs(i))
        Dim weights_deltas As Matrix = MatrixMultiply(gradients, layer_outputs_T)
        nn.weights(i) = MatrixAdd(nn.weights(i), weights_deltas)
        nn.biases(i) = MatrixAdd(nn.biases(i), gradients)
        
        If i > 0 Then
            Dim weights_T As Matrix = MatrixTranspose(nn.weights(i))
            errors(i) = MatrixMultiply(weights_T, errors(i + 1))
        End If
    Next i
End Sub

' ----------------------------- Main -----------------------------------
' Example usage
Dim nn As NeuralNetwork

' Define layer sizes: 2 input nodes, 4 nodes in the first hidden layer,
' 3 nodes in the second hidden layer, 5 nodes in the third hidden layer,
' and 1 output node
Dim layer_sizes(4) As Integer = {2, 4, 3, 5, 1}

' Define all XOR input-output pairs
Dim inputs(3) As Matrix
Dim targets(3) As Matrix

' [1, 0] -> 1
MatrixInit(inputs(0), 2, 1)
inputs(0).data1(0, 0) = 1
inputs(0).data1(1, 0) = 0
MatrixInit(targets(0), 1, 1)
targets(0).data1(0, 0) = 1

' [0, 1] -> 1
MatrixInit(inputs(1), 2, 1)
inputs(1).data1(0, 0) = 0
inputs(1).data1(1, 0) = 1
MatrixInit(targets(1), 1, 1)
targets(1).data1(0, 0) = 1

' [1, 1] -> 0
MatrixInit(inputs(2), 2, 1)
inputs(2).data1(0, 0) = 1
inputs(2).data1(1, 0) = 1
MatrixInit(targets(2), 1, 1)
targets(2).data1(0, 0) = 0

' [0, 0] -> 0
MatrixInit(inputs(3), 2, 1)
inputs(3).data1(0, 0) = 0
inputs(3).data1(1, 0) = 0
MatrixInit(targets(3), 1, 1)
targets(3).data1(0, 0) = 0

' Initialize the neural network
NNInit(nn, layer_sizes())

' Train the network with all XOR pairs
Dim As Integer i, j
ReDim output1(3) As Matrix

For i = 1 To 10000
    Dim total_loss As Double = 0
    For j = 0 To 3
        NNTrain(nn, inputs(j), targets(j), 0.1)
        output1(j) = NNFeedforward(nn, inputs(j))
        total_loss += MeanSquaredError(output1(j), targets(j))
    Next j
    
    ' Print progress every 1000 iterations
    If i Mod 1000 = 0 Then
        Print "Iteration: "; i
        Print "Average Loss: "; total_loss / 4
        For j = 0 To 3
            Print "Input: ["; inputs(j).data1(0, 0); ", "; inputs(j).data1(1, 0); "] -> Output: "; output1(j).data1(0, 0)
        Next j
        Print "-------------------------"
    End If
Next i

' Test the trained network with input values between 0 and 2
Print "Testing Trained Network with Inputs Between 0 and 2:"
Dim test_input As Matrix
MatrixInit(test_input, 2, 1)

For i = 0 To 20
    test_input.data1(0, 0) = i / 10.0
    For j = 0 To 20
        test_input.data1(1, 0) = j / 10.0
        Dim test_output As Matrix = NNFeedforward(nn, test_input)
        Print "Input: ["; test_input.data1(0, 0); ", "; test_input.data1(1, 0); "] -> Output: "; test_output.data1(0, 0)
    Next j
Next i

End


Imortis
Moderator
Posts: 1983
Joined: Jun 02, 2005 15:10
Location: USA

Re: Neural Networks 1

Post by Imortis »

Luxan wrote: Feb 12, 2025 6:00 ... you'll note that the inference stage uses inputs ranging from 0 to 2...
Why? Why was the range something different to begin with? Is this code actually better than the previous code? As you seem to be asking DeepSeek for a lot of it, I have my doubts that it is better in any meaningful way; yet I don't understand what the result of the code should be, and thus have no way to judge it. If you could please provide a description of the actual purpose of the code, as well as the expected or intended results, I would be glad to give it a look.

Re: Neural Networks 1

Post by Luxan »

This is in preparation for something other than logic gates: inputs that don't always
reach levels 0 or 1.

You can always use the previous code when you're assuming a definite 0 or 1, off or on.
The introduction of MSE loss during training might be useful even in that context.

I appreciate the value of keeping code uncluttered and letting others build upon that.

For this code:

Key Changes and Additions:

1. Matrix transpose function: added MatrixTranspose to simplify the matrix operations in backpropagation.

2. Mean Squared Error (MSE): added MeanSquaredError to calculate the loss during training.

3. Testing phase: after training, the network is tested with input values between 0 and 2 (in steps of 0.1) to observe its behaviour outside the training data.

4. Improved progress monitoring: the average loss is printed every 1,000 iterations to monitor training progress.
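For reference, the MSE being reported is just the mean of the squared element-wise differences; a quick Python check of the formula (flattened to a list here for brevity):

```python
def mse(predicted, target):
    """Mean of squared element-wise differences, as in MeanSquaredError."""
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)

print(mse([0.9, 0.1], [1.0, 0.0]))  # approximately 0.01
```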

Re: Neural Networks 1

Post by Luxan »

A somewhat related piece of code, requiring improvement.
The type of routine one might make into a standard library, if it doesn't already exist.

Code:




'declare sub int2bit(bsq as Matrix, x as uinteger, nb as integer)
declare Function BitsToInt(bits() As UShort , size As Integer) As Integer
declare function bit2int(a1() as ushort,isize as integer) as integer

declare Function IntToBits(ByVal n As Integer) As String
declare Function BitsToInt2(bits() As ushort) As Integer
declare Sub Main()

'
' ----------------------------------------------------------------------
'
dim as integer a,b
dim as ushort a1(0 to 16),b1(0 to 16),i,j


a=-5
print "a=";a
for i=0 to 15
    b=-Bit(a,i) 
    a1(i)=b
next i
print
for i=0 to 15
    print i;" ";a1(i)
next i

b=0
for i=0 to 15
    a=a1(i)
    if a=1 then b=b+2^i
next i

' Check for a negative value in two's complement (16-bit representation)
If (a1(15) = 1) Then
    ' If the highest bit is set, it's a negative number in two's complement
    b = b - (1 Shl 16) ' Convert to negative
End If
 

print
print "b=";b

b=bit2int(a1() ,16) 
print
print "b'=";b



end
'
' ======================================================================
'
function bit2int(a1() as ushort,isize as integer) as integer
'
'   Convert bits to signed integer representation .
'
dim as integer b,i,int_size
dim as ushort a

int_size=sizeof(b)*8 ' SizeOf() returns bytes, so convert to bits
print " size ";int_size
if isize<int_size then int_size=isize
print " size' ";int_size
b=0
for i=0 to int_size-1
    a=a1(i)
    if a=1 then b=b+2^i
next i

' Check for a negative value in two's complement
If (a1(int_size-1) = 1) Then
    ' If the highest bit is set, it's a negative number in two's complement
    b = b - (1 Shl int_size) ' Convert to negative
End If
' 
  return b
'
end function
'
' ----------------------------------------------------------------------
'






' Function to convert a UShort array of bits to an integer
Function BitsToInt(bits() As UShort , size As Integer) As Integer
    Dim As UInteger b = 0
    
    ' Iterate over the array of bits
    For i As Integer = 0 To size - 1
        If bits(i) = 1 Then
            b = b + (1 Shl i) ' Add 2^i for set bits
        End If
    Next
    
    ' Check for a negative value in two's complement (size-bit representation)
    If (bits(size - 1) = 1) Then
        ' If highest bit is set, it's a negative number in two's complement
        b = b - (1 Shl size) ' Convert to negative
    End If
    
    Return b
End Function


/'
sub int2bit(bsq as Matrix, x as uinteger, nb as integer)
'
'         Convert an integer to bits of length lb
'     , assign to matrix elements .
'
dim as integer i, j, nx, ny ' , lb, ub
'
nx = ubound(bsq.m, 1)
ny = ubound(bsq.m, 2)
'
for j = 0 to nb-1
 for i=0 to nx 
    bsq.m(i,j) = -Bit(x,j) 
    bsq.m(i,j+nb) = -Bit(x,j+nb) 
 next i
next j 
'
'
end sub
'
'/
Function IntToBits(ByVal n As Integer) As String
    Dim As String bits = ""
    For i As Integer = SizeOf(Integer) * 8 - 1 To 0 Step -1
        bits += IIf((n And (1 Shl i)) <> 0, "1", "0")
    Next
    Return bits
End Function

Function BitsToInt2(bits() As UShort) As Integer
    Dim As Integer result = 0
    ' Bits are stored least-significant first, as elsewhere in this code,
    ' so start from the highest index (Len() on an array was a bug here)
    For i As Integer = UBound(bits) To 0 Step -1
        result = (result Shl 1) Or bits(i)
    Next
    Return result
End Function

Sub Main()
    Dim As Integer num = -4
    Dim As UShort bits(8)
    ' Populate the bit array from num (Bit() returns -1 for a set bit);
    ' the original left bits() empty
    For i As Integer = 0 To UBound(bits)
        bits(i) = -Bit(num, i)
    Next
    Print "Integer: "; num
    Print "Bits: "; IntToBits(num)
    Print "Converted back to Integer: "; BitsToInt2(bits())
    Sleep
End Sub
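The two's-complement handling above (little-endian bit list, highest index as the sign bit) can be cross-checked against a Python sketch of the same conversion; the function names here are my own:

```python
def bits_to_int(bits):
    """Little-endian bit list -> signed integer (two's complement)."""
    n = len(bits)
    value = sum(b << i for i, b in enumerate(bits))
    if bits[-1] == 1:        # sign bit set: wrap into the negative range
        value -= 1 << n
    return value

def int_to_bits(x, n):
    """Signed integer -> little-endian list of its n two's-complement bits."""
    return [(x >> i) & 1 for i in range(n)]

print(int_to_bits(-5, 16))
print(bits_to_int(int_to_bits(-5, 16)))  # prints -5
```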


Re: Neural Networks 1

Post by Luxan »

This is what I'm attempting, illustrated via code.

Code:


'  NN_Logic.bas
' ----------------------------------------------------------------------
'
'                       NN for Logic simulation .
'
'   sciwiseg@gmail.com
'
' ----------------------------------------------------------------------
'
screen 12

color 15,0
locate 2,2
print " Convert signed integers to binary representation"
locate 3,2
print " Prior to training NN, to simulate various static logic, ALU circuits"
locate 5,2
color 10,0
print "                         c";
color 15,0
print " =  f(";
color 11,0
print "a";
color 15,0
print",";
color 12,0
print"b";
color 15,0
print") "
'
locate 6,2
color 10,0
print "                         C";
color 15,0
print " = NN(";
color 11,0
print "A";
color 15,0
print",";
color 12,0
print"B";
color 15,0
print") "
'
locate 7,2
color 11,0
print " a -> A"; "     a , signed integer, A binary representation"
locate 8,2
color 12,0
print " b -> B"; "     b , signed integer, B binary representation"
locate 9,2
color 10,0
print " c -> C"; "     c , signed integer, C binary representation"
'
line(24,180)-(40,260),11,b
dim as integer y,x
y=94
line(24,180+y)-(40,260+y),12,b
line(45,180)-(245,260+y),15,b
x=1
line(250,180)-(266,260+y),10,b
'
locate 14,2
color 11,0
print "A"
locate 20,2
color 12,0
print "B"
locate 17,36
color 10,0
print "C"
locate 17,18
color 15,0
print "NN"
'
locate 25,2
color 10,0
print " C -> c"; "     C binary representation, c signed integer"
'



sleep
end
'
' ======================================================================
'
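The arrangement in the diagram (a, b, c as signed integers; A, B, C their binary representations; C = NN(A, B) trained against c = f(a, b)) can be sketched as a training-data generator in Python. The names and the example circuit (bitwise AND) are my own choices:

```python
def to_bits(x, n):
    """Signed integer -> little-endian list of its n two's-complement bits."""
    return [(x >> i) & 1 for i in range(n)]

def make_pair(a, b, f, n=8):
    """Training pair for the circuit c = f(a, b): the input is A and B
    concatenated, the target is C, all as n-bit two's-complement vectors."""
    return to_bits(a, n) + to_bits(b, n), to_bits(f(a, b), n)

# Example static logic circuit: bitwise AND of two signed integers
inp, tgt = make_pair(-5, 3, lambda a, b: a & b)
print(inp, "->", tgt)
```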


