Neural Networks 1
To be updated from time to time.
Re: Neural Networks 1
Refer to Beginners, Easy Matrix for other routines, include
files and library code.
As an introduction I'd suggest you look at this series
of videos.
https://www.youtube.com/watch?v=aircAruvnKk
To a fair extent I'm modelling my NN programs on
that information.
The MNIST handwritten-numerals data mentioned there
is something I've already successfully trialled using
Python; eventually I might use it with the BASIC
program.
In the meantime, though, I intend to use a much smaller
data set to check the validity of my code; it will
consist of binary representations of unsigned integers.
The target values will also be binary representations
of integers.
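To make that concrete, here is a minimal sketch of the intended
[input, target] pairs, assuming la = 2 as in bit_rep2.bas below,
where the target is the product of the input's upper and lower
bit halves (see ubxlb):
Code: Select all
'
' pair_sketch.bas , an illustrative name, not part of the project.
'
Dim As UInteger i, lo, hi, t
For i = 0 To 15
    lo = i And 3            ' lower la bits
    hi = (i Shr 2) And 3    ' upper la bits
    t  = lo * hi            ' target value, as ubxlb computes
    Print Bin(i, 4); " -> "; Bin(t, 4); " ( "; hi; " x "; lo; " = "; t; " )"
Next i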
I don't claim that the Matrix routines in use are perfect.
However, much of the code I build upon has some worth,
even if it's only instructional.
Code: Select all
'
' bit_rep2.bas
'
' Numerical representation of bits.
'
' and
'
' Binary representation of numbers.
'
'
#cmdline "-exx"
#include once "easy_matx.bi"
'
' The next include is not required for this framework.
'
' #include once "easy_maty.bi"
'
declare sub int2bit(bsq as Matrix, x as uinteger, nb as integer)
declare function ubxlb(x as uinteger, lb as integer) as uinteger
declare function bits2int(bsq as Matrix, nb as integer) as uinteger
'
' ----------------------------------------------------------------------
'
dim as uinteger v(0 to 3), w(0 to 3), x(0 to 15), y(0 to 15), i, k, m
'
dim as integer la, pwr
la = 2
'pwr = (2^(la)-1)^2
pwr = 2^(2*la)-1
dim as Matrix ab = Matrix(1, la*2)
'
' Sequence through bit representations .
'
for i=0 to pwr
    ' print " ";i;
    int2bit(ab , i, la )   ' input bits
    prt_m(ab)
    print " x "
    m = ubxlb(i , la)      ' target : upper bits times lower bits
    int2bit(ab , m, la )   ' target bits , converted
    prt_m(ab)
    '
    ' train : retain previous weights, adjust, then save for the
    ' next [ input , target ] pair .
    ' output : convert , check .
    '
next i
print
print " ---------------------------------------------------------- "
m = bits2int(ab, la)
print " m = ";m
'
' For all valid input data samples .
' test data -> pretrained NN -> output , convert, compare expected.
end
'
' ======================================================================
'
' Bit layout, LSB first : |0|1||2|3|
' place values :           1 2  4 8
'
'
function bits2int(bsq as Matrix, nb as integer) as uinteger
    '
    ' Matrix bits to uinteger ; nb is currently unused, the
    ' matrix bounds determine the width .
    '
    dim as uinteger x
    dim as single bt
    dim as integer i, j, nx, ny
    '
    nx = ubound(bsq.m, 1)
    ny = ubound(bsq.m, 2)
    '
    for j = 0 to ny
        for i = 0 to nx
            bt = bsq.m(i,j)
            bt = int(bt + 0.5)   ' round the element to 0 or 1
            bt = bt*(2^j)        ' column j carries place value 2^j
            x = x + bt
        next i
    next j
    '
    return x
'
end function
' ----------------------------------------------------------------------
'
sub int2bit(bsq as Matrix, x as uinteger, nb as integer)
    '
    ' Convert an integer to 2*nb bits, LSB first,
    ' and assign them to the matrix elements .
    '
    dim as integer i, j, nx, ny
    '
    nx = ubound(bsq.m, 1)
    ny = ubound(bsq.m, 2)
    '
    for j = 0 to nb-1
        for i = 0 to nx
            bsq.m(i,j) = -Bit(x,j)         ' lower nb bits
            bsq.m(i,j+nb) = -Bit(x,j+nb)   ' upper nb bits
        next i
    next j
'
'
end sub
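'
' Worked example : int2bit(ab, 6, 2) stores the bits of 6 = 0110b
' LSB first, so row 0 of ab becomes [0, 1, 1, 0]
' ( bit 0 = 0 , bit 1 = 1 , bit 2 = 1 , bit 3 = 0 ) .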
'
' ----------------------------------------------------------------------
'
function ubxlb(x as uinteger, lb as integer) as uinteger
    '
    ' Upper bits x Lower bits , multiplication .
    '
    dim as integer pwr
    dim as uinteger a, b, c
    '
    pwr = 2^lb
    '
    b = int(x/pwr)   ' upper lb bits : x \ 2^lb
    a = x - pwr*b    ' lower lb bits : x mod 2^lb
    c = a * b        ' their product
    ' print " ";a;" , ";b;" , ";c
    '
    return c
'
'
end function
Also from this video:
https://www.youtube.com/watch?v=Ilg3gGewQ5U
Others have been suggested by gunslinger:
https://www.youtube.com/watch?v=hfMk-kjRv4c
Re: Neural Networks 1
I used the previous code to train, then test, my
existing NN.
It tends to remember the last training epoch, using those
weights as the default, irrespective of the input.
Something is amiss.
If this is overfitting, then there are ways to
compensate for it; ways I'm not yet familiar
with.
Early stopping, also called early termination, looks like one possibility.
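For reference, here's a minimal sketch of how early stopping might
look; TrainOneEpoch and ValidationError are hypothetical placeholders,
not routines from any code in this thread:
Code: Select all
' Early stopping, a minimal sketch. TrainOneEpoch() and
' ValidationError() are hypothetical placeholders, not part of
' any code posted in this thread.
Dim As Double best_err = 1e30, err1
Dim As Integer epoch, stale = 0
Const patience = 20                      ' epochs to wait for improvement
For epoch = 1 To 10000
    TrainOneEpoch()                      ' one pass over the training pairs
    err1 = ValidationError()             ' error on held-out samples
    If err1 < best_err Then
        best_err = err1                  ' improved : remember the score
        stale = 0                        ' ( ideally also save the weights )
    Else
        stale += 1
    End If
    If stale >= patience Then Exit For   ' stop once progress stalls
Next epoch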
Time to examine NN code from different languages and replicate
the previous training and test arrangement within those.
With FreeBASIC I only need to read about a method and I'm able to
eventually visualize what needs to be done.
Re: Neural Networks 1
I've been searching for a free coding A.I. that automatically checks the code it generates
from a user's, or other, prompt. There doesn't appear to be such a beast.
This is different from what I usually do: a mostly A.I.-generated piece of code.
On this occasion, from DeepSeek, I didn't ask what the sources were; maybe next time.
Other coders may want to test this, examine how backpropagation is being performed, and use
the knowledge to produce their own unique code.
I was about to purchase a $200 book that explained backpropagation; now that's not so necessary.
Code: Select all
/'
DS_NN2a.bas
(c) Copyright 2025, sciwiseg@gmail.com
Generated from DeepSeek, edited and tested by myself.
Some similarities to code I've written and uploaded.
'/
' Define matrix operations
Type Matrix
    rows As Integer
    cols As Integer
    ReDim data1(1,1) As Double
End Type
' Initialize a matrix
Sub MatrixInit(m As Matrix, rows As Integer, cols As Integer)
    m.rows = rows
    m.cols = cols
    ReDim m.data1(rows - 1, cols - 1)
End Sub
' Multiply two matrices
Function MatrixMultiply(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, b.cols)
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To b.cols - 1
            result.data1(i, j) = 0
            For k As Integer = 0 To a.cols - 1
                result.data1(i, j) += a.data1(i, k) * b.data1(k, j)
            Next k
        Next j
    Next i
    Return result
End Function
' Add two matrices
Function MatrixAdd(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, a.cols)
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To a.cols - 1
            result.data1(i, j) = a.data1(i, j) + b.data1(i, j)
        Next j
    Next i
    Return result
End Function
' Subtract two matrices
Function MatrixSubtract(a As Matrix, b As Matrix) As Matrix
    Dim result As Matrix
    MatrixInit(result, a.rows, a.cols)
    For i As Integer = 0 To a.rows - 1
        For j As Integer = 0 To a.cols - 1
            result.data1(i, j) = a.data1(i, j) - b.data1(i, j)
        Next j
    Next i
    Return result
End Function
' Apply a function (e.g., sigmoid) to a matrix
Sub MatrixApplyFunc(m As Matrix, func As Function (x As Double) As Double)
    For i As Integer = 0 To m.rows - 1
        For j As Integer = 0 To m.cols - 1
            m.data1(i, j) = func(m.data1(i, j))
        Next j
    Next i
End Sub
' Sigmoid activation function
Function Sigmoid(x As Double) As Double
    Return 1 / (1 + Exp(-x))
End Function
' Derivative of sigmoid, written in terms of the sigmoid output:
' if y = Sigmoid(x) then dy/dx = y * (1 - y), so the argument here
' is the already-activated output, not the raw input.
Function SigmoidDerivative(x As Double) As Double
    Return x * (1 - x)
End Function
' -------------------------------- new ---------------------------------
Type NeuralNetwork
    num_layers As Integer
    redim layer_sizes(0) As Integer
    redim weights(0) As Matrix
    redim biases(0) As Matrix
End Type
Sub NNInit(nn As NeuralNetwork, layer_sizes() As Integer)
    nn.num_layers = UBound(layer_sizes) + 1
    redim nn.layer_sizes(nn.num_layers)
    dim i as integer
    for i=0 to nn.num_layers
        nn.layer_sizes(i) = layer_sizes(i)
    next i
    ReDim nn.weights(nn.num_layers - 2)
    ReDim nn.biases(nn.num_layers - 2)
    For i As Integer = 0 To nn.num_layers - 2
        MatrixInit(nn.weights(i), layer_sizes(i + 1), layer_sizes(i))
        MatrixInit(nn.biases(i), layer_sizes(i + 1), 1)
        ' Randomize weights and biases
        For j As Integer = 0 To layer_sizes(i + 1) - 1
            For k As Integer = 0 To layer_sizes(i) - 1
                nn.weights(i).data1(j, k) = Rnd * 2 - 1 ' Range: -1 to 1
            Next k
            nn.biases(i).data1(j, 0) = Rnd * 2 - 1
        Next j
    Next i
End Sub
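' Shape convention used above : weights(i) has layer_sizes(i + 1) rows
' and layer_sizes(i) columns, so each layer computes the column vector
' weights(i) * input + biases(i) , as NNFeedforward does below.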
Function NNFeedforward(nn As NeuralNetwork, input1 As Matrix) As Matrix
    Dim layer_output As Matrix = input1
    For i As Integer = 0 To nn.num_layers - 2
        layer_output = MatrixMultiply(nn.weights(i), layer_output)
        layer_output = MatrixAdd(layer_output, nn.biases(i))
        MatrixApplyFunc(layer_output, @Sigmoid)
    Next i
    Return layer_output
End Function
Sub NNTrain(nn As NeuralNetwork, input1 As Matrix, target As Matrix, learning_rate As Double)
    ' Feedforward
    Dim layer_outputs(nn.num_layers - 1) As Matrix
    layer_outputs(0) = input1
    For i As Integer = 0 To nn.num_layers - 2
        layer_outputs(i + 1) = MatrixMultiply(nn.weights(i), layer_outputs(i))
        layer_outputs(i + 1) = MatrixAdd(layer_outputs(i + 1), nn.biases(i))
        MatrixApplyFunc(layer_outputs(i + 1), @Sigmoid)
    Next i
    ' Backpropagation
    Dim errors(nn.num_layers - 1) As Matrix
    errors(nn.num_layers - 1) = MatrixSubtract(target, layer_outputs(nn.num_layers - 1))
    For i As Integer = nn.num_layers - 2 To 0 Step -1
        Dim gradients As Matrix = layer_outputs(i + 1)
        MatrixApplyFunc(gradients, @SigmoidDerivative)
        For j As Integer = 0 To gradients.rows - 1
            gradients.data1(j, 0) *= errors(i + 1).data1(j, 0) * learning_rate
        Next j
        Dim layer_outputs_T As Matrix
        MatrixInit(layer_outputs_T, layer_outputs(i).cols, layer_outputs(i).rows)
        For j As Integer = 0 To layer_outputs(i).rows - 1
            For k As Integer = 0 To layer_outputs(i).cols - 1
                layer_outputs_T.data1(k, j) = layer_outputs(i).data1(j, k)
            Next k
        Next j
        Dim weights_deltas As Matrix = MatrixMultiply(gradients, layer_outputs_T)
        nn.weights(i) = MatrixAdd(nn.weights(i), weights_deltas)
        nn.biases(i) = MatrixAdd(nn.biases(i), gradients)
        If i > 0 Then
            Dim weights_T As Matrix
            MatrixInit(weights_T, nn.weights(i).cols, nn.weights(i).rows)
            For j As Integer = 0 To nn.weights(i).rows - 1
                For k As Integer = 0 To nn.weights(i).cols - 1
                    weights_T.data1(k, j) = nn.weights(i).data1(j, k)
                Next k
            Next j
            errors(i) = MatrixMultiply(weights_T, errors(i + 1))
        End If
    Next i
End Sub
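' Reading off the code above, the update rule is, for layer i with
' y = layer_outputs(i + 1) and e = errors(i + 1) :
'   gradients(j) = y(j) * (1 - y(j)) * e(j) * learning_rate
'   weights(i)  += gradients * Transpose(layer_outputs(i))
'   biases(i)   += gradients
' and the error handed to the next-lower layer is
'   errors(i) = Transpose(weights(i)) * errors(i + 1) ,
' where the transpose is taken after weights(i) was updated; a detail
' worth comparing against other backpropagation references.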
' Extended FreeBasic Neural Network with Multiple Hidden Layers
' ----------------------------- Main -----------------------------------
' Example usage
Dim nn As NeuralNetwork
' Define layer sizes: 2 input nodes, 4 nodes in the first hidden layer,
' 3 nodes in the second hidden layer, 5 nodes in the third hidden layer,
' and 1 output node
Dim layer_sizes(4) As Integer = {2, 4, 3, 5, 1}
' Define all XOR input-output pairs
Dim inputs(3) As Matrix
Dim targets(3) As Matrix
' [1, 0] -> 1
MatrixInit(inputs(0), 2, 1)
inputs(0).data1(0, 0) = 1
inputs(0).data1(1, 0) = 0
MatrixInit(targets(0), 1, 1)
targets(0).data1(0, 0) = 1
' [0, 1] -> 1
MatrixInit(inputs(1), 2, 1)
inputs(1).data1(0, 0) = 0
inputs(1).data1(1, 0) = 1
MatrixInit(targets(1), 1, 1)
targets(1).data1(0, 0) = 1
' [1, 1] -> 0
MatrixInit(inputs(2), 2, 1)
inputs(2).data1(0, 0) = 1
inputs(2).data1(1, 0) = 1
MatrixInit(targets(2), 1, 1)
targets(2).data1(0, 0) = 0
' [0, 0] -> 0
MatrixInit(inputs(3), 2, 1)
inputs(3).data1(0, 0) = 0
inputs(3).data1(1, 0) = 0
MatrixInit(targets(3), 1, 1)
targets(3).data1(0, 0) = 0
' Initialize the neural network
NNInit(nn, layer_sizes())
' Train the network with all XOR pairs
Dim as integer i,j
ReDim output1(3) As Matrix
For i = 1 To 10000
    For j = 0 To 3
        NNTrain(nn, inputs(j), targets(j), 0.1)
    Next j
    ' Print progress every 1000 iterations
    If i Mod 1000 = 0 Then
        Print "Iteration: "; i
        For j = 0 To 3
            output1(j) = NNFeedforward(nn, inputs(j))
            Print "Input: ["; inputs(j).data1(0, 0); ", "; inputs(j).data1(1, 0); "] -> Output: "; output1(j).data1(0, 0)
        Next j
        Print "-------------------------"
    End If
Next i
end
'
' ======================================================================
'
/'
Iteration: 1000
Input: [1, 0] -> Output: 0.85
Input: [0, 1] -> Output: 0.84
Input: [1, 1] -> Output: 0.12
Input: [0, 0] -> Output: 0.11
-------------------------
Iteration: 2000
Input: [1, 0] -> Output: 0.92
Input: [0, 1] -> Output: 0.91
Input: [1, 1] -> Output: 0.08
Input: [0, 0] -> Output: 0.07
-------------------------
...
Final Test Results:
Input: [1, 0] -> Output: 0.98
Input: [0, 1] -> Output: 0.97
Input: [1, 1] -> Output: 0.02
Input: [0, 0] -> Output: 0.01
Conclusion
The code supports any number of layers, limited only by memory and system constraints.
You can experiment with different architectures by modifying the layer_sizes array.
For most practical purposes, networks with 2 to 10 layers are common, but
deeper networks can also be implemented if needed.
'/
Re: Neural Networks 1
Nice example, luxan. But it's too fast on console output for detailed checking; perhaps some Sleep commands are missing? I have tried, but without big success.
Re: Neural Networks 1
Aborting due to runtime error 6 (out of bounds array access) at line 110 of C:\...\FBIde0.4.6r4_fbc1.20.0\FBIDETEMP.bas::NNINIT(),
'LAYER_SIZES' accessed with invalid index = 5, must be between 0 and 4
Re: Neural Networks 1
Feel free to put Sleep commands wherever you want.
That's a piece of code I edited without too much scrutiny;
I don't want to be uploading faulty code.
The error you detected wasn't found when I ran via Geany; perhaps
I should use FBIde also.
Here's a possible correction; it didn't generate an error on my setup.
Code: Select all
Sub NNInit(nn As NeuralNetwork, layer_sizes() As Integer)
    nn.num_layers = UBound(layer_sizes) + 1
    redim nn.layer_sizes(nn.num_layers-1) ' <<<<<<
    dim i as integer
    for i=0 to nn.num_layers-1 ' <<<<<<
        nn.layer_sizes(i) = layer_sizes(i)
    next i
    ' ... remainder of NNInit unchanged ...
Re: Neural Networks 1
I am using FbEdit too, and the WinFBE editor. Your example doesn't work and process properly; it's kind of aborting after running the iterations.
Re: Neural Networks 1
Run-time error detected by compiling with the '-exx' option.
Your above correction fixes the bug.
Re: Neural Networks 1
It's good to have others examine one's code.
I finally used a few more debug options.
So, within the Geany IDE I set the build compile command to fbc -w all -exx -g "%f"
Using that, no errors were indicated.
Then, using the terminal, I ran fbc -w all -exx -g DS_NN2a.bas
After that I ran gdb ./DS_NN2a, then r with debug info enabled; the output looked like this:
Iteration: 10000
Input: [ 1, 0] -> Output: 0.9903832225133128
Input: [ 0, 1] -> Output: 0.990302918114894
Input: [ 1, 1] -> Output: 0.01882283141163267
Input: [ 0, 0] -> Output: 0.01882289205749268
-------------------------
[Inferior 1 (process 9780) exited normally]
(gdb)
Apparently, this means that it ran successfully and without error.
A few more tests of the Neural Network are appropriate, to determine how well it
deals with unexpected inputs.
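For instance, something like this, reusing the Matrix type and
NNFeedforward from DS_NN2a.bas; the probe values are arbitrary picks
of mine, not part of the XOR training set:
Code: Select all
' Probe the trained network with an input it never saw during
' training ; the values 0.5, 0.5 are arbitrary, for illustration only.
Dim probe As Matrix
MatrixInit(probe, 2, 1)
probe.data1(0, 0) = 0.5
probe.data1(1, 0) = 0.5
Dim result As Matrix = NNFeedforward(nn, probe)
Print "Input: [ 0.5, 0.5 ] -> Output: "; result.data1(0, 0)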