## Making an ANN

User projects written in or related to FreeBASIC.
BasicCoder2
Posts: 3391
Joined: Jan 01, 2009 7:03

### Making an ANN

For some time I have wanted to code an ANN black box, but I couldn't find a suitable tutorial that showed a math-challenged reader the arithmetic involved in backpropagation instead of explaining it with pure calculus.
This code is based on the tutorial at
https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
Because the tutorial gives the actual numbers, you can check that you have coded it correctly when your numbers come out the same.
The computations in the example below have been written out in full rather than giving each layer of the ANN its own for/next loop. Hopefully I will fix that in the near future. I still don't understand it well enough to add more hidden layers, but ultimately I would like to end up with a generic ANN that I know how to use in my own AI examples.

Code:

```basic
' This code is based on the tutorial at
' https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
' It prints out the new values of the weights after one forward and back pass.
screenres 1280,600,32
color rgb(0,0,0),rgb(255,255,255):cls

function sigmoid(X as double) as double
    return 1/(1 + EXP((-1)*X))
end function

dim shared as double i1,i2                      'inputs
dim shared as double w1,w2,w3,w4,w5,w6,w7,w8    'weights
dim shared as double nw1,nw2,nw3,nw4,nw5,nw6,nw7,nw8  'updated weights
dim shared as double netH1,netH2,neto1,neto2    'sum of weights * inputs
dim shared as double outH1,outH2,outo1,outo2    'sigmoid outputs
dim shared as double targeto1,targeto2          'targets
dim shared as double b1,b2                      'bias
dim shared as double Etotal,Eo1,Eo2             'errors
dim shared as double learningRate

learningRate = 0.5

'initialize inputs
i1 = 0.05
i2 = 0.10

'initialize bias
b1 = 0.35
b2 = 0.60

'initialize weights
w1 = 0.15
w2 = 0.20
w3 = 0.25
w4 = 0.30
w5 = 0.40
w6 = 0.45
w7 = 0.50
w8 = 0.55

'target values
targeto1 = 0.01
targeto2 = 0.99

'=============================================================================
'        THE FORWARD PASS
'=============================================================================
netH1 = w1*i1 + w2*i2 + b1 * 1
outH1 = sigmoid(netH1)
netH2 = w3*i1 + w4*i2 + b1 * 1
outH2 = sigmoid(netH2)
neto1 = w5*outH1 + w6*outH2 + b2 * 1
outo1 = sigmoid(neto1)
neto2 = w7*outH1 + w8*outH2 + b2 * 1
outo2 = sigmoid(neto2)

'COMPUTE ERROR
'=============
Eo1 = (targeto1 - outo1)^2 * 0.5
Eo2 = (targeto2 - outo2)^2 * 0.5
Etotal = Eo1 + Eo2

'=============================================================================
'        THE BACKWARD PASS
'=============================================================================
dim as double pX,pA,pB,pC   'used to hold results (double, so the printed
                            'values match the tutorial's precision)
'=============================================================================
'  COMPUTE NEW WEIGHT VALUES IN OUTPUT LAYER
'=============================================================================
pA = -(targeto1-outo1)
pB = outo1 * (1 - outo1)
pC = outH1
pX = pA * pB * pC
nw5 = w5 - learningRate * pX
'=============================================================================
pA = -(targeto1-outo1)
pB = outo1 * (1 - outo1)
pC = outH2                    '<--- change
pX = pA * pB * pC
nw6 = w6 - learningRate * pX
'=============================================================================
pA = -(targeto2-outo2)
pB = outo2 * (1 - outo2)
pC = outH1
pX = pA * pB * pC
nw7 = w7 - learningRate * pX
'=============================================================================
pA = -(targeto2-outo2)
pB = outo2 * (1 - outo2)
pC = outH2
pX = pA * pB * pC
nw8 = w8 - learningRate * pX
'=============================================================================
dim as double temp,J,K,L,M,N   'used to hold results
'=============================================================================
'  COMPUTE NEW WEIGHT VALUES IN HIDDEN LAYER
'=============================================================================
'      NEW w1  (w1 feeds H1, and H1 feeds o1 via w5 and o2 via w7)
'=============================================================================
pA = -(targeto1-outo1)
pB = outo1 * (1 - outo1)
temp = pA * pB
J = temp * w5
pA = -(targeto2-outo2)
pB = outo2 * (1 - outo2)
temp = pA * pB
K = temp * w7
L = J + K
M = outH1 * (1 - outH1)
N = L * M * i1
nw1 = w1 - learningRate * N
'=============================================================================
'      NEW w2  (same as w1 except the input is i2)
'=============================================================================
pA = -(targeto1-outo1)
pB = outo1 * (1 - outo1)
temp = pA * pB
J = temp * w5
pA = -(targeto2-outo2)
pB = outo2 * (1 - outo2)
temp = pA * pB
K = temp * w7
L = J + K
M = outH1 * (1 - outH1)   'outH1, not outH2: w2 still feeds hidden node H1
N = L * M * i2
nw2 = w2 - learningRate * N
'=============================================================================
'      NEW w3  (w3 feeds H2, and H2 feeds o1 via w6 and o2 via w8)
'=============================================================================
pA = -(targeto1-outo1)
pB = outo1 * (1 - outo1)
temp = pA * pB
J = temp * w6             'w6, not w5: H2's weight to output o1
pA = -(targeto2-outo2)
pB = outo2 * (1 - outo2)
temp = pA * pB
K = temp * w8             'w8, not w7: H2's weight to output o2
L = J + K
M = outH2 * (1 - outH2)
N = L * M * i1            'i1, not i2: w3 connects input i1 to H2
nw3 = w3 - learningRate * N
'=============================================================================
'      NEW w4  (same as w3 except the input is i2)
'=============================================================================
pA = -(targeto1-outo1)
pB = outo1 * (1 - outo1)
temp = pA * pB
J = temp * w6
pA = -(targeto2-outo2)
pB = outo2 * (1 - outo2)
temp = pA * pB
K = temp * w8
L = J + K
M = outH2 * (1 - outH2)
N = L * M * i2
nw4 = w4 - learningRate * N
'=============================================================================
' these should have the same values as given in the tutorial:
' w1=0.149780716 w2=0.19956143  w3=0.24975114  w4=0.29950229
' w5=0.35891648  w6=0.408666186 w7=0.511301270 w8=0.561370121
print "new w1 = ";nw1
print "new w2 = ";nw2
print "new w3 = ";nw3
print "new w4 = ";nw4
print "new w5 = ";nw5
print "new w6 = ";nw6
print "new w7 = ";nw7
print "new w8 = ";nw8
sleep
```
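The expanded arithmetic above can also be checked against a loop-per-layer version. This is a short Python translation (a sketch I wrote for cross-checking, not part of the FreeBASIC program) that stores the weights in arrays and reproduces the tutorial's numbers:

```python
# Loop-based version of the same 2-input / 2-hidden / 2-output network,
# reproducing Matt Mazur's worked example after one forward and back pass.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

lr = 0.5
i = [0.05, 0.10]                    # inputs i1, i2
t = [0.01, 0.99]                    # targets for o1, o2
wh = [[0.15, 0.20], [0.25, 0.30]]   # wh[h][j]: input j -> hidden h (w1..w4)
wo = [[0.40, 0.45], [0.50, 0.55]]   # wo[o][h]: hidden h -> output o (w5..w8)
b1, b2 = 0.35, 0.60                 # hidden and output biases

# forward pass
out_h = [sigmoid(sum(wh[h][j] * i[j] for j in range(2)) + b1) for h in range(2)]
out_o = [sigmoid(sum(wo[o][h] * out_h[h] for h in range(2)) + b2) for o in range(2)]
Etotal = sum(0.5 * (t[o] - out_o[o]) ** 2 for o in range(2))

# backward pass: output-layer deltas, then hidden-layer deltas
delta_o = [-(t[o] - out_o[o]) * out_o[o] * (1 - out_o[o]) for o in range(2)]
new_wo = [[wo[o][h] - lr * delta_o[o] * out_h[h] for h in range(2)]
          for o in range(2)]
delta_h = [sum(delta_o[o] * wo[o][h] for o in range(2)) * out_h[h] * (1 - out_h[h])
           for h in range(2)]
new_wh = [[wh[h][j] - lr * delta_h[h] * i[j] for j in range(2)]
          for h in range(2)]

print("Etotal    =", Etotal)   # ~0.298371109
print("new w1-w4 =", new_wh)   # ~0.149780716 0.19956143 0.24975114 0.29950229
print("new w5-w8 =", new_wo)   # ~0.35891648 0.408666186 0.51130127 0.561370121
```

Because the hidden-layer loop only refers to the arrays, adding more hidden nodes is just a matter of widening `wh` and `wo`.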
Last edited by BasicCoder2 on Jul 09, 2018 5:52, edited 1 time in total.
dafhi
Posts: 1241
Joined: Jun 04, 2005 9:51

### Re: Making an ANN

I highly recommend Luis Serrano's videos, especially this one.