Neural net

Post your FreeBASIC source, examples, tips and tricks here. Please don’t post code without including an explanation.
angros47
Posts: 2323
Joined: Jun 21, 2005 19:04

Neural net

Post by angros47 »

Here is a simple neural net (I ported the code from here: http://www.blitzbasic.com/codearcs/code ... ?code=1158)

Code: Select all

Const C_maxInputs = 5000,C_maxNPL = 500,C_maxLayers = 5
Const C_maxNet = 16'50000 'Max number of neurons any single neuron can be linked to (on a per-neuron basis, so 2 neurons is limit*2 and so on, up to infinity)
Const nBias = - 1 'Do not change! (cue everyone changing it, crashing their machines... ;->)
Const C_sypAct = 1 'Activation value. Change to suit your needs.
Const V_maxVec = 500 
Const c_maxHis = 500


Type vector
    size as integer
    v(V_maxVec) as single
End Type

Type neuron
    numNet as integer
    weight(C_maxNet+1) as single
    net(C_maxNet) as neuron ptr
End Type

Type nLayer
    neuron(C_maxNPL) as neuron ptr
    numNeurons as integer
    numIn as integer
End Type

Type neuralNet
	numInputs as integer
	numOutputs as integer
	numLayers as integer
	numNPL as integer
	nLayer(C_maxLayers) as nLayer ptr
End Type

Dim shared vTmp as vector ptr 'IO vector used by the neural nets
Dim shared as neuron ptr nFire(c_maxHis)
Dim shared as neuron ptr nNull(c_maxHis) 'For learning: a cycle's history of fired and null neurons

sub initFFnet() 
    vTmp = New vector
End sub

Function neuron(numNet as integer) as neuron ptr 'create a neuron with small random weights; weight(C_maxNet+1) doubles as its bias/threshold
	dim as neuron ptr neur = New neuron 
	For i as integer= 1 To numNet + 1
		neur->weight(i) = Rnd(1)*.4
	Next
	neur->weight(C_maxNet+1) = Rnd(1)*.6
	Return neur
End Function

Function nLayer(numNeurons as integer,numIn as integer) as nLayer ptr
	dim as nLayer ptr nLay = New nLayer
	nLay->numNeurons = numNeurons
	nLay->numIn = numIn
	For n as integer=1 To numNeurons
		nLay->neuron(n) = neuron(numIn)
	Next
	Return nLay
End Function

sub linkLayers(l1 as nLayer ptr,l2 as nLayer ptr,chains as integer = 0, preser as integer = 1) '1> 2<> 3<
	For n as integer= 1 To l1->numNeurons
		If preser=0 then
			l1->neuron(n)->numNet=0
		End If
		For t as integer= 1 To l2->numNeurons
			l1->neuron(n)->net(t+l1->neuron(n)->numNet) = l2->neuron(t)
		Next
		l1->neuron(n)->numNet = l1->neuron(n)->numNet+l2->numNeurons
	Next
	If chains<>0 then linkLayers(l2,l1,0,preser)
End sub 'To double-chain two layers, set chains to nonzero


Function neuralNet(numInputs as integer,numHidden as integer,numOutputs as integer,populate as integer= 1,initVecTmp as integer = 1) as neuralNet ptr
	dim as neuralNet ptr outp = New neuralNet
	outp->numInputs = numInputs
	outp->numOutputs = numOutputs
	outp->numLayers = numHidden
	If populate then
		outp->nLayer(1) = nLayer(numInputs,numInputs)
		outp->nLayer(2) = nLayer(numHidden,numInputs)
		outp->nLayer(3) = nLayer(numOutputs,numHidden)
		linkLayers(outp->nLayer(1),outp->nLayer(2),1)
		linkLayers(outp->nLayer(2),outp->nLayer(3),1,1)
	End If
	Return outp
End Function

sub punishNet() 'after a wrong answer: weaken the weights of fired neurons, strengthen the null ones
	If c_maxHis=0 then End
	For j as integer = 1 To c_maxHis
		If nFire(j) <>0 then
			For n as integer = 1 To c_maxNet
				nFire(j)->weight(n) = nFire(j)->weight(n) - rnd(1)
			Next
		EndIf
		If nNull(j) <>0 then
			For n as integer = 1 To c_maxNet
				nNull(j)->weight(n) = nNull(j)->weight(n) + rnd(1)
			Next
		EndIf
	Next
End Sub

sub rewardNet() 'after a right answer: reinforce the weights of fired neurons, slightly weaken the null ones
	If c_maxHis=0 then End
	For j as integer = 1 To c_maxHis
		If nFire(j) <>0 then
			For n as integer = 1 To c_maxNet
				nFire(j)->weight(n) = nFire(j)->weight(n) + 0.05 
			Next
		EndIf
		If nNull(j) <>0 then
			For n as integer = 1 To c_maxNet
				nNull(j)->weight(n) = nNull(j)->weight(n) - 0.02
			Next
		EndIf
	Next
End Sub

sub clearHistory()
	For j as integer = 1 To c_maxHis
		nFire(j) = 0
		nNull(j) = 0
	Next
End sub

sub pushVector(in as vector ptr,v as single) 'insert v at the front of the vector, shifting existing values up
	If in->size < V_maxVec then in->size=in->size+1 Else exit sub
	If in->Size > 1 then
		For i as integer=in->size-1 To 1 Step - 1
			in->v(i+1)=in->v(i)
		Next
	EndIf
	in->v(1) = v
End sub

Function sigmoid(in as single,round as single) as single
	Return ( 1. / (1. + Exp(-in / round)))
End Function


sub FFnetCycle(in as neuralNet ptr) ' input->[?> hidden ?> output >]->user/GA
	dim tWeight as single
	clearHistory
	For layer as integer = 1 To 3
		For i as integer = 1 To in->nLayer(layer)->numNeurons
			'weighted sum of this neuron's inputs, read from the IO vector
			tWeight = 0
			For n as integer = 1 To in->nLayer(layer)->numIn
				tWeight = tWeight + (vTmp->v(n) * in->nLayer(layer)->neuron(i)->weight(n))
			Next
			tWeight = tWeight + (in->nLayer(layer)->neuron(i)->weight(c_maxNet+1)) * nBias
			pushVector(vTmp,sigmoid(tWeight,C_sypAct))

			'record the neuron in this cycle's history: fired or null
			If vTmp->v(1) > in->nLayer(layer)->neuron(i)->weight(C_maxNet+1) then
				For j as integer = 1 To c_maxHis
					If nFire(j) = 0 then
						nFire(j) = in->nLayer(layer)->neuron(i)
						Exit for
					End If
				Next
			Else
				For j as integer = 1 To c_maxHis
					If nNull(j) = 0 then
						nNull(j) = in->nLayer(layer)->neuron(i)
						Exit for
					End If
				Next
			End If
		Next
	Next
End sub

sub setInput(i as integer,v as single) 
	If i>vTmp->size then vTmp->size = i
	vTmp->v(i) = v
End sub


Function getInput(i as integer)as single
	Return vTmp->v(i)
End Function

sub clearNetIO() 'call after each cycle, once the results/inputs are no longer needed
	vTmp->size = 0
	vTmp->v(1) = 0
End sub


Function getOutput(net as neuralNet ptr,n as integer,sum as integer = 0) as integer
	'If sum=0 then Return int(.5+ net->nLayer(3)->neuron(n)->weight(C_maxNet+1))


	For j as integer = 1 To c_maxHis
		If nFire(j) = net->nLayer(3)->neuron(n) then return 1
		If nNull(j) = net->nLayer(3)->neuron(n) then return 0
	Next
	return 0

End Function


A neural net is a system that can learn through virtual "neurons".

It's possible to define a network with several variables as inputs; the network will return some bits as outputs. At first it will return random results, but it's possible to train it (by "rewarding" and "punishing" it) to return correct results. The net will learn.

Here is a sample (with two inputs and two outputs):

Code: Select all

initFFnet

dim shared neur as neuralNet ptr

neur = neuralNet(2,8,2)

dim shared as integer val_out

dim i as single
dim c as string
dim e as integer

do
	input i,e

	clearNetIO

	setinput(1,i)
	setinput(2,e)

	FFnetCycle(neur)

	? getOutput(neur,1), getOutput(neur,2)

	print "is this correct?" : c = ucase(input(1))

	If c<>"Y" Then punishnet() : ?"No" else rewardnet : ?"Yes"
loop
It will take a lot of tries to teach the network something, so you can use a "trainer" (put it just before the "do"):

Code: Select all

for x as integer = 0 to 100
	if rnd(1)>.2 then i=1-i
	if rnd(1)>.2 then e=1-e

	clearNetIO

	setinput(1,i)
	setinput(2,e)

	FFnetCycle(neur)

	if getOutput(neur,1)=e and getOutput(neur,2)=i then c="Y" else c="N"

	If c<>"Y" Then punishnet() : x=0 else rewardnet
next
This will teach the network to simply return "1" if the input was "1", and zero if the input was zero. Of course, you can also teach it something else (try swapping the results, for example).

Neural networks are used for many tasks, such as OCR or speech recognition.
pestery
Posts: 493
Joined: Jun 16, 2007 2:00
Location: Australia

Post by pestery »

I had a play around with this type of stuff a while ago. Very interesting, although I never had much luck getting meaningful results from it. I tried rigging it into my webcam for kicks. Also had a look at FANN (Fast Artificial Neural Networks) from SourceForge somewhere. I still have the headers I ported, and the lib file (Windows), from version 2.1.0 if anyone is interested. I think they were complete.
pestery
Posts: 493
Joined: Jun 16, 2007 2:00
Location: Australia

Post by pestery »

Back again. Today I thought I'd have another go at neural nets, and for once it's actually working pretty well with character (number) recognition. It takes a few cycles before it settles, say 20 seconds because of the Sleep commands, but I think it looks cool :)

Code: Select all

' Declare functions (the xxx_len sizes are referenced to 1)
Declare Sub draw_value(ByVal x As Integer, ByVal y As Integer, ByVal v As Single Ptr, ByVal w As Integer, ByVal h As Integer, ByVal scale As Integer = 5)
Declare Sub setup(ByVal v As Single Ptr, ByVal w As Integer, ByVal h As Integer, ByVal value As Integer = 1)
Declare Sub solve_fwd(ByVal v1_ptr As Single Ptr, ByVal v1_len As Integer, ByVal v2_ptr As Single Ptr, ByVal v2_len As Integer, ByVal mult As Single Ptr)
Declare Sub solve_rev(ByVal v1_ptr As Single Ptr, ByVal v1_len As Integer, ByVal v2_ptr As Single Ptr, ByVal v2_len As Integer, ByVal mult As Single Ptr)
Declare Function mult_increase(ByVal v1_ptr As Single Ptr, ByVal v1_len As Integer, ByVal v2_ptr As Single Ptr, ByVal v2_len As Integer, ByVal mult As Single Ptr, ByVal factor As Single = 0.1) As Integer ' Returns the number of changes
Declare Function mult_decrease(ByVal v1_ptr As Single Ptr, ByVal v1_len As Integer, ByVal v2_ptr As Single Ptr, ByVal v2_len As Integer, ByVal mult As Single Ptr, ByVal factor As Single = 0.1) As Integer ' Returns the number of changes

' This sets up a feedforward neural network, with 2 binary layers (in and out)

' Define constants
Const As Integer in_width = 11 ' Input image width
Const As Integer in_height = 11 ' Input image height
Const As Integer in_size = in_width * in_height ' Input image total length
Const As Integer out_size = 20 ' Output layer total length

' Define variables
Dim As Single in1(1 To in_size) ' Input layer 1 (source data)
Dim As Single in2(1 To in_size) ' Input layer 2 (calculated using out1 and mult, feed backward)
Dim As Single out1(1 To out_size) ' Output layer 1 (calculated using in1 and mult, feed forward)
Dim As Single out2(1 To out_size) ' Output layer 2 (calculated using in2 and mult, feed forward)
Dim As Single mult(1 To (in_size * out_size)) ' Multiplier array


' ---- Main program ----
Dim As Integer count = 0, value = 0, check, count_last = -1
ScreenRes 640, 480, 32

' Fill the mult array with random weight ranging from -1.0 to 1.0
For i As Integer = 1 To (in_size * out_size)
   mult(i) = (Rnd * 2) - 1
Next

' Fill input layer 1 with the pixel data from value
setup(@in1(1), in_width, in_height, value)

' Repeat until a key is pressed
While Inkey = ""

   ' Use the feed forward method to solve output layer 1 using input layer 1
   ' This sets the "brain state" based on the input it sees
   solve_fwd(@in1(1), in_size, @out1(1), out_size, @mult(1))

   ' Use the feed forward method IN REVERSE to solve input layer 2 using output layer 1
   ' This uses the "brain state" to imagine what it is seeing
   solve_rev(@in2(1), in_size, @out1(1), out_size, @mult(1))

   ' Use the feed forward method to solve output layer 2 using input layer 2
   ' This recalculates the brain state, and is only for training purposes
   solve_fwd(@in2(1), in_size, @out2(1), out_size, @mult(1))

   ' Update the mult matrix
   ' Increase the mult values for neurons that match their input when the input is real
   ' Decrease the mult values for neurons that match their input when the input is imagined
   ' Don't ask me how this works because I only partially understand it but it does work
   check = 0
   check += mult_increase(@in1(1), in_size, @out1(1), out_size, @mult(1), 0.1)
   check -= mult_decrease(@in2(1), in_size, @out2(1), out_size, @mult(1), 0.1)

   ' Display everything
   ScreenLock
   Cls
   draw_value( 10, 50, @in1(1), in_width, in_height, 10) ' Show input layer 1
   draw_value( 10, 20, @out1(1), out_size, 1, 10)        ' Show output layer 1
   draw_value(200, 50, @in2(1), in_width, in_height, 10) ' Show input layer 2
   draw_value(200, 35, @out2(1), out_size, 1, 10)        ' Show output layer 2
   Print "Count: " & count_last
   ScreenUnLock

   ' If the number of changes in mult_increase equals the number in
   ' mult_decrease then presume it's good and move on to the next number
   If check = 0 Then
      If value < 9 Then value += 1 Else value = 0
      Sleep 300
      setup(@in1(1), in_width, in_height, value)
      count_last = count
      count = 0
   Else
      count += 1
      Sleep 150
   EndIf

Wend
End
' ---- End of main program ----


' Functions
Sub draw_value(ByVal x As Integer, ByVal y As Integer, ByVal v As Single Ptr, ByVal w As Integer, ByVal h As Integer, ByVal scale As Integer = 5)

   ' Display the layer on the screen graphically
   ' x and y are the screen position origin
   ' w and h are the width and height of the layer in neurons, referenced to 1
   ' When setting w and h be careful not to overrun the end of the layer; it is just an array, after all

   Line (x - 1, y - 1)-(x - 1 + (w * scale), y - 1 + (h * scale)), RGB(127, 127, 127), BF
   w -= 1
   h -= 1
   Dim As Integer i, j, xx, yy, s = scale - 2, c
   Dim As Single Ptr p = v
   If s < 0 Then s = 0
   For j = 0 To h
      For i = 0 To w
         xx = x + (i * scale)
         yy = y + (j * scale)
         c = *p * 255
         p += 1
         If c < 0 Then c = 0 Else If c > 255 Then c = 255
         Line (xx, yy)-(xx + s, yy + s), RGB(c, c, c), BF
      Next
   Next
End Sub
Sub setup(ByVal v As Single Ptr, ByVal w As Integer, ByVal h As Integer, ByVal value As Integer = 1)

   ' This function draws a single digit number on to the screen
   ' It then copies the state of the pixels (on or off) in to
   ' the layer. The size of the layer is width (w) * height (h)

   Dim As Integer x, xmax = w - 1
   Dim As Integer y, ymax = h - 1
   Cls
   Draw String (100, 100), Str(value), &hFFFFFFFF
   For y = 0 To ymax
      For x = 0 To xmax
         If Point(x + 98, y + 98) = &hFFFFFFFF Then ' Check if the pixel is lit
            v[x + (y * w)] = 1
         Else
            v[x + (y * w)] = 0
         EndIf
      Next
   Next
End Sub
Sub solve_fwd(ByVal v1_ptr As Single Ptr, ByVal v1_len As Integer, ByVal v2_ptr As Single Ptr, ByVal v2_len As Integer, ByVal mult As Single Ptr)

   ' Use the standard feed forward method to calculate layer 2 using layer 1 and the mult matrix
   ' The result of the neurons in layer 2 will be:
   '   If the sum is greater or equal to zero, the result will be 1
   '   If the sum is less than zero, the result will be 0

   Dim As Integer i, imax = v1_len - 1
   Dim As Integer j, jmax = v2_len - 1
   For j = 0 To jmax
      v2_ptr[j] = 0
      For i = 0 To imax
         v2_ptr[j] += v1_ptr[i] * mult[i + (j * v1_len)]
      Next
      If v2_ptr[j] >= 0 Then v2_ptr[j] = 1 Else v2_ptr[j] = 0 ' Gives binary output, 0 or 1
      'v2_ptr[j] = 1 / (1 + (2.718281828 ^ (-v2_ptr[j] * 5))) ' Gives float output, 0.0 to 1.0; the 5 is just a multiplier, the 2.718281828 is Exp(1)
   Next
End Sub
Sub solve_rev(ByVal v1_ptr As Single Ptr, ByVal v1_len As Integer, ByVal v2_ptr As Single Ptr, ByVal v2_len As Integer, ByVal mult As Single Ptr)

   ' Use the standard feed forward method, however swap the order of the layers
   ' This creates a feed backward method to calculate layer 1 using layer 2 and the mult matrix
   ' The result of the neurons in layer 1 will be:
   '   If the sum is greater or equal to zero, the result will be 1
   '   If the sum is less than zero, the result will be 0

   Dim As Integer i, imax = v1_len - 1
   Dim As Integer j, jmax = v2_len - 1
   For i = 0 To imax
      v1_ptr[i] = 0
      For j = 0 To jmax
         v1_ptr[i] += v2_ptr[j] * mult[i + (j * v1_len)]
      Next
      If v1_ptr[i] >= 0 Then v1_ptr[i] = 1 Else v1_ptr[i] = 0 ' Gives binary output, 0 or 1
      'v1_ptr[i] = 1 / (1 + (2.718281828 ^ (-v1_ptr[i] * 5))) ' Gives float output, 0.0 to 1.0; the 5 is just a multiplier, the 2.718281828 is Exp(1)
   Next
End Sub
Function mult_increase(ByVal v1_ptr As Single Ptr, ByVal v1_len As Integer, ByVal v2_ptr As Single Ptr, ByVal v2_len As Integer, ByVal mult As Single Ptr, ByVal factor As Single = 0.1) As Integer ' Returns the number of changes

   ' If the neuron in layer 1 and its connected neuron in layer 2 are in the
   ' same state (both on, or both off) then the connecting multiplier value
   ' should be increased

   ' The count value is only used as a reference and can be removed

   Dim As Integer count = 0
   Dim As Integer i, imax = v1_len - 1
   Dim As Integer j, jmax = v2_len - 1
   For j = 0 To jmax
      For i = 0 To imax
         If (v1_ptr[i] > 0.5) = (v2_ptr[j] > 0.5) Then
            mult[i + (j * v1_len)] += factor
            count += 1
         EndIf
      Next
   Next
   Return count
End Function
Function mult_decrease(ByVal v1_ptr As Single Ptr, ByVal v1_len As Integer, ByVal v2_ptr As Single Ptr, ByVal v2_len As Integer, ByVal mult As Single Ptr, ByVal factor As Single = 0.1) As Integer ' Returns the number of changes

   ' If the neuron in layer 1 and its connected neuron in layer 2 are in the
   ' same state (both on, or both off) then the connecting multiplier value
   ' should be decreased

   ' The count value is only used as a reference and can be removed

   Dim As Integer count = 0
   Dim As Integer i, imax = v1_len - 1
   Dim As Integer j, jmax = v2_len - 1
   For j = 0 To jmax
      For i = 0 To imax
         If (v1_ptr[i] > 0.5) = (v2_ptr[j] > 0.5) Then
            mult[i + (j * v1_len)] -= factor
            count += 1
         EndIf
      Next
   Next
   Return count
End Function
Last edited by pestery on May 09, 2011 13:47, edited 1 time in total.
kiyotewolf
Posts: 1009
Joined: Oct 11, 2008 7:42
Location: ABQ, NM
Contact:

Post by kiyotewolf »

I can feel the brain of my computer getting smarter each second.



~Kiyote!

I need to study your code & print it out.
Fortunately I have a lazor printer now.

What's the coolest thing that someone's done with a NN?
BasicCoder2
Posts: 3906
Joined: Jan 01, 2009 7:03
Location: Australia

Post by BasicCoder2 »

kiyotewolf wrote: I need to study your code & print it out.
Fortunately I have a lazor printer now.

It would be nice to have a plain English explanation. Unfortunately the program uses C-like pointers, which defeats the reason I like BASIC: readability for someone who doesn't do much programming.

The program appears to be a bit like the one at:

http://www.fourmilab.ch/documents/c64neural.html

from which I tried to make my own version,

Code: Select all

screenres 1200,620,32
dim shared as integer t(8,8,10)   'hold ten different patterns to learn
dim shared as integer f1(64),f2(64)
dim shared as integer m(64,64)    'array of weights

'create 10 patterns in array t() from screen data
print "0123456789"
for i as integer = 0 to 9
    for y as integer = 0 to 7
        for x as integer = 0 to 7
            if point(x+i*8,y)=rgb(255,255,255) then
                t(x,y,i)=1
            else
                t(x,y,i)=-1  'NOTE -1 not 0
            end if
        next x
    next y
next i

'test contents of templates
'for i as integer = 0 to 9
'    for y as integer = 0 to 7
'        for x as integer = 0 to 7
'            if t(x,y,i)=1 then
'                line (x*8,y*8)-(x*8+6,y*8+6),rgb(255,255,255),bf
'            else
'                line (x*8,y*8)-(x*8+6,y*8+6),rgb(0,0,0),bf
'            end if
'        next x
'    next y
'    sleep
'next i

sub displayF1()
    for y as integer = 0 to 7
        for x as integer = 0 to 7
            if F1(y*8+x)=1 then
                line (x*8,y*8)-(x*8+6,y*8+6),rgb(255,255,255),bf
            else
                line (x*8,y*8)-(x*8+6,y*8+6),rgb(255,0,0),bf
            end if
        next x
    next y
end sub

sub displayF2()
    dim as integer xx
    xx = 72
    for y as integer = 0 to 7
        for x as integer = 0 to 7
            if F2(y*8+x)=1 then
                line (x*8+xx,y*8)-(x*8+6+xx,y*8+6),rgb(255,255,255),bf
            else
                line (x*8+xx,y*8)-(x*8+6+xx,y*8+6),rgb(255,0,0),bf
            end if
        next x
    next y
end sub

sub getPattern(key as integer)
    for y as integer = 0 to 7
        for x as integer = 0 to 7
            F1(y*8+x) = t(x,y,key)
        next x
    next y
end sub

sub trainPattern()
    for i as integer = 0 to 63
        for j as integer = 0 to 63
            if i<>j then
                m(i,j)=m(i,j)+F1(i)*F1(j)
            else
                m(i,j)=0
            end if
        next j
    next i
end sub

sub printPartOfMatrix()
    print
    print "PART OF MATRIX OF WEIGHTS"
    print
    for i as integer = 0 to 33
        for j as integer = 0 to 33
            print using "###";m(i,j);
        next j
        print
    next i
end sub

sub randomizePattern()
    dim as integer x,y
    for i as integer = 0 to 5 'make 6 pixel changes
        x = int(rnd(1)*8)
        y = int(rnd(1)*8)
        F1(y*8+x)=-F1(y*8+x) 'invert value
    next i
end sub

        
sub RecallPattern()   'line 1290 in c64 code
    dim as integer v,c
    
    for i as integer = 0 to 63
        F2(i)=F1(i)
    next i
    
    displayF2()

    do
        'F1 to F2 pass
        for j as integer = 0 to 63
            v = 0
            for i as integer = 0 to 63
                v = v + F1(i)*m(i,j)
            next i
            v = sgn(v)
            if v<>0 then F2(j)=v
        next j
        displayF1()
        
        'F2 to F1 pass
        c = 0
        for i as integer = 0 to 63
            v = 0
            for j as integer = 0 to 63
                v = v + F2(j)*m(i,j)
            next j
            v = sgn(v)
            if v<>0 and v<>F1(i) then
                F1(i)=v
                c = 1
            end if
        next i
        displayF2()
        
    loop while c<>0
    
end sub

'TRAIN PATTERNS 0 to 9
for i as integer = 0 to 9
    getPattern(i) 'copy into F1()
    displayF1()
    displayF2()
    trainPattern()
next i

dim as integer key
cls
locate 10,1
PRINT "press ESC to end"
DO
    locate 11,1
    print "hit key 0 to 9"
    key = getkey
    if key>47 and key<58 then
        getPattern(key-48)  'copy template(x,y,key-48)into F1(x,y)
        displayF1()
        randomizePattern()  'create a variant pattern
        displayF1()
        locate 12,1
        print "randomized pattern .. hit key to continue"
        sleep
        locate 12,1
        print "                                         "
        recallPattern()     'recall original before the randomized variant
        displayF1()
        displayF2()
        locate 12,1
        print "DONE... hit key to continue OR ESC"
        sleep
        locate 12,1
        print "                                  "
    end if
LOOP UNTIL key = 27

printPartOfMatrix()
sleep
print "hit key to exit"

end

but it doesn't seem to work. Without knowing the logic behind it, I am not sure how to fix it or what is missing.

It seems to be a Hopfield network:

http://www4.rgu.ac.uk/files/chapter7-hopfield.pdf

BC
pestery
Posts: 493
Joined: Jun 16, 2007 2:00
Location: Australia

Post by pestery »

BasicCoder2 wrote:It would be nice to have a plain English explanation.
Fair point, I overlooked that in my little program, and I've now updated the original post to include lots of comments. Although if you were referring to angros47's code then I can't help there, sorry mate :)

With regard to my code, it's a 2-layer neural network with binary neurons, not floating point (as an end result, anyway). It creates an image, uses the image to set up a brain state, and lastly uses the brain state to imagine what it's actually seeing. The idea is to make what it sees and what it imagines one and the same. The method I used comes from this video:
http://www.youtube.com/watch?v=AyzOUbkUf3M
Be warned, it's very, very long, although also quite interesting.

Also, the reason for the pointers in my case is to make the functions easily usable for layers of different sizes, say if you had a 256->200->100->10 network.
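
For example, a deeper net just chains the same routine. A rough sketch of the idea (the layer sizes are the 256->200->100->10 example; the weight arrays would still need to be filled and trained, and solve_fwd from the program above must be in scope):

Code: Select all

' Hypothetical sketch: running a 256 -> 200 -> 100 -> 10 network by
' chaining solve_fwd from the program above.
Dim As Single layer1(1 To 256), layer2(1 To 200), layer3(1 To 100), layer4(1 To 10)
Dim As Single mult12(1 To 256 * 200) ' weights linking layer 1 to layer 2
Dim As Single mult23(1 To 200 * 100) ' weights linking layer 2 to layer 3
Dim As Single mult34(1 To 100 * 10)  ' weights linking layer 3 to layer 4

' ...fill layer1 with input data and the mult arrays with trained weights...

solve_fwd(@layer1(1), 256, @layer2(1), 200, @mult12(1)) ' layer 1 -> layer 2
solve_fwd(@layer2(1), 200, @layer3(1), 100, @mult23(1)) ' layer 2 -> layer 3
solve_fwd(@layer3(1), 100, @layer4(1),  10, @mult34(1)) ' layer 3 -> layer 4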

I had a shot at finding the problem in your example, but no luck. I did notice one difference in the code at the link you provided, unless I was looking in the wrong place :)

Code: Select all

Sub trainPattern()
   For i As Integer = 0 To 63
      For j As Integer = 0 To 63
            m(i,j)=m(i,j)+F1(i)*F1(j)
      Next j
   Next i
End Sub
angros47
Posts: 2323
Joined: Jun 21, 2005 19:04

Post by angros47 »

@kiyotewolf

Basically, a neural network is made of many virtual "neurons", plus a reward/punishment system.

A neuron has many input channels, and only one output channel (like a real neuron, which has multiple dendrites for input and a single axon for output).

Every input is multiplied by a coefficient (every neuron has a different coefficient for each of its input channels); the coefficient might be positive or negative. The results are added and, if the final result is greater than a limit, the neuron will "fire" (it will return a value of 1 as its output); otherwise it will return 0.

So, if an input channel has a positive coefficient, an input on that channel will stimulate "firing", while if the coefficient is negative, an input will inhibit "firing"; a zero coefficient means that the input can be ignored, because it's irrelevant.

Let's say that our neuron has these inputs:

A: Is it liquid?
B: Does it smell?
C: Is it transparent?
D: Is it salty?

And only one output: "It is water"

It could work in a similar way:

if (A*1+B*-1+C*1+D*0)>1 then print "It is water"

Of course, 1, -1, 1, 0 are the coefficients (if it's liquid and transparent, it's water; but if it smells, it's not water; and it doesn't matter whether it's salty, because water can be salty).

But... how can you set the coefficients? The answer is: by training the neuron!

When the neuron returns the wrong answer, we should encourage it to behave in a different way: if the neuron fired (and it shouldn't have), we will decrease all the coefficients of the input channels that were stimulated, and increase the others; so, when the neuron encounters the same situation again, it won't fire.
Of course, if the neuron didn't fire, and it should have, we'll do the opposite: we will increase the coefficients of the input channels that were stimulated, and decrease the others; so the neuron will fire next time.

In that way, any channel that should have a positive coefficient will be increased at every error (whether the neuron didn't fire and the channel was stimulated, or the neuron fired and the channel was not stimulated, the coefficient will be increased), and any channel that should have a negative coefficient will be decreased. Channels that are irrelevant will be increased and decreased randomly, and their value will stay close to zero.
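
Here is a minimal, self-contained sketch of that scheme in FreeBASIC: one neuron learning the "is it water" example above. The training loop, the fixed limit of 1 and the 0.1 adjustment step are my own illustrative choices; this toy rule usually settles on workable coefficients rather than being guaranteed to converge.

Code: Select all

' One trainable neuron, using the reward/punishment scheme described above.
' Input channels: A=liquid, B=smells, C=transparent, D=salty.
Randomize Timer

Dim As Single w(1 To 4)      'one coefficient per input channel
Dim As Single total
Dim As Integer inputs(1 To 4), target, fired

For cycle As Integer = 1 To 2000
	'pick a random situation; the right answer is liquid + odourless + transparent
	For i As Integer = 1 To 4
		inputs(i) = Int(Rnd * 2)
	Next
	If inputs(1) = 1 And inputs(2) = 0 And inputs(3) = 1 Then target = 1 Else target = 0

	'weighted sum; fire if it passes the limit
	total = 0
	For i As Integer = 1 To 4
		total += inputs(i) * w(i)
	Next
	If total > 1 Then fired = 1 Else fired = 0

	'wrong answer: move the stimulated channels toward the right behaviour,
	'and the other channels the opposite way
	If fired <> target Then
		For i As Integer = 1 To 4
			If inputs(i) = 1 Then
				w(i) += IIf(target = 1, 0.1, -0.1)
			Else
				w(i) += IIf(target = 1, -0.1, 0.1)
			End If
		Next
	End If
Next

Print "coefficients (liquid, smells, transparent, salty):"
Print w(1), w(2), w(3), w(4)
Sleep

After enough passes, the "smells" coefficient should end up clearly negative, "liquid" and "transparent" positive, and "salty" near zero, as the explanation above predicts.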



In a neural network there are many neurons working in the same way, but many of them take the output of other neurons as input, so more complex analyses are possible.

A bigger neural network, of course, might require longer training.
kiyotewolf
Posts: 1009
Joined: Oct 11, 2008 7:42
Location: ABQ, NM
Contact:

Post by kiyotewolf »

"But... how you can set the coefficients? The answer is: by training it!"

I don't get how the NN is able to learn multiple things, without over-writing the information for the first thing it ever learned.

In that visual example, with the ASCII characters it's looking at and trying to replicate, how does it remember MULTIPLE things?

In my mind, we shouldn't be able to store the "memory" of multiple things, because I thought it was one NN, one kind of "memory" it could contain.

:.:

Also, that horizontal bar of pixels, is that akin to a hash of the graphical data it's being fed?

I'm sure I could come up with some uses for NN, if I could just get my brain past the front doorway.



~Kiyote!

[edit]

Btw, I'm downloading that video for later reading.
Richard
Posts: 3096
Joined: Jan 15, 2007 20:44
Location: Australia

Post by Richard »

kiyotewolf wrote:I don't get how the NN is able to learn multiple things, without over-writing the information for the first thing it ever learned.
It works for the same reason that spread-spectrum communication works. You can have many AM radio stations all added together in the same broadcast band; they do not corrupt each other. Your radio is able to pick out the one you want.

Fourier analysis separates out the amounts of the individual frequencies from the power received in the time domain. The FFT does the same numerically. An FFT is really a very efficient NN that looks at all inputs over time and assigns each output to a different target pattern to be recognised. The FFT does not need to be taught, as it uses sine and cosine functions as the models.

A NN gradually adjusts its coefficients as it learns; it makes no sudden changes. The coefficients are adjusted by trial and error to get the best correlation between the presence of an input and the detected output.
pestery
Posts: 493
Joined: Jun 16, 2007 2:00
Location: Australia

Post by pestery »

kiyotewolf wrote:I don't get how the NN is able to learn multiple things, without over-writing the information for the first thing it ever learned.
That's the tricky bit. Creating a neural network and running it is easy. Training it to give meaningful output based on the input can be very difficult. There are a number of different techniques to do it, such as backpropagation and the reward/punishment method mentioned above, although these are only useful if you have an input and you know what the output should be.
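
For the supervised case, the single-layer version of this is the delta rule, which backpropagation extends to hidden layers: nudge each weight in proportion to (target - output) times the input. A rough, self-contained sketch (the 0.5 learning rate, the OR-gate task and the 5000 passes are made-up illustrative choices):

Code: Select all

' Delta rule on one sigmoid neuron, here learning a 2-input OR gate.
Randomize Timer

Function sigmoid(x As Single) As Single
	Return 1 / (1 + Exp(-x))
End Function

Dim As Single w(0 To 2)  'w(0) is the bias weight
Dim As Single x(0 To 2), outp, diff
Dim As Integer target

For i As Integer = 0 To 2
	w(i) = Rnd - 0.5     'small random starting weights
Next

For cycle As Integer = 1 To 5000
	x(0) = 1             'constant bias input
	x(1) = Int(Rnd * 2)
	x(2) = Int(Rnd * 2)
	If x(1) = 1 Or x(2) = 1 Then target = 1 Else target = 0

	outp = sigmoid(x(0)*w(0) + x(1)*w(1) + x(2)*w(2))
	diff = target - outp
	For i As Integer = 0 To 2
		'the outp*(1-outp) factor is the slope of the sigmoid
		w(i) += 0.5 * diff * outp * (1 - outp) * x(i)
	Next
Next

'the learned truth table should approach 0, 1, 1, 1
For a As Integer = 0 To 1
	For b As Integer = 0 To 1
		Print a; " or "; b; " -> "; sigmoid(w(0) + a*w(1) + b*w(2))
	Next
Next
Sleep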

angros47 already said this, but I'll try and put a different angle on it. The basic rundown of a neural network is that you have an input layer (the ASCII pixels in my case), one or more layers of hidden neurons (the horizontal bar) and an output layer (not present in my example, because I wasn't actually trying to detect anything).
Every neuron in each layer (except for the input) connects to every neuron in the previous layer, through a multiplier value. So if there are 3 inputs and 4 hidden neurons then there will be 3*4 connections, i.e. 3*4 values in the mult array. Because a neuron has a unique connection to each neuron in the previous layer, the multiplier can be set so that the previous-layer neuron has a strong "on" or "off" effect on the current-layer neuron, or no effect at all. This means that one neuron may be strongly affected by some previous-layer neurons and not others. The idea is that one neuron can detect a particular feature or characteristic in the previous layer. So if you have many neurons in the hidden layer, and maybe multiple layers, then you can detect many combinations of features (i.e. different ASCII characters).

To train the network, one way to look at it is that you want to make minimal changes to any neurons that are important for previously learned feature detection, and larger changes to neurons that are currently not important. This means that for a given number of neurons and layers there is a maximum learning capacity. Also, with regard to overwriting previous data: how often do you forget where you put your keys, or what your password was, or that you have a huge assignment due in less than a week :)

A couple of links I found useful when I was starting this are:
http://en.wikipedia.org/wiki/Artificial_neural_network
http://www.ai-junkie.com/ann/evolved/nnt1.html (there are 8 pages)
kiyotewolf
Posts: 1009
Joined: Oct 11, 2008 7:42
Location: ABQ, NM
Contact:

Post by kiyotewolf »

@Richard & @pestery

"
you have an input layer (the ASCII pixels in my case), one or more layers of hidden neurons (the horizontal bar) and an output layer (not present in my example because I wasn't actually trying to detect anything).
"

"trying to detect anything?"

Can you elaborate on that, and how this pet NN would detect something?
It's doing pattern recognition; what if you fed it broken ASCII (some pixels missing, plus noise) after you trained it and it reached equilibrium?

At that point, when it's trained, can you feed it the input layer, noise added, without punishing or rewarding? We don't want the NN to learn something bad, just react like it's been "learned (I say learned boy)" / trained to do.

Wait a sec!

"
and an output layer (not present in my example because I wasn't actually trying to detect anything).
"

What was that being displayed on the right? That WASN'T the output layer? What was the 2nd character being displayed, where did that X/Y data come from, which layer, and what would the output layer have looked like, using the example AS IS, right now?

o.o I thought I was seeing the output layer on the right.
Now I don't know where that data on the right came from; I can guess, but I don't wanna guess.



~Kiyote!

This stuff is FASCINATING.

[edit]

Would a red-black binary tree have ANYTHING to do with a NN?
BasicCoder2
Posts: 3906
Joined: Jan 01, 2009 7:03
Location: Australia

Post by BasicCoder2 »

@ kiyotewolf

This stuff is FASCINATING.

=====
For the layman it is hard to separate the hype and hand-waving about ANNs from what they are actually doing.

They appear to be good at some things and lousy at other things.

The Hopfield network appears to amount to the training patterns putting "grooves", in the form of weight patterns, into which future patterns "flow"; it is thus a kind of spatial-similarity pattern detector, though not a very good one as far as I can tell.

One ANN success story is TDGammon, which was able to learn to value a backgammon game state in an evaluation function. In chess, the evaluation function uses a set of heuristics to score a game state.

The thing is: the brain is a collection of connected networks, so somehow it must be possible for networks to do anything the brain can do, including symbol manipulation.

You can think of an add/subtract circuit in a cpu as a network of logic gates that map an input pattern into an output pattern.

An ANN that learns is essentially able to change its weighted connections until it produces the kind of mapping you want.
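
As a toy illustration of that gate analogy, a single threshold neuron with hand-chosen (rather than learned) weights and threshold can already act as a logic gate; a rough sketch:

Code: Select all

' One threshold neuron per gate: the weights and the threshold are picked
' by hand here, but a learning ANN could find equivalent values itself.
Function gate(a As Integer, b As Integer, wa As Single, wb As Single, thresh As Single) As Integer
	Return IIf(a*wa + b*wb > thresh, 1, 0)
End Function

For a As Integer = 0 To 1
	For b As Integer = 0 To 1
		Print a; b, "AND ="; gate(a, b, 1, 1, 1.5), _
			"OR ="; gate(a, b, 1, 1, 0.5), _
			"NAND ="; gate(a, b, -1, -1, -1.5)
	Next
Next
Sleep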

The brain appears to start with working connections evolved over many generations which are then changed by experience.

I have always had an interest in machine vision,

http://www.freebasic.net/forum/viewtopi ... ght=target

but I can't see how a single ANN can do it, as they all seem to work on the whole image, not part of it. For example, the programs might use a set of template characters to be recognized, but in practice characters can appear anywhere (and at any size) in a sensory input array. So you need a bog-standard program to locate and resize them to fit your ANN. In the meantime you could have used feature analysis instead!

I see feature analysis as more powerful, as features can be very abstract, so very different-looking fonts can still be recognized as belonging to a particular character class even though they are visually very different in many ways.


John
Richard
Posts: 3096
Joined: Jan 15, 2007 20:44
Location: Australia

Post by Richard »

@ kiyotewolf.
Why use a NN if you do not want to “detect” some particular input situations?
Recognition of a particular input state is the fundamental application of NNs.

Character recognition works well with the human brain, which has a good preprocessor to handle scale, centre and rotation. It takes several years and a lot of help to teach a child to read. A NN is not necessarily the best way to recognise optical text when silicon-based processors are available.

Consider your input layer as an array of 8 x 8 pixels = 64 bits. The middle layer is the NN that must be educated to recognise all blurry versions of one character. Consider the output layer as a single bit that goes high when the NN recognises its particular character.

Now consider 255 more single-symbol NN pattern detectors, each for its own character. That makes a total of 256 NNs that can now detect all characters. Now use a NN to condense those 256 one-of-256 bits into an 8-bit byte = ASCII code. You then have a hybrid NN, with 64 bits in and 8 bits out, that recognises or "detects" ASCII pixel patterns.
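
To make that condensing stage concrete, here is a toy sketch (my own illustration, with fixed rather than learned weights): output bit j is a neuron wired, with weight 1, to every detector whose index has bit j set, so a single firing detector reproduces its own index in binary.

Code: Select all

' 256 one-of-256 detector outputs condensed to an 8-bit code by eight
' fixed-weight threshold neurons.
Dim As Integer detector(0 To 255), bits(0 To 7), code

detector(Asc("A")) = 1  'pretend the "A" detector fired

For j As Integer = 0 To 7
	Dim As Integer total = 0
	For i As Integer = 0 To 255
		'weight 1 from every detector whose index has bit j set
		If (i And (1 Shl j)) <> 0 Then total += detector(i)
	Next
	bits(j) = IIf(total > 0, 1, 0)  'the neuron fires if any wired detector fired
Next

code = 0
For j As Integer = 0 To 7
	code += bits(j) Shl j
Next
Print "detected ASCII code:"; code; " = "; Chr(code)
Sleep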

Now, some parts of the 256 individual NNs are similar or duplicated. By condensing the sub-NNs into a single pool, these common parts can be shared. It may take several years to teach an organic NN to read, but it will get faster as it learns.
pestery
Posts: 493
Joined: Jun 16, 2007 2:00
Location: Australia

Post by pestery »

BasicCoder2 wrote:They appear to be good at some things and lousy at other things
I agree.

kiyotewolf, what my example was doing was getting an input and calculating an output (of sorts). I was then running it backward to see what the input would have to be to get that output (the stuff on the right). Or at least that was the idea; it was part of a larger plan that I wanted to get working first. Today I tried pushing it a bit further. I tested your suggestion of adding noise, and also experimented with my webcam. The result was that it fell to pieces, oh well :)

Neural networks are very interesting, but they are hard to do properly. If you're doing anything more than tinkering, then really you would have to have a specific need that would warrant the use of one.
TESLACOIL
Posts: 1769
Joined: Jun 20, 2010 16:04
Location: UK
Contact:

NN are lousy at everything

Post by TESLACOIL »

NN are lousy at everything

well, anything you might want to code as a NN will basically suck

why ?

well, it's like teaching a donkey to swim. NNs work OK given the hardware architecture of the human brain; trying to copy them on a silicon computer just introduces layers of needless complexity and wasted computation

If nature could build quad-core AMDs, that's what would be inside your head... or at least half of it anyway


NN = fun to play with in silicon land, but NNs swim like a donkey and not like a fish when run on current silicon architecture... it's just the way it is

horses for courses


Current silicon hardware sucks when it comes to building artificial minds; it can be done, but you have to throw 90% of traditional computer science out of the window to get it to happen

the perfect mind would be constructed as part serial and part parallel computer... because computation always boils down to these two systems, or a big mix of one and a dash of the other

it would look like HAL9000 attached to a chimpanzee, or a human holding an iPhone (the 2111 AD model)


both systems would exhibit critical levels of real-world common sense, plus a great big dollop of serial or parallel computing power... and would ace each other in opposing fields of complex real-world computation. Note: both of these systems would whoop a standalone human's butt at absolutely everything from chess to table tennis

aka building the horse to fit the course

define your problem; if it is non-trivial and totally unsuited to your current architecture, then alter the architecture