Associative Memory

User projects written in or related to FreeBASIC.
sean_vn
Posts: 265
Joined: Aug 06, 2012 8:26

Associative Memory

Postby sean_vn » Jun 10, 2017 3:31

There were a lot of papers in the 1970s and 1980s about associative memory. It actually turns out there is an easy solution to the problem:

Code:


type associativememory
   veclen as ulong   
   density as ulong
   hash as ulong         '32 bit unsigned integer
   weights as single ptr   'pointer to 32 bit floats
   binarize as boolean ptr
   work as single ptr
    declare sub init(veclen as ulong,density as ulong,hash as ulong)
    declare sub free()
    declare sub train(target as single,inVec as single ptr)
    declare function recall(inVec as single ptr) as single
    declare sub signflip(vec as single ptr,h as long)
    declare sub wht(vec as single ptr)
end type

'note: veclen must be a power of two, as required by the wht
sub associativememory.init(veclen as ulong,density as ulong, hash as ulong)
   this.veclen=veclen
   this.density=density
   this.hash=hash
   weights=callocate(veclen*density,sizeof(single))  'allocate zeroed memory
   binarize=callocate(veclen*density,sizeof(boolean))   
   work=callocate(veclen,sizeof(single))
end sub

sub associativememory.free()
   deallocate(weights)
   deallocate(binarize)
   deallocate(work)
end sub

'project the input through density random sign-flip + wht stages, binarize each
'projection (recording it for train) and return the signed sum of the weights
function associativememory.recall(inVec as single ptr) as single
   dim as ulong i,j
   dim as single x=0f
   dim as single ptr wtidx=weights
   dim as boolean ptr bidx=binarize
   for i=0 to veclen-1
      work[i]=inVec[i]
   next
   for i=0 to density-1
      signflip(work,hash+i)   'sign flip and wht=random projection
      wht(work)
      for j=0 to veclen-1
          if work[j]>0f then   
            x+=wtidx[j]
            bidx[j]=true
         else
            x-=wtidx[j]
            bidx[j]=false
         end if
      next
      wtidx+=veclen
      bidx+=veclen
   next
   return x
end function

'nudge every weight by the scaled error, using the binarization recorded by
'recall, so that recall(inVec) afterwards returns exactly target
sub associativememory.train(target as single,inVec as single ptr)
   dim as ulong i,j
   dim as single ptr wtidx=weights
   dim as boolean ptr bidx=binarize
   dim as single e=target-recall(inVec)   'get the prediction error
   e/=veclen*density   'scale the error correctly so that it will be fully corrected
   for i=0 to density-1
      for j=0 to veclen-1
         if bidx[j] then
            wtidx[j]+=e
         else
            wtidx[j]-=e
         end if
      next
      wtidx+=veclen
      bidx+=veclen
   next
end sub

' Pseudorandomly flip the sign of the values in vector using h as a seed
' h is a 32 bit signed integer
sub associativememory.signflip(vec as single ptr,h as long)
   dim as ulong i
   for i=0 to veclen-1
      h*=1664525
      h+=1013904223
      if h<0 then vec[i]=-vec[i]
   next
end sub

'Fast Walsh-Hadamard transform, in place. With the 1/sqr(veclen) scaling it is
'orthonormal, so the vector's Euclidean length is unchanged. veclen must be a power of two.
sub associativememory.wht(vec as single ptr)
   dim as ulong i,j,hs=1
   dim as single a,b,scale=1.0/sqr(veclen)
    while hs<veclen
      i=0
      while i<veclen
         j=i+hs
         while i<j
            a=vec[i]
            b=vec[i+hs]
            vec[i]=a+b
            vec[i+hs]=a-b
            i+=1
         wend
         i+=hs
      wend
      hs+=hs
   wend
   for i=0 to veclen-1
      vec[i]*=scale
   next
end sub


screenres 300,300,32
print "Please wait!"

dim as associativememory net
dim as single ptr vec=callocate(256,sizeof(single)) 'allocate zeroed
net.init(256,3,1234567)

for i as ulong=0 to 99
   for j as ulong=0 to 254
      vec[j]=1f
      net.train(sin(j*.06)*100,vec)
      vec[j+1]=1f
      net.train(cos(j*.2)*100,vec)
      vec[j]=0f
      vec[j+1]=0f
   next
next
cls
for i as ulong=0 to 254
   vec[i]=1f
   pset (i,150-sin(i*.06)*100),rgb(0,255,0)
   pset (i,150-net.recall(vec)),rgb(255,255,0)
   vec[i]=0f
next
for i as ulong=0 to 254
   vec[i]=1f
   vec[i+1]=1f
   pset (i,150-cos(i*.2)*100),rgb(0,255,0)
   pset (i,150-net.recall(vec)),rgb(255,0,255)
   vec[i]=0f
   vec[i+1]=0f
next
deallocate(vec)
net.free()
getkey



It's based on this observation:
"There has been a lack of discussion about binarization in neural networks. Multiplying those +1/-1 values by weights and summing allows you to store values with a high degree of independence. For a given binary input and target value you get an error. You divide the error by the number of binary values and then you simply correct each of the weights by the reduced error taking account of the binary sign. That gives a full correction to get the correct target output. In higher dimensional space most vectors are orthogonal. For a different binary input the adjustments you made to the weights will not align at all. In fact they will sum to Gaussian noise by the central limit theorem. The value you previously stored for a second binary input will now be contaminated by a slight amount of Gaussian noise which you can correct for. This will now introduce an even smaller amount of Gaussian noise on the value for the first binary input. Iterating back and forth will get rid of the noise entirely for both binary inputs.
This has high use in random projection,reservoir and extreme learning machine computing. And in fact turns a simple locality sensitive hash formed by random projection followed by binarization into a useful single layer neural network. "

https://software.intel.com/en-us/forums ... pic/734095

If you use excess memory (giving an over-determined solution) it provides a degree of error correction.
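As a minimal standalone sketch of that mechanism (my own illustration, not code from the class above): one shared weight vector, two fixed +1/-1 patterns and two target values. Each pass computes the recall error, divides it by the number of weights and corrects every weight with the sign of its binary input; the cross-talk between the two patterns shrinks on every pass, so both recalls converge. The names and constants here (n, pat, tgt, the targets 50 and -30) are arbitrary.

Code:

' illustration only: store two values against two random +1/-1 patterns
' in a single weight vector by iterated error correction

const n = 64                        ' number of weights / binary inputs

dim as single w(n-1)                ' shared weight vector, starts at zero
dim as single pat(1, n-1)           ' two fixed +1/-1 binary patterns
dim as single tgt(1) = {50f, -30f}  ' the two values to store

randomize 1234
for i as ulong = 0 to n-1
   pat(0, i) = iif(rnd > 0.5, 1f, -1f)
   pat(1, i) = iif(rnd > 0.5, 1f, -1f)
next

for epoch as ulong = 1 to 10        ' iterate back and forth over both patterns
   for p as ulong = 0 to 1
      dim as single x = 0f          ' recall = signed sum of the weights
      for i as ulong = 0 to n-1
         x += pat(p, i) * w(i)
      next
      dim as single e = (tgt(p) - x) / n   ' spread the correction over all weights
      for i as ulong = 0 to n-1
         w(i) += e * pat(p, i)      ' correct each weight with the binary sign
      next
   next
next

for p as ulong = 0 to 1             ' both recalls should now match their targets
   dim as single x = 0f
   for i as ulong = 0 to n-1
      x += pat(p, i) * w(i)
   next
   print "target"; tgt(p), "recall"; x
next
sleep

Increasing n, or storing each value under several independent projections as the density parameter above does, shrinks that cross-talk further, which is presumably the error correction from excess memory mentioned above.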
Last edited by sean_vn on Jun 10, 2017 3:33, edited 1 time in total.
sean_vn
Posts: 265
Joined: Aug 06, 2012 8:26

Re: Associative Memory

Postby sean_vn » Jun 10, 2017 3:32

I wrote the code in such a way that it can be easily translated into C.
sean_vn
Posts: 265
Joined: Aug 06, 2012 8:26

Re: Associative Memory

Postby sean_vn » Jun 24, 2017 5:43

There are better constants you can use in the signflip sub.

Code:

' h is a 32 bit signed integer
sub associativememory.signflip(vec as single ptr,h as long)
   dim as ulong i
   for i=0 to veclen-1
      h*=&h9E3779B9
      h+=&h6A09E667
      if h<0 then vec[i]=-vec[i]
   next
end sub

Anyway, you don't need to use all the input dimensions. However, if using say x in the range 0 to 1, I found it best to also feed in 1-x as an input. It's interesting to see the sort of crystalline (polytope) approximation you get. The example is kind of slow, but if someone created a specialized chip (or you just used a GPU) then massive speedups are possible.

Code:

screenres 512,256,32
print "Please wait!"

dim as associativememory net
dim as single ptr vec=callocate(256,sizeof(single)) 'allocate zeroed
net.init(256,10,1234567)

for i as ulong=0 to 100
   for xi as ulong=0 to 99
      for yi as ulong=0 to 99
         dim as single x=xi*0.01f,y=yi*0.01f
         vec[0]=x
         vec[1]=y
         vec[2]=1-x
         vec[3]=1-y
         net.train(sin(x*6)*sin(y*6),vec)
      next
   next
next

cls
for xi as ulong=0 to 249
   for yi as ulong=0 to 249
      dim as single x=xi*0.004f,y=yi*0.004f
      vec[0]=x
      vec[1]=y
      vec[2]=1-x
      vec[3]=1-y
      dim as long c1=255*sin(x*6)*sin(y*6)
      dim as long c2=255*net.recall(vec)
      if c1>255 then c1=255
      if c1<0 then c1=0
      if c2>255 then c2=255
      if c2<0 then c2=0
      pset (xi,yi),rgb(c1,50,0)
      pset (xi+256,yi),rgb(50,c2,0)
   next
next

deallocate(vec)
net.free()
getkey
dafhi
Posts: 1046
Joined: Jun 04, 2005 9:51

Re: Associative Memory

Postby dafhi » Aug 10, 2018 1:58

Code:

' .. paste the udt from above

type float as single
type myint as integer

function clamp(in as float, hi as float=1, lo as float=0) as float
  if in < lo then return lo
  if in > hi then return hi
  return in
End Function

sub main
    var w = 800
    var h = 400
    screenres w,h,32
    print "Please wait!"

    var network_area = 30 '' small as possible for post-training performance

    var dens = 9
    dim as myint c = network_area \ dens

    dim as associativememory net(c-1)
    dim as float ptr vec=callocate(256,sizeof(float)) 'allocate zeroed

    for i as long = 0 to ubound(net)
      net(i).init(256,dens,i*3+11)
    next

    var siz = 8 ' recommended minimum: 8
    var inc = 1.2 / siz
   
    for i as long=1 to 60 '' good baseline
       for xi as long=0 to siz-1
          for yi as long=0 to siz-1
                dim as float x=xi*inc,y=yi*inc
                 vec[0]=x
                 vec[1]=y
                 vec[2]=1-x
                 vec[3]=1-y
                for j as long = 0 to ubound(net)
                   net(j).train(sin(x*6)*sin(y*6),vec)
                next
           next
        next
    next
               
         
    cls
    var scale = 22
    siz *= scale
    inc /= scale

    for xi as ulong=0 to siz-1
       for yi as ulong=0 to siz-1
          dim as float x=xi*inc,y=yi*inc
          vec[0]=x
          vec[1]=y
          vec[2]=1-x
          vec[3]=1-y
          dim as long c1=clamp(255*sin(x*6)*sin(y*6), 255)
          dim as single c2
          for j as long = 0 to ubound(net)
            c2 += net(j).recall(vec)
          Next:  c2 = clamp(255*c2/c, 255)
          pset (xi,yi),rgb(c1,c1,c1)
          pset (xi+siz,yi),rgb(c2,c2,c2)
       next
       if inkey<>"" then exit for
       sleep 1
    next

    deallocate(vec)
    for i as long = 0 to ubound(net)
      net(i).free()
    next

    getkey
end sub

main
