Switch Net 4 neural network
Re: Switch Net 4 neural network
i assumed 'width' meant array size. i only glanced at the blog b/c i read slowly.
i highly recommend GreenInk's mutator concept for simplicity
the idea is to save a portion of the network which will be changed
in .mutate(),
pIdx() stores location
previous() stores value
[edit] if you want, you could post here and i'll do my best to add a mutator. the concept is simple enough. i'll also add (depending on how well i grasp your code) a synthetic data generator of some sort.
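a minimal sketch of the concept (type and sub names are mine; only pIdx() and previous() come from GreenInk, and the nudge size is arbitrary):
Code:
' minimal mutate / undo sketch, assuming a flat params() array
type MiniMutator
as long pIdx(0 to 7) ' where each mutated weight lives
as single previous(0 to 7) ' its value before mutation
end type
'
sub mini_mutate( byref mu as MiniMutator, params() as single )
for i as long = 0 to ubound(mu.pIdx)
dim as long p = int( rnd * (ubound(params) + 1) )
mu.pIdx(i) = p ' store location
mu.previous(i) = params(p) ' store value
params(p) += .1 * (rnd - .5) ' arbitrary nudge
next
end sub
'
sub mini_undo( byref mu as MiniMutator, params() as single )
for i as long = 0 to ubound(mu.pIdx)
params( mu.pIdx(i) ) = mu.previous(i) ' revert if cost got worse
next
end sub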
Re: Switch Net 4 neural network
After referring to the MathWorks site, and my local installation
of GNU Octave, I determined that my version of the Fast Walsh Hadamard
Transform and the one presented by the originator of this discussion
are both correct.
The following code verifies this; I haven't included my routine
WHT() within it.
Examining the use of the xwht()-type function in the original code,
the expansion of parameters appears to happen within the switch network.
A cost function is easy to write; adjusting the weights to minimise
it is more involved.
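For reference, here is a minimal sketch of such a cost function (the names are my own; the later swnet4f.bas listing contains an equivalent costL2()):
Code:
' sum of squared differences between an output and a target vector
function cost_sse( out_() as single, tar() as single ) as double
dim as double c
for i as long = 0 to ubound(out_)
c += ( out_(i) - tar(i) ) * ( out_(i) - tar(i) )
next
return c
end function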
Code:
/'
wht_wn4.bas
sciwiseg@gmail.com
fast Walsh Hadamard Transform and possible wnet4.
https://au.mathworks.com/help/signal/ug/walshhadamard-transform.html
0 1 2 3 4 5 6 7
x = [4 2 2 0 0 2 -2 0]
y = fwht(x) "sequency" format
y = [1 1 0 1 0 0 1 0]
x1 = ifwht(y)
x1 = [4 2 2 0 0 2 -2 0]
Using GNU Octave, with the signal package loaded, and this script:
x = [4 2 2 0 0 2 -2 0]
y = fwht(x)
y = fwht(x,8)
y = fwht(x,8,"hadamard")
returns :
x = 4 2 2 0 0 2 -2 0
y = 1 1 0 1 0 0 1 0
y = 1 1 0 1 0 0 1 0
y = 1 0 1 0 1 1 0 0
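Note : the xwht() routine below computes the unnormalised transform in
natural (Hadamard) order, so printing vec(i)/n matches
fwht(x,8,"hadamard") ; that's why ydat holds the "hadamard" result
rather than the default "sequency" one.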
'/
declare sub flipper(A() as single)
declare sub genrand1(g1() as single)
declare sub genrand2(g2() as single)
declare sub prt_1ds(g1() as single)
declare sub swnet4(x() as single, Wp() as single, Wn() as single, n as integer, m as integer)
'
declare sub xwht( vec() as single, hs_pow2_shl as long = 0 )
'
dim as long L
dim as integer n,m,i
dim as single x
L=3
n=2^L
m=2^(L-2)
dim vec(0 to n-1) as single
'
' ......................................................................
'
restore xdat
for i=0 to n-1
read vec(i)
next i
'
print " Input "
for i=0 to n-1
print vec(i);" , ";
next i
print
'
xwht(vec() , 0 )
'
print " xwht(x) output "
for i=0 to n-1
print vec(i)/n;" , ";
next i
print
'
print " Expected output "
restore ydat
for i=0 to n-1
read x
print x;" , ";
next i
print
print
'
' ......................................................................
'
dim Wn(0 to 3,0 to m-1) as single
dim Wp(0 to 3,0 to m-1) as single
'
'
'
genrand2(Wp())
genrand2(Wn())
genrand1(vec())
flipper(vec())
'
xwht(vec() , 0 )
swnet4(vec(),Wp(),Wn(),n,m)
xwht(vec() , 0 )
'
'
print " xwht(), swnet4(), xwht() output "
prt_1ds(vec())
'
'
end
'
'=======================================================================
'
'
'' Fast Walsh Hadamard Transform
'' This routine is from another author .
'' As illustrated
'' Input vec() , output vec()
sub xwht( vec() as single, hs_pow2_shl as long = 0 ) '' 1 or 2 for Partial Fast W.H.T.
static as long n
n = ubound(vec)+1
var hs = 1 shl hs_pow2_shl
'print " xwht, n , hs ";n;" , ";hs
while (hs < n)
var i = 0
while (i < n)
var j = i + hs
while (i < j)
var a = vec(i)
var b = vec(i + hs)
vec(i) = a + b
vec(i + hs) = a - b
i += 1
wend
i += hs
wend
hs += hs
wend
end sub
' _________________________________________________________________
'
'
sub swnet4(x() as single, Wp() as single, Wn() as single, n as integer, m as integer)
'
' 2 way switches and net composed of Wn and Wp adjustable weights
'
static as integer i,j,k,p
static as single z,s
static as single a(0 to 3)
dim as single y(0 to n-1) ' dim, not static: the accumulator must start at zero each call
'
k=0
for p=0 to m-1
j=0
for i=k to k+3
z=x(i)
if z<0 then s=z*Wn(j,p) else s=z*Wp(j,p) end if
a(j)=s
j=j+1
next i
j=0
for i=k to k+3
y(i)=y(i) + a(j)
j=j+1
next i
k=k+4
next p
'
for i=0 to n-1
x(i)=y(i)
next i
'
'
end sub
'
' _________________________________________________________________
'
'
sub genrand1(g1() as single)
'
' Generate signed random values for 1d array
'
static as integer i,n
static as single z
'
n=ubound(g1)
for i=0 to n ' ubound is inclusive, so cover every element
z=rnd
if z<0.5 then z=-z end if
g1(i)=z
next i
end sub
'
' _________________________________________________________________
'
sub genrand2(g2() as single)
'
' Generate signed random values for 2d array
'
static as integer i, j, n, m
static as single z
'
n=ubound(g2,1)
m=ubound(g2,2)
'
for j=0 to m
for i=0 to n
z=rnd
if z < 0.5 then z=-z end if
g2(i,j)=z
next i
next j
'
'
end sub
'
' _________________________________________________________________
'
sub flipper(A() as single)
'
' Random sign flips, use with input .
'
static as integer n,i
static as single s
'
n=ubound(A)
for i=0 to n
s=rnd
if s<0.5 then s=-1 else s=1 end if
A(i)=s*A(i)
next i
'
'
end sub
' _________________________________________________________________
'
'
sub prt_1ds(g1() as single)
'
' print 1d, single precision, array .
'
static as integer n,i
'
'
n=ubound(g1)
for i=0 to n
print g1(i);" , ";
next i
print
'
end sub
'
' _________________________________________________________________
'
xdat:
data 4, 2, 2, 0, 0, 2, -2, 0
ydat:
data 1, 0, 1, 0, 1, 1, 0, 0
Re: Switch Net 4 neural network
This program illustrates a few well-known procedures that are possible
before and after a transform; these aren't limited to a doubling of the
size of the vector.
The question is: did the originator use any of these?
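As a concrete sketch of two such procedures (my own code, separate from the drawing program that follows): zero extension appends zeros, which yields interpolation after the transform, while zero insertion interleaves zeros, which for the FFT replicates the spectrum.
Code:
' zero extension : copy x() into the first half of a double-length,
' zero-filled vector ( redim zero-fills in FreeBASIC )
sub zero_extend( x() as single, y() as single )
redim y( 2*(ubound(x)+1) - 1 )
for i as long = 0 to ubound(x)
y(i) = x(i)
next
end sub
'
' zero insertion : place x() in the even slots, zeros in the odd slots
sub zero_insert( x() as single, y() as single )
redim y( 2*(ubound(x)+1) - 1 )
for i as long = 0 to ubound(x)
y(2*i) = x(i)
next
end sub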
Code:
'
' ft_pc.bas
'
' General function transform procedures for discrete, and possibly
' continuous, data .
'
' sciwiseg@gmail.com
'
'
screen 12
'line(0,0)-(639,479),11,b
window screen(-0.1,-0.1)-(100.1,100.1)
'view screen (0,0)-(639,479)
line(-0.1,-0.1)-(100.1,100.1),14,b
dim as integer i,j,n
dim as single x,x1,y,y1,a,b
'
n=8
a=5
'
for i=0 to n-1
locate i+2,2
print i
next i
'
for i=0 to n-1
y=i*3.4 + 3
x=a
y1=y+3.4
x1=x+5
line(x,y)-(x1,y1),12,bf
line(x,y)-(x1,y1),11,b
next i
'
locate 23,5
print "x()"
'
' ...........................................................
'
n=16
a=20
'
for i=0 to n-1
locate i+2,13
print i
next i
'
for i=0 to n-1
y=i*3.4 + 3
x=a
y1=y+3.4
x1=x+5
if i<(n/2) then line(x,y)-(x1,y1),12,bf end if
line(x,y)-(x1,y1),11,b
next i
'
locate 23,15
print "x(), zero extended"
'
' ......................................................................
'
n=8
a=60
'
for i=0 to n-1
locate i+2,45
print i
next i
'
for i=0 to n-1
y=i*3.4 + 3
x=a
y1=y+3.4
x1=x+5
line(x,y)-(x1,y1),12,bf
line(x,y)-(x1,y1),11,b
next i
'
locate 23,47
print " x()"
'
' ......................................................................
'
n=16
a=75
'
for i=0 to n-1
locate i+2,56
print i
next i
'
j=0
for i=0 to n-1
y=i*3.4 + 3
x=a
y1=y+3.4
x1=x+5
if j mod 2 = 0 then line(x,y)-(x1,y1),12,bf end if
line(x,y)-(x1,y1),11,b
j=j+1
next i
'
locate 23,58
print "x(), zero insert"
'
' ......................................................................
'
sleep 6000
line(-0.1,-0.1)-(100.1,100.1),0,bf
line(-0.1,-0.1)-(100.1,100.1),14,b
'
n=8
a=5
'
for i=0 to n-1
locate i+2,2
print i
next i
'
for i=0 to n-1
y=i*3.4 + 3
x=a
y1=y+3.4
x1=x+5
line(x,y)-(x1,y1),i+1,bf
line(x,y)-(x1,y1),11,b
next i
'
locate 23,5
print "x()"
'
' ...........................................................
'
n=16
a=20
'
for i=0 to n-1
locate i+2,13
print i
next i
'
for i=0 to n-1
y=i*3.4 + 3
x=a
y1=y+3.4
x1=x+5
line(x,y)-(x1,y1),int(i/2)+1,bf
line(x,y)-(x1,y1),11,b
next i
'
locate 23,15
print "x(), replicate"
'
' ......................................................................
'
sleep
end
Re: Switch Net 4 neural network
A slight update.
It's slow going with the coding, so many unrelated notions
on my mind and too many late nights.
I haven't thoroughly read the articles.
I have found that microfiber cloth is effective at cleaning
my eyewear.
Code:
'
' swnet4f.bas
' Switchnet 4, preliminary
'
' sciwiseg@gmail.com
'
declare function round(in as double, places as ubyte = 2) as string
declare function max( a as double, b as double ) as double
declare function costL2(vec() as single, tar() as single ) as double
declare sub xwht( vec() as single, hs_pow2_shl as long = 0 )
'
declare sub genrand1(g1() as single)
declare sub genrand2(g2() as single)
declare sub flipper(A() as single)
declare sub swnet4(x() as single, Wp() as single, Wn() as single, n as integer, m as integer)
declare sub prt_1ds(g1() as single)
declare sub UWCopy(Av() as single, Bv() as single, m as integer)
declare sub mutate(Wn() as single, Wp() as single, m as integer)
declare function bp_ramp(x as single) as single
declare sub apply_activation(g1() as single)
'
'
dim as integer L,n,m,i,j,k
dim as double cost, cost2
'
L=3
n=2^L
m=2^(L-2) ' using 4 elements per bank .
print
print " L= ";L;" n= ";n;" m= ";m
print
dim as single x(0 to n-1)
dim as single y(0 to n-1)
dim Wn(0 to 3,0 to m-1) as single
dim Wp(0 to 3,0 to m-1) as single
'
dim Un(0 to 3,0 to m-1) as single
dim Up(0 to 3,0 to m-1) as single
'
'
'
genrand2(Wp())
genrand2(Wn())
genrand1(x())
'
for i=0 to n-1
y(i)=x(i) ' [0,1]
next i
'
flipper(x())
xwht( x() , 0 )
swnet4(x(),Wp(),Wn(),n,m)
apply_activation(x())
xwht( x() , 0 )
'
prt_1ds(x())
cost = costL2(x() , y())
print
print " Initial cost = "; cost
'
' .............................. repeats ...............................
'
for k=1 to 256
'
for i=0 to n-1
x(i)=y(i)
next i
' copy, mutate; before wht & swnet4
UWCopy(Wp() , Up() , m )
UWCopy(Wn() , Un() , m )
mutate(Wn() , Wp(), m)
'
flipper(x())
xwht( x() , 0 )
swnet4(x(),Wp(),Wn(),n,m)
apply_activation(x())
xwht( x() , 0 )
cost2 = costL2(x() , y())
'
if cost2>cost then
UWCopy(Up() , Wp() , m )
UWCopy(Un() , Wn() , m )
' print " cost2 > cost "
else
' print k;" cost : "; cost2
cost = cost2
end if
'
next k
print " final cost = ";cost
'
end
'
' ================================================================
'
sub genrand1(g1() as single)
'
' Generate signed random values for 1d array
'
static as integer i,n
static as single z
'
n=ubound(g1)
for i=0 to n ' ubound is inclusive, so cover every element
z=rnd
if z<0.5 then z=-z end if
g1(i)=z
next i
end sub
'
' _________________________________________________________________
'
sub genrand2(g2() as single)
'
' Generate signed random values for 2d array
'
static as integer i, j, n, m
static as single z
'
n=ubound(g2,1)
m=ubound(g2,2)
'
for j=0 to m
for i=0 to n
z=rnd
if z < 0.5 then z=-z end if
g2(i,j)=z
next i
next j
'
'
end sub
'
' _________________________________________________________________
'
sub flipper(A() as single)
'
' Random sign flips, use with input .
'
static as integer n,i
static as single s
'
n=ubound(A)
for i=0 to n
s=rnd
if s<0.5 then s=-1 else s=1 end if
A(i)=s*A(i)
next i
'
'
end sub
' _________________________________________________________________
'
'
sub swnet4(x() as single, Wp() as single, Wn() as single, n as integer, m as integer)
'
' 2 way switches and net composed of Wn and Wp adjustable weights
'
static as integer i,j,k,p
static as single z,s
static as single a(0 to 3)
dim as single y(0 to n-1) ' dim, not static: the accumulator must start at zero each call
'
k=0
for p=0 to m-1
j=0
for i=k to k+3
z=x(i)
if z<0 then s=z*Wn(j,p) else s=z*Wp(j,p) end if
a(j)=s
j=j+1
next i
j=0
for i=k to k+3
y(i)=y(i) + a(j)
j=j+1
next i
k=k+4
next p
'
for i=0 to n-1
x(i)=y(i)
next i
'
'
end sub
' _________________________________________________________________
'
'
sub prt_1ds(g1() as single)
'
' print 1d, single precision, array .
'
static as integer n,i
'
'
n=ubound(g1)
for i=0 to n
print g1(i);" , ";
next i
print
'
end sub
' _________________________________________________________________
'
'
sub mutate(Wn() as single, Wp() as single, m as integer)
'
' Randomly alter weights; the caller keeps the new weights if the
' cost decreases, otherwise it restores the saved previous weights .
'
static as integer j,p
static as single nv,pv
static as double s
'
' Assuming this selection criterion produces only a few changes.
'
Randomize ,1
'
for p=0 to m-1
for j=0 to 3
nv = Wn(j,p)
s=rnd
if (s<0.5) then nv=nv*(1+csng(s)) end if
pv = Wp(j,p)
s=rnd
if (s<0.5) then pv=pv*(1+csng(s)) end if
Wn(j,p)=nv
Wp(j,p)=pv
next j
next p
'
'
end sub
' _________________________________________________________________
'
'
sub UWCopy(Av() as single, Bv() as single, m as integer)
'
' Copy values from Av to Bv
'
static as integer j,p
'
'
for p=0 to m-1
for j=0 to 3
Bv(j,p) = Av(j,p)
next j
next p
'
end sub
' _________________________________________________________________
'
'
function bp_ramp(x as single) as single
' bipolar ramp function .
static as single y
if abs(x) >1 then y=x/abs(x) else y=x end if
return y
end function
'
sub apply_activation(g1() as single)
'
' Apply activation function to x()
'
static as integer n,i
static as single y
'
'
n=ubound(g1)
for i=0 to n
y=g1(i)
y=bp_ramp(y)
g1(i) = y
next i
'
end sub
' _________________________________________________________________
'
'
' Next 3 functions, From another author
function round(in as double, places as ubyte = 2) as string
dim as long _mul = 10 ^ places
return str(cdbl(int(in * _mul + .5) / _mul))
End Function
'
' ----------------------------------------------------------------------
'
function max( a as double, b as double ) as double
return iif( a > b, a, b)
end function
'
' ----------------------------------------------------------------------
'
'' Sum of squared difference
' Inputs vec() , tar
' Outputs cost
function costL2(vec() as single, tar() as single ) as double
static as integer i
dim as double cost
for i as long= 0 to ubound(vec)-1
var e = vec(i) - tar(i)
cost += e*e
next i
return cost
end function
'
' ----------------------------------------------------------------------
'
'
'' Fast Walsh Hadamard Transform
'' This routine is from another author .
'' As illustrated
'' Input vec() , output vec()
sub xwht( vec() as single, hs_pow2_shl as long = 0 ) '' 1 or 2 for Partial Fast W.H.T.
static as long n , i
n = ubound(vec)+1
var hs = 1 shl hs_pow2_shl
'print " xwht, n , hs ";n;" , ";hs
while (hs < n)
var i = 0
while (i < n)
var j = i + hs
while (i < j)
var a = vec(i)
var b = vec(i + hs)
vec(i) = a + b
vec(i + hs) = a - b
i += 1
wend
i += hs
wend
hs += hs
wend
' scale
for i=0 to n-1 ' n is the element count; the top index is n-1
vec(i)=vec(i)/n
next i
end sub
' _________________________________________________________________
'
'
Re: Switch Net 4 neural network
will have a look, thanks. i train hyperparams to find faster network convergence, had a realization that some hyperparam sets are obviously bad so i break loop early. not quite ready for release tho
here is a great video on 'mutation' Christoph Adami - TED talk
[edit] here's my hyperparam thing
Code:
/' hyperparam search for switchNet4 demo - 2023 Sep 13.1 - by dafhi
proof-of-concept hyper-parameter trainer.
my hyper-parameters represent linear interpolation
i linearly interpolate 3 things for this demonstration
1. mutator size
2. swap chance
3. flip chance
the hyper-parameters are 3 variables, for each of the above:
base + percentage * delta.
so that's 9 variables in total which get mutated until fast-ish
network performance is discovered.
mutation formula:
sub hyperparam.new_val
var big_nudge = lo+rnd*(hi-lo)
var nudge = curr * (1 + .025*(rnd-.5))
curr = iif( rnd < .11, big_nudge, nudge )
clamp( curr )
to see network performance after some hyperparameter training,
comment out #define train_hyperparams below
'/
#define train_hyperparams
'#include "../ba_ecs.bas"
'#include once "util.bas"
/' -- util.bas - 2023 Aug 17 - by dafhi
'/
'#include "boilerplate.bas"
/' -- boilerplate.bas - 2023 May 12 - by dafhi
+ ------------------------ +
| freebasic | c++ |
+ ----------- + ---------- +
| true = -1 | true = 1 |
| 0.99 = 1 | 0.99 = 0 | .. i hope that covers it
+------------- ----------- +
'/
#define sng as single
#define dbl as double
function min( a as double, b as double ) as double
return iif( a < b, a, b)
end function
function max( a as double, b as double ) as double
return iif( a > b, a, b)
end function
function clamp( in dbl, hi dbl = 1, lo dbl = 0) dbl
return min( max(in, lo), hi ) '' June 12
End Function
union uptrs
as any ptr a
as ubyte ptr b
as ushort ptr sho
as ulong ptr ul
as ulongint ptr uli
As Single Ptr s
as double ptr d
End Union
' -------------- boilerplate.bas
' ------- util.bas continued ..
#macro sw( a, b, tmp )
tmp = a: a = b: b = tmp
#endmacro
'
' -------- util.bas
' -------- hyperparams proof-of-concept continued ..
'
#include "string.bi" '' format() [rounding]
'' switch net 4
'' https://editor.p5js.org/congchuatocmaydangyeu7/sketches/IIZ9L5fzS
'' translation w/o hyperparameter trainer:
'' https://freebasic.net/forum/viewtopic.php?p=299998#p299998
type SwNet4
'' vecLen must be 4,8,16,32.....
declare sub setup( vecLen as long, as long )
declare sub recall( () as single, byref as single ptr ) '' () arrays
declare sub _abcd( () as single, as long )
declare sub _flips_backup
declare sub _flips_restore
as long depth
as single scale
sng params( any )
as ubyte _flips( any )
as ubyte _flip_undo( any )
as single _a,_b,_c,_d
as long _paramIdx, _j_base
end type
sub SwNet4.setup( vecLen as long, _depth as long)
depth = _depth
scale = 1 / sqr( vecLen shr 2 )
redim params ( 8 * vecLen * depth - 1 )
var j = 0
for i as long = 0 to ubound(this.params) step 8
this.params(i+j) = this.scale
this.params(i+4+j) = this.scale
j=(j+1) and 3
next
redim _flips( vecLen - 1 )
for i as long=0 to vecLen-1
this._flips(i)= rnd
next
redim _flip_undo ( ubound(_flips) )
end sub
'' Fast Walsh Hadamard Transform
sub wht( vec() as single, hs_pow2_shl as long = 0 ) '' 1 or 2 for Partial Fast W.H.T.
dim as long n = ubound(vec)+1
var hs = 1 shl hs_pow2_shl
while (hs < n)
var i = 0
while (i < n)
var j = i + hs
while (i < j)
var a = vec(i)
var b = vec(i + hs)
vec(i) = a + b
vec(i + hs) = a - b
i += 1
wend
i += hs
wend
hs += hs
wend
end sub
sub SwNet4._abcd( result() as single, j as long )
_paramIdx += 8
dim sng x=result( j+_j_base )
#if 1
if(x<0)then
_a+=x*params(_paramIdx)
_b+=x*params(_paramIdx+1)
_c+=x*params(_paramIdx+2)
_d+=x*params(_paramIdx+3)
else
_a+=x*params(_paramIdx+4)
_b+=x*params(_paramIdx+5)
_c+=x*params(_paramIdx+6)
_d+=x*params(_paramIdx+7)
endif
#else
if(x<0)then
_a+=x*param_ecs.val_read(_paramIdx)
_b+=x*param_ecs.val_read(_paramIdx+1)
_c+=x*param_ecs.val_read(_paramIdx+2)
_d+=x*param_ecs.val_read(_paramIdx+3)
else
_a+=x*param_ecs.val_read(_paramIdx+4)
_b+=x*param_ecs.val_read(_paramIdx+5)
_c+=x*param_ecs.val_read(_paramIdx+6)
_d+=x*param_ecs.val_read(_paramIdx+7)
endif
#endif
_j_base += 1
end sub
sub SwNet4.recall( result() as single, byref inVec as single ptr )
for i as long = 0 to ubound(result)
result(i) = inVec[i] * (scale/9) * iif( this._flips(i)and 1, 1, -1 )
next
wht( result() )
_paramIdx = -8
for i as long = 0 to this.depth - 1
for j as long = 0 to ubound(result) step 4
_j_base = 0
_a=0:_b=0:_c=0:_d=0
_abcd result(), j
_abcd result(), j
_abcd result(), j
_abcd result(), j
result(j)=_a
result(j+1)=_b
result(j+2)=_c
result(j+3)=_d
next
const pow2_shl = 2 '' August 1
wht( result(), pow2_shl )
next
end sub
sub SwNet4._flips_backup
for i as long = 0 to ubound(_flips)
_flip_undo(i) = _flips(i)
next
end sub
sub SwNet4._flips_restore
for i as long = 0 to ubound(_flips)
_flips(i) = _flip_undo(i)
next
end sub
function costL2(vec() as single, byref tar as single ptr) as double
dim dbl cost
for i as long= 0 to ubound(vec)-1
var e = vec(i) - tar[i]
cost += e*e
next
return cost
end function
type hyperparam
declare constructor
declare constructor( sng, sng )
declare operator cast sng
declare operator cast as string
declare operator let( sng )
sng hi, lo, curr
declare sub new_val
declare sub val_from_file
end type
constructor hyperparam
end constructor
constructor hyperparam( _lo sng, _hi sng )
hi = _hi
lo = _lo
end constructor
operator hyperparam.cast sng
return curr
end operator
operator hyperparam.cast as string
return str(curr)
end operator
operator hyperparam.let( s sng)
curr = s
end operator
sub hyperparam.val_from_file
get #1,, curr
end sub
sub hyperparam.new_val
var big_nudge = lo+rnd*(hi-lo)
var nudge = curr * (1 + .025*(rnd-.5))
curr = iif( rnd < .14, big_nudge, nudge )
curr = clamp(curr,hi,lo)
end sub
type hypers_base
declare sub dbg( sng, as string )
declare sub quick_vals( sng, sng, sng )
declare sub print
declare sub from_file
as hyperparam bas, delt, dcay
sng delt0 '' for copying pre-decay delta
end type
sub hypers_base.dbg( smax sng, msg as string )
if bas + delt > smax orelse dcay > 1 then
? msg
? bas + delt
? dcay
: sleep
endif
end sub
sub hypers_base.quick_vals( b sng, del sng, dcy sng )
bas = b
delt = del
dcay = dcy
end sub
sub hypers_base.print
? bas'format(bas, ".##")
? delt'format(delt0, ".##")
? dcay'format(dcay, ".###")
end sub
sub hypers_base.from_file
bas.val_from_file
delt.val_from_file
dcay.val_from_file
end sub
type hyper_set
as hypers_base muta_size
as hypers_base swap_chance
as hypers_base flip_chance
dbl cost
end type
sub backup_delta( byref t as hyper_set )
t.muta_size.delt0 = t.muta_size.delt
t.swap_chance.delt0 = t.swap_chance.delt
t.flip_chance.delt0 = t.flip_chance.delt
end sub
sub restore_decayed_delta( byref t as hyper_set )
t.muta_size.delt = t.muta_size.delt0
t.swap_chance.delt = t.swap_chance.delt0
t.flip_chance.delt = t.flip_chance.delt0
end sub
sub print_hypers( byref t as hyper_set )
? " muta_size": t.muta_size.print
? " swap_chance": t.swap_chance.print
? " flip_chance": t.flip_chance.print
end sub
type Mutator
declare constructor( as long, as long, as single )
declare sub mutate( byref as SwNet4, byref as hyper_set )
declare sub undo( byref as SwNet4 )
as long size, precision
as single limit
as single previous( any)
as long pIdx( any)
end type
constructor mutator( _size as long, precis as long, limit as single )
size = _size
redim this.previous( size-1 )
redim this.pIdx( size-1 )
this.precision = precis
this.limit = limit
end constructor
sub Mutator.mutate( byref net as SwNet4, byref hypers as hyper_set )
dim sng sc = ubound(net.params) + .499 '' c++ .999
dim as long rpos, rpos2, c
dim sng vm, m
'' previous() and pIdx() detail a small set for mutation
for i as long= 0 to ubound( this.pidx ) ' relatively small array
'' to continue GreenInk's sub-random idea, i try swapping - dafhi
if rnd < hypers.swap_chance.bas + rnd*hypers.swap_chance.delt andalso i < size-1 _
then
rpos = rnd * sc
this.pIdx(i) = rpos '' muta location
this.previous(i) = net.params(rpos) '' save pre-mutate
i += 1
rpos2 = rnd * sc
this.pIdx(i) = rpos2 '' muta location
this.previous(i) = net.params(rpos2) '' save pre-mutate
static as typeof(net.params) tmp '' custom swap
sw( net.params(rpos), net.params(rpos2), tmp )
else
rpos = rnd * sc ' random elem
this.pIdx(i) = rpos '' muta location
this.previous(i) = net.params(rpos) '' save pre-mutate
m = 2 * this.limit * exp(rnd*-this.precision)
vm = net.params(rpos) + iif(rnd<.5,m,-m)
if (vm > this.limit)orelse(vm < -this.limit) then continue for
net.params(rpos) = vm
endif
next
hypers.swap_chance.delt *= hypers.swap_chance.dcay '' new
'' new: mutate flips (Aug 23
net._flips_backup
sc = ubound(net._flips)+.499 '' c++ .999
c = ubound(net._flips)+1
var u = c*( hypers.flip_chance.bas + hypers.flip_chance.delt) - 1
for i as long = 0 to u
dim as long dest_index = rnd*sc
net._flips( dest_index ) = rnd
next
hypers.flip_chance.delt *= hypers.flip_chance.dcay
end sub
sub Mutator.undo( byref net as SwNet4 )
for i as long = ubound(pIdx) to 0 step -1
net.params(pIdx(i))= previous(i)
next
net._flips_restore
end sub
'#include "swnet_hypers_demo.bas"
namespace demo
'' Test with Lissajous curves
dim as ulong c1 = rgb(0,0,0),c2 = rgb(255,255,0)
dim sng ex(8,255)
dim sng work(255)
dim dbl parentCost
dim as long w,h
dim as SwNet4 parentNet
const success_frames_scalar = 1.02
sub visualize
cls ' clearscreen
locate 2,1
? "Training Data"
for i as long = 0 to 7
for j as long= 0 to 255 step 2
var y=44 + 18 * ex(i,j + 1)
pset (25 + i * 40 + 18 * ex(i,j), y), c2
next
next
locate 10,1
? "Recall"
for i as long = 0 to 7
parentNet.recall( work(), @ex(i,0) )
for j as long = 0 to 255 step 2
pset(25 + i * 40 + 18 * work(j), 104 + 18 * work(j + 1)), c2
next
next
end sub
sub random_hypers( byref t as hyper_set )
const chance = 1.
if rnd < chance then t.muta_size.delt.new_val
if rnd < chance then t.muta_size.dcay.new_val
if rnd < chance then t.muta_size.bas.new_val
if rnd < chance then t.swap_chance.delt.new_val
if rnd < chance then t.swap_chance.dcay.new_val
if rnd < chance then t.swap_chance.bas.new_val
if rnd < chance then t.flip_chance.delt.new_val
if rnd < chance then t.flip_chance.dcay.new_val
if rnd < chance then t.flip_chance.bas.new_val
end sub
sub best_to_file( byref t as hyper_set )
restore_decayed_delta t
open "hypers.txt" for output as #1
put #1,, t.muta_size.bas.curr
put #1,, t.muta_size.delt.curr
put #1,, t.muta_size.dcay.curr
put #1,, t.swap_chance.bas.curr
put #1,, t.swap_chance.delt.curr
put #1,, t.swap_chance.dcay.curr
put #1,, t.flip_chance.bas.curr
put #1,, t.flip_chance.delt.curr
put #1,, t.flip_chance.dcay.curr
write #1, ""
write #1, "size base " + str(t.muta_size.bas.curr)
write #1, "delta "+ str(t.muta_size.delt.curr)
write #1, "delta decay "+ str(t.muta_size.dcay.curr)
write #1, "swap base " + str(t.swap_chance.bas.curr)
write #1, "swap delta "+ str(t.swap_chance.delt.curr)
write #1, "swap decay "+ str(t.swap_chance.dcay.curr)
write #1, "flip base " + str(t.flip_chance.bas.curr)
write #1, "flip delta "+ str(t.flip_chance.delt.curr)
write #1, "flip decay "+ str(t.flip_chance.dcay.curr)
close
end sub
dim as double perf
dim sng cost_best(2)
dim as hyper_set hypers_best
sub save_new_stats( hardening as boolean, byref t as hyper_set )
for i as long = ubound(cost_best) to 1 step -1
cost_best(i) = cost_best(i-1)
next
cost_best(0) = parentCost
restore_decayed_delta t
hypers_best = t
end sub
sub training_info( hardening as boolean )
locate 22
? "best: "; format(cost_best(0), ".##")
locate 24
if hardening then
? "hardening result from file"
else
? "bad set if > 1 --> "; format(perf, ".##"); " "
endif
end sub
dim as long epoch
dim as single outer_frame, _outer_frame
dim as hyper_set hyp(4)
sub run_epoch( byref t as hyper_set, hardening as boolean = false )
parentNet.setup 256,2
'' after a bug hunt, i realized i need to save / restore 'delta'
'' because after a run, delta has decayed and if net was good,
'' i save to hypers_best
backup_delta t
demo.parentCost = 1/0
t.muta_size.dbg 255.499, "muta size"
t.swap_chance.dbg 1, "swap"
t.flip_chance.dbg 1, "flip"
_outer_frame = outer_frame
#ifdef train_hyperparams
if not hardening then _outer_frame = outer_frame * .7 + _
rnd * outer_frame * .5
#endif
for frame as long = 1 to _outer_frame
var precision = 35
var limit = 2*parentNet.scale
'' mutator with hyperparams. 2023 Aug 8 - by dafhi
dim as Mutator mut = type( t.muta_size.bas + rnd*t.muta_size.delt, precision, limit )
t.muta_size.delt.curr *= t.muta_size.dcay.curr '' reduce max over time
for i as long = 0 to 6 '' 100 originally. reduced for some cpu sleep
mut.mutate( parentNet, t )
dim dbl cost = 0
for j as long = 0 to 7
parentNet.recall( work(), @ex(j,0) )
cost += costL2( work(), @ex(j,0) )
next
if (cost < parentCost) then
parentCost = cost
else
mut.undo( parentNet )
endif
next
#ifdef train_hyperparams
if hardening then
visualize
else
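'' early break: a set tracking clearly worse than cost_best isn't worth a full run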
var break_early = .8
perf = (parentCost / cost_best(0)) ^ break_early * _
(frame / _outer_frame) ^ 1.2
if perf > .99 then parentCost = 1/0: exit for
endif
training_info hardening
#else
visualize
#endif
locate 21
?"Cost: "; ; format(parentCost, ".##"); " "
locate 19
? "frame "; frame; " of"; _outer_frame; " "
sleep 1 '' large sleep val to keep laptop cool-ish
dim as string kstr = lcase(inkey)
select case kstr
case ""
case chr(27)
end
case "z","x","c","v", " "
parentCost *= (frame / _outer_frame) ^ .5
exit for
end select
next frame
t.cost = parentCost
#ifdef train_hyperparams
if parentCost < cost_best(0) then
save_new_stats hardening, t
if not hardening then
best_to_file t
endif
visualize
if not hardening then outer_frame = _outer_frame
endif
#endif
end sub ' ---------- run_epoch
sub print_cost( byref t as hyper_set, y as long )
locate 26+y
print t.cost
end sub
sub run_array
run_epoch hyp(0)
hyp(0) = hypers_best
random_hypers hyp(0)
end sub
sub setup
dim as hyper_set ptr p = @hyp(0)
'' hyperparam ranges
p->muta_size.bas = type(4,43)
p->muta_size.delt = type(65,160)
p->muta_size.dcay = type(.91,.99999)
p->swap_chance.bas = type( .001, .4)
p->swap_chance.delt = type(.001, .5)
p->swap_chance.dcay = type(.89, .99999)
p->flip_chance.bas = type( .001, .4)
p->flip_chance.delt = type(.001, .5)
p->flip_chance.dcay = type(.88, .99999)
w = 400
h = 400
screenres w,h,32
const tau = 8*atn(1)
for i as long = 0 to 127
'' Training data
dim sng t = (i * tau) / 127
ex(0,2 * i) = sin(t)
ex(0,2 * i + 1) = sin(2 * t)
ex(1,2 * i) = sin(2 * t)
ex(1,2 * i + 1) = sin(t)
ex(2,2 * i) = sin(2 * t)
ex(2,2 * i + 1) = sin(3 * t)
ex(3,2 * i) = sin(3 * t)
ex(3,2 * i + 1) = sin(2 * t)
ex(4,2 * i) = sin(3 * t)
ex(4,2 * i + 1) = sin(4 * t)
ex(5,2 * i) = sin(4 * t)
ex(5,2 * i + 1) = sin(3 * t)
ex(6,2 * i) = sin(2 * t)
ex(6,2 * i + 1) = sin(5 * t)
ex(7,2 * i) = sin(5 * t)
ex(7,2 * i + 1) = sin(2 * t)
next
open "hypers.txt" for input as #1
p->muta_size.from_file
p->swap_chance.from_file
p->flip_chance.from_file
close
var file_found = p->muta_size.bas > 0
cost_best(0) = 1/0
#ifdef train_hyperparams
outer_frame = 659
if file_found then
var hardening = true
for i as long = 1 to 2
run_epoch *p, hardening
next
visualize
outer_frame *= .93
else
random_hypers *p
hypers_best = *p
endif
#else
outer_frame = 5499
#endif
end sub
end namespace
randomize
demo.setup
#ifdef train_hyperparams
dim as long epoch_max = 9999
#else
dim as long epoch_max = 1
#endif
for demo.epoch = 1 to epoch_max
locate 18
? "epoch "; demo.epoch; " of "; epoch_max
demo.run_array
sleep 24
next
locate 30
?"done!"
sleep
Re: Switch Net 4 neural network
The video is interesting, reminds me a little of Conway's Game of Life.
Re: Switch Net 4 neural network
i notice WHT similarity to FFT. my grasp of FFT is weak but i understand how both highlight that 'typical' NN's are overkill.
working out a variant of switch net in my head where input layer has equal size layer #2, connections mutate-shuffled
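a rough sketch of the wiring idea (all names mine, nothing from an actual implementation):
Code:
' layer 2 reads the input through a permutation conn() ;
' a mutation swaps two entries, and the undo swaps them back
dim as long conn(0 to 7)
for i as long = 0 to 7 : conn(i) = i : next ' identity wiring
'
dim as long p1 = int(rnd*8), p2 = int(rnd*8)
swap conn(p1), conn(p2) ' mutate-shuffle
' ... evaluate cost ; if worse :
swap conn(p1), conn(p2) ' revert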
Re: Switch Net 4 neural network
FFTs have some versatility; even a size-8 transform represents a fair amount of calculation and connectedness.
Reading what's on the internet, deep learning involves larger NNs;
my matrix approach to NNs might qualify, if that's the case.
Re: Switch Net 4 neural network
In an attempt to re-familiarise myself with NNs I examined some
introductory examples from the internet.
There the notion of regularisation is discussed; this can take many
forms, including normalising the weights, selectively subtracting a random
amount, and nullifying certain connections.
In those examples an activation function and its derivative were used with
back propagation.
Your approach is a little different.
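Here is a minimal sketch of two of those regularisation forms, shaped like the 2-D weight arrays used earlier in this thread; the decay and drop-chance values are arbitrary.
Code:
' shrink every weight toward zero ( weight decay ) and nullify a few
' random connections ; e.g. regularise( Wp(), 0.001, 0.01 )
sub regularise( W() as single, decay as single, drop_chance as single )
for j as long = 0 to ubound(W,2)
for i as long = 0 to ubound(W,1)
W(i,j) *= (1 - decay)
if rnd < drop_chance then W(i,j) = 0
next
next
end sub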
Re: Switch Net 4 neural network
GreenInk described the idea years ago:
1. mutate
2. keep vs. revert
and i stuck with it.
my hyper-parameter interest spans a few years: discovering optimizations and realizing oversights from previous projects.
Re: Switch Net 4 neural network
There was a question about the FFT.
The FFT is similar to the WHT, except there are
complex additions and multiplications by a complex
value, known as the twiddle factor; this might be
interpreted as being like a fixed weight.
Just as the WHT has a decimation in frequency arrangement,
so does the FFT.
If you run this code you'll see that even a small FFT, or WHT,
is already considerably larger than the first NN illustrated.
The first NN was effectively trained, after 10000 iterations,
to simulate an exclusive or gate and, perhaps, that's all.
The WHT [FFT] version is almost like a deep neural network in
comparison.
There are various properties of the FFT that might be useful.
For instance, by having extra zero values in the last inputs,
we accomplish interpolation at the last pass.
This provides extra, related values for the main switched
NN, possibly guarding somewhat against overfitting.
I'm gradually pondering your linear hyper-parameter approach.
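To make the FFT/WHT comparison above concrete, here is a sketch of mine showing the two butterflies side by side; the FFT version assumes complex values held as separate real and imaginary parts.
Code:
' WHT butterfly : sums and differences only, no multiplication
sub wht_butterfly( byref xa as single, byref xb as single )
dim as single a = xa + xb, b = xa - xb
xa = a : xb = b
end sub
'
' FFT butterfly : b is first multiplied by the twiddle factor
' w = (wr,wi) = exp(-i*2*pi*k/N), which acts like a fixed complex weight
sub fft_butterfly( byref ar as single, byref ai as single, _
byref br as single, byref bi as single, _
wr as single, wi as single )
dim as single tr = wr*br - wi*bi ' t = w * b
dim as single ti = wr*bi + wi*br
br = ar - tr : bi = ai - ti ' a - w*b
ar = ar + tr : ai = ai + ti ' a + w*b
end sub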
Code:
' nn2_draw2a.bas
' Draw Neural Network structure from selection variables.
declare sub nn_illustrate(numInputs as integer,numHiddenNodes as integer,numOutputs as integer,numTrainingSets as integer)
' NN
dim as integer numInputs = 3
dim as integer numHiddenNodes = 4 ' 2
dim as integer numOutputs = 1
dim as integer numTrainingSets = 8
dim as integer numHiddenLayers=1
'
' ______________________________________________________________________
'
nn_illustrate(numInputs ,numHiddenNodes ,numOutputs ,numTrainingSets)
print"done"
sleep
end
'
' ======================================================================
'
sub nn_illustrate(numInputs as integer,numHiddenNodes as integer,numOutputs as integer,numTrainingSets as integer)
'
' Illustrate the structure of the Neural Network .
'
dim as integer i,j
'
screen 12
locate 2,30
print" Neural Network Structure "
locate 3,4
print "Inputs"
for i=0 to numInputs - 1
locate i+4,1
print i
line(30,i*15+50)-(60,(i+1)*15+50),14,b
next i
'
' HiddenNodes
'
locate 3,12
color 11,0
print "Hidden Nodes"
for i=0 to numHiddenNodes - 1
' locate i+4,1
' print i
line(90,i*15+50)-(120,(i+1)*15+50),11,b
next i
'
' Outputs
'
locate 3,41
print "Outputs"
for i=0 to numOutputs-1
line(320,i*15+50)-(350,(i+1)*15+50),10,b
next i
'
'
' HiddenNodes Biases
'
locate 8,17
color 13,0
print "Hidden Nodes Biases"
for i=0 to numHiddenNodes - 1
' locate i+4,1
' print i
line(125,i*15+50)-(155,(i+1)*15+50),13,b
next i
'
' Outputs Biases
'
locate 8,46
print "Outputs Biases"
for i=0 to numOutputs-1
line(355,i*15+50)-(385,(i+1)*15+50),13,b
next i
'
'
' Training sets .
'
for j=0 to numTrainingSets-1
locate 13,j*5+5
print j
for i=0 to numInputs - 1
locate i+14,1
print i
line(30+j*40,i*15+210)-(60+j*40,(i+1)*15+210),14,b
next i
next j
'
'
dim as integer k
k= numInputs*15+15
for j=0 to numTrainingSets-1
' locate 13+numInputs,j*5+5
' print j
for i=0 to numOutputs - 1
locate numInputs+15,1
print i
line(30+j*40,k+i*15+210)-(60+j*40,k+(i+1)*15+210),10,b
next i
next j
'
locate 12,6
print " Training Set"
line(25,170)-( numTrainingSets*40+30,(numInputs+1+numOutputs)*15+5+210),7,b
'
locate 30,54
color 12,0
print " press any key to continue"
sleep
'
'
' Structure of length 8 fft , or wht .
'
cls
dim as integer n
n=8
locate 2,30
color 15,0
print" FFT or WHT Structure, N=8 "
locate 3,4
print "Inputs"
for i=0 to n - 1
locate i+4,1
print i
line(30,i*15+50)-(60,(i+1)*15+50),14,b
next i
'
locate 3,10
color 10,0
print " Bit Reversed"
for i=0 to n - 1
' locate i+4,1
' print i
line(65,i*15+50)-(95,(i+1)*15+50),10,b
next i
'
locate 12,14
color 11,0
print " Pass 1"
for i=0 to n - 1
line(120,i*15+50)-(150,(i+1)*15+50),11,b
next i
'
locate 12,22
color 11,0
print " Pass 2"
for i=0 to n - 1
line(175,i*15+50)-(205,(i+1)*15+50),11,b
next i
'
locate 12,29
color 11,0
print " Pass 3"
for i=0 to n - 1
line(230,i*15+50)-(260,(i+1)*15+50),11,b
next i
'
locate 3,38
color 15,0
print"Switched NN "
i=0
line(290,i*15+50)-(380,(n)*15+50),15,b
'
locate 30,54
color 12,0
print " press any key to continue"
sleep
dim as ulong SCREEN_EXIT=&h80000000
Screen 0, , , SCREEN_EXIT' (with SCREEN_EXIT=&h80000000)
' The previous command returns to native console/terminal window.
'screen
'
end sub
Re: Switch Net 4 neural network
xor gate .. yeah okay. in one of my past projects i attempted 'all the gates'
input
00
01
10
11 .. sparsenet
.. [edit] ..
my hyper-parameter comments need updating. linear interp on its own could be replaced by one variable.
my interp is .. base + [decaying] delta
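quick sketch of that interp ('bas' because base is a keyword; values arbitrary):
Code:
' each draw returns bas + rnd*delta ; delta decays toward zero,
' so the sampled range narrows over time
dim as single bas = .1, delta = .5, dcay = .99
for k as long = 1 to 5
print bas + rnd * delta ' sampled hyperparameter value
delta *= dcay ' shrink the delta
next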
Re: Switch Net 4 neural network
This code will illustrate some of the concepts I perceive
as being used within switchnet4.
The switch section selectively uses the positive or negative
Neural Network weights. These weights reside within banks of size
four.
The main banks contain the Neural Network weights for each layer,
separated into negative [ cyan ] and positive [ red ] banks.
The stored banks are copied from the main banks.
They are then conditionally written back to the main banks, depending upon
the difference between the output values and the expected output values.
The section labeled "extra" is to be determined.
Now I have a few questions.
How much of the main banks is randomly altered: is it just a few banks
of size four, or all of the main banks?
Are these alterations dependent upon whether the positive or negative
banks are in use at any particular epoch?
In the code for SwNet4(), firstly wht( result() ) is used, then
after some calculations, wht( result(), pow2_shl ) is used.
What is the effect of this: are we sampling from a larger data set?
Code:
' sciwiseg@gmail.com
' Rectangle drawing
screen 12
window screen (0,0)-(100,100)
type pt
x as single
y as single
end type
dim as integer n,i,cm,j
n=64
dim pts(0 to n-1) as pt
dim chrome(0 to n-1) as integer
'
' Read data
'
restore datax
for i=0 to n-1
read pts(i).x
next i
restore datay
for i=0 to n-1
read pts(i).y
next i
restore colorp
for j=0 to (n/2)-1
read chrome(j)
next j
'
' Display connected pts
'
dim as single x,y,x1,y1
'
' Line [target,] [[STEP]|(x1, y1)]-[STEP] (x2, y2) [, [color][, [B|BF][, style]]]
' or
' Line - (x2, y2) [, [color][, [B|BF][, style]]]
'
for i=0 to n-1 step 2 'n-1
x=pts(i).x
y=pts(i).y
cm=chrome(int(i/2))
x1=pts(i+1).x
y1=pts(i+1).y
if x1 > 0 and x > 0 then line(x,y)-(x1,y1),cm,b
next i
'
' ______________________________________________________________________
'
color 10,0
locate 2,2
print "in";
color 13,0
locate 2,8
print "wht"
color 14,0
locate 2,12
print " switch"
color 15,0
locate 2,21
print "main banks"
color 6,0
locate 2,32
print " extra"
color 13,0
locate 2, 40
print "wht"
color 9,0
locate 2,46
print "out"
color 8,0
locate 18,3
print "stored banks"
color 11,0
locate 14,50
print "- banks"
color 4,0
locate 15,50
print "+ banks"
' within SwNet4()
' wht( result() )
' wht( result(), pow2_shl )
color 13,0
locate 12,1
print "wht(result())"
locate 12,38
print "wht(result(), pow2_shl )"
sleep
end
'
' ======================================================================
'
' ----------------------------------- x --------------------------------
'
datax:
'input
data 2,5
' wht
data 7,15
' switch
data 17,20
' block
data 22,39
' - bank
data 24,27,24,27,29,32,29,32,34,37,34,37
' + bank
data 24,27,24,27,29,32,29,32,34,37,34,37
' xtra
data 41,44
' wht 2
data 46,54
' output
data 56,59
'
' -------------------------- banks 2 ----------------------------------
'
' block
data 22,39
' - bank
data 24,27,24,27,29,32,29,32,34,37,34,37
' + bank
data 24,27,24,27,29,32,29,32,34,37,34,37
'
' ------------------------------ y ------------------------------------
'
datay:
' input
data 10,30
' wht
data 10,30
' switch
data 10,52
' block
data 8,53
' - bank
data 10,18,21,30,10,18,21,30,10,18,21,30
' + bank
data 32,40,43,51,32,40,43,51,32,40,43,51
' xtra
data 10, 52
' wht 2
data 10,30
' output
data 10,30
'
' ................... banks 2 ......................
'
' block
data 55,100
' - bank
data 57,65,68,76,57,65,68,76,57,65,68,76
' + bank
data 79,87,90,98,79,87,90,98,79,87,90,98
'
' ------------------------- colour -------------------------------------
'
colorp:
data 10,13,14,15,11,11,11,11,11,11,4,4,4,4,4,4,6,13,9
data 8,11,11,11,11,11,11,4,4,4,4,4,4
Re: Switch Net 4 neural network
i haven't coded a wht from scratch. my version with pow2_shl is an all-in-one, after examining GreenInk's .. yes i did note he used a different wht at different stages. it's cool you made an illustration; i may stare at it a bit longer, i code from inspiration.
in GreenInk's diagram, the switches seem to reroute connections, whereas in the lissajous curve demo, switches flip values.
also, GreenInk's demo initializes said flip values and only reads them later. i often mis-analyze questions and guess mis-matched inspiration behind other people's ideas, so bear with.
also, one reason i see WHT potential is it doesn't use multiply. there is also potential single-thread parallelism through masking
for example in graphics you can multiply red and blue in one go
result = color * FF00FF
[edit]
for a rate i suggest 28%
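going back to the masking trick, a sketch as i understand it (assumes &hAARRGGBB pixels, so the FF00FF mask catches red and blue; names and values are mine):
Code:
' one integer multiply scales the red and blue channels at once
dim as ulong pix = &hFF3080C0, scale_ = 128 ' scale_ in 0..256
dim as ulong rb = (((pix and &hFF00FF) * scale_) shr 8) and &hFF00FF
dim as ulong g = (((pix and &h00FF00) * scale_) shr 8) and &h00FF00
print hex( rb or g )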
Re: Switch Net 4 neural network
I'm glad the diagram is of interest, and possible use, to you; feel free to construct your own
diagrams with the Rectangle Drawing code.
Ah, I forgot to illustrate the sign-flipping portion of the diagram.
As the WHT doesn't use multiplication, or complex numbers, it's fairly easy to implement.
I definitely need to scrutinize the original code and also unravel what GreenInk was intending.