Here is something... interesting...

General discussion for topics related to the FreeBASIC project or its community.
voodooattack
Posts: 605
Joined: Feb 18, 2006 13:30
Location: Alexandria / Egypt
Contact:

Here is something... interesting...

Post by voodooattack »

http://www.biologic.com.au/bugbrain/

Ever tried to design neural networks? I don't mean an engine/library that simulates them; what I mean is adding and programming each and every neuron and synapse, then tuning them manually.

This game is seriously awesome. You get to control various creatures (starting with a ladybug, then worms, and later on ants) and vehicles (also later on, in the workshop), and every time you'll be faced with a challenge: given some input nodes and output nodes, your task is to add neurons, link them, and adjust weights in order to get your vessel to perform the task successfully.

The challenge escalates with your progress: in each stage, as you move through the levels, new obstacles, enemies and input/output nodes are added, so try to think ahead.

--------------------------------

So, after finishing this game (it took me about 4 days of non-stop grind), my head was beaming with ideas. With all the knowledge I've now obtained about the inner workings of neural networks, learning things like how to build logical operators (AND/OR/NOT/XOR, etc.), how to make neuron "loops/timers" in which neurons inhibit/trigger each other in a timely fashion, how to perform math, and how logic works at the neuron level, it got me thinking: wouldn't it be possible to write a neural network compiler?
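
To give a taste of the building blocks I mean (this is purely my own illustration, not code from the game, and the Neuron function is just made up for the sketch): a single threshold neuron with hand-picked weights already behaves as AND, OR or NOT, and XOR only takes a second layer of the same thing. A minimal FreeBASIC sketch:

Code: Select all

'' A threshold neuron: fires (returns 1) when the weighted sum of its
'' inputs reaches the threshold. The chosen weights decide which gate it acts as.
Function Neuron(ByVal a As Integer, ByVal b As Integer, _
                ByVal wa As Double, ByVal wb As Double, _
                ByVal threshold As Double) As Integer
    If (a * wa + b * wb) >= threshold Then Return 1
    Return 0
End Function

For a As Integer = 0 To 1
    For b As Integer = 0 To 1
        Print "AND("; a; ","; b; ") = "; Neuron(a, b, 1.0, 1.0, 2.0)  '' both inputs needed
        Print "OR ("; a; ","; b; ") = "; Neuron(a, b, 1.0, 1.0, 1.0)  '' either input is enough
        Print "NOT("; a; ")   = "; Neuron(a, 1, -1.0, 1.0, 1.0)       '' inhibitory weight plus an always-on bias input
    Next
Next
Sleep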

By a neural network compiler, I mean a compiler that takes some code and outputs a neural network model that performs the exact task your program describes. In BASIC terms, the code for a "mushroom seeker" worm would be something like:

Code: Select all

'' this code is very loose in concept, since in a neural network
'' almost everything would be running in parallel..

sub move_forward()
    if mid_height = 0 then
        signal grab_front()
        signal raise_mid()
    elseif mid_height = 100 then
        signal grab_back()
        signal lower_mid()
    end if
end sub

if (smellLeft - smellRight > 5) then
    signal grab_back()
    signal bend_left()
    signal grab_front()
    signal bend_right()
elseif (smellRight - smellLeft > 5) then
    signal grab_back()
    signal bend_right()
    signal grab_front()
    signal bend_left()
else
    transfer move_forward() '' transfers control to a different subnet of neurons
end if

Not sure how tough it would be to design/implement such a language, but think of the endless possibilities: a piece of code that can learn (through back-propagation or other methods). Your code would simply give the neural network an initial state/design, and when it goes "live" it could morph into something completely different, yet perform its assigned task better than your original version.

I have plans to experiment heavily on the subject, any thoughts?
John Spikowski
Posts: 453
Joined: Dec 24, 2005 2:32
Location: WA - USA
Contact:

Post by John Spikowski »

Microsoft is playing with the idea as well.

Kodu clip

The cool thing is that there is a BASIC-like scripting language.

Microsoft trains next-gen coders with XNA's Kodu
Last edited by John Spikowski on Jan 31, 2009 11:49, edited 1 time in total.
Hexadecimal Dude!
Posts: 360
Joined: Jun 07, 2005 20:59
Location: england, somewhere around the middle
Contact:

Post by Hexadecimal Dude! »

Wow, this looks really interesting; I have to give that game a try. Surprisingly, just a few days ago a friend and I were talking about various AI techniques, and I said one thing I don't like about NNs is that you lose your control and understanding of how the problem came to be solved (maybe high-level researchers don't, but it's all magic to me), but this thing might solve that problem.

Regarding your language, it sounds cool; I'm excited to hear you want to experiment heavily on the subject. You might be interested in having a look at hardware description languages (e.g. Verilog), since that will essentially be the family in which you're writing (albeit for a virtual piece of "hardware"). These languages have to have some notion of parallelism, although maybe not the degree of parallelism you'd need.

Good luck!

(Regarding that MS thing: the link was interesting, but I think the post missed the point a little bit. The interesting thing here isn't the fact that it's a user-generated-control based game; that idea is pretty old hat [although MS seem to have created a pretty nice version of the concept there]. Rather, the crux of the matter is the by-hand design of neural networks.)
rolliebollocks
Posts: 2655
Joined: Aug 28, 2008 10:54
Location: new york

Post by rolliebollocks »

I encourage you to experiment, but you don't need a neural network for a worm which seeks mushrooms. You need a neural network to design a worm which can learn to design worms which can eat mushrooms.

What you're really replicating when you create a neural network is the exteriority of the consciousness from the environment. By all means the worm could more easily be panoptic and scour the entire landscape knowing where each mushroom is w/o a neural network.

So what you're doing in a sense is crippling the worm's intelligence by creating a complicated system of stimulus response. In a sense, the neural network is a piece of art with no imagination. It isn't real, it's only a representation, it's slow, it's complicated, it's unnecessary.

Instead of reinventing the wheel, I'd like to see someone come out with a new approach.

That's just me though...

rb
voodooattack
Posts: 605
Joined: Feb 18, 2006 13:30
Location: Alexandria / Egypt
Contact:

Post by voodooattack »

John Spikowski: That's more like ChipWits II with enhanced graphics. I like the concept of "machinist" games and visual programming; I've had my share playing them, cool stuff. I know it somewhat resembles this game, but not quite: the concept of this one is totally different. Crafting neural networks to produce the logic you want is different and much more complex, even at this simple scale.

Hexadecimal Dude!: I'm glad you found it useful. I used to look at neural nets as a useful but impossible-to-grasp-the-inner-workings-of concept, and this game literally changed my whole view on the subject.

I like your HDL suggestion, and thinking about it a bit more, I actually figured out one of the things I needed for the syntax to fit its purpose: a new "When" keyword, since that suits spiking neural networks and parallel programming much better, as in:

Code: Select all

When smell_left do
    turn_right, etc.
End When

A wholly different approach would be events, but that's another topic.

As I see it, the real problem is variables: since there is no such thing as a "memory buffer" in spiking neural nets, variables that need to be kept must be propagated throughout the network in order to be preserved. I'm thinking of creating a subnet of linear neurons that hand the value over in a big loop, but there could be an easier method of doing this... we'll see.
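
To make that "big loop" idea a bit more concrete, here's a rough sketch (purely illustrative, not real spiking-net code): a ring of linear neurons where each one simply re-emits, on the next tick, whatever it received on the previous tick, so the value keeps circulating instead of sitting in a buffer.

Code: Select all

'' Preserving a value in a ring of linear neurons: each tick, every
'' neuron re-emits whatever it received from its predecessor last tick.
Const RING_SIZE = 8
Dim As Double act(0 To RING_SIZE - 1)   '' current activation of each neuron
Dim As Double nxt(0 To RING_SIZE - 1)   '' activations for the next tick

act(0) = 0.75   '' the value we want to keep alive

For tick As Integer = 1 To 24
    For i As Integer = 0 To RING_SIZE - 1
        '' a linear neuron with weight 1.0 from its predecessor in the ring
        nxt(i) = act((i - 1 + RING_SIZE) Mod RING_SIZE)
    Next
    For i As Integer = 0 To RING_SIZE - 1
        act(i) = nxt(i)
    Next
Next

'' after any multiple of RING_SIZE ticks the value is back at neuron 0
Print "value after 24 ticks: "; act(0)
Sleep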

rolliebollocks: The mushroom was just a simplistic example. The way I see it, the designer in this game is almost capable of controlling complex robotics; check out the ants if you want to know more. (If it's too hard, you can skip through the levels by downloading solution files from the website.)
rolliebollocks wrote:What you're really replicating when you create a neural network is the exteriority of the consciousness from the environment. By all means the worm could more easily be panoptic and scour the entire landscape knowing where each mushroom is w/o a neural network.
Well, I find it funny that organic brains work with the same concepts then.
rolliebollocks wrote:So what you're doing in a sense is crippling the worm's intelligence by creating a complicated system of stimulus response. In a sense, the neural network is a piece of art with no imagination. It isn't real, it's only a representation, it's slow, it's complicated, it's unnecessary.
The word is "it's an imperfect and very primitive simulation".
rolliebollocks wrote:Instead of reinventing the wheel, I'd like to see someone come out with a new approach.
I'm not trying to reinvent the wheel; I'm trying to invent a new method of propulsion.
Let's call it Neuro-programming as opposed to linear/procedural programming. (not to be confused with NLP) =P
Hexadecimal Dude!
Posts: 360
Joined: Jun 07, 2005 20:59
Location: england, somewhere around the middle
Contact:

Post by Hexadecimal Dude! »

Yeah, your "when" block will probably make things more elegant. I don't know if you need the "do" after the condition though ;P, it just seems odd (since while, for etc don't have it), unless you're going for a different style of language than a BASIC.
voodooattack
Posts: 605
Joined: Feb 18, 2006 13:30
Location: Alexandria / Egypt
Contact:

Post by voodooattack »

Hexadecimal Dude! wrote:Yeah, your "when" block will probably make things more elegant. I don't know if you need the "do" after the condition though ;P, it just seems odd (since while, for etc don't have it), unless you're going for a different style of language than a BASIC.
I guess you're right about that one. What had me add that "do" part was that I wanted it similar to the "if/then" syntax, but making it closer to the loop/while blocks seems more logical.

On the other hand, I was thinking I might make it a C-like language, as in "when (condition) { dostuff(); return; }", since C is considerably easier to parse.

Either way, I'm open to suggestions :)
Mentat
Posts: 332
Joined: Oct 27, 2007 15:23
Location: NC, US
Contact:

Post by Mentat »

Awesome link. The farthest I can get is "hit the bells thrice". My neural-foo isn't very impressive.
rolliebollocks
Posts: 2655
Joined: Aug 28, 2008 10:54
Location: new york

Post by rolliebollocks »

voodooattack wrote:Well, I find it funny that organic brains work with the same concepts then.
It is funny. Organic brains invented the concept!
voodooattack
Posts: 605
Joined: Feb 18, 2006 13:30
Location: Alexandria / Egypt
Contact:

Post by voodooattack »

Mentat wrote:Awesome link. The farthest I can get is "hit the bells thrice". My neural-foo isn't very impressive.
That's the tutorial; finish that and you'll be faced with real challenges :)

rolliebollocks wrote:
Well, I find it funny that organic brains work with the same concepts then.
It is funny. Organic brains invented the concept!
Actually, organic evolution of cellular organisms invented the concept, and now the sentient, sophisticated and utterly complex neural networks (that are our brains) are aware of this very concept and are trying to study, reverse-engineer and mimic it at a new level.

Now that's a neural network that can be trained to design other neural networks.

And I predict that by the time we make artificial neural nets that can design other nets, we'll be dealing with sentient machines.
--------

Anyways, today I started sketching the concepts of this project in my favorite toy language, C#, and with the GUI facilities at hand, I think I might even go for a full visual network designer at a later time, once the core work is done.
rolliebollocks
Posts: 2655
Joined: Aug 28, 2008 10:54
Location: new york

Post by rolliebollocks »

voodooattack wrote:Actually, organic evolution of cellular organisms invented the concept, and now the sentient, sophisticated and utterly complex neural networks (that are our brains) are aware of this very concept and are trying to study, reverse-engineer and mimic it at a new level.
Well, we can quibble over this and that definition and why one word is better here than another word there, but not today.

Instead I'd rather expand on your question of sentient machines.

When you consider animal intellect in terms of human intellect you're faced with the desire to make some sort of distinction which rightly differentiates the human intellect and gives to it special attributes that animal intellect lacks. But it starts with rats, and local memory: the ability to navigate a terrain, to return to places that are full of food, to remember where the water is, and to keep this image in your mind for future usage.

So the question of memory becomes extremely important. Consider that ants/worms etc... (I'm fairly certain) do not have any memory at all.

So to me, the entire question of intelligence revolves around memory.

And by extension, the ability of a spider to spin its web is really no more impressive than a seed knowing it has to grow downwards. That's precisely what it's programmed to do.

And biological evolution does not invent concepts. Scientists observe biological processes and then reorganize them to fit inside their little brains, which are not big enough to contain the full story, only one little piece at a time.

rb
voodooattack
Posts: 605
Joined: Feb 18, 2006 13:30
Location: Alexandria / Egypt
Contact:

Post by voodooattack »

rolliebollocks wrote:When you consider animal intellect in terms of human intellect you're faced with the desire to make some sort of distinction which rightly differentiates the human intellect and gives to it special attributes that animal intellect lacks. But it starts with rats, and local memory: the ability to navigate a terrain, to return to places that are full of food, to remember where the water is, and to keep this image in your mind for future usage.

So the question of memory becomes extremely important. Consider that ants/worms etc... (I'm fairly certain) do not have any memory at all.

So to me, the entire question of intelligence revolves around memory.
All creatures have a memory, even worms; however, memory is subject to brain capacity, and intelligence does indeed come from memory. I'll explain below.

Memory is the sum of all neurons in a brain working together, and the programmatic/mathematical representation is the following: imagine an infinite array of multiple dimensions, where each entry represents the output behavior for a certain probability, each index is the state of an input, and x specifies the required output, as in:

Output[x] = Behavior(x, Input1, Input2, Input3, Input4, ...)

It's rather more complex, but this is a simple illustration.

Learning is done by altering the weights/thresholds, thus altering the neural paths taken for the target probability. In the case of the array representation, learning would be done by changing the value of said entry in the array, and, to a lesser degree, the values that "surround" it.
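
Just to put the analogy in code (a rough sketch of my own, nothing more; the Learn and IndexOf names are made up for the illustration): a one-dimensional "behavior table" indexed by a discretised input, where learning nudges the target entry towards the desired output and nudges its neighbours a little less.

Code: Select all

'' Rough illustration of the "array as memory" analogy: learning nudges
'' one entry towards a target value and its neighbours a little less.
Const ENTRIES = 21              '' inputs in [-1, 1] discretised into 21 cells
Dim As Double behavior(0 To ENTRIES - 1)

Function IndexOf(ByVal inputValue As Double) As Integer
    '' map an input in [-1, 1] onto an array index
    Return CInt((inputValue + 1.0) / 2.0 * (ENTRIES - 1))
End Function

Sub Learn(table() As Double, ByVal inputValue As Double, _
          ByVal target As Double, ByVal rate As Double)
    Dim As Integer centre = IndexOf(inputValue)
    For i As Integer = centre - 2 To centre + 2
        If i >= 0 AndAlso i <= ENTRIES - 1 Then
            '' neighbours get a smaller share of the correction
            Dim As Double share = rate / (1 + Abs(i - centre))
            table(i) += share * (target - table(i))
        End If
    Next
End Sub

'' teach the table that an input of 0.5 should produce an output of 1.0
For pass As Integer = 1 To 50
    Learn behavior(), 0.5, 1.0, 0.3
Next
Print "learned output for 0.5: "; behavior(IndexOf(0.5))
Sleep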

This array would be infinite (due to the indexing values being floating-point numbers, let's say ranging from -1 to 1) and thereby impractical to reproduce using conventional memory storage methods, but neural networks get it done quite well. The only limit here is the number of neurons that said network is composed of: a smaller number of neurons means less learning capability, less memory, and so on.

And the only limit on this is the number of neurons (which is finite) versus the number of probabilities (which is infinite). There should be nice diagrams around the web that illustrate this falloff effect, but my googling skills are not helping me at the moment.

Think of it like a bitmap image, you can zoom in, but once you reach a certain point, no matter how big your bitmap's dimensions are, things will start to pixelate.

This is why primal species with less developed brains seem to be "programmed" to do things. Also, learning in nature initially happens by genetic selection, and in biological beings you get other "fuzzy" inputs (and outputs), consisting of biological secretions in the blood affecting the brain, hormones, chemicals, etc., allowing for a degree of randomness and unpredictability.

rolliebollocks wrote:And by extension, the ability of a spider to spin its web is really no more impressive than a seed knowing it has to grow downwards. That's precisely what it's programmed to do.
And who programmed the spider? :)
rolliebollocks wrote:And biological evolution does not invent concepts.
Biological evolution happens to "use" the concept, and natural selection builds upon it by eliminating those unfit to survive, resulting in a better population with every generation.
rolliebollocks wrote:Scientists observe biological processes and then reorganize them to fit inside their little brains, which are not big enough to contain the full story, only one little piece at a time.

That is true, and I concur, but that's the point: we're all seeking to improve and to better understand the universe around us, and trying to artificially imitate neural networks helps us understand the concept better, and how our own brains work.
rolliebollocks
Posts: 2655
Joined: Aug 28, 2008 10:54
Location: new york

Post by rolliebollocks »

voodooattack wrote:All creatures have a memory, even worms; however, memory is subject to brain capacity, and intelligence does indeed come from memory. I'll explain below.
Right. I got schooled on this one. I thought otherwise, so I looked it up. Insects have a simple associative memory with the power to associate visual and nasal (I dunno) signals with foodstuffs. Also, some bees have developed complex strategies for defending their nests against hornets. Most bees have no natural defense and a few hornets can eradicate an entire nest in a matter of minutes. There is a species of bee in Asia, however, that has learned to defend its nests by simultaneously attacking and suffocating an invading hornet. This is an instinct which, at one time or another, must have "occurred" to the bees. Bees can also signal the location of food to other bees from a decent distance.
voodooattack wrote:Memory is the sum of all neurons in a brain working together, and the programmatic/mathematical representation is the following: imagine an infinite array of multiple dimensions, where each entry represents the output behavior for a certain probability, each index is the state of an input, and x specifies the required output, as in:

Output[x] = Behavior(x, Input1, Input2, Input3, Input4, ...)

It's rather more complex, but this is a simple illustration.
The key to it all, as far as adaptive intelligence goes... where you have a being, an environment, and the only rule is survive to screw... is the ability of the AI to generate such equations. The ability of the AI to self-program, based on a more supple algorithm. Have you ever considered what your existence would be like if sex and food did not give you pleasure?

At any rate, in order to pull off an adaptive intellect you must have impetus and incentive (I apologize for my pretentious diction, but this cup of coffee is really hitting the sweet spot). Incentive guides the I to this or that, and impetus breaks the equilibrium which is caused by a satisfied state.

BTW: My background is not in science (which is probably evident to you). My background is Psychology, Linguistics, and postmodern philosophy (thinking about thinking). Psychology and linguistics do not qualify as sciences but rather are more or less mythologies with opaque nomenclatures (mmm. Coffee.) The best you can do is create statistical representations of 'norms.' The fact is that psychology and language both have their roots in the body, but to talk about such things in terms of genetics would be considered Naziesque. The great idea that there is a genetic memory or hardwired behaviors gets eradicated by political correctness. The science of linguistics and psychology lies somewhere in the future when a science of the Signal is more fully realized than it is now. The great paradigmatic archetype (there I go again) for the beginning of this is the computer. We cannot (hard as we try) understand the brain using the brain. Thus the necessity of the MAP. And then you get into topography which is too complicated to admit tangentially to this discussion.
voodooattack wrote:This array would be infinite (due to the indexing values being floating-point numbers, let's say ranging from -1 to 1) and thereby impractical to reproduce using conventional memory storage methods, but neural networks get it done quite well. The only limit here is the number of neurons that said network is composed of: a smaller number of neurons means less learning capability, less memory, and so on.
What about a file-based strategy for LTM? We have STM, which can hold around 9 items for about 30 seconds, and LTM, which is more or less a "hard drive" to the STM's "RAM". LTM in humans may be more advanced/allocated more neurons, but the question really doesn't boil down to LTM/STM; it boils down to usefulness as governed by the pleasure principle and repetition or recurrence of the Signal/Response "EVENT".
voodooattack wrote:And the only limit on this is the number of neurons (which is finite) versus the number of probabilities (which is infinite). There should be nice diagrams around the web that illustrate this falloff effect, but my googling skills are not helping me at the moment.
Here's where your recurrence algorithm would decide which signals are important/deserve the most neurons, based on frequency of occurrence/importance, etc...
voodooattack wrote:Think of it like a bitmap image, you can zoom in, but once you reach a certain point, no matter how big your bitmap's dimensions are, things will start to pixelate.
I like this analogy. It also seems to indicate that the more generalized your knowledge is, the fuzzier things seem to appear to you. My knowledge of neurobiology is indeed pixelated.
voodooattack wrote:This is why primal species with less developed brains seem to be "programmed" to do things. Also, learning in nature initially happens by genetic selection, and in biological beings you get other "fuzzy" inputs (and outputs), consisting of biological secretions in the blood affecting the brain, hormones, chemicals, etc., allowing for a degree of randomness and unpredictability.
Set your fuzzies to be relative to a homeostasis... I'm a little cold, I think I'll cover myself because I remember the last time I was cold I found a place to burrow into and I wasn't cold anymore. So the action *negates* the stimulus. Sets it back to its defaults incrementally.

voodooattack wrote:And who programmed the spider? :)
Well, according to my research, it's either you or God.

:)

My argument is actually that the environment programs the spider, and that the spider is a part of the environment, and thus other species can adapt to the spider. The ability to see a web would give a fly a nice advantage. So my science-answer is the environment. But even the staunchest atheist must finally reckon with the Voice behind the echoes. Philosophically it boils down to the question of the origin and the source. Or the first mover in Aristotle's terms. There are certain facts we humans grapple with, certain questions better left unanswered. The only thing I know for certain is that we have a creation, a program. To what extent we will ever be able (as empiricists) to resist the temptation to assume that the creation has a creator, or that the program has a programmer....

Well, you just never get around the regress, and yet you can't assume God without botching your science. Sucks to be human. What's left to say?
voodooattack wrote:Biological evolution happens to "use" the concept, and natural selection builds upon it by eliminating those unfit to survive, resulting in a better population with every generation.
Well, this is true for every species on earth save one. Scary topic. Very scary. Very interesting, and very nerve racking, and very scary.
voodooattack wrote:That is true, and I concur, but that's the point: we're all seeking to improve and to better understand the universe around us, and trying to artificially imitate neural networks helps us understand the concept better, and how our own brains work.
Ah yes. And so does self-awareness. There is a remarkable almost fundamentalist style hubris that scientists bring to the construction of reality that I find extremely distasteful. Especially considering all the advancements in science have been corrections and refinements of prior science, and that trend will continue until there is no desire for science. The hubris is genuinely misplaced. If you can't look at the complexity of our world with anything but awe and humility then fudge you up the wazoo. Some genius smarter than yourself will come along and correct you and your name will be a footnote under his. That's the prize one way or the other. In some ways science is a fundamentalist mythology with no ethos. It would turn to social Darwinism and Nazism without religion constantly keeping it in check. Praise God for religion. This from an anti-theist.

rb

Before there was metallurgy there was alchemy.
voodooattack
Posts: 605
Joined: Feb 18, 2006 13:30
Location: Alexandria / Egypt
Contact:

Post by voodooattack »

rolliebollocks wrote:The key to it all, as far as adaptive intelligence goes... where you have a being, an environment, and the only rule is survive to screw... is the ability of the AI to generate such equations. The ability of the AI to self-program, based on a more supple algorithm. Have you ever considered what your existence would be like if sex and food did not give you pleasure?

At any rate, in order to pull off an adaptive intellect you must have impetus and incentive (I apologize for my pretentious diction, but this cup of coffee is really hitting the sweet spot). Incentive guides the I to this or that, and impetus breaks the equilibrium which is caused by a satisfied state.
And that's what fascinates me about neural networks and how they work and adapt in many ways. In computer simulations, artificial neural networks can be taught to do specific tasks: learning is done by giving them an input and a desired output, and the learning algorithm simply alters the weights on the synapses, and the thresholds of every neuron involved, until the error margin drops to an acceptable value.
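
As a rough illustration of that supervised loop (a toy sketch of my own, using the simple perceptron/delta rule rather than full back-propagation): keep showing the network input/target pairs and nudging the weights and threshold until the total error is acceptable.

Code: Select all

'' Toy supervised learning: a single neuron learns OR from input/target
'' pairs by nudging its weights and threshold until the error vanishes.
Dim As Double w1 = 0.1, w2 = 0.1, threshold = 0.5
Dim As Double rate = 0.2, totalError
Dim As Integer inputs(0 To 3, 0 To 1) = {{0, 0}, {0, 1}, {1, 0}, {1, 1}}
Dim As Integer targets(0 To 3) = {0, 1, 1, 1}

Do
    totalError = 0
    For i As Integer = 0 To 3
        Dim As Double weightedSum = inputs(i, 0) * w1 + inputs(i, 1) * w2
        Dim As Integer fired = 0
        If weightedSum >= threshold Then fired = 1
        Dim As Double delta = targets(i) - fired
        '' adjust weights and threshold in proportion to the error
        w1 += rate * delta * inputs(i, 0)
        w2 += rate * delta * inputs(i, 1)
        threshold -= rate * delta
        totalError += Abs(delta)
    Next
Loop Until totalError = 0   '' stop once the error margin is acceptable

Print "learned: w1 ="; w1; "  w2 ="; w2; "  threshold ="; threshold
Sleep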

But in organic beings, you don't have the luxury of a known optimal output value for a given input, so this is not how learning is done. Take the example of a two-year-old child: he or she sees a candle and goes "oooh... shiny", then tries to touch it, but the flame hurts. At that very moment, when the child experiences the burning sensation for the first time, what happens in their brain?

I personally believe that pain and satisfaction are the real tutors here. I think that pain stimulates and triggers the learning process in this case, and in my belief this is how we all learn not to play with fire; such a memory engraves itself deeply in the child's mind. That's a lesson well learned.

The same goes for when you teach your dog not to go on the carpet: you spray it with water, the dog learns that going inside the house means getting wet, and gradually stops doing it.

Thinking of this further, how do we all learn that eating takes away that awful sensation in our stomachs? We learn it in very early childhood: when a toddler is hungry, they simply cry, the mother feeds them, and the hunger goes away.

I know that this is all trivial, and that I'm not writing about anything novel or new; such topics have been studied, discussed and experimented with a lot. But what I keep thinking about is: how can we apply this knowledge to simulate organic learning?

Spiking neural networks are the perfect medium for this sort of thing, but the problem remains: how do you simulate neurons teaching neurons teaching other neurons, like organic brains do, all on a lousy computer, when a single neuron is like a processor by itself?
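
For what it's worth, the usual workaround is to time-slice: advance every neuron by one small step per tick, so thousands of "tiny processors" get simulated serially. A crude leaky integrate-and-fire sketch of that loop (my own illustration, not taken from any particular library):

Code: Select all

'' Crude time-stepped simulation of leaky integrate-and-fire neurons:
'' every tick, each neuron leaks, integrates incoming spikes, and may fire.
Const NEURONS = 100
Dim As Double potential(0 To NEURONS - 1)
Dim As Integer spiked(0 To NEURONS - 1)
Dim As Double leak = 0.9, fireThreshold = 1.0

Randomize
For tick As Integer = 1 To 1000
    '' feed neuron 0 with a small random input current
    potential(0) += Rnd * 0.3

    For i As Integer = 0 To NEURONS - 1
        potential(i) *= leak                        '' membrane leak
        If i > 0 AndAlso spiked(i - 1) = 1 Then     '' excitatory link from the previous neuron
            potential(i) += 0.6
        End If
    Next

    For i As Integer = 0 To NEURONS - 1
        If potential(i) >= fireThreshold Then
            spiked(i) = 1
            potential(i) = 0                        '' reset after firing
        Else
            spiked(i) = 0
        End If
    Next
Next

Print "done simulating "; NEURONS; " neurons for 1000 ticks"
Sleep
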
rolliebollocks wrote:BTW: My background is not in science (which is probably evident to you). My background is Psychology, Linguistics, and postmodern philosophy (thinking about thinking). Psychology and linguistics do not qualify as sciences but rather are more or less mythologies with opaque nomenclatures (mmm. Coffee.) The best you can do is create statistical representations of 'norms.' The fact is that psychology and language both have their roots in the body, but to talk about such things in terms of genetics would be considered Naziesque. The great idea that there is a genetic memory or hardwired behaviors gets eradicated by political correctness. The science of linguistics and psychology lies somewhere in the future when a science of the Signal is more fully realized than it is now. The great paradigmatic archetype (there I go again) for the beginning of this is the computer. We cannot (hard as we try) understand the brain using the brain. Thus the necessity of the MAP. And then you get into topography which is too complicated to admit tangentially to this discussion.
I think psychology is outright tied to the subject, but if we couldn't understand the workings of the brain using the brain (like you wrote), I think that would void the whole field of psychology in the first place. =)

I know that the psyche of one's self is different from the brain, and that psychology tries to study the mind, not the brain; however, the mind is nothing but the output of the brain, and if you study the source of that output you gain a broader view and a deeper understanding of how things come to pass. You don't have to study automotive engineering to drive a car, but it sure helps a lot if you know how the engine works.
rolliebollocks wrote:What about a file-based strategy for LTM? We have STM, which can hold around 9 items for about 30 seconds, and LTM, which is more or less a "hard drive" to the STM's "RAM". LTM in humans may be more advanced/allocated more neurons, but the question really doesn't boil down to LTM/STM; it boils down to usefulness as governed by the pleasure principle and repetition or recurrence of the Signal/Response "EVENT".
I honestly don't believe in STM and LTM; we only have one memory, but how we retain and develop it is another story.

I believe that short-term memory exists, but that it works a bit differently than what we're told: short-term memory is simply how long it takes for new information to propagate throughout the brain. During this process, the information is not being "stored" somewhere; it's being handed over from one part of the brain to another.

This is an ongoing process: as things fade away from STM, new information is added. But what happens to the items we forget after their STM cycle is over?

They are simply used in the process of learning: weights and thresholds are adjusted as they complete their cycle, and this is how they are effectively stored permanently if found useful.

However, the real conversion to RTM, I'm convinced, happens during sleep: your brain makes use of all the information it gathered and makes physical changes to optimize the neural interconnections in the brain. New neural pathways are formed and some old ones are forsaken, as newer (and more important) memories are literally engraved into the cerebral cortex and the various parts of your brain, and less important memories fall out of context and fade away.

This is more evident in early childhood, as the brain is still developing and is much more flexible and capable of making such changes, and as we grow older our ability to learn (and to remember things) degrades with time.

Linear memory storage is completely inadequate in comparison.
rolliebollocks wrote:
Think of it like a bitmap image, you can zoom in, but once you reach a certain point, no matter how big your bitmap's dimensions are, things will start to pixelate.
I like this analogy. It also seems to indicate that the more generalized your knowledge is, the fuzzier things seem to appear to you. My knowledge of neurobiology is indeed pixelated.
What I meant there was: humans have bigger bitmaps (brains), we have more pixels (neurons), and thus we have more detail in our behavior.

Take fish, for example: they seem "programmed" to do things. They have far fewer neurons in their brains than mammals, and that's why they have almost no memory at all; they just swim, feed, reproduce, repeat. Even if they form schools and exhibit group behavior, they're still practically programmed to do what they do, learning is minimal, and the "smarter" fish comes about through reproduction: a better generation that's more fit to live.
rolliebollocks wrote:Set your fuzzies to be relative to a homeostasis... I'm a little cold, I think I'll cover myself because I remember the last time I was cold I found a place to burrow into and I wasn't cold anymore. So the action *negates* the stimulus. Sets it back to its defaults incrementally.
Exactly, and this is how back-propagation works, albeit at a more abstract level: set it (the error margin) back to its default (0).
rolliebollocks wrote:

voodooattack wrote:And who programmed the spider? :)
Well, according to my research, it's either you or God.

:)

My argument is actually that the environment programs the spider, and that the spider is a part of the environment, and thus other species can adapt to the spider. The ability to see a web would give a fly a nice advantage. So my science-answer is the environment. But even the staunchest atheist must finally reckon with the Voice behind the echoes. Philosophically it boils down to the question of the origin and the source. Or the first mover in Aristotle's terms. There are certain facts we humans grapple with, certain questions better left unanswered. The only thing I know for certain is that we have a creation, a program. To what extent we will ever be able (as empiricists) to resist the temptation to assume that the creation has a creator, or that the program has a programmer....
Only a fool would think that such marvelous creations have no creator. If a program exists, there's always a programmer behind it; the same goes for creation. It is evident that there is a creator behind all of this.

Science does not embrace chance and does not believe in miracles, and as such I'd really love to see a scientist step up and tell me that the universe we're living in, this galaxy, and this planet (which is conveniently, perfectly fit to support life), with its ecosystem, every creature taking part in it, and its sentient beings, including him, are all here by accident.

I completely agree that the environment programs the spider, but who made the spider programmable in the first place? That's the right question. :)
rolliebollocks wrote:Well, you just never get around the regress, and yet you can't assume God without botching your science. Sucks to be human. What's left to say?
Well, you can; science is not perfect, and like you said, all of our new discoveries are simply emendations of older knowledge. But that's the scientific process: you assume, you experiment, you amend your knowledge, and then you repeat.
rolliebollocks wrote:
Biological evolution happens to "use" the concept, and natural selection builds upon it by eliminating those unfit to survive, resulting in a better population with every generation.
Well, this is true for every species on earth save one. Scary topic. Very scary. Very interesting, and very nerve racking, and very scary.
To be honest, I don't think it's that scary ;)

It's just flat-out cruel, and that's life as we know it. What makes it intimidating to humans is the fact that you can't observe it; you can't "evolve". The cycle is just bigger than you are: you simply reproduce and then die, leaving your genes behind, or just die and have your genes wiped out of the gene pool. Cruel? Yeah, but very effective.
rolliebollocks wrote:
That is true, and I concur, but that's the point: we're all seeking to improve and to better understand the universe around us, and trying to artificially imitate neural networks helps us understand the concept better, and how our own brains work.
Ah yes. And so does self-awareness. There is a remarkable almost fundamentalist style hubris that scientists bring to the construction of reality that I find extremely distasteful. Especially considering all the advancements in science have been corrections and refinements of prior science, and that trend will continue until there is no desire for science. The hubris is genuinely misplaced. If you can't look at the complexity of our world with anything but awe and humility then fudge you up the wazoo. Some genius smarter than yourself will come along and correct you and your name will be a footnote under his. That's the prize one way or the other. In some ways science is a fundamentalist mythology with no ethos. It would turn to social Darwinism and Nazism without religion constantly keeping it in check. Praise God for religion. This from an anti-theist.

Before there was metallurgy there was alchemy.
I couldn't have worded this better. =)

P.S.: Sorry it took me a while to reply; I'm currently preoccupied working on this project amongst other things, still coding the lexer and writing a neural network library from scratch (since I couldn't find any that fit the model I'm after). More updates to follow soon (I hope).
TheMG
Posts: 376
Joined: Feb 08, 2006 16:58

Post by TheMG »

That is literally the biggest post I have ever seen.