“Cells that fire together wire together.” – Hebb’s rule (see the Wikipedia article on Hebbian theory)
“Hebbian AI” is the term I use to describe my AI project, yet what a Hebbian AI might be is not clearly defined. The classic mathematical description of Hebbian learning is extremely limited and is quite different from what I’m using for this AI.
So what do I mean by Hebbian AI?
- It has a learning algorithm based on “cells that fire together wire together”, so it does not use regression for learning and has no back-propagation step. It does not do ANY kind of regression, with all the downsides regression brings.
- The network starts WITHOUT connections. Connections between neurons are made dynamically; there is both creation and destruction of connections.
- The neurons use only 0 and 1 as responses, so every type of signal can be correlated and understood. For example, an IR sensor can be integrated with an audio signal; both are treated the same.
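To make the three points above concrete, here is a minimal sketch of how such a mechanism could look. This is my illustration, not the project’s actual code: the class name, thresholds, and counter layout are all hypothetical. Binary (0/1) neurons fire in sets, co-firing pairs accumulate a score, and connections are created or destroyed when that score crosses a threshold.

```python
class HebbianNetwork:
    """Hypothetical sketch: dynamic Hebbian wiring over binary neurons."""

    def __init__(self, wire_threshold=2, prune_floor=0):
        self.wire_threshold = wire_threshold  # co-firings needed to wire
        self.prune_floor = prune_floor        # score at which a link dies
        self.counts = {}                      # (pre, post) -> score
        self.connections = set()              # currently wired pairs

    def observe(self, active):
        """Take one set of simultaneously active neurons (responses are
        only 0/1, so a neuron is simply present or absent in the set)."""
        active = set(active)
        # fire together -> wire together: strengthen co-firing pairs
        for pre in active:
            for post in active - {pre}:
                k = (pre, post)
                self.counts[k] = self.counts.get(k, 0) + 1
                if self.counts[k] >= self.wire_threshold:
                    self.connections.add(k)   # connection created dynamically
        # weaken wired pairs where only one side fired; prune dead links
        for k in list(self.connections):
            pre, post = k
            if (pre in active) != (post in active):
                self.counts[k] -= 1
                if self.counts[k] <= self.prune_floor:
                    self.connections.discard(k)  # connection destroyed
```

Note that because neurons are just names in a set, an IR sensor bit and an audio bit really are treated identically, as the article claims.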
Possible advantages over current AIs:
- It could be much faster for some tasks. That remains to be determined with some sort of benchmark. So far the code is in Python with zero optimization: no multithreading, no GPU use, nothing. Even so, for what it does at this moment, it should still prove faster than the sophisticated AIs out there.
- It can immediately integrate all kinds of data: visual, audio, pressure, IR, two or three video inputs, everything.
- It has causality built in, as far as causality can be determined from observable data.
- It has temporal awareness, so it matters what is learned first.
- Understanding is probabilistic, a sort of Bayesian learning.
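The last three points (causality from observation, temporal order, probabilistic understanding) can be sketched together. Again this is only my illustration under assumed names, not the project’s code: counting how often event B fires right after event A gives a frequency estimate of P(B follows A), so temporal order and “causal” direction fall out of the counts.

```python
from collections import Counter

class TemporalCorrelator:
    """Hypothetical sketch: probabilistic, order-aware event correlation."""

    def __init__(self):
        self.seen = Counter()      # times each event fired (as a predecessor)
        self.followed = Counter()  # times b fired right after a
        self.prev = frozenset()    # active set at the previous time step

    def step(self, active):
        """Feed one time step's set of active (0/1) events."""
        active = frozenset(active)
        for a in self.prev:
            self.seen[a] += 1
            for b in active:
                self.followed[(a, b)] += 1
        self.prev = active

    def p_follows(self, a, b):
        """Frequency estimate of P(b at t+1 | a at t); order matters,
        so p_follows(a, b) and p_follows(b, a) can differ."""
        return self.followed[(a, b)] / self.seen[a] if self.seen[a] else 0.0
```

The asymmetry between `p_follows(a, b)` and `p_follows(b, a)` is what makes “what is learned first” matter, and the ratio of counts is the Bayesian-flavoured part.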
Possible disadvantages over current AIs:
- It does not do regression, so it should not be able to do some tasks that are possible today with current AI.
- It is somewhat more imprecise (no values between 0 and 1).
- It cannot be made fully parallel when computing neuronal responses; many steps must remain serial (a postsynaptic neuron cannot fire before its presynaptic neuron).
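That serial constraint can be made concrete with a small sketch (my illustration, with a hypothetical `connections` layout mapping each presynaptic neuron to its postsynaptic targets): activity spreads in waves, and each wave can only be computed after the previous one, because a postsynaptic neuron needs its presynaptic sources evaluated first. Neurons within one wave could be processed in parallel, but the waves themselves cannot.

```python
def propagate(inputs, connections):
    """Spread 0/1 activity through a network wave by wave.
    'connections' maps pre -> set of post neurons (hypothetical layout)."""
    fired = set(inputs)       # neurons known to have fired
    frontier = set(inputs)    # neurons that fired in the latest wave
    while frontier:           # each wave strictly depends on the last one
        nxt = set()
        for pre in frontier:
            for post in connections.get(pre, ()):
                if post not in fired:
                    nxt.add(post)
        fired |= nxt
        frontier = nxt        # serial dependency: next wave starts here
    return fired
```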
I’ll update this article once I understand more :).