A new year: 2025!

If I were to summarize my work from 2024 in a single graph, then this would be it 🙂

So I finalized the LTP/D algorithm. It’s meant to calculate neuronal response, in terms of cycles to activation, when things change: the number of synapses, the input frequency, synapses with different latencies or different distances, synaptic learning… It took me a whole year and I’m still not sold on it. I tested it on all scenarios and the response is consistent, but in many ways it’s not what I expected. What I obtained is murky, not clear cut. Anyway, it’s good enough to use in a network.
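As a rough illustration only (a toy integrate-to-threshold sketch of mine, not the LTP/D algorithm itself, and every constant in it is an arbitrary assumption), one can picture cycles to activation as the number of cycles needed for summed synaptic drive to cross a firing threshold:

```python
# Toy model (NOT the actual LTP/D algorithm): cycles to activation (cta)
# as a function of synapse count and input frequency. The per-cycle drive
# formula and the threshold are illustrative assumptions.
def cycles_to_activation(n_synapses, freq_hz, max_cycles=35, threshold=100.0):
    """Each cycle, every synapse contributes charge proportional to its
    firing frequency; the neuron fires when the charge crosses threshold."""
    charge = 0.0
    for cycle in range(1, max_cycles + 1):
        charge += n_synapses * freq_hz * 0.1  # hypothetical per-cycle drive
        if charge >= threshold:
            return cycle
    return None  # never activates within max_cycles

print(cycles_to_activation(5, 10))  # more synapses -> fewer cycles
print(cycles_to_activation(2, 10))  # too little drive -> no activation
```

Even this toy version shows the qualitative behaviour: more synapses or higher frequency means fewer cycles to activation, and below some drive the neuron simply never fires.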

So for 2024 I had a single goal: to have an LTP/D algorithm. For sure I was hoping for more… much, much more, but in the end I’m happy that I have a clear starting point for 2025.

For 2025 I want to be able to “learn” and reproduce an arbitrary sequence such as AABAAC… something like that. I’ve abandoned the vision side of the network for now. It turns out to be much more complicated than I imagined, because the cells in the eye (horizontal, bipolar and amacrine cells) make very complex changes to the input. So for now I will work with non-variable input: for neuron N(x,y) there is a single input, say A… or white. This may seem simple, but it’s not simple at all for my system. It is easy enough to record MOUSE, working in parallel, but if I want to record HAPPY, which has two “P”s, repeating letters, then I need to use a sequence rather than working in parallel. And how to work with a sequence is totally unknown to me.
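The MOUSE vs. HAPPY problem can be sketched like this (a toy encoding of my own, not the network’s actual code): with one neuron per distinct letter, parallel recording works fine until a letter repeats, at which point the duplicates collapse and information is lost, while a sequence needs an explicit notion of time steps.

```python
# Toy sketch (not the network's code): "parallel" recording assigns one
# neuron per distinct letter, so repeated letters collapse into one.
def record_parallel(word):
    return {letter: True for letter in word}  # one neuron per distinct letter

print(record_parallel("MOUSE"))  # 5 distinct letters -> 5 neurons, lossless
print(record_parallel("HAPPY"))  # only 4 neurons: the two P's collapse

# A sequence keeps order and repetition, but requires time steps:
def record_sequence(word):
    return [(step, letter) for step, letter in enumerate(word)]

print(record_sequence("HAPPY"))  # the two P's stay distinct, at steps 2 and 3
```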

What is learning #3?

I still can’t define the problem clearly. Anyway, it seems there are 2 ways of learning:

  1. Bayesian, in which you count the evidence. The downside: assuming I keep seeing the movie Superman, if I only count occurrences, at some point I should start believing that humans can fly.
  2. Feedback learning, where one input modulates another type of input. The downside: the feedback is not always right. If you ask a girl out and she says no, it doesn’t mean much. You may get correlation without much causation.
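Learning #1 and its downside can be made concrete with a toy counting sketch (mine, purely illustrative): belief tracks relative frequency of observations, with no notion of whether the source of the evidence is reliable.

```python
# Toy version of learning #1 (pure evidence counting), illustrating the
# "Superman problem": repeated observations alone drive belief up,
# regardless of whether the source reflects reality.
from collections import Counter

observations = Counter()

def observe(event):
    observations[event] += 1

def belief(event):
    total = sum(observations.values())
    return observations[event] / total if total else 0.0

for _ in range(99):
    observe("humans can fly")   # watching Superman on repeat
observe("humans cannot fly")    # one real-world observation

print(belief("humans can fly"))  # 0.99 -- counting alone is not enough
```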

So it seems #2 is still a subset of #1, and #1 lacks a clear definition of what is possible or real. The question is how I should implement learning in my network. In either case, what I struggle with is deciding when something is learned.

Assume I do counting. Assume my “synaptic strength” is a change in cta (cycles to activation). How many times should I encounter a sequence of events before deciding to change cta, meaning to “learn”? How much should I change cta? Mind you, cta is small: I work with an arbitrary number of 35, so I can’t change it that much anyway. Bottom line, I can’t store much in the cta value, regardless of using #1 or #2.

So the question is then: what else is left? I’m thinking of not using “learning” at all; it seems pointless to change the default cta value by +/- some small number. But this leaves me with everything is learned, or nothing is… I still need a criterion to decide what data to store. If it’s not based on the two hypotheses of learning, then what else? I’m very much confused about this topic.
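To make the “how many times, and by how much” question concrete, here is a hypothetical counter-plus-threshold scheme of my own; every number in it (the 35 default, the usable range, the step, the repetition count) is an arbitrary assumption. Its point is only to show how little a small cta range can store:

```python
# Hypothetical learning scheme (not the network's actual rule): nudge cta
# down by a fixed delta after every K repetitions, within an assumed range.
CTA_DEFAULT = 35
CTA_MIN, CTA_MAX = 30, 40   # assumed usable range around the default
DELTA = 1                   # assumed change per "learning" event
K = 3                       # assumed repetitions before deciding to learn

class Synapse:
    def __init__(self):
        self.cta = CTA_DEFAULT
        self.count = 0

    def see(self):
        """Count one occurrence; after every K occurrences, strengthen."""
        self.count += 1
        if self.count % K == 0 and self.cta > CTA_MIN:
            self.cta -= DELTA   # fewer cycles to activation = stronger

s = Synapse()
for _ in range(100):
    s.see()
print(s.cta)  # saturates at CTA_MIN after just a few learning events
```

With these numbers the synapse can only ever occupy (35 − 30) / 1 = 5 distinct strength levels, which is exactly the “can’t store much in cta” problem stated above.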

Artificial Intelligence - Update - One Neuron in action!

After many trials and many grandiose ideas, I simplified everything to the bare minimum: only a single neuron. Fix that part first. I gave up on sophisticated connections (though those functions are still available) and gave up on complex distributions. And here I am… a single neuron + 3 input neurons. The input neurons do almost nothing; they are there to establish input connections with “real” neurons. And without further ado, this is it:

I have a pattern of 3 digits, but the pattern can be arbitrarily long. The learning part comes from repetition: see the same pattern twice and that’s it… the third time, the pattern is recognized. I taught the AI 2 patterns, [1,1,0] and [0,0,1]. Then I asked it about 3 other patterns: [1,0,0], [1,1,1] and [0,1,0]. I was hoping for “Pattern Unknown” across the board, but instead I got an unexpected “Pattern recognized !” for pattern [1,1,1]. So if a pattern is composed of two known patterns, it is still considered known.

Keep in mind: NO BACK-PROPAGATION.
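The [1,1,1] result can be reproduced with a small sketch, under one big assumption of mine (not necessarily the actual mechanism): a pattern counts as “recognized” when the active inputs of some learned pattern are a subset of the currently active inputs.

```python
# Sketch reproducing the behaviour described above, under my assumption of
# subset matching (not necessarily the network's actual mechanism).
def active(pattern):
    return {i for i, bit in enumerate(pattern) if bit}

# Each of these was shown twice, so it is stored (learning by repetition):
learned = [active([1, 1, 0]), active([0, 0, 1])]

def recognize(pattern):
    on = active(pattern)
    return any(l <= on for l in learned)  # subset test, no back-propagation

for p in ([1, 0, 0], [1, 1, 1], [0, 1, 0]):
    print(p, "Pattern recognized !" if recognize(p) else "Pattern Unknown")
```

Under this subset rule, [1,1,1] contains both learned patterns, which would explain the unexpected “Pattern recognized !”, while [1,0,0] and [0,1,0] contain neither.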

I went for an actual neural net, meaning many neurons connected somehow. The full connection (each neuron with each neuron) proved useless: all neurons became symmetric, with the same output. So the next step is to go for an asymmetrical connection between neurons. I’ll first try asymmetrical_1D, then asymmetrical_2D, and eventually get back to a random spherical distribution.
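The symmetry collapse can be demonstrated with a toy model of my own (not the actual network code): if every neuron starts in the same state and receives input from every neuron, each one sees the exact same total every cycle, so they can never diverge. A one-directional chain, one possible reading of asymmetrical_1D, breaks that symmetry.

```python
# Toy demonstration of why full connectivity collapses into symmetry.
N = 4

def step_full(states):
    total = sum(states)              # every neuron receives the same total
    return [total + 1 for _ in states]

states_full = [0] * N
for _ in range(5):
    states_full = step_full(states_full)
print(states_full)  # all four values identical, every cycle

# Assumed meaning of asymmetrical_1D (my guess): each neuron receives only
# from its left neighbour, so position in the chain matters.
def step_1d(states):
    return [states[i - 1] + 1 if i > 0 else states[0]
            for i in range(len(states))]

states_1d = [0] * N
for _ in range(3):
    states_1d = step_1d(states_1d)
print(states_1d)  # neurons now differ along the chain
```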