What I call a computational cell is made up of 4 normal neurons governed by an inhibitory neuron, as shown in my previous post. Now I increased the number of computational cells to 4 and everything became a big mess… Some neurons are now part of 2, 3, or even 4 cells at once. The further a synapse is from the neuron body, the less it contributes to the overall (activation) potential. So the more synapses a single neuron carries, the more complexity there is, which makes it hard to understand what’s going on and to adjust/optimize the algorithms 🙁
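To make the distance idea concrete, here is a minimal sketch of it in Python. It is not my actual code: the exponential fall-off and the `decay` constant are just illustrative assumptions about how a synapse’s contribution could shrink with its distance from the neuron body.

```python
import math

def synapse_weight(distance, decay=0.5):
    """Attenuation factor for a synapse `distance` units from the neuron body.

    Exponential decay is an assumption for illustration; any monotonically
    decreasing function of distance would capture the same idea.
    """
    return math.exp(-decay * distance)

def activation_potential(synapses):
    """Sum the distance-attenuated inputs.

    `synapses` is a list of (input_value, distance_from_body) pairs.
    """
    return sum(value * synapse_weight(dist) for value, dist in synapses)

# A nearby synapse dominates a far one even with the same input value.
print(activation_potential([(1.0, 0.5), (1.0, 3.0)]))
```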
Still, some cells behave more or less predictably; predictably enough to realize that I have an additional problem. I had predicted it, but I was hoping it would somehow get solved by the added complexity, in a way I could not predict but might observe when actually running the full network. It did not happen. Far-away patterns cannot connect to each other because I don’t have a fully connected network: neurons bind only within a limited area around their position (in the matrix). Moreover, things that worked in a single computational cell, such as a single neuron responding to all vertical lines, do not work across multiple computational cells. The last problem I ran into is that the network has become visibly slower, and I only have 128 neurons running wild… That’s because I’m now forming tons of dendrites to respond to various patterns, or perhaps there is something wrong in my latest modifications, some wild loops running for no reason… It doesn’t bode well for the future though…
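For illustration, a minimal sketch of the local-binding constraint described above: neurons laid out on a 2D grid can only form connections within a small neighbourhood around their own position, so far-apart patterns never get a direct link. The neighbourhood test and the `radius` value are assumptions, not my actual parameters.

```python
def can_bind(pos_a, pos_b, radius=2):
    """True if two grid positions are close enough to form a connection.

    Uses a simple Chebyshev (square) neighbourhood; `radius` is an
    illustrative value, not the real one.
    """
    (ra, ca), (rb, cb) = pos_a, pos_b
    return max(abs(ra - rb), abs(ca - cb)) <= radius

# Neurons near each other can bind; neurons at opposite corners of the
# grid cannot, which is why distant patterns stay unlinked.
print(can_bind((0, 0), (0, 2)))   # True  - within the local area
print(can_bind((0, 0), (7, 7)))   # False - too far apart
```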
Overall I’d say I’m stuck, with no clear way forward.
Why am I writing this blog? Because nobody is reading it 🙂 Well, I write because by doing so I’m usually clarifying some things in my head; sometimes laying out the problem clearly is a big step forward. For the same reasons I also talk to my friends about AI. I’m such a bore, I can tell you that much 🙂