Going somewhere

All the small parts seem to be working: I have obtained some invariance for position and size, and now I'm working to put them all together and hopefully get some sort of object identification. Soon, I hope. I'm still dealing with some bugs, some things are working for the wrong reason, and perhaps I'm procrastinating too, because I'm afraid that I'm missing something and it will not work.

Why this uncertainty? Isn't it all math? There is the deeper issue that I don't fully see how this works. In small simulations I know for sure what I'm doing, but after adding some complexity and multiple layers it's impossible to predict how things are going to turn out, much like the current AI algorithms. I'm sure the Google team that released the Transformer did not see ChatGPT in its future. Imagine Google having exclusivity on the Transformer, being the only one with generative AI…

I've been thinking long and hard about what to do next. Say I get to identify objects: what next? I have many unexplored ideas, such as linking unrelated objects as being the same, like "A" and "a", or even the sound of "A". But this is not adding much value to the whole. The value should come from an AI that can "reason", that can plan. I have found it impossible to make the AI algorithm understand things that don't come from outside as information. If I were to say "Move the mouse cursor 2 pixels to the left", the most difficult word is "Move": "move" is nothing, while the others are something. How can I convey that IT should do something? I can of course cheat and program in some keyword, so that once the word "Move" is detected it triggers a scripted action. But that has zero value in my mind. So in fact I don't see a way past data processing (image identification, sound, anything that can be converted into some arbitrary numbers).

Can it work as a chatbot? No. ChatGPT is an anomaly in the world of AI. I agree that our brain also relies on statistics, but if someone says "Move the mouse cursor 2 pixels to the left", my brain does a lot of processing: it has to identify that I'm the one who should do the action by analyzing the environment and the context. Am I close to a computer? Am I the only one in the room? Do I want to answer this request, is it in my best interest to do so? Only after much processing will I do something. ChatGPT avoids all this hard processing and relies on what it has learned as statistics from examples to formulate an answer immediately. Still, in some parts we humans work like ChatGPT: we respond with learned answers without any other type of processing. In this context, to "learn" is to make an arbitrary association between pieces of information (aka symbols), much like "A" = "a".

So after I'm done with object recognition, I will seek help: some sort of collaboration with whoever might be interested.

Some Updates

I have reached some limits using Python, Tkinter and Matplotlib, so I had to switch to DearPyGUI and convert more code to C through Cython. This took a while, but the results are good for now; I had not realized how slow Matplotlib was at drawing plots.
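For reference, a minimal sketch of the kind of live plot that gets expensive in Matplotlib, redone with DearPyGUI's retained-mode API. This is just an illustration of the update pattern, not my actual code; names like "activity_series" and the sine trace are placeholders.

```python
import math
import dearpygui.dearpygui as dpg

dpg.create_context()

with dpg.window(label="Neuron activity", width=620, height=420):
    with dpg.plot(label="Potential over cycles", height=-1, width=-1):
        dpg.add_plot_axis(dpg.mvXAxis, label="cycle")
        y_axis = dpg.add_plot_axis(dpg.mvYAxis, label="potential")
        # empty series, updated in place every frame instead of redrawing the figure
        dpg.add_line_series([], [], label="neuron 0", parent=y_axis, tag="activity_series")

dpg.create_viewport(title="Live plot", width=640, height=460)
dpg.setup_dearpygui()
dpg.show_viewport()

xs, ys, cycle = [], [], 0
while dpg.is_dearpygui_running():
    xs.append(cycle)
    ys.append(math.sin(cycle * 0.1))            # stand-in for a real potential trace
    dpg.set_value("activity_series", [xs, ys])  # cheap data swap, no full redraw
    cycle += 1
    dpg.render_dearpygui_frame()

dpg.destroy_context()
```

The point of the switch: updating a series value per frame is far cheaper than Matplotlib's redraw-the-whole-canvas approach.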

Now that I have more tools to see what's going on, I have realized that LTD/LTP effects are more complex than I envisioned. If a neuron responds faster to an input signal (say because of an LTP effect, a higher-frequency input signal, more connections, or less inhibition), it immediately alters the flow of information. Weak synapses become oriented, resulting in (as of now) unpredictable results.
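A toy illustration of that race effect (my own simplification, not the actual model): two presynaptic inputs drive the same target, and a small LTP boost on one of them changes which input reaches threshold first, so every downstream synapse now sees different timing.

```python
THRESHOLD = 10.0

def cycles_to_fire(potential_per_cycle, threshold=THRESHOLD):
    """How many cycles an input needs to drive the target neuron to threshold."""
    accumulated, cycles = 0.0, 0
    while accumulated < threshold:
        accumulated += potential_per_cycle
        cycles += 1
    return cycles

base = 3.0       # potential per activation before plasticity (made-up value)
ltp_boost = 1.2  # hypothetical LTP multiplier on synapse A

cycles_a = cycles_to_fire(base * ltp_boost)  # potentiated input: 3 cycles
cycles_b = cycles_to_fire(base)              # unchanged input:   4 cycles

# The potentiated input wins the race: the target now fires "because of A",
# and weak downstream synapses orient to A-driven timing instead of B's.
print(f"A fires the target in {cycles_a} cycles, B in {cycles_b}")
```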

Inhibition is also more complex. A neuron under inhibition could be forced to wait for a second activation signal before becoming activated, but I'm still not sure how to do it. Right now the neuron remains in an undefined state when inhibited: it is not firing, but it is not at rest either. I haven't decided what to do with it.
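One way to make that state explicit would be a small state machine. A minimal sketch, where the state names and the "first signal only lifts the inhibition" rule are my assumptions about one possible behavior, not a settled design:

```python
from enum import Enum, auto

class State(Enum):
    READY = auto()       # can integrate input and fire
    INHIBITED = auto()   # received inhibition: neither firing nor at rest
    REFRACTORY = auto()  # just fired, ignores input for one cycle

class Neuron:
    def __init__(self):
        self.state = State.READY

    def inhibit(self):
        self.state = State.INHIBITED

    def receive_activation(self):
        """Return True if the neuron fires on this activation signal."""
        if self.state == State.INHIBITED:
            # the first signal only lifts the inhibition; the neuron must
            # wait for a second activation signal to actually fire
            self.state = State.READY
            return False
        if self.state == State.REFRACTORY:
            self.state = State.READY
            return False
        self.state = State.REFRACTORY
        return True

n = Neuron()
n.inhibit()
print(n.receive_activation())  # False: inhibition absorbed the first signal
print(n.receive_activation())  # True: the second signal fires the neuron
```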

With more speed I could let the synapses form and break indefinitely, and it became clear that there are strong synapses, defined by the data flow, and weak synapses that form and break immediately; about 10% are weak synapses.

Firing Rates and Neuronal Synchronization

I'm defining the firing rate as:

the inverse of the number of cycles a single presynaptic neuron needs to activate a postsynaptic neuron.

Assume a single presynaptic neuron can activate a single postsynaptic neuron. If the presynaptic neuron has to activate 3 times before it can activate the postsynaptic neuron, then the firing rate is 1/3. The firing rate is then a function of how much potential a single synapse can bring per activation, so synapses close (proximal) to the neuron body generate higher firing rates than distal synapses. The firing rate is therefore not entirely dependent on the firing rate of the presynaptic neuron.
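In code form the definition reduces to one line; the per-synapse potentials and the threshold below are made-up numbers for illustration:

```python
import math

def firing_rate(potential_per_activation, threshold):
    """Firing rate = 1 / (presynaptic activations needed to reach threshold)."""
    activations_needed = math.ceil(threshold / potential_per_activation)
    return 1.0 / activations_needed

THRESHOLD = 9.0
print(firing_rate(3.0, THRESHOLD))  # distal synapse:   3 activations -> rate 1/3
print(firing_rate(9.0, THRESHOLD))  # proximal synapse: 1 activation  -> rate 1.0
```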

Another definition is the "semantics of dendrites".

The semantics describes the relationship between two dendrites. If we define a dendrite as a set of synapses (each with a value of ON or OFF), then the semantics between dendrites A and B is the percentage of synapses of dendrite B that are ON while dendrite A is generating an activation potential. I'm still working on this definition.
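A sketch of how I read that definition; the sampling scheme and data layout are my assumptions, since the definition itself is still in flux:

```python
def semantics(a_activation_cycles, b_synapse_states):
    """
    Semantics of dendrite B with respect to dendrite A:
    the average fraction of B's synapses that are ON during
    the cycles in which A generates an activation potential.

    a_activation_cycles: cycle indices where A produced an activation potential
    b_synapse_states:    dict cycle -> list of bools (ON/OFF per synapse of B)
    """
    fractions = []
    for cycle in a_activation_cycles:
        states = b_synapse_states[cycle]
        fractions.append(sum(states) / len(states))
    return sum(fractions) / len(fractions) if fractions else 0.0

# A fired on cycles 2 and 5; B has 4 synapses whose states were sampled then.
b_states = {2: [True, True, False, False], 5: [True, True, True, False]}
print(semantics([2, 5], b_states))  # (0.5 + 0.75) / 2 = 0.625
```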

I hit a dead end with the current development branch. I calculated the theoretical dendrites and then tried to actually obtain all of them within my simulation by doing precise training. It did not take long to realize that most of the dendrites could not actually form because they were being blocked in one way or another. So I decided to switch to a more advanced model in which I included the semantics of dendrites; until now, dendrites were independent. This model is actually what I started with, but it was too complex to handle and it had a theoretical drawback: the AI would go "blind". I have implemented most of the changes, and indeed the blindness problem is there and I don't yet have a solution. What do I mean by blindness? It is some sort of over-training which leads to synapses being permanently removed from neurons.
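As a cartoon of the blindness problem (the thresholds and decay values are invented for illustration): if pruning is permanent, repeated training on a narrow input set keeps shaving off out-of-pattern synapses until the neuron can no longer see anything new.

```python
weights = {f"syn_{i}": 1.0 for i in range(8)}
PRUNE_AT = 0.2

def train_step(active_synapses):
    """Strengthen synapses in the pattern, decay the rest, prune permanently."""
    for name in list(weights):
        if name in active_synapses:
            weights[name] = min(weights[name] + 0.1, 2.0)
        else:
            weights[name] -= 0.3
            if weights[name] < PRUNE_AT:
                del weights[name]  # permanent removal: this input is gone for good

# Over-train on the same narrow pattern...
for _ in range(20):
    train_step({"syn_0", "syn_1"})

# ...and the neuron is now blind to everything outside it.
print(sorted(weights))  # only syn_0 and syn_1 survive
```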

But more concerning than blindness is still neuronal synchronization. I also implemented a form of "firing rates", but that immediately added more chaotic behavior. Running the same pattern would end up with different responses on different cycles: first the most probable response would show up, then the neurons would go through a cooling cycle and the next most probable response (if any was possible) would show up.
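A toy version of what I'm seeing, where the cooling rule is a stand-in I made up to reproduce the behavior: the winner of each cycle is suppressed afterwards, so the same input keeps surfacing the next most probable response.

```python
# The same input pattern drives three candidate responses with fixed strengths.
drive = {"response_A": 0.9, "response_B": 0.7, "response_C": 0.5}
cooling = {name: 0.0 for name in drive}

for cycle in range(4):
    # effective potential = fixed drive minus accumulated cooling
    effective = {n: drive[n] - cooling[n] for n in drive}
    winner = max(effective, key=effective.get)
    print(f"cycle {cycle}: {winner}")
    cooling[winner] += 1.0  # the winner cools down...
    for n in cooling:
        cooling[n] = max(0.0, cooling[n] - 0.1)  # ...and everyone slowly recovers

# Prints A, then B, then C, then A again: the same pattern,
# a different response on every cycle.
```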

Firing rates bring more chaos to an already chaotic model.