Synaptic strengths and updates

I worked hard over the past month to implement the new theory, but I made little to no progress. Every time I think I understand something and solve some problems, I find that things are much more complicated than I previously believed them to be. New complexities that I did not think of at all… So for the first time I’m actually pessimistic.

Synaptic strength, in my model, is defined by 3 main variables; the synapse in total is defined by 5 variables. At some point I realized that there is a trade-off between learning new things and remembering old things, and the new theory should have solved that issue. The mathematical model is limited to 2 synapses, so I cannot actually predict what would really happen when the neuron is inserted into the network, but in theory it should solve the learning/remembering issue. I inserted the neuron into the network immediately, because the code no longer supports testing a single neuron by itself. But once inserted, the network is at the same time too simple to test the learning mechanism, yet too complex when I add more than 4 neurons… So I need to get back to a simpler version where I can test a single neuron, but with complex patterns.
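
To make the bookkeeping concrete, here is a minimal sketch of what such a synapse state could look like. The actual 5 variables are not spelled out above, so the names below (fast/slow strength components, a consolidation term, and pre/post activity traces) are purely illustrative placeholders, not the real model:

```python
from dataclasses import dataclass

# Illustrative sketch only: 3 variables making up "strength"
# (fast_strength, slow_strength, consolidation) plus 2 activity
# traces, for 5 variables in total. All names are assumptions.
@dataclass
class Synapse:
    fast_strength: float = 0.0   # quickly-learned component (new learning)
    slow_strength: float = 0.0   # slowly-consolidated component (old memory)
    consolidation: float = 0.0   # running total of strength "locked in"
    pre_trace: float = 0.0       # recent presynaptic activity
    post_trace: float = 0.0      # recent postsynaptic activity

    def strength(self) -> float:
        # Effective strength combines the fast and slow components.
        return self.fast_strength + self.slow_strength

    def consolidate(self, rate: float = 0.01) -> None:
        # Slowly move fast (new) strength into the slow (remembered) pool,
        # one possible way to trade off learning against forgetting.
        moved = rate * self.fast_strength
        self.fast_strength -= moved
        self.slow_strength += moved
        self.consolidation += moved
```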

I concluded that while simple and complex cells may seem very similar in their biology, from a functional standpoint they must be very different. Simple cells select precisely some patterns, while complex cells differentiate patterns through firing rates. I believe the simple cells are not that dependent on firing rates, but in the model I’m using this behavior cannot really be ruled out.
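
As a toy illustration of that functional contrast (the similarity/threshold mechanics below are assumptions, not the actual model): a simple unit responds all-or-nothing to a near-exact pattern match, while a complex unit’s output rate grades with how strongly any of its patterns is present.

```python
import numpy as np

def simple_cell(x: np.ndarray, pattern: np.ndarray,
                threshold: float = 0.95) -> bool:
    # Precise pattern selection: fires only on a near-exact match,
    # independent of overall input intensity (cosine similarity).
    similarity = np.dot(x, pattern) / (np.linalg.norm(x) * np.linalg.norm(pattern))
    return similarity >= threshold

def complex_cell(x: np.ndarray, patterns: list[np.ndarray]) -> float:
    # Rate-based differentiation: the *magnitude* of the response
    # encodes which pattern is present and how strongly.
    return max(float(np.dot(x, p)) for p in patterns)
```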

I may have found a way to selectively group signals based on their complexity: complex signals will converge to the center of the surface, while the simpler patterns will stay on the periphery. But, at this time, I only have two areas, and I can’t be sure more areas will form, or how they should form. Also, an area (in the current simulation a 4 by 4 matrix) is somewhat hard to define in a general way. Right now I’m defining it as the edge where the inhibitory neurons overlap, allowing for more signal but at the same time being more controlled (it activates less because it is more often inhibited).
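
A sketch of that edge definition, under my own assumptions: each inhibitory neuron covers a patch of the 4 by 4 surface, and the area boundary is taken to be the cells covered by more than one inhibitory field.

```python
import numpy as np

GRID = (4, 4)  # the 4 by 4 surface from the current simulation

def coverage_count(inhibitory_fields: list[set[tuple[int, int]]]) -> np.ndarray:
    # How many inhibitory fields cover each cell of the surface.
    counts = np.zeros(GRID, dtype=int)
    for field in inhibitory_fields:
        for (r, c) in field:
            counts[r, c] += 1
    return counts

def area_edge(inhibitory_fields: list[set[tuple[int, int]]]) -> set[tuple[int, int]]:
    # Cells where inhibitory fields overlap: more signal can pass there,
    # but the cell is also inhibited more often, so it activates less.
    counts = coverage_count(inhibitory_fields)
    return {(r, c) for r in range(GRID[0]) for c in range(GRID[1])
            if counts[r, c] > 1}
```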

Inhibition works, somewhat… but it’s unpredictable because of the firing rates. Firing rates lead to a general unpredictability because I don’t know when to synchronize the neurons. Synchronization is possible, but I don’t know the right triggering event. Sometimes the neurons should synchronize, other times they shouldn’t, because otherwise I don’t obtain the “correct” answer. The “correct” answer is itself poorly defined.
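
The mechanics of synchronizing are easy enough to write down; the trigger is the hard part. In the sketch below, `should_sync` is just a placeholder for whatever the right triggering event turns out to be, and the phase model is my own simplification:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Neuron:
    rate: float          # firing rate (cycles per step); differing rates drift apart
    phase: float = 0.0

    def update(self) -> bool:
        # Neuron fires when its phase wraps around.
        self.phase += self.rate
        if self.phase >= 1.0:
            self.phase -= 1.0
            return True
        return False

def step(neurons: list[Neuron], t: int,
         should_sync: Callable[[int], bool]) -> list[bool]:
    if should_sync(t):
        # The open question: when is this the right thing to do?
        for n in neurons:
            n.phase = 0.0   # re-align everyone on the trigger
    return [n.update() for n in neurons]
```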

I’ve also concluded that the network cannot be precisely corrected with specific feedback. Any feedback (a back-propagation equivalent, in a way) has to be nonspecific. Meaning that it may lead to a desired outcome, but it will also lead to unpredictable changes secondary to the primary desired outcome, or, sometimes, the undesired outcome will be the primary one. This conclusion came from trying to implement a “calling function”, where a neuron sends a signal within an area, signaling that it is ready to accept new connections.
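
A rough sketch of what such a calling function could look like (the probabilistic response is an assumption, but it is the point of nonspecificity: every neuron in the area hears the call, and whether any given one responds is not under the caller’s control):

```python
import random

def call_for_connections(caller, area_neurons, connect_prob: float = 0.3):
    # The caller broadcasts "ready for new connections" within its area.
    # The signal cannot target a specific neuron, so which connections
    # actually form varies from call to call.
    new_connections = []
    for n in area_neurons:
        if n is caller:
            continue
        if random.random() < connect_prob:
            new_connections.append((n, caller))
    return new_connections

# Example: which connections form differs on every run.
links = call_for_connections("n3", [f"n{i}" for i in range(16)])
```

Running this repeatedly shows exactly where the unpredictable secondary changes come from: the desired connection may form, but so may others, and sometimes only the others do.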

In conclusion, I have very little good news. Learning seems to be working, but I can’t be very sure. Firing rates may be separating patterns, but I can’t be sure of that either. Signal grouping is limited to only 2 areas, where I believe there should be many areas.

What’s next? Next I’m going to go backwards… I need to create a branch where I can test the learning mechanism with a single neuron, but with complex input. I’ll see from there.
