Uncontrolled activation

I’m still running simulations on the effects of LTD-D and LTP (long-term potentiation). If a synapse is activated, its potential eventually goes to zero in the absence of additional stimuli, so I assumed a network would eventually go back to a rest state. It turns out I was wrong. This is not a situation of conservation of energy: energy is continuously used to keep a neuron working, so the net result is that, due to inter-layer loops of neurons, they can stimulate each other and the activation never stops.

First and second hidden layers keep themselves active without external stimuli.
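A minimal sketch of the runaway loop described above, with my own toy numbers (not the simulator's code): two mutually excitatory neurons that, once one of them is seeded by an external stimulus, keep re-arming each other forever even though each potential on its own would decay toward zero.

```python
# Minimal sketch (toy numbers, not the simulator's code): two mutually
# excitatory neurons showing why a loop can sustain activity with no
# external input once it has been seeded.

THRESHOLD = 10.0   # potential needed to fire
DECAY     = 0.9    # per-cycle decay of the potential toward rest (zero)
WEIGHT    = 12.0   # potential one firing delivers to the partner neuron

potential = [12.0, 0.0]   # neuron 0 is seeded once by an "external" stimulus

for cycle in range(10):
    fired = [p >= THRESHOLD for p in potential]
    # decay toward rest; reset the neurons that fired
    potential = [0.0 if f else p * DECAY for p, f in zip(potential, fired)]
    # each firing neuron excites the other one (the loop)
    for i, f in enumerate(fired):
        if f:
            potential[1 - i] += WEIGHT
    print(f"cycle {cycle}: fired={fired}, potential={[round(p, 2) for p in potential]}")

# Because WEIGHT > THRESHOLD, every firing re-arms the partner, so the pair
# ping-pongs forever even though each synaptic potential would decay to zero
# on its own.
```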

Perhaps the same principle applies to the excitation wave. I could not find the right conditions under which the wave would stop after some distance from the starting point; it goes on forever.

So perhaps inhibition should specifically target the neurons that keep activating, breaking the vicious cycle of activation.

Long Term Depression preliminary work

I implemented a sort of LTD associated with depolarization: excitation of a neuron during its depolarization cycle leads to a decrease in “synaptic strength”. I call this LTD-D.
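A minimal sketch of how I read this LTD-D rule, with assumed names and numbers (REPOLARIZATION_CYCLES, LTD_STEP): a presynaptic spike that lands while the postsynaptic neuron is still repolarizing weakens the synapse, and a synapse driven to zero strength is removed.

```python
# Sketch of the LTD-D rule as described above, with assumed constants.
# A presynaptic spike arriving while the postsynaptic neuron is still in its
# repolarization window weakens the synapse; zero strength means removal.

REPOLARIZATION_CYCLES = 5     # assumed recovery time after a firing event
LTD_STEP = 0.2                # assumed strength lost per offending spike

class Synapse:
    def __init__(self, strength=1.0):
        self.strength = strength
        self.removed = False

    def on_presynaptic_spike(self, cycles_since_post_fired):
        """Apply LTD-D if the postsynaptic neuron is still repolarizing."""
        if self.removed:
            return
        if cycles_since_post_fired < REPOLARIZATION_CYCLES:
            self.strength = max(0.0, self.strength - LTD_STEP)
            if self.strength == 0.0:
                self.removed = True   # synapse removal

# A presynaptic neuron firing faster than the postsynaptic repolarization
# cycle keeps hitting the window and eventually removes the synapse:
syn = Synapse()
for _ in range(6):
    syn.on_presynaptic_spike(cycles_since_post_fired=1)
print(syn.strength, syn.removed)   # -> 0.0 True
```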

LTD-D, much like inhibition, has a devastating effect on neuronal activity. There are cases where LTD can regulate firing frequency, but in some cases this regulation leads to synapse removal, and that cannot really be regulated. Synapse removal happens when presynaptic neurons fire faster than the repolarization cycle of the postsynaptic neuron. This particular case should be rare, happening only in input layers where the signal can be faster than the depolarization cycle; once that layer is past, because all neurons have the same repolarization duration, there can be no synapse removal based on this mechanism. However, I’m not convinced all neurons have the same repolarization duration.

This mechanism is, however, very active when neurons link inter-layer. Nearby neurons are very likely to fire at exactly the same time, so they excite each other during the repolarization cycle, resulting in synapse removal. So if I run cycles where all neurons are ON at the same time, all inter-layer links get removed. This leads to a pattern where a firing neuron has to be surrounded by neurons that are not firing at the same time, or to an abstract directionality: neurons in a line fire one after the other, not all at the same time. Without this LTD-D there would be no propagation of the signal, though, since neurons that all fire at the same time do not help each other propagate the signal further.

In the video below, I’m showing how an input signal spreads through the same layer. The same behavior should be seen between sequential layers:

I calculated how much the signal should spread and I was wrong. Being wrong is OK; what bothered me was that there were at least two more variables on top of the 7 I had already considered, meaning this is now so complex that I can no longer predict, even roughly, what should happen. I’m afraid I’ll have coding mistakes that I will not be able to see... hmmm

I removed all inhibitory neurons for now and started working on them. Is there an algorithm that has ever predicted the need for an inhibitory mechanism? Now we know they exist and we still cannot understand what they are doing. I guess what I’m trying to say is that we cannot think our way into understanding inhibition; I think we need to discover it. I usually used inhibition on a general “winner takes all” principle, but now I’ve seen cases where other patterns should also be present, so 100% inhibition can’t work like that. There is also the case where inhibition protects a synapse from an LTD-D effect... could that be its hidden purpose?

While I made progress in various directions, I made no “significant” progress. The hypotheses I had of how this should work no longer apply; I still use them as a general guiding goal, but this is much more complex than I had originally envisioned. The next objective is to add inhibition.


Long term depression / potentiation

I use these terms very loosely to mean the increase (LTP) or decrease (LTD) of the potential delivered by a synapse to the neuron body. I used some forms of LTP/LTD in my previous versions, but they were never meant to approach the biological equivalents. Now I have spent some time reading what is known in biology about LTP and LTD, and while there are tons of papers on the subject, I could not find anything that explores the need for such mechanisms. They are used in “learning”, which is a very vague statement. I’ve been thinking and I cannot find a use case for them. I found a use for an LTD event during depolarization of the postsynaptic neuron, but no uses for LTD/LTP associated with low/high frequency inputs. What is low/high frequency? The cutoffs seem arbitrary to me. I can use whatever frequency I want in my code, so I should be able to link these terms to something... but to what?

However, I believe the main reason for having an altered synaptic potential is to change the firing frequency of the postsynaptic neuron. This conclusion troubles me: firing rate is crucial in selecting and separating events (inputs), and any alteration (or missing alteration) can be picked up by the inhibitory neurons and amplified, resulting in vastly different outcomes even when the initial change in synaptic output was extremely small. So if I don’t add them now and don’t understand them, they may come back to haunt me... yet I don’t need them.
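A toy illustration of this worry, assuming the “winner takes all” style of inhibition I have been using (the numbers are mine, for illustration only): two postsynaptic neurons receive nearly identical drive, inhibition lets only the strongest one fire, and a tiny LTP-like change on one synapse flips the winner completely.

```python
# Toy illustration: full "winner takes all" inhibition amplifies a tiny
# change in synaptic output into a completely different downstream result.

def winner_take_all(drives):
    """Index of the only neuron allowed to fire after full inhibition."""
    return max(range(len(drives)), key=lambda i: drives[i])

presynaptic_activity = 5.0
weights_before = [1.00, 1.00]    # identical synaptic outputs
weights_after  = [1.00, 1.01]    # a 1% LTP-like change on the second synapse

print(winner_take_all([w * presynaptic_activity for w in weights_before]))  # -> 0
print(winner_take_all([w * presynaptic_activity for w in weights_after]))   # -> 1
# A 1% change in synaptic output completely changes which neuron ends up
# representing the pattern downstream.
```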

My plan is to explore the implications of LTP and LTD on various other variables, but with no certain goal in mind I find that both boring and difficult. Is there a paper showing they are actually “long-term” changes? I found a paper saying that very few last up to a week and most alterations vanish within hours. I don’t consider that long term.

Does LTP/LTD ever stop? With age, perhaps? For certain layers, maybe? Do they become less frequent? So many questions... Given my difficulties in transporting the signal through layers, it is still feasible that some LTP/LTD events would be extremely difficult to change in deeper layers, so in the end they could be viewed as long term and as part of the learning mechanism. They would be long term because they are hard to change... speculations.

Inter-layer transmission

If a neuron in layer 1 (L1) requires 10 inputs to fire, and those 10 inputs are delivered over 10 cycles, then another neuron in layer 2 (L2), also requiring 10 inputs for activation, is activated by the neuron in L1 only after 100 cycles. In the third layer, the number of cycles required for activation is 1000. So this cannot work like this.
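A quick back-of-the-envelope check of this growth, under the assumptions from the example above (a threshold of 10 inputs, one raw input per cycle, one synapse per layer-to-layer link):

```python
# Delay growth per layer under the example's assumptions: each neuron needs
# THRESHOLD inputs to fire, the raw input delivers one unit per cycle, and
# each layer feeds the next through a single synapse.

THRESHOLD = 10   # inputs needed for one firing event

cycles_per_firing = 1            # the raw input arrives every cycle
for layer in range(1, 4):
    cycles_per_firing *= THRESHOLD
    print(f"layer {layer}: one firing every {cycles_per_firing} cycles")

# layer 1: one firing every 10 cycles
# layer 2: one firing every 100 cycles
# layer 3: one firing every 1000 cycles
# The delay grows as THRESHOLD ** layer, which is why a chain of layers wired
# one-to-one cannot transmit a signal at a usable rate.
```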

I was aware of this issue from the beginning, but I had hoped I could solve it by increasing synaptic efficiency, so that a neuron in L2 would require not 10 inputs from L1 but, say, just 1. That would have been acceptable. The problem with this approach became apparent very late: by increasing synapse efficiency, the selectivity of the postsynaptic neuron decreases. So the solution I envisioned proved to be a dead end. Now I’m considering other approaches to deal with this slow transmission from layer to layer:

  1. Have multiple synapses between a neuron in L1 and a neuron in L2. This does not look very promising for various reasons, but maybe in combination with other ideas it could work. It is not necessary to make 10 synapses; even 2 synapses would significantly reduce the delay (see the sketch after this list).
  2. Have many more neurons in L2 than in L1. The extra neurons would serve as some sort of amplifier: they would bind among themselves and excite each other in a bizarre loop. I have played with such loops in the past, but they resulted in continuous excitation. Maybe they could be used to store more patterns too. I was planning to add more neurons in L2 anyway, so I’m more inclined to start with this approach.
  3. Accept a serious reduction of the signal in L2. Basically, 10 neurons from L1 could link to a single neuron in L2, and that neuron would fire immediately after the 10 L1 neurons fired, because it receives 10 inputs at once. This could be part of the solution, but I don’t see it as acceptable (this is what happens right now by default, when there are multiple bindings from L1 to L2).
  4. Something else that is unknown now…
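Here is the sketch referenced in item 1: a rough comparison of how options 1 and 3 change the numbers, with my own assumed values (THRESHOLD, L1_FIRING_PERIOD). It ignores potential decay and only counts how many L1 firing events an L2 neuron has to wait for.

```python
# Rough comparison of options 1 and 3 above, ignoring potential decay.

THRESHOLD = 10            # inputs an L2 neuron needs in order to fire
L1_FIRING_PERIOD = 10     # cycles between two firings of an L1 neuron

def l2_activation_delay(units_per_l1_firing):
    """Cycles until an L2 neuron fires, ignoring potential decay."""
    l1_firings_needed = -(-THRESHOLD // units_per_l1_firing)   # ceiling division
    return l1_firings_needed * L1_FIRING_PERIOD

print(l2_activation_delay(1))    # 100 cycles: a single synapse (the status quo)
print(l2_activation_delay(2))    # 50 cycles: option 1 with two parallel synapses
print(l2_activation_delay(10))   # 10 cycles: option 3, ten L1 neurons converging
```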

I’m also not happy with the inhibitory neurons. By acting fast (they require just 1 input to go active) and being 100% efficient, they remove some of the learning rules I had envisioned. They are not in my immediate focus, but they are bothering me.

The new synapse kinetics work extremely well, beyond my expectations.

Preliminary conclusions for new kinetics

  1. Inhibitory action has to last at least as many cycles as a neuron needs to cool down after an activation event.
  2. Synapse kinetics have to be slow compared with what happens in the neuron after activation.
  3. There is no longer propagation of the signal in layer 2, because the potential decays between two firing events from layer 1.
  4. Inhibition can now be synapse-selective, but I don’t see a reason to make it so.
  5. There is no way to guarantee “perfect” separation between all the very similar patterns; some patterns will give an identical response. Yet any two patterns can be separated, with the caveat that two or more other patterns may then become identical in response (no longer separated by the neuronal response).
Neuronal potential in layer 1 and layer 2. Upper panel: three distinct events: before activation; after the first activation, when only a single synapse contributed to the total potential; and after activation with multiple synapses contributing to the total potential. Lower panel: a single neuronal input from layer 1; the L2 neuron loses potential, eventually reaching a steady state in which the neuron is not able to fire but is not inactive either.
Synaptic potential, prior to the first firing event and after
Potential of a single neuron when running with different patterns

What to do next? I don’t know yet.

Adding time #2

It seems that by adding kinetics to synapses I also added time to the algorithm. Time has always been an elusive variable. Here, time is the rate of change of some events, so it is not really correlated with the outside, arbitrary unit of time and will depend on computing power. It would be possible to correlate this internal time with outside time, but for now that would serve no purpose. However, time is now embedded in multiple processes. What can be learned is now indirectly linked to time: the time component determines what is correlated and what is “important”. Time also seems to determine how many patterns a synapse can learn without internal changes. Slower kinetics would allow more patterns to be learned; in a way, it would increase precision or selectivity. Increased precision requires more processing cycles.

On the update side.

I implemented the new kinetics at the synapse level, but I also need some sort of kinetics at the neuronal level; without it, I cannot decide when a neuron was active. Inhibitory neurons still work on the old, simpler mechanism, so inhibition is instantaneous and inhibits 100%. I may have to change that in the future.

Kinetics #4

Various synaptic potentials for various firing rates. I think I’ve done all I could for this particular issue; if it’s still not working properly, then I don’t know. Anyway, I will move on to other issues and make some actual progress on other parts of the AI algorithm. I hope this is good enough so I won’t have to deal with synapse kinetics again. I now have enough parameters to completely change the behavior of the synapse if I need to.

Kinetics #3

Combining synapse kinetics with diffusion into the soma, I get this behavior:

For small firing rates (1 / cycle rate), the signals do not add up, but for higher firing rates the signal adds up (temporal summation), eventually resulting in a firing event... and all this trouble because I could not smoothly converge to an arbitrary number.
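A minimal temporal-summation sketch of that behavior, with assumed constants (THRESHOLD, LEAK, EPSP): a leaky somatic potential only reaches threshold when presynaptic spikes arrive fast enough for their contributions to add up.

```python
# Temporal summation sketch: each presynaptic spike injects EPSP into the
# soma, the somatic potential leaks every cycle, and the neuron fires once
# the potential crosses THRESHOLD. Slow input never gets there; fast input does.

THRESHOLD = 10.0
LEAK      = 0.7    # fraction of somatic potential kept per cycle
EPSP      = 4.0    # potential injected per presynaptic spike

def cycles_to_fire(spike_period, max_cycles=200):
    """Cycle of the first firing event, or None if the neuron never fires."""
    potential = 0.0
    for cycle in range(max_cycles):
        potential *= LEAK                    # passive decay
        if cycle % spike_period == 0:
            potential += EPSP                # a presynaptic spike arrives
        if potential >= THRESHOLD:
            return cycle
    return None

print(cycles_to_fire(spike_period=10))   # slow firing rate: None (never fires)
print(cycles_to_fire(spike_period=1))    # fast firing rate: fires within a few cycles
```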

Kinetics #2

I spent a lot of time on this topic, for various reasons. Now I’m happy with this kind of behavior:

where cycling rate = 1 / firing rate. This is the first model I will try within the AI system; the second model should be one where only the height of the peak is affected by the firing rate but the potential reaches zero at about the same time for all firing rates.

Kinetic models

I distilled down two possible models to use for synapse kinetics.

In the literature, it seems the model on the right, the B line, is the way to go, but to me that does not seem right, because there is no way to regenerate A (on the right I still use an arbitrary +50 regeneration rate for the A curve, which impacts the B and C behavior). The left model also varies wildly with different kinetic rates.
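To pin down how I read this scheme, here is a sketch with made-up rate constants: a reservoir A is converted into an active species B, B decays into C, and A is regenerated at a flat rate (the “+50” mentioned above). It is only an illustration of the curve shapes, not the model actually used in the simulator.

```python
# Sketch of an A -> B -> C kinetic scheme with flat regeneration of A.
# All rate constants are assumptions for illustration.

K_AB    = 0.05    # fraction of A converted to B per cycle
K_BC    = 0.10    # fraction of B decaying to C per cycle
A_REGEN = 50.0    # flat regeneration of A per cycle (set to 0 to see B collapse)

A, B, C = 1000.0, 0.0, 0.0
for cycle in range(101):
    ab = K_AB * A                 # A -> B flux
    bc = K_BC * B                 # B -> C flux
    A += A_REGEN - ab
    B += ab - bc
    C += bc
    if cycle % 25 == 0:
        print(f"cycle {cycle:3d}  A={A:7.1f}  B={B:6.1f}  C={C:8.1f}")

# With regeneration, B settles near A_REGEN / K_BC (here 500), i.e. a steady
# state exists; with A_REGEN = 0, A drains away and B peaks and then falls
# back toward zero, which is the "no steady state" behavior discussed below.
```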

I also don’t like B from the right model, because a decaying value leads to a certain timing for neuronal activation, meaning a synapse builds up potential only up to a certain time and then it drops, with no steady state reached... but it could also be right. So I’m undecided.