Dendritic Growth 3

I have programmed the first step, linking synapses on multiple directional dendrites, but no branching yet. However, when running this model I discovered that my kinetics for the synapse potential don't work well. They were good enough for the previous model, but in essence a kluge that worked for the wrong reasons. Basically I'm not converging well to the firing potential of the neuron: I'm overshooting, and the correction, which is not good either, messes up the timing of the firing event. The result is insidious and cascades into the following layers, eventually resulting in a wrong learning pattern. I added ordinary differential equations, but it did not help; the problem comes from dP/dC, where P is the potential and C is the cycle number. The cycle number is an integer and I can't do anything about it => the convergence is still poor => W (AMPA receptors) fluctuates from pattern to pattern => a delay in forming a stable pattern in the next layer => that pattern is completely inhibited if it competes with another pattern.
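
To make the overshoot problem concrete, here is a minimal sketch (not my actual code, all names and numbers are placeholders): because P is only updated once per integer cycle C, a large per-cycle increment can jump straight past the firing potential. Splitting the same increment into sub-steps inside one cycle keeps the update rule unchanged but catches the threshold crossing without overshooting it by a full step.

```python
def advance_one_cycle(P, increment, threshold, substeps=10):
    """Advance the potential by one cycle, split into smaller internal steps.

    P          -- current potential (hypothetical units)
    increment  -- total change in potential this cycle (dP per cycle)
    threshold  -- firing potential
    substeps   -- how finely to slice the cycle internally
    Returns (new_P, fired) where fired is True if the threshold was reached.
    """
    dp = increment / substeps
    for _ in range(substeps):
        P += dp
        if P >= threshold:
            return threshold, True   # clamp instead of overshooting
    return P, False
```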

Synaptic potential: left, my data; right, data from this paper

I don't want to reproduce real biological data in my simulation, but when I get stuck I look for inspiration in real data :). For now I'm only interested in the upward trend, but I still wonder why the downward side looks so… not symmetric. Why does it take so long to go back to the initial state? How long does it take, though? Can a neighboring neuron fire twice in the time it takes for this synapse to regenerate? Can this neuron fire again while this particular synapse is regenerating?

I extracted many equations from the code hoping to solve them mathematically… but no luck there either. I don't know how to solve so many linked simple equations… but maybe someone does.
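
If the extracted equations are algebraic and reasonably low-degree, a computer algebra system can at least attempt the coupled system. A toy example only (the three equations below are made up, not the ones from my code), using SymPy:

```python
import sympy as sp

P, W, G = sp.symbols("P W G")          # hypothetical unknowns
equations = [
    sp.Eq(P, 0.8 * W * G + 0.1),       # potential driven by weight and glutamate
    sp.Eq(W, 1.2 - 0.5 * P),           # weight relaxing against potential
    sp.Eq(G, 0.3 * P + 0.05),          # glutamate feedback
]
solution = sp.solve(equations, [P, W, G], dict=True)
print(solution)
```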

Dendritic Growth 2

As usual, this is much more complicated than I thought. I can't really decide when to branch, how much to branch and how long to grow a dendrite… Look at the picture below: I want to connect A with C, but there is no dendrite growing directly in that direction, so it has to branch to reach C. But branching can be done from various points, as shown in RED. Random branching, from any point on the GREEN lines (default dendrites), is out of the question. The branching has to be done from precise points, where the dendrite connected with a different neuron, so I'm left with 4 branching points… Should I link C with 4 synapses from A's dendrites? Just one? What if that breaks?

green – default dendrite, red – possible dendritic branching

I don't have enough information to decide on a course of action, so I'm left with trial and error, like I did for the synapse kinetics (about 30 models that did not work). So far I haven't done much, coding-wise: I only changed the code to accept directionality for dendrites and decided to go with 8 default directions, basically 8 dendrites that form only when they are needed (although unless the neuron is at the edge of the matrix, all 8 are needed). I also decided to remove vacancies (locations previously occupied by a synapse) from the available places for new synaptic binding. That place will remain empty, presumably separating two patterns…
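
A minimal sketch of the 8-direction idea (the names are mine, not from the actual code): the eight default directions on the grid, with a dendrite created lazily the first time a synapse is needed in that direction.

```python
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1),
              ( 0, -1),          ( 0, 1),
              ( 1, -1), ( 1, 0), ( 1, 1)]

class Neuron:
    def __init__(self, row, col):
        self.pos = (row, col)
        self.dendrites = {}            # direction -> list of synapses

    def dendrite_towards(self, target_pos):
        """Return (creating if needed) the dendrite pointing towards target_pos."""
        dr = target_pos[0] - self.pos[0]
        dc = target_pos[1] - self.pos[1]
        # pick the default direction closest to the target (sign of the offset)
        direction = (0 if dr == 0 else dr // abs(dr),
                     0 if dc == 0 else dc // abs(dc))
        if direction not in self.dendrites:
            self.dendrites[direction] = []   # dendrite grows only when needed
        return self.dendrites[direction]
```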

Dendrite Growth

So far I have worked with a simplified model with a single dendrite. But moving the signal to layer two, with more inter-neuronal connections allowed, led to obvious errors. Why is this important? In my model distance matters: synapses further away from the neuronal body contribute less to the overall neuronal potential. The dendrites, with their growth, provide that variable distance. I could not find a model in the literature to fit my requirements, so I came up with two very different models; eventually I decided to start with the one that seems easier because it does not require any calculations (as in geometrical calculations). This model should lead to structures close enough to what is observed in the literature, but regardless of whether it's close or not, it should clearly link synapses based on distance. It will allow for branching, which is very important since branching allows for synapses with the same distance to the neuronal body.

However, there are still many unanswered questions. What happens when a synapse is removed from a dendrite? Does it bind to a further-away position? Is it removed forever? Does it bind to a different dendrite? Should I allow multiple synapses between 2 neurons? What happens with the vacancy left by the removed synapse? Does it remain empty? Is it occupied by other synapses? Does a further-away synapse take its place? And what should I do about far-away synapses (away from the neuron)? Their contribution to the overall potential is insignificant even with a linear decrease in contribution with distance, and I now have an exponential decrease, so it's even worse. Sure, in some cases the synaptic strength (AMPA receptors equivalent) increases and the contribution is a bit bigger, but still small. Why so many direct connections with such small contributions? The signal would still reach a target neuron through its neighbors, more like in a GNN… that would make more sense to me.
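
An illustration of the distance problem (the constants are placeholders, not my actual parameters): with an exponential fall-off the contribution of a distant synapse to the neuronal potential collapses much faster than with the linear rule, which is why those far-away direct connections end up contributing almost nothing.

```python
import math

def linear_weight(distance, max_distance=10.0):
    """Contribution factor decreasing linearly with distance, floored at 0."""
    return max(0.0, 1.0 - distance / max_distance)

def exponential_weight(distance, length_constant=3.0):
    """Contribution factor decaying exponentially with distance."""
    return math.exp(-distance / length_constant)

for d in (1, 3, 6, 9):
    print(d, round(linear_weight(d), 3), round(exponential_weight(d), 3))
```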

As far as I can tell, I now have a good model for:

  1. Glutamatergic synapse (kinetics of glutamate and of AMPA)
  2. GABA synapses (half baked: it acts on the axon, resulting in 100% inhibition, but it also affects active synapses, so it's a half-man, half-bear kind of situation… maybe half pig as well)

What is similar ?

I've been getting many unexpected results in my quest for invariance. I thought it was because of a bug in the code or a bad theory. But no, everything seems to be in order; there are no errors that I'm aware of. So I was left with the improbable: I don't get identical results for identical patterns because "similar" is not what I assumed it to be. Something is similar (or identical) not only when it is formed from identical components; it also needs to have the same history. When I was thinking of "context" I was usually thinking only about the "stuff" around, and did not think that I need the whole history behind that event (history, in context, is the sequence of patterns in time).

In the meantime I have some explanations for the lack of synchronization I now encounter on a regular basis.

  1. I don't have horizontal cells or amacrine cells; in my code this results in an out-of-phase state that cannot be corrected (my input cells don't fire every frame but at a certain frequency, set to every other frame at this point in time). I have a button that brings them all to frame 1 when this happens, but this is just a cheap, easy fix.
  2. The input cells are not out of phase, but patterns within the same visual field fire at different frequencies. Not sure what to do about this one; it could be normal and be somewhat "fixed" within the next layer. I was thinking of linking inhibitory neurons among themselves, so that if one is activated it will inhibit the inhibitory neurons around it (meaning it will inhibit the inhibition they were providing); see the sketch after this list.
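
A rough sketch of that second idea (the class and attribute names are mine, purely illustrative): inhibitory neurons are linked to each other, so when one fires it suppresses its inhibitory neighbours, effectively removing the inhibition those neighbours were providing.

```python
class InhibitoryNeuron:
    def __init__(self, name):
        self.name = name
        self.neighbors = []        # other inhibitory neurons linked to this one
        self.suppressed = False    # True when silenced by a firing neighbour

    def link(self, other):
        """Create a mutual inhibitory link between two inhibitory neurons."""
        self.neighbors.append(other)
        other.neighbors.append(self)

    def fire(self):
        if self.suppressed:
            return
        for n in self.neighbors:
            n.suppressed = True    # neighbouring inhibition is switched off
```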

Is learning guaranteed ?

I've been spending a lot of time trying to reach a state in which I'd get partial invariance… I observed that there are at least 2 pathways that lead to different results:

  1. The frequency and sequence of patterns – only certain sequences lead to the desired result. There is no way to guarantee a certain result. The only way to guarantee one is to have a precise training set, which is not what I want, but so far I could not think of a way to correct for an imbalanced training set.
  2. Timing. Learning depends on time, meaning at time t1 we can have result R1 and at time t2 (where t2 >> t1) we have result R2. There is a time limit after which there is no change, but again, there is no guarantee that I get a certain result and no way of assessing whether something already learned will still change with time.

Both seem reasonable to me, but very annoying… I find it very hard to set up an objective function with so many unknowns.

Invariance to nowhere

I've already done many simulations with the new model. There's no invariance in sight. I also have no theory that would predict this elusive invariance, so much so that I'm no longer sure it is obtained at the neuronal level at all. So I've prepared plan B: even without invariance, learn as many patterns as possible, very much like the regular ML used these days. Well, that also failed. When active, neurons send a call to other neurons looking for binding partners. But how are the neurons active in the first place? To solve this problem I linked, from the beginning, neuron ij from L1 to the same ij neuron in L2. That is proving to be a limiting factor now, since in L2 I need to be able to form more patterns than in L1. Even if L2 has more neurons, they never get activated and they never connect to anything. In my previous post I showed a 2×2 matrix learning a 2-pixel pattern, but the cross lines were missing as a pattern because I did not have enough neurons in L2 to learn them.

Anyway, I need to make neurons activate "spontaneously", at least until they make their first connection, and see if that solves my problem. Why not make fully connected layers? Theoretically, that should work too, but it is a bit impractical because training takes a lot of time. Also, this random activation is not as simple as it seems, because it will very quickly result in a fully connected network.
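
A minimal sketch of what I mean by spontaneous activation (assuming a per-cycle update loop; the attribute `incoming_synapses` is hypothetical): an L2 neuron with no incoming connections fires with a small probability so it can answer "calls" from active L1 neurons, and the spontaneous firing is switched off once it has at least one synapse, to avoid drifting towards a fully connected network.

```python
import random

def maybe_fire_spontaneously(neuron, p=0.02, rng=random):
    """Return True if an unconnected neuron fires spontaneously this cycle."""
    if neuron.incoming_synapses:        # already wired in: behave normally
        return False
    return rng.random() < p
```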

on the right track….

After much trial and error, the four pairs are now forming as expected, and the priority mechanism also seems to be working:

When 2 neurons from the input layer are firing, they should be recorded by a single neuron in the first hidden layer, without overlap.

The priority mechanism has to be able to separate patterns correctly; scroll to 0:12 to see what I'm talking about. Basically, when a new pattern is started the initial response is not always specific (more than 1 neuron is active in the first hidden layer), but after a couple of firing events only the correct neuron is firing.
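
This is not the actual priority mechanism, just a winner-take-all style sketch of the behaviour described above: several hidden-layer neurons may respond at first, but only the one with the strongest accumulated response is allowed to keep firing.

```python
def resolve_priority(responses):
    """responses: dict neuron_id -> accumulated response strength.
    Returns the ids allowed to keep firing (here, only the strongest one)."""
    if not responses:
        return []
    winner = max(responses, key=responses.get)
    return [winner]
```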

Training data for upper left neuron

The mechanism is still fragile, but at least I know I'm on the right track… Hopefully the next update will be more consistent.

I’m stuck… again

I have already run through 4 models and nothing seems to be working. I can only learn 3 patterns out of 4 at any one time. The fact that the same problem shows up even though I have already tried to correct it 4 times makes me believe there must be something else that is actually not right, something that I'm not aware of… And the 4 models are not just changing some numbers; each model has a different way of forming and breaking synapses, different rules. The last model is the most complex, allowing for every conceivable permutation, and it still gets stuck in timing issues. I still have a couple more ideas to try out, but not many; I'll be running out of ideas soon… And no idea is earth-shattering, so I don't have much hope that any of them will solve the problem… Still, my main problem remains that the models are too complex and I can't predict what should happen; there are always unintended consequences. So either I'm going to make unexpected progress soon or I'll stop working on this project…

What is learning ?

This seems like a simple question for an ML algorithm: train some variables to some values and the combination uniquely identifies something. But those variables can also change when something new is learned, and you lose the initial meaning.

From my previous post, I concluded that learning must be somewhat random, but I was not happy with that conclusion. After all, we can all tell a circle is a circle, so it can't be that random. So I eventually came up with a middle-ground theory. Assume 4 patterns with equal probability; there has to be a network configuration that would be stable and have all 4 patterns "memorized" as long as no additional information (additional patterns) enters the network. I managed to prove there are such states in a 3-neuron and then a 4-neuron matrix. But in a 4-neuron matrix there is more than one such stable configuration, and there are also states that are stable but not specific, meaning one neuron will learn 2 patterns and another one will learn the remaining 2 patterns in a symmetrical configuration. That was to be expected but still depressing 🙁 . So the new theory is simplistic and incomplete, but I'm sure it is the basis for "learning". I still have to find a way to make learning more stable (perhaps permanent, through an additional synaptic variable).

I've also started treating the neuron more like an atom with electrons, the electrons being the synapses. So I'm actually moving away from the "fitting" hypothesis (and synaptic strength). This theory led me to believe that there must be "empty" places on a dendrite where there is no synapse but a synapse could have been: a forbidden energetic location, used to separate related patterns. So all synapses have defined energies that can be perturbed by input data, but the perturbation will still lead to a defined configuration or will just fall back to the initial state with no change. I just don't see how a synapse could be a continuous function.
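
A sketch of the "atom with electrons" picture (all names and numbers are mine, purely illustrative): a synapse occupies one of a small set of discrete levels instead of having a continuous strength, and a perturbation either moves it to another allowed level or leaves it where it was.

```python
ALLOWED_LEVELS = (0, 1, 2, 3)      # discrete synaptic "energies"

class Synapse:
    def __init__(self, level=1):
        assert level in ALLOWED_LEVELS
        self.level = level

    def perturb(self, delta):
        """Move to a neighbouring allowed level, or stay put if the move
        would leave the allowed set (back to the initial state, no change)."""
        candidate = self.level + delta
        if candidate in ALLOWED_LEVELS:
            self.level = candidate
        return self.level
```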

In conclusion I’m still far far away from any meaningful progress 🙁

What is true for a neuron ?

Whatever method we use to extract meaning from data, truth seems to be elusive. Assume two events A and B are 90% correlated, meaning 90% of the time A precedes B. Is there cause and effect between A and B? What does "precede" mean? Day precedes night and summer precedes fall. So without a time frame, we cannot correlate events A and B. Assume the correlation is 100%. Does that mean a cause-and-effect relationship? We don't know; maybe we did not pool enough data, or the time frame is wrong…

How is this related to my neuron? I can't decide when events are correlated or not, and therefore I cannot move the data upwards into the next layer for further processing. How is this problem solved in the biological neuron? I inferred from the 1960s cat experiments of Torsten Wiesel that synaptic plasticity must stop in the lower layers of the primary visual cortex, and must stop rather early in life. I also assumed that the same process must happen in upper layers as well, but within a longer time frame. I also found a recent article (Rejuvenating Mouse Brains…) talking about a perineuronal net, which prevents neurons from forming new synapses. This tells me that we decide rather randomly what truth is, based on the data available up to some point in time, and subsequent data is classified based on this old data, which cannot be changed anymore (or is perhaps extremely difficult to change, but not impossible).

So how does this help me? I decided that I also need to stop the "learning" process at some point; otherwise the data keeps changing, resulting in ever-changing conclusions. But when to stop it? Hard to say, it depends on the available data. If the available data is enough to represent "reality", then I can confidently stop the learning process. But what does it mean to "represent reality"… I'm thinking that reality is whatever sensory data is available for analysis, coupled with the environment in which the neuron evolves, coupled with some evolutionary programming: objectives such as survival. This all looks way too philosophical to be of any practical use, but perhaps it is all statistics underneath.
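
One possible way to encode that "stop learning at some point" decision in the simulation (illustrative only; the window lengths are made up): each layer gets a plasticity window measured in cycles, shorter for lower layers, and synaptic changes are simply ignored once the window closes, a crude stand-in for something like the perineuronal net.

```python
PLASTICITY_WINDOW = {1: 5_000, 2: 20_000, 3: 80_000}   # layer -> cycles

def plastic(layer, cycle):
    """Return True while the layer is still allowed to change its synapses."""
    return cycle < PLASTICITY_WINDOW.get(layer, float("inf"))
```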