# What is learning?

I said a while back that learning is changing a variable, and I believe that holds true in the most general, abstract sense. Something has to change when something is learned. This definition doesn’t help much on its own, though; these are some questions that need answers before it becomes useful:

1. The variable that has to change, let’s call it x, has to have an initial value. What should that value be? Why? What does it represent? → In my system the abstract value x has an initial value that allows a single synapse to fire the postsynaptic neuron, and it is constrained by some arbitrary values for the action potential threshold and the initial “glutamate” concentration on the postsynaptic side of the synapse.
2. When to change x? This is not clear at all. The input value that would alter x, in this case the firing frequency of the presynaptic neuron, is variable, and it rarely matches x exactly. How much of a deviation from x should be tolerated before changing x? This is the case where the decision is made locally and relies only on the size of the deviation from x. Relying only on the local environment to change x does not seem to work. Temporarily changing x to alert another “decision node” seems more useful, less random, yet this just defers the decision of making a decision: another “decision node” faces the same problem, where instead of x we have y. Deferring a decision should end somewhere, at some final z variable that is in fact a constant. Any push to change that constant should instead produce a feedback loop that promotes action to change the input.

I see some benefits in having multiple decision nodes that make changes in short feedback loops, with a final decision node that promotes action, yet this is still not enough and not clear. Another way of framing question 2 would be: “What is feedback?” While it is the same question, the answer does not seem to be the same. Feedback to what? When you say “this is not a cat”, should that alter multiple decision nodes or just one? The reasons why something is not a cat can be many, so what should change?
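One way to picture the “feedback to what?” problem is to spread a single error signal like “this is not a cat” across every decision node, in proportion to how much each contributed to the answer. This is only a hedged sketch of that one option; the node names, weights, and the proportional rule are all hypothetical illustrations, not part of my actual system.

```python
# Sketch: distribute one scalar error signal across several decision
# nodes, proportionally to each node's contribution to the output.
# All names and numbers below are made up for illustration.

def spread_feedback(contributions, error, rate=0.1):
    """Return a temporary adjustment per node, proportional to its share."""
    total = sum(contributions.values())
    return {
        name: -rate * error * (c / total)  # bigger contributors change more
        for name, c in contributions.items()
    }

# Three hypothetical decision nodes that voted "cat" with different strength.
contributions = {"whiskers": 0.6, "fur": 0.3, "tail": 0.1}
adjustments = spread_feedback(contributions, error=1.0)  # "not a cat"
print(adjustments)  # the "whiskers" node gets the largest correction
```

Whether this is the right attribution rule is exactly the open question: a proportional spread still changes every node a little, even the ones that were not the reason the answer was wrong.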

3. How much to change x? Looking for a minimum does not seem feasible, because it takes too long to accomplish anything. Adding a single AMPA receptor on the postsynaptic side of a synapse has a big effect on the final output; it is not like adding a single calcium ion to the mix. I have no clear idea about this. When I change x in my system, the change follows an exponential decay, but I have not found a dx that links to the rest of the system.

4. When to stop changing x? Just because I detect an increased frequency, increased relative to the expected value, that is, does not mean I should change x indefinitely. But how do I know the change I made is enough? Still based on the expected value of the next decision node? That node would change x until the expected value in the decision node is met, while asking nodes further away whether it should also change the expected value?
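Questions 2 through 4 can be combined into one minimal sketch: a deviation threshold decides *when* to change x, an exponential-decay-style step decides *how much*, the shrinking step decides *when to stop*, and deferral up a chain of nodes ends at a constant. Everything here, the class, the thresholds, the step rule, is a hypothetical illustration of the idea, not my actual implementation.

```python
# Sketch of a chain of decision nodes. Each node holds an expected
# value x; an input only changes x when it deviates beyond a tolerance,
# the step shrinks as x approaches the input (so the change stops on
# its own), and part of the decision is deferred to a parent node.
# The root has no parent and acts as the constant z that never changes.

class DecisionNode:
    def __init__(self, x, threshold, rate, parent=None):
        self.x = x                  # expected input value (e.g. firing frequency)
        self.threshold = threshold  # allowed deviation before x changes (Q2)
        self.rate = rate            # step size of the decaying update (Q3)
        self.parent = parent        # next decision node; None means constant root

    def observe(self, value):
        deviation = value - self.x
        if abs(deviation) <= self.threshold:
            return                  # within tolerance: do nothing
        if self.parent is None:
            return                  # the constant z: any push is ignored here
        # step toward the observed value; the step shrinks as the
        # deviation shrinks, so updating stops by itself (Q4)
        self.x += self.rate * deviation
        # defer part of the decision upward instead of deciding locally
        self.parent.observe(self.x)

root = DecisionNode(x=10.0, threshold=0.0, rate=0.0)        # z: a constant
node = DecisionNode(x=5.0, threshold=1.0, rate=0.5, parent=root)

node.observe(9.0)  # deviation 4.0 > 1.0, so x moves halfway: 5.0 -> 7.0
node.observe(7.5)  # deviation 0.5 <= 1.0, so x stays at 7.0
print(node.x)      # 7.0
print(root.x)      # 10.0 (the constant never moved)
```

The unsolved part is visible even in the sketch: the threshold and rate are arbitrary constants, and the root simply ignores pressure instead of closing a feedback loop that would act on the input.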

Since my last post I haven’t done any work on the code side of the project. I initially had a theory of how this should work, but it proved too simplistic. Then I programmed everything I could think of based on what is known in biology, hoping that by doing so something would eventually start making sense, but that was not the case either. Nothing has become clear. Now I’m again trying to understand the basics of the problem and change the code to fit a certain theory 🙂
