I still can’t define the problem clearly. Anyway, it seems there are 2 ways of learning:
- Bayesian, in which you count the evidence. The downside: if I keep watching the movie Superman and only count occurrences, at some point I should start believing that humans can fly.
- Feedback learning, where one input modulates another type of input. The downside: the feedback is not always right. If you ask a girl out and she says no, it doesn't mean much; you may get correlation without much causation. (A rough sketch of both ideas follows this list.)
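
Just to make the two hypotheses concrete for myself, here's a minimal sketch. Everything in it, including the class names, the probability estimate, and the learning rate, is hypothetical and not actual network code:

```python
# Hypothetical sketch of the two learning modes, not actual network code.

class EvidenceCounter:
    """#1: Bayesian-style learning by counting co-occurrences."""
    def __init__(self):
        self.counts = {}   # (event_a, event_b) -> times seen together
        self.totals = {}   # event_a -> times seen at all

    def observe(self, event_a, event_b):
        self.counts[(event_a, event_b)] = self.counts.get((event_a, event_b), 0) + 1
        self.totals[event_a] = self.totals.get(event_a, 0) + 1

    def belief(self, event_a, event_b):
        # P(b | a) estimated purely from counts -- the Superman problem:
        # watch the movie often enough and "humans can fly" starts looking true.
        if self.totals.get(event_a, 0) == 0:
            return 0.0
        return self.counts.get((event_a, event_b), 0) / self.totals[event_a]


class FeedbackLearner:
    """#2: feedback learning, where a second input modulates the first."""
    def __init__(self, rate=0.1):
        self.strength = {}  # event -> learned strength
        self.rate = rate

    def observe(self, event, feedback):
        # feedback in [-1, +1]; a single "no" barely moves the needle,
        # which is the point -- feedback is noisy and not always right.
        s = self.strength.get(event, 0.0)
        self.strength[event] = s + self.rate * feedback
```

Written out like this, #2 really does look like #1 with an extra signal deciding the sign and size of each count, which is why I keep coming back to the idea that it's a subset.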
So #2 still seems to be a subset of #1, and #1 lacks a clear definition of what is possible or real. The question is how I should implement learning in my network.

In either case, what I struggle with is deciding when something counts as learned. Assume I do counting, and assume my "synaptic strength" is a change in cta (cycles to activation). How many times should I encounter a sequence of events before deciding to change cta, i.e. to "learn"? And by how much should I change it? Mind you, cta is small: I work with an arbitrary default of 35, so I can't change it by much anyway. Bottom line, I can't store much in the cta value, regardless of whether I use #1 or #2.

So what else is left? I'm thinking of not using "learning" at all; it seems pointless to nudge the default cta value by +/- some small number. But that leaves me with either everything is learned or nothing is. I still need some criterion for deciding what to store. If it's not based on the two hypotheses of learning, then what else? I'm very confused about this topic.
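
For the cta question specifically, the only concrete thing I can sketch right now is a threshold rule: count how many times a sequence shows up and only touch cta once the count clears some bar, with the change clamped so it stays near the default of 35. All the numbers below (threshold, step size, bounds) are made up:

```python
DEFAULT_CTA = 35            # the arbitrary default I currently use
MIN_CTA, MAX_CTA = 30, 40   # hypothetical bounds: cta can't move far anyway
LEARN_THRESHOLD = 5         # hypothetical: encounters needed before "learning"
CTA_STEP = 1                # hypothetical: how much cta changes per learning event

class Connection:
    def __init__(self):
        self.cta = DEFAULT_CTA
        self.encounters = 0

    def observe_sequence(self):
        """Called each time the sequence of events fires across this connection."""
        self.encounters += 1
        if self.encounters >= LEARN_THRESHOLD:
            # "learning": lower cta so the target activates sooner,
            # but clamp it -- there is very little room to store anything here.
            self.cta = max(MIN_CTA, self.cta - CTA_STEP)
            self.encounters = 0  # reset and require fresh evidence
```

Even written out, it shows the problem: the whole "memory" of a connection is a handful of integer steps between 30 and 40, which is why it feels like there has to be something else to store.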