Time summation… why?

There are cases where the pre-synaptic neuron fires twice before the post-synaptic neuron fires once. There are at least three such cases to be discussed, but the main question is: why should this be acceptable behavior at all? It leads to situations where the time between the activation of the pre- and post-synaptic neurons is relatively large. I believe LTP (long-term potentiation) is meant to deal with precisely this case. Then we have the opposite: summation over thousands of synapses, which should be where LTD (long-term depression) comes to help.

I've run all the simulations I can with the current model, and the preliminary conclusion is that the synapse is in fact undecided. I could not find any scenario where the result is certain. If the input varies wildly, all the variables of a synapse (4 in total) flip it from "increased strength" to "decreased strength" and back again. I understood that a biological neuron cannot have an infinite number of synapses, but surely a digital one would not be bound by such a limitation? Not true: with the current variables, my model is limited to about 80 synapses per neuron. That number is somewhat arbitrary, but there is no theoretical way it could ever be infinite.

Time summation is not an option either. If the neuron cannot adjust (because the pre-synaptic neuron fires too frequently), we end up with a one-to-one firing relationship between the pre- and post-synaptic neurons, and that synapse breaks, which is observed both in biology and in my simulations. So I'm left with no good options. I need to find a balance and limit the input values accordingly, but so far I have failed. I was convinced I could find a definite solution.
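To make the "pre fires twice before post fires once" case concrete, here is a minimal pair-based STDP sketch. This is not the 4-variable synapse model discussed above; the rule, the parameter names, and the values are generic textbook assumptions, used only to show how large pre-to-post delays produce a weak, ambiguous weight change compared to tight one-to-one pairing.

```python
import math

# Illustrative pair-based STDP parameters (assumed values, not from my model).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def stdp_delta_w(pre_spikes, post_spikes):
    """Sum the pair-based weight change over all pre/post spike pairs."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre before post -> potentiation (LTP-like)
                dw += A_PLUS * math.exp(-dt / TAU_PLUS)
            elif dt < 0:    # post before pre -> depression (LTD-like)
                dw -= A_MINUS * math.exp(dt / TAU_MINUS)
    return dw

# Case from the post: pre fires twice, post fires once much later,
# so the pre-to-post delays are large and the potentiation is weak.
print(stdp_delta_w(pre_spikes=[0.0, 30.0], post_spikes=[90.0]))

# Tight one-to-one pairing for comparison: short delay, strong potentiation.
print(stdp_delta_w(pre_spikes=[0.0], post_spikes=[5.0]))
```

Under these assumed parameters the first scenario yields a much smaller weight change than the second, which is one way to see why a synapse driven by such input can stay "undecided".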
