1+1=2

Let’s abstract away that operation and consider instead: S1S2S3S4 activates S5, where S1 stands for symbol 1 (the number 1), S2 stands for symbol 2 (the + sign), and so on. If the first four symbols are activated, they should then activate symbol 5 (which is the result, the number 2). My assumption is that we actually do NO types of calculation other than sequence association. So even when we appear to do new, complex calculations, we actually use known small steps to reach an unknown result.
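The idea above can be sketched in a few lines. This is a minimal illustration, not a claim about how a real network would implement it: symbols are opaque IDs, and the "calculation" is nothing but a stored mapping from an activated input sequence to the symbol it activates next.

```python
# A pure sequence-association "calculator": no arithmetic is ever
# performed, only lookup of previously stored associations.
associations = {
    ("S1", "S2", "S3", "S4"): "S5",  # "1 + 1 =" activates "2"
}

def activate(sequence):
    # Return the associated symbol, or None if the sequence is unknown.
    return associations.get(tuple(sequence))

print(activate(["S1", "S2", "S3", "S4"]))  # -> S5
print(activate(["S1", "S3"]))              # -> None (no stored association)
```

Note that the mapping itself says nothing about *what* S5 is; it only says *that* it follows, which is exactly the first problem below.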

There are a couple of problems here:

  1. How is it known what symbol S5 is? In other words, is S5 just a label saved in an outside system, separate from the AI algorithm itself? If I were to save S5 as part of the AI algorithm, what would that be?
  2. Assume that S5 is already known as a visual representation of the number 2, which is to say, there is a neuron somewhere that lights up specifically when the system sees the image of the number 2. Should this specialized neuron also light up when seeing the abstract association (S1S2S3S4)?
  3. I could combine 1 and 2, but there is still a problem with this approach. It leads to the idea that for every specific thing we know, there is a specific neuron that encodes it. Do I know more things than the number of neurons in my brain? I’m not sure, because there are many things I know without knowing that I know them… My guess is that we don’t have as many neurons as would be required to store all the information held in our brains. So my conclusion is that we may have single neurons encoding single (specific) objects/labels, but we must also be using multiple neurons in various combinations to store some data. But if information is that distributed, I come back full circle to the first question: how is S5 known?
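The capacity argument in point 3 is easy to make concrete. Assuming (purely for illustration) that a concept is encoded by *which subset* of neurons fires together rather than by one dedicated neuron, the number of representable concepts grows combinatorially rather than linearly:

```python
import math

n_neurons = 20  # a toy population size, chosen only for illustration
k_active = 5    # assume a concept = a specific subset of 5 co-firing neurons

# One dedicated neuron per concept ("one label, one neuron"):
dedicated_capacity = n_neurons

# Distributed scheme: each distinct 5-neuron subset is a concept.
combinatorial_capacity = math.comb(n_neurons, k_active)

print(dedicated_capacity)      # -> 20
print(combinatorial_capacity)  # -> 15504
```

The same 20 neurons jump from 20 labels to 15,504, which is why distributed codes are attractive, and also why the question of how any one pattern (like S5) gets singled out becomes harder.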
