Yup, that time has come. No more wiggle room. This should be the first step toward what I want to build … a new breed of AI. This first step should show how learning is accomplished in such a system: signals should be classified and recognized as the same when seen again. The first showcase I want to build is two-color recognition on a three-layer network: L1 2×2, L2 5×5 and L3 2×2, with a single inhibitory neuron on layers 2 and 3. I’m also considering showing angle recognition on a 3×3, 3×3, 1 network configuration.
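Just to keep the layer sizes straight for myself, here is a minimal sketch of the two topologies as plain Python data. The class and field names are mine, made up for illustration; this is not code from the actual simulator.

```python
# Rough sketch of the two proposed topologies; names and structure are
# illustrative only, not the real simulator's data model.
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    rows: int
    cols: int
    inhibitory: int = 0  # inhibitory neurons attached to this layer

    @property
    def excitatory(self) -> int:
        return self.rows * self.cols


# Two-color recognition: L1 2x2, L2 5x5, L3 2x2,
# with a single inhibitory neuron on layers 2 and 3.
color_net = [
    Layer("L1", 2, 2),
    Layer("L2", 5, 5, inhibitory=1),
    Layer("L3", 2, 2, inhibitory=1),
]

# Possible angle-recognition variant: 3x3, 3x3, 1.
angle_net = [
    Layer("L1", 3, 3),
    Layer("L2", 3, 3),
    Layer("L3", 1, 1),
]

for layer in color_net:
    print(layer.name, layer.excitatory, "excitatory,", layer.inhibitory, "inhibitory")
```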
I did not add color vision, but I added a simpler simulation where different “pigments” (as in cone-cell pigments) result in different firing rates per neuron.
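Roughly, the idea is that each input neuron carries a pigment, and its firing rate is just that pigment’s sensitivity to the presented color. Here is a toy sketch of that mapping; the pigment names, sensitivity values and rate ceiling are invented for illustration.

```python
# Toy sketch: each pigment is a sensitivity table over a couple of "colors",
# and a neuron's firing rate is its pigment's response to the stimulus.
# All numbers below are made up for illustration.
PIGMENTS = {
    "pigment_A": {"red": 0.9, "green": 0.2},
    "pigment_B": {"red": 0.2, "green": 0.9},
}

MAX_RATE_HZ = 100.0  # assumed ceiling on the firing rate


def firing_rate(pigment: str, color: str) -> float:
    """Firing rate (Hz) of a neuron carrying `pigment` when shown `color`."""
    return MAX_RATE_HZ * PIGMENTS[pigment][color]


if __name__ == "__main__":
    for pigment in PIGMENTS:
        for color in ("red", "green"):
            print(f"{pigment} sees {color}: {firing_rate(pigment, color):.0f} Hz")
```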
Looking back, it seems the most troublesome assumption I made was to believe that neurons can work with homogeneous inputs. Even now I can’t fully grasp the implications of not using homogeneous signals, because what I see when I look at a white sheet of paper is a homogeneous white. It is very difficult for me to believe that no two neurons in my brain are the same… very difficult for me to believe that when I see white, my brain first makes a mess of that white and then puts it back together as white.
This could take days or months, or maybe it won’t work at all.