Information theory formalizes methods to encode an image as binary bits for transmission between locations. However, this can require sophisticated algorithms to encode, decode, and store the data over time. For example, a color sensor in a television camera responds to variations in light intensity with a continuous stream of bits that encode stimulus intensity at sequential times. This sensor's output is also multiplexed with the output of other color sensors to reproduce the hue of each pixel in the decoded image. Error correction built into the coding algorithms ensures that noise corrupting specific bits of the transmitted information is corrected when the image is decoded onto a 2D screen.
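The quantization and multiplexing described above can be sketched as follows; the bit width, channel values, and function names are illustrative assumptions, not part of any particular camera or broadcast standard:

```python
# Hypothetical sketch: digitize three color-channel intensities at
# successive sample times, then interleave (time-multiplex) the
# fixed-width codewords into a single binary stream.

def quantize(intensity, bits=8):
    """Map a 0.0-1.0 intensity to a fixed-width binary codeword."""
    level = min(int(intensity * (2 ** bits - 1)), 2 ** bits - 1)
    return format(level, f"0{bits}b")

def multiplex(channels):
    """Interleave per-channel codewords sample time by sample time."""
    stream = []
    for frame in zip(*channels):                   # one sample time
        stream.extend(quantize(v) for v in frame)  # R, G, B in order
    return "".join(stream)

# Three channels (R, G, B) sampled at three successive times.
red   = [0.0, 0.5, 1.0]
green = [1.0, 0.5, 0.0]
blue  = [0.25, 0.25, 0.25]
encoded = multiplex([red, green, blue])
```

A decoder that knows the bit width and channel order can demultiplex the stream back into per-pixel hue values, which is the reconstruction step the text refers to.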
An organismal visual system appears to employ a form of sparse coding, because the rate of spikes generated at a sensory surface such as the retina decreases at successive synaptic stages, from the retinal ganglion cells (RGCs) to the lateral geniculate nucleus (LGN) and in individual cells of subsequent areas such as V1 and inferotemporal (IT) cortex. This reduction in maintained spike firing rate, as neurons spatially converge sensory inputs into cortical perceptual areas that each have larger receptive fields (RFs) and increased numbers of cells, holds for sensory systems in general. Some have interpreted this reduced cellular spike rate, due to spatial and temporal summation at serially convergent synapses, as multiplexed data in a hypothetical temporal or latency code in neurons. Others have presumed that the repetitive information in an image is coded sparsely, with algorithms to decode, or reconstruct, the image from the compressed information. In an effort to copy the nervous system's economy of information transfer, neuromorphic chips condense binary-coded information into time-multiplexed packets, with each packet time- and origin-stamped and addressed to specialized processing units. This avoids the congestion of information at a central processing unit, known as the von Neumann bottleneck, and is analogous to retinal convergence, in which photoreceptors spatially converge onto RGC neurons in an approximate 60:1 ratio in primates before fanning out in an approximate 1:350 ratio of LGN neurons to V1 neurons. The apparent analogies between neural and computational systems are used in other models to code, transmit, and decode still images from a sensor surface to a location that performs cognitive functions upon the information in the image.
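The time- and origin-stamped packet scheme described above can be sketched minimally as follows; the event fields, addresses, and routing function are illustrative assumptions in the style of address-event schemes, not any specific chip's interface:

```python
# Hypothetical sketch: each spike-like event is packaged with a
# timestamp and its origin address, then routed to the specialized
# processing unit it is addressed to, avoiding a single central queue.

from collections import defaultdict, namedtuple

Event = namedtuple("Event", ["timestamp_us", "origin", "destination"])

def route(events):
    """Deliver each packet to its addressed unit, in time order."""
    inboxes = defaultdict(list)
    for ev in sorted(events, key=lambda e: e.timestamp_us):
        inboxes[ev.destination].append((ev.timestamp_us, ev.origin))
    return dict(inboxes)

events = [
    Event(120, origin="pixel_17", destination="unit_A"),
    Event(40,  origin="pixel_3",  destination="unit_B"),
    Event(75,  origin="pixel_17", destination="unit_A"),
]
delivered = route(events)
```

Because each unit receives only the packets addressed to it, already time-ordered, no single processor must serialize the entire stream, which is the congestion the text identifies as the von Neumann bottleneck.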
The nervous system is conventionally interpreted to take ‘snapshots’ of successive visual images with each shift of the retinal surface during fixational eye movements; but for these images to be stored in memory, they must be sparsely coded to conform with presumed anatomical and actual electronic coding limitations. These previous interpretations make only partial use of the neurophysiological evidence and inevitably commingle information theory with experimental data in a way that does not resolve the relevant issues meaningfully but instead creates further confounded complexity. A large body of experimental data supports neural spike synchrony as a mechanism that binds perceptual processing of sensory input and communication between cortical locations. However, recent analyses from reputable laboratories show that little information is transmitted or communicated between neural locations in the synchronized, or phase-locked, state. Embodiments that use this synchronization of locations as an alternative to information transmission by temporal coding between locations are the subject of this invention.
One prospective embodiment would improve the 3D stereovision goggles used in virtual reality. These goggles synthesize binocular stereo imagery from algorithms that compute the information in monocularly changing imagery, but without precisely correlated temporal synchrony, which creates dizziness and vertigo in the wearer after a short time. Another embodiment improves the accuracy of a robotic grasp as a target nears; present systems suffer feedback delays, causing inaccuracy and blurring when the target image is repetitively reconstructed from updating information at a central processor.
In a similar vein, conventional machine- or deep-learning algorithms repeatedly cycle streams of temporally coded image information, comparing false positives with previously learned image templates, to learn incrementally from the repetitive feedback and probabilistically recognize a spatial pattern. This repetitive cycling of temporally coded information through layers of connections takes substantial time and computational resources. Given that neurons in a synchronized state do not transmit information, the following invention describes several embodiments that are more efficient than the conventional transmission and cyclic feedback of temporally coded information.
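The repetitive feedback cycle described above can be illustrated with a minimal sketch; a single perceptron-style unit stands in for the layered networks the text describes, and the data, learning rate, and epoch count are illustrative assumptions:

```python
# Hypothetical sketch: a linear unit is nudged over many passes until
# its output matches a stored target "template", showing why
# conventional learning must repeatedly cycle the same temporally
# coded input through its connections.

def train(samples, labels, epochs=200, lr=0.1):
    """Perceptron-style updates: cycle over the data many times."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):                        # repeated cycling
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                         # feedback from mismatch
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# A tiny AND-gate "pattern", learned only after many passes.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 0, 0, 1]
w, b = train(X, Y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
```

Even this trivial pattern requires many passes of error feedback before the weights settle, which scales into the substantial time and computational cost the text attributes to deep networks.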