Artificial Intelligence (AI) is a branch of computer science that deals with intelligent behavior, learning, and adaptation in machines. Research in AI is traditionally concerned with producing machines to automate tasks requiring intelligent behavior. While many researchers have attempted to create AI systems, there is very limited prior work on comprehensive cognitive architectures.
For example, there is no comprehensive brain-like architecture that links physiology with anatomy and the functionalities derived from them. However, numerous neuroscience-inspired modal architectures have been proposed (see literature reference nos. 1-11 in the “List of Cited References” below). Functional characterizations of these architectures typically draw on aspects from very different levels of biologically-inspired description. For example, connectionists often base their architectural proposals on abstract properties assumed to be involved in the information processing of the brain. Others are more biological in their underlying modeling; however, they do not explain the wide body of experimental data.
A description of psychology-based architectures is provided since these represent the state of the art in cognitive architectures. While several cognitive architectures have been proposed and implemented, two popular and commonly used architectures are ACT-R (see literature reference no. 12) and Soar (see literature reference no. 13). ACT-R is a parallel-matching, serial-firing production system with a psychologically motivated conflict resolution strategy. Soar is a parallel-matching, parallel-firing rule-based system where the rules represent both procedural and declarative knowledge. However, several limitations of these traditional cognitive architectures exist (see literature reference no. 18).
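The match-fire distinction between the two architectures can be illustrated with a minimal sketch. The following is not code from ACT-R or Soar; the rule representation, the utility-based conflict resolution, and the toy working memory are hypothetical simplifications used only to contrast serial firing with parallel firing.

```python
def match(rules, wm):
    """Parallel match phase: find every rule whose conditions hold in working memory."""
    return [r for r in rules if r["cond"](wm)]

def cycle_serial(rules, wm):
    """ACT-R-like cycle: conflict resolution selects a single matched rule to fire."""
    matched = match(rules, wm)
    if matched:
        winner = max(matched, key=lambda r: r["utility"])  # simplified conflict resolution
        winner["act"](wm)
    return wm

def cycle_parallel(rules, wm):
    """Soar-like cycle: all matched rules fire in the same cycle."""
    for r in match(rules, wm):
        r["act"](wm)
    return wm

# Toy working memory and a single production rule (illustrative only)
wm = {"goal": "greet", "greeted": False}
rules = [
    {"name": "say-hi", "utility": 1.0,
     "cond": lambda wm: wm["goal"] == "greet" and not wm["greeted"],
     "act": lambda wm: wm.update(greeted=True)},
]

cycle_serial(rules, wm)
print(wm["greeted"])  # True: the rule matched and was fired
```

In both variants the match phase is conceptually parallel over all rules; the architectures differ in whether one rule or every matched rule fires per cycle.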
Implementing such a complex system of neural-like components is a major challenge and, as such, there is very little existing work to draw on. Hecht-Nielsen and Lansner (see literature reference nos. 14 and 15) have built large systems, though none as all-encompassing in size and complexity as the present invention. Additionally, Sporns' work on motifs in brain networks (see literature reference no. 16) is a mathematical optimization technique for obtaining network topologies that resemble brain networks across a spectrum of structural measures. Further, Andersen (see literature reference no. 17) has suggested building brain-like computers via software development, using models at a level between low-level networks of attractor networks and associatively linked networks. However, it is not clear that these approaches constitute neuromorphic architectures or that they support the large body of neuroscience data.
The computer program language that is part of the present invention shares some features with so-called “skeleton parallelism” programming languages, such as P3L and OcamlP3L (see literature reference nos. 23, 24, and 25). However, such languages are general-purpose programming languages that are not in any way optimized for programming brain-like systems, and they are not part of the comprehensive suite of tools and technologies embodied in the present invention.
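The skeleton-parallelism idea behind languages such as P3L and OcamlP3L can be sketched briefly: parallel structure is expressed by composing a small set of higher-order patterns (skeletons) rather than by explicit thread management. The sketch below is illustrative only; the names `farm` and `pipe` follow common skeleton vocabulary, and the thread-pool realization is one assumed implementation among many.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def farm(worker, inputs, nworkers=4):
    """Farm skeleton: apply one worker function to many inputs in parallel."""
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(worker, inputs))

def pipe(*stages):
    """Pipeline skeleton: compose stages so each stage's output feeds the next."""
    return lambda x: reduce(lambda value, stage: stage(value), stages, x)

# Example: a two-stage pipeline farmed over a stream of inputs
stage = pipe(lambda x: x * x, lambda x: x + 1)
print(farm(stage, range(5)))  # [1, 2, 5, 10, 17]
```

Because the skeletons are ordinary higher-order functions, they compose freely (a pipeline of farms, a farm of pipelines), which is the structural flexibility such languages provide in a general-purpose, rather than brain-specific, setting.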
Research in neuroscience and cognitive psychology over the last several decades has made remarkable progress in unraveling the mysteries of the human mind. However, the prior art is still quite far from building and integrating computational models of the entire gamut of human-like cognitive capabilities. As discussed above, very limited prior art exists in building an integrated and comprehensive architecture.
A challenge present in the art is to develop a cognitive architecture that is comprehensive and covers the full range of human cognition. Current approaches are not able to provide such a comprehensive architecture. Architectures developed to date typically solve single- or multiple-modality problems and are highly specialized in function and design. In addition, there are often very different underlying theories and architectures for the same cognitive modal problem. This presents a significant challenge in seamlessly integrating these disparate theories into a comprehensive architecture such that all cognitive functionalities can be addressed. Computational design and implementation of these architectures is another major challenge. These architectures must be amenable to implementation as stand-alone or hybrid neuro-AI architectures via software/hardware and to evaluation in follow-on phases.
Thus, a continuing need exists for a computational systems modeling and architecture development framework for rapid prototyping and implementation of biologically-inspired computing modules in a flexible, extensible, adaptable, scalable, and modular manner.