1. Field of the Invention
The present invention relates to neural networks, and more specifically to generating stable neural networks for autonomous use in non-trivial environments.
2. Introduction
Neural systems are mathematical or computational models consisting of an interconnected group of nodes, otherwise known as neurons or simple processing elements, which process information using a connectionist approach. Some neural systems may be constructed so as to adapt their structure based on internal or external factors. In order to create a neural system that demonstrates reasonable behavior, the neural system must have a certain level of complexity, but that complexity is difficult to maintain in a stable form. In humans, additional complexity without stability manifests itself in the form of psychological conditions or tendencies, such as Narcissistic Entitlement Syndrome or overly perfectionist tendencies. These conditions are characterized by too many choices and too few rules to truncate a continuing search for a better or more perfect solution. At the opposite end of the psychological spectrum are too many rules with too few choices, resulting in sociopathic tendencies owing to over-certainty in solutions and to discarding and devaluing the input of others. In machines, additional complexity without stability likewise leads to behavior analogous to human psychological defects, both in a machine's internal interactions among its various subsystems and in its interactions with its environment and/or peers.
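The connectionist model described above, in which interconnected processing elements combine weighted inputs from one another, can be sketched minimally as follows. The class and node names are purely illustrative assumptions for exposition and are not part of the specification.

```python
# A minimal sketch of a connectionist processing element: each node
# computes a weighted sum over the activations of the nodes feeding it.

class Node:
    def __init__(self, name):
        self.name = name
        self.inputs = []  # list of (source Node, connection weight) pairs

    def connect(self, source, weight):
        # Add a weighted connection from another node into this one.
        self.inputs.append((source, weight))

    def activate(self, activations):
        # activations maps node name -> current activation value.
        return sum(activations[src.name] * w for src, w in self.inputs)

# Two input nodes feeding one processing element.
a, b, out = Node("a"), Node("b"), Node("out")
out.connect(a, 0.5)
out.connect(b, -0.25)
print(out.activate({"a": 1.0, "b": 2.0}))  # 0.5*1.0 + (-0.25)*2.0 = 0.0
```

A structure-adapting system of the kind the passage mentions would additionally modify the `inputs` lists (adding, removing, or reweighting connections) in response to internal or external factors.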
One existing approach is to create a purely rule-driven system, but rule-driven, or reflexive, sociopathic-trend (emotionally dominant) systems almost invariably encounter exceptions to the established set of rules and are quickly perturbed by these exceptions and variations. Another approach is to create a purely choice-driven system, which is narcissistic-trend (intellectually dominant) and becomes overburdened by too many choices. Neither of these approaches can provide a stable synthetic neural system for any non-trivial autonomous environment. A rule-driven robot on an assembly line is one example of operation in a trivial autonomous environment: the robot is able to quickly perform a set task given set inputs. However, if the input changes, the robot's location changes, or the robot's abilities are hindered (such as by a malfunction), the robot is unable to autonomously cope with these changes and requires human intervention.
One approach in prior art systems provides a rigorous method of neural system stability analysis, attempting to catalog every possible state in a given neural system, but that approach yields a prohibitively large number of states if the system is not constructed with stability as an architectural driver from the beginning. Such systems often include requirements to identify unstable interactions between elements of neural systems and to provide guidance on their correction.
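The combinatorial reason the exhaustive cataloging approach becomes prohibitive can be illustrated with a simple count. The figures below assume, purely for illustration, that each element of the system can take one of a fixed number of discrete states, so the joint state space grows exponentially with the number of elements.

```python
# Illustrative count of joint states for a system of n elements,
# each with k possible states: the catalog must cover k**n entries.

def state_count(n_elements, states_per_element=2):
    return states_per_element ** n_elements

for n in (10, 50, 100):
    print(n, state_count(n))
# Even 100 two-state elements yield 2**100 (about 1.27e30) joint
# states, far beyond what any catalog-every-state analysis can
# enumerate after the fact.
```

This is why the passage notes that such analysis is tractable only when stability is an architectural driver from the beginning, which constrains the reachable portion of the state space rather than enumerating all of it.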
Another prior art approach can be labeled “progressive mimicry”. Rather than understanding and duplicating the thought processes and reflexive behaviors behind human actions, progressive mimicry copies only the external manifestations of those thought processes and reflexive behaviors. Essentially, this approach attempts to copy human behavior rather than independently generating behavior. Copying the actions and mannerisms of another is not autonomous behavior; such a system can only duplicate what it already knows. Complexity is fundamental to any form of true autonomy.
Accordingly, what is needed in the art is an improved way of generating stable neural systems for use in non-trivial autonomous environments.