The automatic recognition of human behavior by an automated device is desirable in many fields. Accordingly, it is often desirable to train an automated device to identify behaviors involving person-centric articulated motions, such as picking up a cup or kicking a ball. A variety of methods have been developed that attempt to teach an automated device to recognize such actions. For example, in some methods, the automated device may participate in a “learning by example” process in which a labeled dataset of videos is used to train the device to perform classifications based on discriminative or generative methods. A variety of other statistical approaches have likewise been developed to train automated devices to recognize actions.
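As a purely illustrative sketch of the “learning by example” process described above, a nearest-centroid classifier may be trained on labeled feature vectors and then asked to label a novel example; the feature values and action labels below are invented placeholders rather than features from any real video dataset.

```python
# Illustrative sketch only: a nearest-centroid classifier trained on
# labeled feature vectors. Feature values and action labels are toy
# placeholders, not derived from any real video dataset.

def train(examples):
    """Compute one centroid per action label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def sq_dist(label):
        return sum((c - f) ** 2 for c, f in zip(centroids[label], features))
    return min(centroids, key=sq_dist)

# Toy labeled training set: (feature vector, action label).
examples = [
    ([1.0, 0.1], "pick_up_cup"),
    ([0.9, 0.2], "pick_up_cup"),
    ([0.1, 1.0], "kick_ball"),
    ([0.2, 0.9], "kick_ball"),
]
centroids = train(examples)
print(classify(centroids, [0.95, 0.15]))  # → pick_up_cup
```

Such a classifier illustrates the general pattern, and also its weakness: a novel input far from every training centroid is still forced into one of the known labels, which is one form of the ungraceful degradation noted below.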
Unfortunately, these statistical approaches often fall short of the desired recognition levels. For example, they may suffer from overlearning and from ungraceful performance degradation when the automated device is confronted with novel circumstances. Further, these statistical methods may fail to account for physical phenomena, such as gravity and inertia, or may accommodate such phenomena only poorly. Accordingly, there exists a need for improved systems and methods that address these drawbacks.