Traditional voice recognition systems can recognize speech using a speech module configured to convert sound signals into text. While such systems are useful for certain textual transcriptions, they are limited in functionality and traditionally require a predetermined “voice print” of a speaker or re-speaker and/or the use of a “pitch and catch” question-and-response protocol to accurately translate sound signals into text. The output of such systems is likewise limited to a textual representation of the sound signals.
Although such systems have been satisfactory in the art, improved sound analysis and recognition systems are needed.