Computers have become an integral part of society. Every day, people grow more dependent on this technology to facilitate both work and leisure activities. A significant drawback of such technology is its “digital” nature as compared to the “analog” world in which it functions. Computing technology operates in a digital domain that requires discrete states to be identified in order for information to be processed. In simple terms, information must be input into a computing system as a series of “on” and “off” states. Humans, however, live in a distinctly analog world where occurrences are never completely black or white, but instead fall somewhere among shades of gray. Thus, a main difference between digital and analog is that digital requires discrete states that are disjunct over time (e.g., distinct levels), while analog is continuous over time. Since humans naturally operate in an analog fashion, computing technology has evolved to reduce the impact of interfacing with “nondigital entities,” or humans.
Handwriting and speech recognition technologies have progressed dramatically in recent times to facilitate these digital computing interfaces. They enable a computer user to easily express themselves and/or input information into a system. Because handwriting and speech are basic skills of a civilized society, they are learned by a majority of people as a societal communication requirement established long before the advent of computers. Thus, no additional learning curve is required of a user to employ these methods for computing system interaction. However, a much greater burden is placed on the computing system itself to process these types of interfaces. So, typically, as it becomes easier for a human to interface with a machine, the burden on the machine increases dramatically when it is required to learn from a human. In computing terms, this burden equates to a requirement for a very fast “processor” that can process enormous amounts of information in a very short amount of time. Because processing power varies, a distinction is normally made between “real-time” processing and general processing. When human interaction is involved, humans typically need “feedback” within a short period of time or they will lose interest and/or assume something has gone wrong. Thus, for “ergonomic” reasons, an even greater emphasis is placed on the computing system to respond within a “human-factored” time frame or, in other words, as close to real-time as possible.
Extreme processor workloads are especially evident when computing systems are employed to interpret either handwriting or speech data. This is because, for a given letter or sound, there are a multitude of variations that the system must account for. In earlier forms of the technology, a computing system would attempt to process every conceivable variation from a database of possibilities. This required extensive computation and generally produced neither very accurate results nor “real-time” interaction with the user. Typically, users would have to speak slowly and pause for speech recognition systems, or would have to submit handwriting samples and wait for analysis from handwriting recognition systems. Obviously, any equation with an infinite number of possibilities can, theoretically, take an infinite amount of time for a computing system to solve. This, however, is unacceptable to a typical user. Thus, the technology has evolved to speed up the process and create a much more efficient means of assessing these types of user inputs.
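The processing burden of the brute-force approach described above can be sketched as a simple cost model: if every conceivable variation in a database must be compared against the input, the work per recognition grows with the product of the number of symbols, the stored variations per symbol, and the feature size of each sample. The figures below are illustrative assumptions, not measurements from any actual system.

```python
def comparisons_per_input(n_symbols, variations_per_symbol, feature_dim):
    """Cost of exhaustively matching one input against every stored variation.

    Each stored variation is compared feature-by-feature, so the total number
    of elementary comparisons is the product of the three factors.
    """
    return n_symbols * variations_per_symbol * feature_dim

# Illustrative (assumed) figures: 26 letters, 500 stored variations each,
# 64 features per handwriting sample -> 832,000 comparisons per character.
cost = comparisons_per_input(26, 500, 64)
```

Even at these modest assumed scales, a single character demands hundreds of thousands of comparisons, which is why early systems could not deliver real-time interaction.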
Initial attempts at achieving gains in this area included speeding up the processor and training the user to adapt to a given set of inputs. Handwriting systems required a user to learn stroke patterns or submit many writing samples to facilitate recognition of the data. Likewise, speech recognition systems limited the number of “commands” or spoken words allowed for processing, or required a user to recite long sessions of predetermined prose to aid in quickly ascertaining that user's spoken words. The technology has continued to develop, reaching a point where a system can accurately and quickly interact with a user. This has led to an increased focus on systems that can adapt readily to a multitude of users. One way of achieving this type of system is to utilize a “classification” system. That is, instead of attempting to confine data to “right” or “wrong,” the system allows it to fall within a particular “class.” An example would be a user whose handwriting varies slightly from day to day, or who has a cold while trying to “talk” to a computer. In either example, a traditional system might not understand what was written or spoken, because the system is attempting to make a black and white assessment of the input data. With a classification-based system, however, a negative response might only be given if the handwriting was so varied as to be illegible, or the spoken word so varied as to be indistinguishable from noise.
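The classification idea above can be sketched in a few lines. The prototype vectors, labels, and rejection threshold below are invented purely for illustration; a real recognizer would learn them from training data and use far richer features. Instead of demanding an exact (“right or wrong”) match, the sketch assigns the input to the nearest class and rejects only inputs too far from every class:

```python
import math

# Hypothetical prototype feature vectors for two classes (say, the letters
# "a" and "o"), standing in for a trained model.
PROTOTYPES = {"a": [0.9, 0.1, 0.4], "o": [0.8, 0.2, 0.9]}

def classify(sample, reject_threshold=2.0):
    """Assign a sample to its nearest class, rejecting only extreme outliers."""
    def dist(p, q):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    label, d = min(((c, dist(sample, p)) for c, p in PROTOTYPES.items()),
                   key=lambda pair: pair[1])
    # Small day-to-day variation still lands in a class; only input far from
    # every class is treated as illegible or indistinguishable from noise.
    return label if d <= reject_threshold else None
```

A slightly varied sample such as `[0.85, 0.15, 0.35]` still classifies as "a", whereas a wildly out-of-range sample returns `None`, mirroring the illegible/noise case described above.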
Thus, classification systems give new life to the human-machine interfaces employed by computing systems. They allow for much greater flexibility and require less learning by the user in order to utilize the system. The main drawback of these types of solutions is that they remain computationally intensive. In other words, they offer greater flexibility, but at the expense of requiring very fast computers or of sluggish response times on a typical processor. A general rule for these types of systems is that greater accuracy requires longer processing times. This penalty typically falls on the user, who must either accept lower recognition accuracy or wait longer for the recognition to occur.
To operate, these systems employ a “classifier,” which interprets data and decides to which class the data belongs. Classifiers enable computing systems to recognize the data input by a user. They can be developed in many different ways and built by any number of methods. However, with current technology, classifiers are still very complex and require extensive processing power, slowing the analysis of recognition samples. Although mainframes, supercomputers, and even extravagant desktop computing systems may be able to handle this type of interface adequately, it is typically precluded from mobile computing systems due to the amount of processing required. This factor becomes increasingly evident as the computing market evolves towards a mobile one, requiring small, easily portable, user-friendly devices.