Converting sign language to text, and text back to sign language, is very useful for communicating with deaf and hearing-impaired people, and available technology ought to be applied in exploring any such assistance that can be extended. Many attempts have been made to automate this conversion process in both directions with minimal physical intervention.
Early attempts concentrated on using a camera and image processing techniques to classify the different patterns of different postures in a sign language. The majority of these attempts use image and pattern matching techniques to compare captured sign language gestures against a pre-stored database, such as image patterns or an eigenspace of images, in order to convert sign language postures into text words. The drawback of these technologies is the intensive computation associated with any image processing technique; consequently, most of these tasks can only be performed off-line.
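The eigenspace matching described above can be sketched as follows. This is an illustrative outline only, not the method of any particular prior-art system: flattened gesture images are projected onto the principal components of a pre-stored training set, and a captured image is classified by its nearest pre-stored gesture in that reduced space. All function names and the toy data are assumptions for illustration.

```python
import numpy as np

def build_eigenspace(training_images, n_components=2):
    """Compute the mean image and top principal components (PCA via SVD)
    of a set of flattened, equal-sized gesture images."""
    X = np.asarray(training_images, dtype=float)        # (n_samples, n_pixels)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]                      # components: (k, n_pixels)

def project(image, mean, components):
    """Project one flattened image into the eigenspace."""
    return components @ (np.asarray(image, dtype=float) - mean)

def classify(image, mean, components, reference_coords, labels):
    """Return the label of the nearest pre-stored gesture in eigenspace."""
    coords = project(image, mean, components)
    dists = np.linalg.norm(reference_coords - coords, axis=1)
    return labels[int(np.argmin(dists))]
```

The computational cost lies in acquiring, flattening, and projecting every captured frame, which is why such approaches have typically been confined to off-line processing.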
Other attempts have been made to use sensors, usually attached to the fingers, to track the gesture path, the motion of the fingers, or the rotation of the forearm. While these technologies require less computing power, they have the drawback that only a limited set of sign language postures can be recognized, owing to the limitations of the sensors. For instance, where only the positions of the fingers can be detected, conversion is limited to static hand postures such as English alphanumeric characters; recognizing the gestures used extensively in sign language requires detecting both the movement of the hand itself and the location of the hand.
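The limitation described above can be illustrated with a minimal sketch of a finger-sensor approach, assuming a glove whose flex sensors report how bent each finger is. The posture table and sensor convention below are hypothetical: each static posture maps to one character, and any gesture involving hand movement simply has no entry.

```python
# Hypothetical static fingerspelling table: a tuple of five flags
# (thumb, index, middle, ring, pinky), where 1 = finger extended.
FINGERSPELLING_TABLE = {
    (0, 1, 1, 1, 1): "B",   # thumb folded, four fingers extended
    (1, 1, 0, 0, 0): "L",   # thumb and index extended
    (1, 0, 0, 0, 1): "Y",   # thumb and pinky extended
}

def classify_posture(flex_readings, threshold=0.5):
    """Binarize raw flex-sensor values (0.0 = straight .. 1.0 = fully bent)
    and look up the resulting static posture. Returns None for any posture
    not in the table -- including every movement-based gesture."""
    key = tuple(1 if r < threshold else 0 for r in flex_readings)
    return FINGERSPELLING_TABLE.get(key)
```

Because the lookup sees only instantaneous finger positions, two signs that share a hand shape but differ in hand movement or location are indistinguishable, which is precisely the drawback noted above.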
It is therefore an objective of the present invention to provide a system and method for detecting hand movement for the translation of sign language.
It is further an objective of the present invention to provide a system and method for detecting hand shape or posture for the translation of sign language.
It is another objective of the present invention to provide a system and method for detecting hand position for the translation of sign language.
It is further an objective of the present invention to provide a system and method for detecting hand or palm orientation for the translation of sign language.
It is still further an objective of the present invention to provide a system and method for converting sign language instantaneously to voice or text without requiring intensive computing power.
It is yet another objective of the present invention to provide a system and method for using multiple sensors for the purpose of translating sign language.
It is another objective of the present invention to provide a system and method for converting sign language into both English and non-English languages.
It is further an objective of the present invention to provide a system and method for converting text or voice to sign language display.