Sign language has been developed to facilitate communication among aurally handicapped persons. Using sign language, an aurally handicapped person can converse directly with another aurally handicapped person nearby through hand gestures, body gestures, facial expressions, and so on. When two aurally handicapped persons are apart from each other, real-time communication has been possible by performing sign language gestures over videophone devices.
On the other hand, research on sign language translation systems has recently been conducted actively so that an aurally handicapped person who uses sign language can converse with a normal person who does not know sign language (Reference: Masaru Oki, Hirohiko Sagawa, Tomoko Sakiyama, Eiji Ohira, Hiromichi Fujisawa: Information Processing Media Research Society, 15-6, Information Processing Society of Japan, 1994). The sign language translation system is composed of a sign-language-to-Japanese translation subsystem and a Japanese-to-sign-language translation subsystem.
(1) The sign-language-to-Japanese translation subsystem is composed of a sign language recognition unit, which recognizes sign language and converts it into a train of sign language words, and a sign-language-to-Japanese translation unit, which translates the recognized sign language words into Japanese. In the sign language recognition unit, hand gestures are input using a glove-based input device; each input hand gesture is compared with standard hand gestures, and the sign language word having the closest standard hand gesture is selected. The sign-language-to-Japanese translation unit translates a sign language word train into Japanese using a correspondence table between sign language words and Japanese words together with conversion rules from sign language sentences to Japanese sentences.
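The recognition and word-level translation steps described above can be sketched as follows. This is only an illustrative model, not the original system: the gesture feature vectors, the word entries, and the Japanese readings are all hypothetical, and a simple Euclidean distance stands in for whatever gesture-matching measure the actual recognition unit uses.

```python
import math

# Hypothetical standard-gesture dictionary: each sign language word is paired
# with a feature vector summarizing its standard hand gesture (entries and
# feature values are illustrative only).
STANDARD_GESTURES = {
    "hello": [0.9, 0.1, 0.3],
    "thanks": [0.2, 0.8, 0.5],
    "go": [0.4, 0.4, 0.9],
}

# Hypothetical correspondence table between sign language words and Japanese words.
SIGN_TO_JAPANESE = {"hello": "konnichiwa", "thanks": "arigatou", "go": "iku"}

def distance(a, b):
    """Euclidean distance between two gesture feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize_word(input_gesture):
    """Select the sign language word whose standard gesture is closest to the input."""
    return min(STANDARD_GESTURES,
               key=lambda w: distance(input_gesture, STANDARD_GESTURES[w]))

def translate_word_train(word_train):
    """Map each recognized sign language word to Japanese via the table."""
    return [SIGN_TO_JAPANESE[w] for w in word_train]

word = recognize_word([0.85, 0.15, 0.35])  # closest standard gesture is "hello"
japanese = translate_word_train([word])
```

In the actual subsystem, sentence-level conversion rules would then reorder and inflect the word train; the sketch stops at the word-for-word table lookup.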
(2) The Japanese-to-sign-language translation subsystem is composed of a Japanese-to-sign-language translation unit, which translates Japanese into sign language, and a sign language generation unit, which displays the sign language as an animation using three-dimensional computer graphics. The Japanese-to-sign-language translation unit analyzes Japanese and translates it into a sign language word train using a correspondence table between Japanese words and sign language words together with conversion rules from Japanese sentences to sign language sentences. The sign language generation unit generates sign language animations using a (sign language word)-(animation data) dictionary, which stores sets of indexes of sign language words and the corresponding hand gesture and facial expression data registered beforehand. To generate a sign language animation, the animation data corresponding to the sign language words in a word train are retrieved, and a human body model is moved based on the retrieved data. The movement of the model is made to appear continuous by interpolating the gaps between the sign language words.
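The dictionary lookup and gap interpolation performed by the sign language generation unit can be sketched as below. The animation data here are reduced to short lists of two-component "poses" purely for illustration; the real dictionary would hold full hand gesture and facial expression data for a three-dimensional body model.

```python
# Hypothetical (sign language word)-(animation data) dictionary: each entry is
# a list of key poses, reduced here to small joint-angle vectors.
ANIMATION_DICT = {
    "hello": [[0.0, 0.0], [1.0, 0.5]],
    "go": [[0.2, 0.9], [0.8, 0.4]],
}

def interpolate(pose_a, pose_b, steps):
    """Linearly interpolate intermediate poses to bridge the gap between words."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        frames.append([a + (b - a) * t for a, b in zip(pose_a, pose_b)])
    return frames

def generate_animation(word_train, gap_steps=2):
    """Retrieve each word's animation data and fill the gaps by interpolation,
    so the body model's movement appears continuous."""
    frames = []
    for word in word_train:
        clip = ANIMATION_DICT[word]
        if frames:
            # Insert interpolated frames between the end of the previous word
            # and the start of the next one.
            frames.extend(interpolate(frames[-1], clip[0], gap_steps))
        frames.extend(clip)
    return frames

frames = generate_animation(["hello", "go"])
```

With two-pose clips and two interpolated gap frames, the word train ["hello", "go"] yields six frames in total; a renderer would then drive the body model through this frame sequence.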
However, the sign language translation system was basically developed for direct communication between an aurally handicapped person and a normal person who are close to each other, and it is not obvious how the configuration could simply be applied to a long distance call (conversation).
If the conventional sign language translation system is extended to long distance calls, several problems arise.
In the first place, the configuration of the device becomes large in scale and complicated. The above-mentioned sign language translation system is supposed to be a stand-alone system, and when it is extended to long distance calls, the following form would ordinarily be considered: the sign-language-to-Japanese translation subsystem and the Japanese-to-sign-language translation subsystem are composed separately, and the two are connected to each other through a network.
However, in a conventional sign language translation system, the sign-language-to-Japanese translation subsystem and the Japanese-to-sign-language translation subsystem share the dictionary database and the correspondence table between sign language words and Japanese words used in their translation units, in order to economize on storage capacity.
For example, if, for the sake of long distance calls, the sign-language-to-Japanese translation subsystem and the Japanese-to-sign-language translation subsystem are separated and made independent of each other, with the sign-language-to-Japanese translation subsystem provided on the side of the aurally handicapped person and the Japanese-to-sign-language translation subsystem provided on the side of the normal person, then identical translation data have to be provided in duplicate, which naturally makes the device configuration large in scale and complicated.
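The sharing that the stand-alone system relies on can be made concrete with a small sketch. The table entries are hypothetical; the point is that one stored correspondence table serves both translation directions, so splitting the subsystems across a network forces each side to carry its own copy of the same data.

```python
# Hypothetical shared correspondence table between sign language words and
# Japanese words. In the stand-alone system, this single table is stored once.
SIGN_TO_JAPANESE = {"hello": "konnichiwa", "thanks": "arigatou"}

# The reverse direction is derived by inverting the same table, so no second
# copy needs to be stored.
JAPANESE_TO_SIGN = {jp: sign for sign, jp in SIGN_TO_JAPANESE.items()}

def sign_to_japanese(words):
    """Word-for-word translation used by the sign-language-to-Japanese side."""
    return [SIGN_TO_JAPANESE[w] for w in words]

def japanese_to_sign(words):
    """Word-for-word translation used by the Japanese-to-sign-language side."""
    return [JAPANESE_TO_SIGN[w] for w in words]
```

Once the two translation units run on separate machines, each needs the full table locally, and every dictionary update must be applied to both copies.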
In the second place, there is another problem in that it is difficult to use an existing network for long distance calls (conversation). When the sign-language-to-Japanese translation subsystem is provided on the aurally handicapped person's side and the Japanese-to-sign-language translation subsystem on the normal person's side, translated Japanese sentences or sign language animations must be transmitted between the two subsystems. In particular, transmitting sign language animations involves transmitting a large quantity of image data, so carrying out long distance calls requires a well-prepared network infrastructure capable of high speed transmission of large volumes of data. Image transmission is possible with present videophone facilities; in the case of sign language, however, unless the subtle shapes and movements of the hands and the like are accurately transmitted and displayed, misunderstandings or erroneous recognition may occur and hinder communication.
Therefore, up to now, an aurally handicapped person who uses sign language has had no means of conversing easily with a normal person in a distant place who does not know sign language; instead, they have communicated by transmitting characters or pictures using facsimile. For an aurally handicapped person who wants to talk in sign language, communicating with a normal person in a distant place who does not know sign language has thus been troublesome.
The purpose of the present invention is to provide a simple device with which an aurally handicapped person who uses sign language can communicate with a normal person in a distant place who does not know sign language.
Another purpose of the present invention is to provide a device which enables an aurally handicapped person who uses sign language to communicate, through an existing network, with a normal person in a distant place who does not know sign language.