When persons who speak different languages meet, a language barrier impedes communication. They may attempt to communicate by conveying very limited meanings through looks or gestures, or by using a text translation service of an Internet service provider. However, such approaches are severely limited compared to a case in which each person communicates in his/her mother language.
Automatic speech translation technology has been developed to enable persons who speak different languages to communicate with each other.
The combination of a smartphone and a wearable device provides an environment in which automatic speech translation is even more convenient to use. When an automatic speech translation service is used with only a smartphone, a person must hold the smartphone at a position where his/her voice can be recognized, that is, in the vicinity of his/her face, and thus has the inconvenience that one hand is not free. The use of a wearable device, by contrast, allows both hands to remain free. The smartphone and the wearable device are connected through a protocol such as Bluetooth Low Energy (BLE), and the wearable device may also control the start and end of the automatic speech translation service.
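The wearable-as-remote-control arrangement described above can be sketched as follows. This is a minimal, hypothetical illustration: the `BleLink` class merely simulates the message channel between the devices (a real implementation would use a platform BLE stack, e.g. GATT characteristics and notifications), and all class and message names are assumptions for illustration, not part of the described service.

```python
class BleLink:
    """Simulated BLE channel between a wearable and a smartphone."""

    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        # Register a callback invoked for each incoming message.
        self._handlers.append(handler)

    def send(self, message):
        # Deliver the message to every subscriber on the link.
        for handler in self._handlers:
            handler(message)


class TranslationApp:
    """Smartphone side: starts/stops translation on wearable commands."""

    def __init__(self, link):
        self.active = False
        link.subscribe(self._on_message)

    def _on_message(self, message):
        if message == "START_TRANSLATION":
            self.active = True
        elif message == "STOP_TRANSLATION":
            self.active = False


class Wearable:
    """Wearable side: a tap controls the session, so both hands stay free."""

    def __init__(self, link):
        self._link = link

    def tap_start(self):
        self._link.send("START_TRANSLATION")

    def tap_stop(self):
        self._link.send("STOP_TRANSLATION")


link = BleLink()
app = TranslationApp(link)
watch = Wearable(link)

watch.tap_start()   # session becomes active without touching the phone
watch.tap_stop()    # session ends the same way
```

The point of the sketch is the control flow: the smartphone hosts the translation session, while the wearable issues only lightweight start/stop commands over the low-energy link.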
Such automatic speech translation services have used a connection method in which personal information, such as a partner's phone number or the identification number and ID of a speech translation service subscriber, is utilized (1st generation), and a connection method in which, when a partner is in close proximity, both terminals recognize each other after being brought into contact through a gesture such as bump-to-bump or infrared pointing (2nd generation). However, these conventional methods have the inconvenience that a user and his/her partner must confirm each other's intention to use speech translation and manually set up the connection according to a predetermined rule.
Starting a speech translation service between users through such a manual connection raises several problems. Each user must execute a speech translation app, and even when the app is running, the users must still confirm their intention to translate, for example by bringing their terminals into contact. In addition, the mother language of the partner must at least be identified before the speech translation service can start.
Furthermore, when both hands are occupied, e.g., when a waiter/waitress is taking an order and serving a table or a taxi driver is driving, the conventional manual connection methods cannot provide the speech translation service smoothly.