1. Field of the Invention
The present invention relates generally to increasing a character recognition rate of an image received through a camera, and more particularly, to an apparatus and method for increasing a character recognition rate by extracting store names within a predetermined radius using a location information system and comparing the extracted store names with character information received through a camera of a mobile phone.
2. Description of the Related Art
With the increased popularity of mobile phones equipped with a camera, a variety of different service scenarios are being developed. In particular, work is currently under way on a service in which a store name is recognized by the camera of the mobile phone and additional information related to the store name is then provided to a user of the mobile phone. Another service, intended for use while traveling, recognizes a signboard and translates the signboard for the traveler.
FIG. 1 is a flowchart illustrating a conventional operation for recognizing characters from a signboard in a mobile phone.
Referring to FIG. 1, a user captures an intended signboard using a camera of the mobile phone in step 101. In step 103, a text area is extracted from the captured image and converted to a black and white binary image. The binary text is segmented on a character basis in step 105, and in step 107, distortion, such as noise, is compensated for in each character so that the character can be recognized normally. In step 109, each character is recognized by a character recognizer, typically using best matching, exact matching, or the like.
In step 111, to verify whether the text obtained by combining the compensated characters has been recognized successfully, it is determined whether the text is included in a database. Commonly, the database includes a dictionary function for determining whether the text has been recognized correctly. If the text is included in the database, the recognition result is displayed on an output portion of the mobile phone in step 113. Therefore, the user may search for related additional information. However, when the text is not included in the database, the user is notified that no valid text has been recognized in step 115.
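The conventional flow of steps 101 through 115 can be sketched as follows. This is a minimal, hypothetical illustration: the image-processing stages are reduced to toy stand-ins, and none of the function names or the sample dictionary come from an actual mobile-phone OCR implementation.

```python
# Illustrative stand-in for the dictionary database used in step 111.
DICTIONARY = {"ebook", "coffee", "market"}

def recognize_character(glyph):
    # Step 109: per-character recognition. A real recognizer would apply
    # best matching against trained templates; this toy version simply
    # passes the (already-recognized) glyph through.
    return glyph

def recognize_signboard(glyphs):
    # Steps 103-107 (text-area extraction, binarization, segmentation,
    # and distortion compensation) are assumed already done: `glyphs`
    # is a list of cleaned single-character images.
    text = "".join(recognize_character(g) for g in glyphs)  # step 109
    if text in DICTIONARY:  # step 111: dictionary verification
        return text         # step 113: display the recognition result
    return None             # step 115: notify the user of the failure

print(recognize_signboard(list("ebook")))  # recognized text found in dictionary
print(recognize_signboard(list("fboek")))  # misrecognized text fails the lookup
```

Note that the dictionary check only confirms whether the combined text is a known word; as discussed below, it cannot verify proper nouns such as store names that are absent from the dictionary.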
When an intended signboard is captured by the camera of the mobile phone, it is not easy to analyze the captured image and recognize a text included in the captured image because of the various fonts, background images, and colors used on signboards. Further, even among different images of the same signboard, the recognition rate of the same text differs depending on lighting and the capturing angle. Further, when the text included in a signboard is a store name, the store name is usually a proper noun, and therefore, it will not be recognized using the above-described dictionary function. Consequently, store names are difficult to recognize.
FIGS. 2A and 2B illustrate a conventional character recognition order. For example, referring to FIG. 2A, when a store name “” is recognized, the store name is divided on a character basis and best matching is applied to the individual characters, thus producing a recognition result. For each character, character candidates with first to fifth priority levels are extracted, and only the characters with the highest priority levels are selected as the output recognition result. While this technique may lead to an accurate recognition result, a wrong recognition result may be obtained according to the angle and lighting of image capturing, like “” illustrated in FIG. 2A.
Referring to FIG. 2B, when a store name “ebook” is captured, the store name is divided on a character basis and best matching is applied to the individual characters, thus producing a recognition result. For each character, character candidates with first to fifth priority levels are extracted, and only the characters with the highest priority levels are selected as the output recognition result. While this technique may lead to an accurate recognition result, a wrong recognition result may be obtained according to the angle and lighting of image capturing, like “fboek” illustrated in FIG. 2B. Currently, there is no way to search for accurate additional information with such a wrong recognition result.
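The per-character best-matching step described for FIG. 2B can be sketched as follows. This is a hypothetical illustration: each character yields a ranked candidate list, and only the top-ranked candidate survives. The candidate characters and scores below are invented to show how poor lighting can push a similar-looking glyph above the correct one.

```python
def best_match(candidates):
    # candidates: ranked (character, score) pairs, i.e. the first-to-fifth
    # priority levels; only the highest-scoring candidate is kept.
    return max(candidates, key=lambda pair: pair[1])[0]

# Invented candidate lists for the captured word "ebook". Under distortion,
# the correct glyph can score slightly below a visually similar one.
captured = [
    [("f", 0.61), ("e", 0.58), ("t", 0.20)],  # actually "e", misread as "f"
    [("b", 0.90), ("h", 0.40)],
    [("o", 0.95), ("0", 0.50)],
    [("e", 0.70), ("o", 0.66)],               # actually "o", misread as "e"
    [("k", 0.88), ("x", 0.30)],
]

print("".join(best_match(c) for c in captured))  # yields the wrong result "fboek"
```

Because each character is decided independently, the lower-priority (but correct) candidates “e” and “o” are discarded, which is precisely why a single noisy glyph corrupts the whole recognized word.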