Examples of a pattern recognition device are described in non-patent literature 1 (“Easy-to-Understand pattern recognition”, Ishii et al., Ohmsha, Ltd., Aug. 20, 1998, pp. 2-3), patent literature 1 (Japanese patent publication JP-P2002-259911A, page 14 and FIG. 1), and non-patent literature 2 (“Recognition engineering—pattern recognition and application—”, authored by Toriwaki, Institute of Television Engineers of Japan text series, CORONA PUBLISHING CO., LTD., Mar. 15, 1993, pp. 153-154).
FIG. 1 is a block diagram showing the pattern recognition device disclosed in non-patent literature 1. This pattern recognition device has a preprocessing section 101, a feature extracting section 102, a recognition calculation section 103, and a recognition dictionary 104. In this pattern recognition device, an inputted pattern is first preprocessed by the preprocessing section 101. The feature extracting section 102 then extracts from the preprocessed pattern only the essential features that are necessary for recognition. The recognition calculation section 103 checks the features extracted by the feature extracting section 102 against the recognition dictionary 104 to output a class of the inputted pattern.
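The flow of FIG. 1 can be sketched in Python as follows. The concrete operations here (unit-length normalization as preprocessing, truncation to two components as feature extraction, and nearest-reference matching as the recognition calculation) are hypothetical stand-ins chosen for illustration, not the operations of non-patent literature 1:

```python
import math

def preprocess(pattern):
    # Hypothetical preprocessing: normalize the pattern to unit length.
    norm = math.sqrt(sum(x * x for x in pattern)) or 1.0
    return [x / norm for x in pattern]

def extract_features(pattern):
    # Hypothetical feature extraction: keep only the first two components
    # as the "essential features necessary for recognition".
    return pattern[:2]

def recognize(features, dictionary):
    # Check the extracted features against the recognition dictionary:
    # output the class of the nearest reference pattern.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(dictionary, key=lambda cls: dist(features, dictionary[cls]))

# Recognition dictionary: one reference pattern per class.
dictionary = {"A": [1.0, 0.0], "B": [0.0, 1.0]}
pattern = [3.0, 0.5, 2.0]
cls = recognize(extract_features(preprocess(pattern)), dictionary)
print(cls)  # class of the inputted pattern
```

In this sketch, as in FIG. 1, the dictionary matching step only ever sees the extracted features, which is why the choice of features determines what the device can distinguish.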
In order to execute precise recognition, it is important which features are extracted from the inputted pattern.
In patent literature 1, a technology is described whose purpose is to provide a pattern recognition device that can realize high recognition accuracy using features of a lower number of dimensions. FIG. 2 is a block diagram showing the pattern recognition device described in patent literature 1. This pattern recognition device includes a data input section 201, a feature extracting section 202, a feature selecting section 203, a feature selecting dictionary 204, a feature selecting dictionary correcting section 205, a recognition dictionary correcting section 206, a recognition dictionary 207, a recognition section 208, and a result outputting section 209. The feature extracting section 202 extracts n-dimensional features from an inputted pattern or a learning pattern. The feature selecting section 203 consists of a first feature converting section, which selects h-dimensional features from the n-dimensional features by a nonlinear function, and a second feature converting section, which selects m-dimensional features from the h-dimensional features by a linear function or a nonlinear function. The recognition dictionary 207 consists of a group of m-dimensional reference patterns. The recognition section 208 checks the m-dimensional features of the inputted pattern against the m-dimensional reference patterns to output a recognition result of the inputted pattern. At the time of learning, the recognition dictionary correcting section 206 corrects the recognition dictionary 207, and the feature selecting dictionary correcting section 205 corrects the feature selecting dictionary 204.
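The two-stage feature selecting section 203 can be illustrated by the following Python sketch, which maps n-dimensional features to h dimensions by a nonlinear function and then to m dimensions by a linear map. The particular nonlinear function (tanh applied to the first h components) and the weight matrix are assumptions made for illustration only, not the functions of patent literature 1:

```python
import math

def first_conversion(features_n, h):
    # First feature converting section (hypothetical): obtain
    # h-dimensional features by a nonlinear function, here tanh
    # applied to the first h of the n components.
    return [math.tanh(x) for x in features_n[:h]]

def second_conversion(features_h, weights):
    # Second feature converting section: a linear function, here an
    # m x h weight matrix mapping h-dimensional features to m dimensions.
    return [sum(w * x for w, x in zip(row, features_h)) for row in weights]

features_n = [0.5, -1.2, 2.0, 0.3]   # n = 4 extracted features
features_h = first_conversion(features_n, h=3)
weights = [[1.0, 0.0, 0.5],          # m = 2, h = 3 (illustrative values)
           [0.0, 1.0, -0.5]]
features_m = second_conversion(features_h, weights)
print(len(features_m))  # m-dimensional features passed to recognition
```

The point of the cascade is that the recognition section only has to match m-dimensional features against m-dimensional reference patterns, with m smaller than the original n.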
Furthermore, in non-patent literature 2, forward sequential selection is exemplified as a method for selecting features. In forward sequential selection, when M features are to be selected from N features, a certain feature is first selected, and further features are then added to it one by one. For example, when a combination X(i−1) of (i−1) features has already been selected, the i-th feature is determined by forming the (N−i+1) combinations obtained by adding each one of the remaining (N−i+1) features to the combination X(i−1), and selecting from this group the combination whose evaluation value of goodness is maximum.
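The forward sequential selection described above amounts to a greedy loop, sketched below in Python. The evaluation function used here (a sum of fixed per-feature merit scores) is a toy stand-in for a real goodness criterion such as class separability; in practice the evaluation would depend on the interaction of the selected features:

```python
def forward_sequential_selection(n_features, m, evaluate):
    # Greedy forward selection: starting from the empty combination,
    # add at each step the single remaining feature whose inclusion
    # maximizes the evaluation value, until m features are selected.
    selected = []
    remaining = list(range(n_features))
    while len(selected) < m:
        best = max(remaining, key=lambda f: evaluate(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy evaluation: the goodness of a combination is the sum of fixed
# per-feature merits (a stand-in for a real separability criterion).
merits = [0.2, 0.9, 0.1, 0.7, 0.4]
chosen = forward_sequential_selection(5, 3, lambda s: sum(merits[f] for f in s))
print(chosen)  # → [1, 3, 4]
```

At step i the loop evaluates exactly the (N−i+1) candidate combinations described in the text, so the total number of evaluations is on the order of N·M rather than the combinatorial number of all M-element subsets.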
Furthermore, as related technologies, the inventor is aware of patent literature 2 (Japanese patent publication JP-Heisei 10-55412A), patent literature 3 (Japanese patent publication JP-P2001-92924A), patent literature 4 (Japanese patent publication JP-P2002-251592A), patent literature 5 (Japanese patent publication JP-P2006-153582A), and patent literature 6 (Japanese patent publication JP-Heisei 9-245125A).