1. Field of the Invention
The present invention relates to the detection, identification, and extraction of relationship and composition features (objects); reduced-instruction-set recording; pattern matching; storage and retrieval of inter-frame object motion mapping and course prediction within an image scene; search-area isolation of pattern-matched objects; and prediction of the presence of pattern-matched objects in scenes where the features of those objects are not present.
More particularly, the invention is directed to using features of objects, or feature domains identified by the methods of the invention, to encode or re-encode imagery for the purposes of: (a) reducing the size of images, (b) improving the rendering accuracy and performance of image content within or outside existing image format methods, and (c) providing a non-intuitive scheme for secure transmission and storage of imagery. The invention is also directed to calculating the degree of dissimilarity between features of objects to be stored for subsequent retrieval, for the purposes of identifying anomalies between stored object features and comparisons with master templates (training models) used for pattern matching.
2. Background of the Invention and Related Art
The present invention is driven by the process problem of identifying application-relevant objects of interest in logically organized data: text fields, symbols, morphology (e.g., living cells), energy waves, and textures. Other methods in the related art that may be substituted for the present invention primarily approach the problem as a known linear-distance-and-orientation solution: specifically, matching against the known linear distances of a user-defined master template. This strict distance-relationship approach works only when the linear distance applies; in many real-world scenarios the position of the object of interest will vary, making the approach relevant only when the distance and relative orientation are similar.
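For illustration, the fixed-linear-distance approach described above can be sketched as follows. All names, coordinates, and tolerances here are hypothetical and are not part of the disclosed invention; the sketch merely shows why the prior-art prediction holds only when the template's distance and orientation are preserved.

```python
# Hypothetical sketch of the prior-art fixed-offset approach: a field's
# expected location is the anchor's location plus an offset vector that
# was measured once on a user-defined master template.

def predict_location(anchor_xy, template_offset):
    """Predict a field's position from a detected anchor plus the
    master-template offset (a fixed linear distance and orientation)."""
    ax, ay = anchor_xy
    dx, dy = template_offset
    return (ax + dx, ay + dy)

def within_tolerance(predicted, actual, tol):
    """The match succeeds only when the real layout preserves the
    template's linear distance and relative orientation."""
    px, py = predicted
    x, y = actual
    return abs(px - x) <= tol and abs(py - y) <= tol

# On a page laid out exactly like the template, the prediction holds...
print(within_tolerance(predict_location((10, 20), (100, 0)), (110, 20), 5))  # True
# ...but a reformatted or shifted document breaks the assumption.
print(within_tolerance(predict_location((10, 20), (100, 0)), (150, 40), 5))  # False
```

As the second call shows, any material change in the object's position defeats the stored-offset lookup, which is the limitation the background discussion identifies.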
Some other methods in the related art take an upfront approach of preprocessing the digital image to make it more suitable for negating false positives by:
1. Threshold approaches and methods for binarizing the image scene;
2. Removing contiguous lines;
3. Removing contiguous pixel blobs or 'specks'.
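A minimal sketch of these preprocessing steps, in plain Python without an imaging library, is given below. The threshold value, run lengths, and sample raster are illustrative assumptions only.

```python
# Illustrative sketch of the prior-art preprocessing steps listed above:
# (1) global thresholding to binarize, then (2) and (3) removing
# horizontal foreground runs that are too long (contiguous lines) or
# too short (isolated specks).

def binarize(gray_rows, threshold=128):
    """Step 1: threshold a grayscale raster into 0/1 foreground pixels."""
    return [[1 if px < threshold else 0 for px in row] for row in gray_rows]

def remove_runs(binary_rows, min_len=2, max_len=10):
    """Steps 2 and 3: zero out horizontal runs of foreground pixels whose
    length falls outside [min_len, max_len]."""
    out = []
    for row in binary_rows:
        cleaned = row[:]
        i = 0
        while i < len(row):
            if row[i] == 1:
                j = i
                while j < len(row) and row[j] == 1:
                    j += 1
                if not (min_len <= j - i <= max_len):
                    for k in range(i, j):
                        cleaned[k] = 0
                i = j
            else:
                i += 1
        out.append(cleaned)
    return out

gray = [[200, 30, 30, 30, 200, 30, 200]]  # one 3-pixel stroke, one speck
binary = binarize(gray)                   # [[0, 1, 1, 1, 0, 1, 0]]
print(remove_runs(binary))                # [[0, 1, 1, 1, 0, 0, 0]]
```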
All of these false positive elimination techniques will be useless when the linear distance and relative orientation are not similar.
A fuzzy-logic approach is used when objects that do not meet the algorithm's thresholds are to be identified by changing the weighting criteria to accommodate the calculated results of a given image scene. The result of skewing (mathematically 'pushing') the data in a fuzzy-logic approach is that the identified object is too often not an application-relevant object element and not an object of interest. This approach therefore yields inferior results by virtue of the high probability of designating too many false positives, and those false positives require an alternative process or a human to solve the original problem. Using an alternate process or a human to manually designate the objects of interest, however, removes any efficiencies that might otherwise be gained by using a fuzzy-logic-based follow-on approach to solve the required linear-distance and orientation calculations.
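The failure mode described above can be illustrated with a toy weighted-score classifier. The feature names, weights, and cutoff are purely hypothetical; the point is that re-tuning the weights until a drifted object passes also admits irrelevant objects.

```python
# Hypothetical illustration of the fuzzy-logic weighting problem: when a
# true field misses the fixed cutoff, the weights are "pushed" toward
# whatever the scene yields, and a noise blob then passes as well.

def score(candidate, weights):
    """Weighted sum of per-feature scores (all names are illustrative)."""
    return sum(weights[k] * candidate[k] for k in weights)

true_field = {"contrast": 0.9, "position": 0.4, "shape": 0.8}
noise_blob = {"contrast": 0.8, "position": 0.9, "shape": 0.2}

strict = {"contrast": 0.3, "position": 0.5, "shape": 0.2}
# The true field drifted in position, so it fails the strict 0.7 cutoff...
print(score(true_field, strict) >= 0.7)   # False

# ...so the weighting criteria are skewed to accommodate the scene.
relaxed = {"contrast": 0.5, "position": 0.3, "shape": 0.2}
# Now the true field passes, but so does the noise blob: a false positive.
print(score(true_field, relaxed) >= 0.7)  # True
print(score(noise_blob, relaxed) >= 0.7)  # True
```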
Other methods in the related art use optical character recognition (OCR) technology as a preprocess for identifying character-string objects of interest, based on a master string-template approach. A primarily OCR-driven approach relies on the following to obtain statistically significant locations of character objects of interest:
(a) Image scenes that contain sufficient levels of object-element contrast to make dynamic-range composition changes (waveform analysis, various methods of transformation) in order to effectively perform step (b) and/or fine-object (close-radius) edge-detection processes;
(b) Directed removal of noise and intersecting objects to enable effective character-recognition processes;
(c) The image-scene presence of exact and/or rules-based exceptional character-set string detection and recognition;
(d) The preconditions of (a)-(c) for enabling the object-element vectors of handwritten character data.
A known variation on the OCR approach is to segment a character into its component elements, so that if some of the character elements or parts are obscured, not present, or intersected by other objects or noise, the remaining pieces can be subjected to a modified OCR process.
In machine-vision-focused and generalized targeting methods, a variety of approaches are used:
(a) Using an imposed template or mask that is guided (either manually or via some other method of automation) to its target;
(b) Calculating an azimuth drift or relative value of the nadir of the image-collection array to estimate orientation, which may also be constrained by approach (a), above.
Other methods in the related art do not incorporate an accurate, and therefore efficient, method of identifying character versus non-character objects that does not rely on some form of optical character recognition (OCR) pattern recognition. An OCR-based approach is ineffective for non-character objects, since OCR pattern recognition only identifies the presence of a matched or potentially matched character formation. If the inference that no formed character is present is used to indicate a non-character object, too many false positives result for the designations to be statistically significant.
The present invention is in stark contrast to all known prior art, which relies on solving for:
(1) the relative distance between estimated objects of interest in order to find the estimated location of the object type that matches the designated master object type; and
(2) the estimated location of the coordinate boundaries of objects that significantly match the designated master object types along an established vector.
All raster images are divided up using a 2D coordinate system based upon the image's picture-element (pixel) density, or resolution. This method of dividing up the image area is typically identified in the related art and public-domain knowledge as 'page navigation,' as taught by Matsuo et al. in U.S. Pat. No. 5,852,675. See also chapters 3 and 4, viz., Photometric Restoration of Documents and Geometric Restoration of Documents, and the cited work of W. Brent Seales, in the doctoral dissertation of George V. Landon, Jr., College of Engineering, University of Kentucky, 2008, available at: http://cslinux.eku.edu/landong/papers/landon-dissertation.pdf. Page navigation in raster image types uses a Cartesian coordinate system for document-imaging, medical, and machine-vision applications.
Commonly, existing-art technologies use a standard numerical integration method from calculus to count units between estimated objects of interest as horizontal or vertical line units, or sets of measurement. Once the distance of an estimated object of interest is deduced from a numerical integration method, it is used to determine:
1. Whether the relative anticipated or assumed distance corresponds to the parameters that were designated by the human manual or machine-trained master-object selection process; and
2. If the deduced distance does correspond to the parameters given in item 1 above, then the estimated location can be calculated.
However, the entire approach used in known-art technologies relies upon whether the sought object of interest resides in the anticipated location. If it does, its estimated coordinate boundaries are calculated from the numerical-integration deductive process, using the relative distance and anticipated vector from the other objects of interest that were used to derive the sought object's estimated location.
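The integration-style counting described above can be sketched as follows. The scene, profile, and template gap are illustrative assumptions: foreground pixels are summed down each column (a crude numerical integration over the rows), the empty-column gap between two detected objects is counted in pixel units, and the result is compared against the distance recorded on the master template.

```python
# Sketch of prior-art distance counting: integrate foreground pixels per
# column, count the gap between the first two objects, and accept the
# location only if the measured gap matches the template's recorded gap.

def column_profile(binary_rows):
    """Sum each column of a binary raster (integration over the rows)."""
    return [sum(col) for col in zip(*binary_rows)]

def gap_between_objects(profile):
    """Count the empty columns between the first two nonzero runs."""
    cols = [i for i, v in enumerate(profile) if v > 0]
    for a, b in zip(cols, cols[1:]):
        if b - a > 1:            # first break between nonzero columns
            return b - a - 1
    return 0

scene = [[1, 1, 0, 0, 0, 1, 1],
         [1, 1, 0, 0, 0, 1, 1]]
template_gap = 3
measured = gap_between_objects(column_profile(scene))
print(measured == template_gap)   # True: the anticipated location is accepted
```

Note that the acceptance test is purely positional: if the second object shifts by even a few columns, the measured gap no longer matches and the object is missed entirely, which is exactly the estimate-only limitation discussed next.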
The limitation of using known-art methods is that, at best, the location is only an estimate and not the actual location of the coordinate boundaries of the object of interest. Any post-process can only consider the part of the contained object of interest that fits within the estimated boundaries; in fielded OCR post-processes, for example, only some of the pixels can be read to generate output text.
It is instead more desirable to calculate the actual location of object of interest boundaries. For process applications, the objective is to encompass the entire object of interest, as is disclosed in the present invention. In OCR applications, this is the entire string of character data.
It is also desirable to be liberated from finding the coordinate boundaries of objects of interest directly from numerical-integration calculations based on known, or estimated-to-be-known, objects. An example of this principle can be seen in document imaging, where photocopying typically either reduces or enlarges the document image to fit the full output to the paper. The locations of potential objects of interest will be materially altered by this process when the paper image is digitally captured and estimated using an integration method. This outcome is certain because integration methods do not take into account changed proportions and/or changed orientations of potential objects of interest. The present invention is not constrained by such proportional obstacles, since it does not use any numerical integration method to calculate the distance between objects of interest.
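The photocopy example above reduces to simple arithmetic, sketched below with hypothetical numbers: a pixel distance recorded at 100% scale no longer matches after an 85% reduction copy, because integration-based counting ignores the changed proportions.

```python
# Illustrative arithmetic for the photocopy scaling problem: a fixed
# pixel-distance rule recorded on the original page fails once the page
# is rescaled, since the counted distance changes with the proportions.

def scaled_distance(distance_px, scale):
    """Pixel distance between two objects after the page is rescaled."""
    return round(distance_px * scale)

template_distance = 400   # pixels between anchor and field at 100% scale
tolerance = 10            # pixels of slack allowed by the lookup

copied = scaled_distance(template_distance, 0.85)    # 85% reduction copy
print(copied)                                        # 340
print(abs(copied - template_distance) <= tolerance)  # False: lookup fails
```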
Another example of the desirability of not being confined to finding objects of interest by calculating relative distances can be seen in document-imaging applications in which sought field objects dynamically change position from image to image, either slightly or radically, due to print restrictions in a batch run; e.g., a transaction on a phone record may be indented or reformatted due to a billing code, warning, or advertisement placed in line on a statement.
Accuracy is tied to efficiency in object-recognition tasks because any improperly designated object generates a false positive, which negates the intended automation benefits of the technology. Thus, a method is needed that finds the precise location of objects of interest in digital images, using human manual and/or machine-vision-trained input to supply a set of example objects, or a single master image object, from which to derive and record the locations of the objects of interest (a process of mapping).