Object recognition involves analyzing a two-dimensional image to identify specific objects or areas of interest. Many systems use a three-step process: a pre-screener identifies regions of interest or candidate objects; an intermediate discriminator evaluates each of those regions in more detail; and the remaining regions are then analyzed further to positively identify the objects and establish their locations in the image.
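The three-step structure described above can be sketched as a simple pipeline. Everything here is illustrative: the function names `detect_objects`, `prescreen`, `discriminate`, and `identify` are hypothetical placeholders, not part of any specific system described in this document.

```python
def detect_objects(image, prescreen, discriminate, identify):
    """Hypothetical three-stage detection pipeline sketch.

    1. A fast pre-screener nominates candidate regions.
    2. An intermediate discriminator rejects weak candidates.
    3. A final stage labels and localizes what remains.
    """
    candidates = prescreen(image)                       # cheap, broad first pass
    survivors = [r for r in candidates if discriminate(image, r)]
    return [identify(image, r) for r in survivors]      # e.g. (label, location)


# Usage with trivial stand-in stages: the pre-screener returns fixed
# candidate locations and the discriminator rejects one of them.
regions = [(0, 0), (5, 5), (9, 9)]
hits = detect_objects(
    None,
    lambda img: regions,
    lambda img, r: r != (5, 5),
    lambda img, r: ("target", r),
)
```

The point of the staged design is that each successive stage is more expensive per region but sees fewer regions, so overall cost stays manageable, provided the pre-screener does not pass too many candidates downstream.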
Two-dimensional imaging systems, such as video or Forward-Looking Infrared (FLIR) systems, analyze incoming information on a nearly continuous basis and must identify and locate any areas of interest captured by the camera or sensor. One common method of image analysis is blob analysis. This method, the basics of which are known to those skilled in the art, is fast and reasonably accurate under ideal conditions. It uses connectivity analysis to group adjacent pixels into blobs; for each identified blob, properties such as area, perimeter, and position can then be computed. Its main limitation is that its speed and efficiency can be severely degraded when extraneous features are present in the image, because each extra feature demands additional processing and computing resources.
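A minimal sketch of blob analysis via connectivity analysis follows. It assumes a binary image (a list of rows of 0/1 values) and 4-connectivity, and it reports the blob properties named above (area, perimeter, and position as a centroid); real systems would operate on thresholded sensor frames and may use 8-connectivity or run-length methods instead.

```python
from collections import deque

def find_blobs(image):
    """Group adjacent foreground pixels (value 1) into blobs using
    4-connectivity flood fill, then report area, perimeter, and
    centroid (row, col) for each blob."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and not seen[r][c]:
                # Flood-fill one connected component.
                queue = deque([(r, c)])
                seen[r][c] = True
                pixels = []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Perimeter: count edges between a blob pixel and the
                # background (or the image border).
                perim = 0
                for y, x in pixels:
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if not (0 <= ny < rows and 0 <= nx < cols) \
                                or image[ny][nx] == 0:
                            perim += 1
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                blobs.append({"area": area, "perimeter": perim,
                              "centroid": (cy, cx)})
    return blobs


# A 2x2 blob and an isolated single-pixel blob.
img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 1]]
blobs = find_blobs(img)
```

The cost of this pass grows with the number of foreground features, which illustrates the limitation noted above: every extra feature in the image becomes another component to trace and measure.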
This blob-finding method is used in FLIR systems to find warm objects of approximately the size and shape of the expected target. It was well suited to older FLIR systems, which could not detect solar radiation directly, only the heat it produced. Newer, smaller FLIR systems admit shorter wavelengths and are often blinded by reflected sunlight: the need for smaller sensors, and therefore smaller apertures, forced them to take in a broader spectrum to achieve the same resolution. Unlike the larger-aperture FLIR systems, which rendered anything that did not directly radiate heat as black, the small-aperture systems capture solar radiation reflected from nearly every feature in their field of view. This greatly increases the number of potential false alarms, because the sensor detects everything in its field of view that reflects infrared solar energy. Since there is no reliable way to determine, immediately upon detection, whether a pixel belongs to the background or to an object, the system must analyze every incoming pixel to decide whether it is part of a potential target.
Confronted with so many regions requiring further evaluation, these systems face sharply increased processing requirements, and their efficiency may be reduced significantly. In the worst case, a system may be rendered effectively inoperable by its inability to keep up with the flow of incoming data.