Computer vision utilizes a variety of image feature detectors to identify features, or "points of interest," within an image. Image feature detectors may identify edges, corners, blobs (i.e., regions of interest points), and/or ridges of an analyzed image, depending on the particular algorithm/detector. For example, Canny algorithms and Sobel filters perform edge detection; Harris detectors perform corner detection; and Laplacian of Gaussian (LoG), determinant of Hessian (DoH), and Difference of Gaussian (DoG) detectors identify corners and blobs within an image. Feature detection systems oftentimes utilize a combination of algorithms and detectors to more accurately identify features of an analyzed image.
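To illustrate one of the detectors named above, the following is a minimal sketch of Sobel edge detection on a single-channel image, written directly in NumPy. The 3x3 kernels are the standard Sobel kernels; the helper function name and the test image are illustrative, not taken from any particular system.

```python
import numpy as np

def sobel_edges(img):
    """Estimate gradient magnitude of a grayscale image with Sobel kernels.

    Hypothetical helper for illustration: convolves the image interior
    with the horizontal and vertical Sobel kernels and combines the two
    responses into a gradient magnitude.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                  # vertical gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# Illustrative input: a vertical step edge (left half dark, right half bright).
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_edges(img)
# The gradient magnitude is nonzero only in the columns where the
# intensity jumps, i.e., where the 3x3 window straddles the step.
```

A full detector would typically follow this gradient step with thresholding (and, for Canny, non-maximum suppression and hysteresis) to produce a binary edge map.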
Common feature detectors, such as Speeded-Up Robust Features (SURF), Scale-Invariant Feature Transform (SIFT), Canny, Harris, and Sobel, detect and describe features of single-channel images (i.e., grayscale images). Accordingly, multi-channel images (i.e., color images) must be transformed into a single-channel image prior to feature detection, which can result in significant loss of image information. For example, the pixel values of the single-channel grayscale image may be generated as a linear combination of the corresponding pixel values of each channel of the multi-channel image. As such, the contrast between multi-channel image pixels having distinct colors but the same single-channel grayscale representation is lost in the grayscale transformation. Although some algorithms utilize perceptual-based color models (e.g., CSIFT uses Kubelka-Munk theory, which models the reflected spectrum of colored bodies), they apply a global color-to-grayscale mapping, which likewise results in a loss of information.
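The information loss described above can be shown concretely. The sketch below assumes an equal-weight channel average as the linear combination (perceptual weightings such as ITU-R BT.601 behave analogously), and the pixel values are illustrative:

```python
import numpy as np

# Two visually distinct RGB pixels (hypothetical values chosen for illustration).
pixel_a = np.array([90.0, 120.0, 150.0])   # a bluish color
pixel_b = np.array([120.0, 120.0, 120.0])  # a neutral gray

# One possible linear combination: an equal-weight average of the channels.
gray_a = pixel_a.mean()
gray_b = pixel_b.mean()

# Both distinct colors collapse to the same single-channel value,
# so the contrast between them is lost after the transformation.
```

Any single-channel feature detector operating on the transformed image therefore cannot distinguish these two pixels, even though they differ plainly in the original multi-channel image.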