Many non-modernized aircraft lack the ability to capture, in a computer-readable medium, information related to cockpit instrumentation. As a result, the well-known "black box" is absent from many of these aircraft. Accordingly, these aircraft remain exempt from existing federal regulations that require the capture of instrument readings during flight.
Many of the instruments within the cockpit must be read by interpreting one or more needles and/or gauges in combination. Pilots are forced to derive an accurate instrument reading by viewing the several needles and gauges together at a particular moment in time and then mentally processing that information. Consequently, the present technique used to rapidly determine an instrument reading is subject to human error.
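The mental step a pilot performs, converting a needle position to a value on the instrument's scale, can be sketched as a simple calibration mapping. The following is a minimal illustration, assuming a hypothetical instrument with a linear scale between two calibrated endpoints; real gauges may require a nonlinear calibration table, and all angles and values shown here are invented for the example.

```python
def angle_to_reading(angle_deg, angle_min, angle_max, value_min, value_max):
    """Linearly interpolate a gauge value from a needle angle.

    Assumes the gauge scale is linear between its calibrated
    endpoints (angle_min -> value_min, angle_max -> value_max).
    """
    fraction = (angle_deg - angle_min) / (angle_max - angle_min)
    return value_min + fraction * (value_max - value_min)

# Hypothetical airspeed indicator: the needle sweeps 30..330 degrees
# across a scale of 0..200 knots.
reading = angle_to_reading(180.0, 30.0, 330.0, 0.0, 200.0)  # 100.0 knots
```

An automated system that performs this mapping per captured frame removes the human-error step described above.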
Although modern aircraft and large commercial aircraft have computerized the readings of instrumentation, which are recorded on a black box, there are no backup processes or cross-checks in the event of a computer failure or a failure of the pilot's display panel. This single point of failure in capturing instrument readings creates a need for an adequate backup.
Moreover, many existing devices lack the ability to provide instrument readings in a computer-readable format. Often, these devices cannot practically be retrofitted to include processing and storage capabilities. The expense and labor associated with installing new instruments to provide such functionality can make the upgrade more expensive than buying a new device altogether. As a result, many existing devices continue to lack the automation that could improve their use, performance, and analysis. Furthermore, once a device is upgraded, the information collected can be used and shared with other automated devices.
Yet, current image feature extraction techniques do not provide the performance and translation abilities that permit rapid and efficient image capture and processing to generate instrument readings from a captured image in a way that is useful and meaningful to a user or to a control system associated with the device. By and large, current image feature extraction relates to standard optical character recognition (OCR), or to the detection and indexing of sub-images within an image for purposes of search and retrieval. OCR is not helpful when reading an instrument that includes needles or other gauges; this situation requires analyzing the orientation of multiple visual features within a captured sub-image of the instrument in order to adequately generate an instrument reading.
OCR is primarily helpful in character recognition; it will not assist with feature recognition within a captured sub-image, such as detecting a needle and the needle's orientation within the sub-image. Furthermore, image feature extraction does not assist with translating the various extracted features into meaningful information or instrument readings, since these extractions are primarily focused on enabling better search and retrieval capabilities associated with image data. Therefore, there exists a need for methods, functional data, and systems that address these shortcomings and provide increased monitoring associated with reading instruments.
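The orientation analysis that OCR cannot perform can be illustrated with a common image-processing technique: fitting the principal axis of the needle's pixel cloud using second-order central moments. This is only a sketch under simplifying assumptions (the needle pixels are assumed to be already segmented from the dial face and scale markings, which in practice requires a preceding thresholding or segmentation step); it is not taken from the source document.

```python
import math

def needle_angle(pixels):
    """Estimate needle orientation from segmented (x, y) pixel coordinates.

    Computes the principal axis of the pixel cloud from its
    second-order central moments and returns the axis angle
    in degrees, measured from the x-axis.
    """
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    myy = sum((y - cy) ** 2 for _, y in pixels) / n
    mxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Principal-axis angle of the covariance ellipse.
    return math.degrees(0.5 * math.atan2(2.0 * mxy, mxx - myy))

# Synthetic needle: pixels lying along a 45-degree diagonal.
angle = needle_angle([(i, i) for i in range(20)])  # about 45.0
```

The resulting angle would then be translated into an instrument reading via a calibration mapping such as the one discussed earlier in this section.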