High frame rate image sensors, which have a large image format and small pixel pitch, are becoming commonly available for use in numerous new products and applications. However, conventional video architectures generally do not support the bandwidth and timing requirements of some high frame rate image sensors. New video architectures that support these bandwidth and timing requirements have been developed; however, these new video architectures are generally developed from scratch for particular uses, without taking advantage of previously available hardware.
The output data rates of modern high frame rate image sensors vastly exceed the bandwidth and transport capabilities of many existing video transport architectures. An extensive infrastructure of existing video hardware, designed and configured for transporting high definition (HD) video, is deployed and installed in equipment throughout the world. This infrastructure generally does not support transport of video data from high frame rate video cameras to a display or end user.
Existing HD video architectures are generally configured for processing streams of video data that conform to one or more standard formats, such as the Society of Motion Picture and Television Engineers (SMPTE) standards SMPTE 292M and SMPTE 424M, for example. These standards include a 720p high definition television (HDTV) format, in which video data is formatted in frames having 720 horizontal lines and an aspect ratio of 16:9. The 720p format of the SMPTE 292M standard has a resolution of 1280×720 pixels, for example.
A common transmission format for HD video data is 720p60, in which the video data in 720p format is transmitted at 60 frames per second. The SMPTE 424M standard includes a 1080p60 transmission format in which data in 1080p format is transmitted at 60 frames per second. The video data in 1080p format is sometimes referred to as “full HD” and has a resolution of 1920×1080 pixels.
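To make the capacities of these transmission formats concrete, the raw active-video payload can be estimated from pixel count, frame rate, and bits per pixel. The sketch below is illustrative, assuming 10-bit 4:2:2 sampling (20 bits per pixel), a common payload mapping for these serial interfaces; the function name is not from any standard.

```python
def active_payload_gbps(width, height, fps, bits_per_pixel=20):
    """Raw active-video payload in Gb/s, assuming 10-bit 4:2:2 (20 bits/pixel)."""
    return width * height * fps * bits_per_pixel / 1e9

# 720p60 payload vs. the SMPTE 292M nominal link rate of 1.485 Gb/s
rate_720p60 = active_payload_gbps(1280, 720, 60)     # ~1.11 Gb/s
# 1080p60 payload vs. the SMPTE 424M nominal link rate of 2.97 Gb/s
rate_1080p60 = active_payload_gbps(1920, 1080, 60)   # ~2.49 Gb/s
```

Both payloads fit within their respective nominal link rates, which is why these formats transport comfortably over deployed HD hardware while much larger sensor frames do not.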
A large number of currently deployed image detection systems are built in conformance with HD video standards, such as the commonly used 720p standard. The 1280×720 pixel frames of a 720p standard system contain about 0.9 million pixels per frame. In contrast, high frame rate image sensors generally output image frames in 5K×5K format, which have about 25 million pixels per frame. Therefore, the 1280×720 pixels of a 720p standard system are not nearly enough to transport the much larger number of pixels generated by a high frame rate image sensor.
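The size of the mismatch can be quantified directly. The arithmetic below (variable names illustrative) shows that one uncompressed 5K×5K sensor frame carries roughly 27 times the pixels of a single 720p frame.

```python
PIXELS_720P = 1280 * 720   # 921,600 pixels per HD frame
PIXELS_5K = 5000 * 5000    # 25,000,000 pixels per sensor frame

# Number of 720p frames needed to carry one uncompressed 5K x 5K frame
hd_frames_per_sensor_frame = PIXELS_5K / PIXELS_720P   # ~27.1
```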
High frame rate image sensors are conventionally used with video architectures that are designed particularly for transporting high frame rate video data. These new video architectures generally leverage video compression techniques to support high frame rate bandwidth and timing requirements. Some video architectures that are currently used for transporting high frame rate video data use parallel encoders or codecs and data compression to transport the high frame rate video. However, the use of compression makes these video architectures unsuitable for end users who rely on receiving raw sensor data.
The use of legacy hardware for transporting high frame rate video from next generation cameras is problematic because the legacy hardware generally does not provide sufficient bandwidth. Moreover, replacing existing video architectures with new architectures for transporting high frame rate video data can be impractical and/or prohibitively expensive for users who have already implemented a large amount of conventional video processing equipment.
Various spatial and temporal video compression techniques have been used to process image data from high frame rate image sensors for transport over existing HD video architectures. The high frame rate video data is commonly compressed using compression algorithms that retain enough of the high frame rate video data to generate visible images and video streams for human viewing, but lose or discard data from the high frame rate image sensors that may not be needed for human viewable images and video streams.
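As a toy illustration of the data loss described above (not any specific codec or standard), a quantization step of the kind used in lossy compression discards low-order bits of the sensor samples; the decoded values remain adequate for human viewing, but the original raw values cannot be recovered. The function and sample values here are hypothetical.

```python
def quantize(samples, dropped_bits=4):
    """Toy lossy step: zero the low-order bits, as a codec's quantizer might."""
    return [(s >> dropped_bits) << dropped_bits for s in samples]

raw = [1001, 1003, 998, 1005]   # hypothetical 10-bit sensor samples
lossy = quantize(raw)           # all four collapse to 992: raw values are gone
```

This is precisely why compression-based transport is unsuitable for end users who rely on receiving raw sensor data, as noted above.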
Other conventional techniques for processing data from high frame rate image sensors generally involve the use of new or proprietary video architectures that have been developed for particular applications of the high frame rate image sensors. These techniques are costly and inefficient because they do not take advantage of widely available HD video architectures that have been deployed throughout the world.