Currently, cameras capable of directly interfacing with a graphics processing unit's (GPU's) specific data bus are high-pixel-count, high-data-rate cameras. However, these cameras lack sufficient optical resolution at the ranges required by autonomous vehicle applications. Autonomous vehicle applications, including space-based applications, require the use of long-range cameras (e.g., high-definition (HD) cameras) that are able to operate at a high optical resolution. However, these higher-resolution cameras output video in formats that are not commensurate with the data bus of a GPU (e.g., an Nvidia TX1 GPU).
Currently, conventional solutions for interfacing high-optical-resolution cameras with GPUs involve the use of field-programmable gate arrays (FPGAs). In particular, these solutions employ multiple FPGAs to convert each video stream, and a larger FPGA to encode the data using a less efficient compression method (e.g., H.264/MPEG-4 Advanced Video Coding (AVC)) than that utilized by GPUs (e.g., H.265/High Efficiency Video Coding (HEVC)). These solutions require a separate set of FPGAs for each camera's video stream, and are inefficient and costly.
There is therefore a need for an improved technique for interfacing high-optical-resolution cameras with GPUs.