Deep Convolutional Neural Networks (Deep CNNs) are at the heart of the remarkable development of deep learning. CNNs were already used in the 1990s to solve character recognition problems, but their use has become as widespread as it is now thanks to recent research. For example, a CNN won the 2012 ImageNet image classification competition, far outperforming the other competitors. Since then, CNNs have become a very useful tool in the field of machine learning.
Recently, CNNs have become popular in the autonomous vehicle industry. When used in this industry, CNNs perform functions such as acquiring images from a camera installed on a vehicle, detecting lanes, and so on. CNNs learn to perform such functions from training images, which are generally in an RGB format.
However, in some cases, CNNs must process test images in a non-RGB format. Whereas a learning process simply feeds in images in the RGB format prepared in advance, during testing, images in a non-RGB format may be acquired from cameras or sensors on an actual vehicle in operation. Since the CNNs have been trained using images in the RGB format, they cannot process test images in the non-RGB format properly, because their learned parameters are based on the RGB format.
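The mismatch can be illustrated with a toy first-layer filter. The weights and pixel values below are hypothetical, chosen only for illustration: a channel-weighted filter learned on RGB inputs produces a different activation when the same scene arrives encoded as YUV, so downstream layers receive inputs outside the distribution they were trained on.

```python
import numpy as np

# Hypothetical 1x1 first-layer filter learned on RGB inputs:
# it weights the three input channels in R, G, B order.
w = np.array([0.6, 0.3, 0.1])

# The same orange-ish pixel in two encodings (illustrative values).
pixel_rgb = np.array([0.9, 0.5, 0.1])  # R, G, B in [0, 1]

# Full-range BT.601 YUV encoding of that same pixel.
y = 0.299 * 0.9 + 0.587 * 0.5 + 0.114 * 0.1
u = (0.1 - y) * 0.565 + 0.5
v = (0.9 - y) * 0.713 + 0.5
pixel_yuv = np.array([y, u, v])

# The filter's response differs between encodings, so the learned
# parameters no longer match the statistics of the input.
act_rgb = w @ pixel_rgb  # activation the filter was trained to produce
act_yuv = w @ pixel_yuv  # activation when fed YUV channels instead
print(act_rgb, act_yuv)
```

The two activations differ even though both encodings describe the identical pixel, which is the root cause of the performance drop described above.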
So far, conventional technologies have addressed this problem by converting, in real time, the format of images acquired during testing into the format used during learning. However, since this conversion mathematically transforms the value of every pixel in real time, the resulting overhead becomes a major disadvantage in autonomous vehicles, where real-time processing is most important. Such overhead may be trivial when converting the YUV format into the RGB format, whose conversion rule is simple, but this approach cannot be applied when the conversion rule is complex or does not exist.
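The per-pixel cost of such a conversion can be sketched as follows. This is a minimal illustration assuming full-range BT.601 YUV; the exact coefficients vary by standard, but the key point is that several multiply-adds must run for every pixel of every frame.

```python
import numpy as np

def yuv_to_rgb(yuv):
    """Convert a full-range BT.601 YUV image (H, W, 3), float in [0, 1],
    to RGB. Every pixel is transformed individually, which is the
    per-pixel overhead described above."""
    y = yuv[..., 0]
    u = yuv[..., 1] - 0.5
    v = yuv[..., 2] - 0.5
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# One 1920x1080 frame is ~2 million pixels, each needing several
# multiply-adds -- nontrivial at 30+ frames per second on embedded hardware.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
rgb = yuv_to_rgb(frame)
```

For a simple linear rule like this, the conversion is at least well defined; for proprietary or nonlinear sensor formats, no such closed-form rule may exist at all, which is the case the paragraph above refers to.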
In a nutshell, resolving the problem during the testing processes is not easy; it is better to have the CNNs re-learn their parameters using new training images in the same format as the test images.
However, even this is problematic because, for CNNs to show reasonable performance, they require tens of thousands of training images. The CNNs require the training images and their corresponding ground truths (GTs), and these GTs must be generated by hand. Thus, preparing tens of thousands of training images costs tremendous time and money.