At present, each frame of a video is generally processed during the blank interval between two adjacent frames by performing the following operations: firstly, a statistic is made on the grayscale values of the last frame of the video image to obtain a histogram of that frame; next, a grayscale mapping table is derived from the grayscale values of the last frame using an image contrast enhancement algorithm; and lastly, grayscale mapping is performed on the current frame of the video image according to the mapping table, thereby obtaining a processed current frame.
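The three steps above can be sketched as follows. This is a minimal illustration only, assuming histogram equalization as the contrast enhancement algorithm (the source does not name a specific algorithm) and 8-bit grayscale frames; the function names are hypothetical:

```python
import numpy as np

def equalization_map(prev_frame: np.ndarray) -> np.ndarray:
    """Step 1 and 2: build a 256-entry grayscale mapping table from the
    previous frame's histogram via classic histogram equalization."""
    hist = np.bincount(prev_frame.ravel(), minlength=256)  # step 1: histogram
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0][0]
    denom = prev_frame.size - cdf_min
    if denom == 0:
        # Flat frame (single grayscale value): fall back to an identity map.
        return np.arange(256, dtype=np.uint8)
    # Step 2: normalize the CDF so grayscale values spread over [0, 255].
    table = np.round((cdf - cdf_min) / denom * 255.0)
    return np.clip(table, 0, 255).astype(np.uint8)

def enhance_current_frame(prev_frame: np.ndarray,
                          cur_frame: np.ndarray) -> np.ndarray:
    """Step 3: apply the mapping derived from the previous frame
    to the current frame by table lookup."""
    return equalization_map(prev_frame)[cur_frame]
```

Because the table is computed from the previous frame but applied to the current one, the mapping can be prepared entirely within the blank interval between the two frames.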
As is apparent from the solution above, a video image is currently processed using only the characteristics of the last frame of the image. If there is a large difference between the grayscale ranges of two consecutive frames, the mapping tables derived from them will differ considerably, so an obvious flicker may occur when the video processed as above is played.