Compared with an ordinary camera, a light-field camera collects light information in space based on light-field technology and can therefore capture a three-dimensional image, where the light information includes the direction, intensity, and color of the light.
An existing light-field camera acquires an image based on the model i = PDBMl_h + e, where i indicates the image acquired by a sensor unit, l_h indicates the target to be shot, e is additive noise, P is a projection matrix describing the projection of light onto the two-dimensional sensor unit, M is a matrix corresponding to an offset, B is the blurring feature, that is, the point spread function (PSF for short), of the optical system, and D is a down-sampling matrix. Because the pixels of the sensor unit of the light-field camera are small, the spatial resolution of the light-field camera is low. To improve the spatial resolution during image processing, an existing light-field camera mainly estimates the point spread function of the camera and then performs reconstruction by using de-blurring technologies.
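The acquisition model above can be sketched numerically as a chain of matrix operations. The dimensions and the particular choices of P, D, B, and M below are illustrative assumptions for demonstration only; in a real light-field system these operators are determined by the optics.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64          # length of the (vectorized) high-resolution target l_h
m = 16          # number of sensor pixels after down-sampling

l_h = rng.random(n)                      # target to be shot (vectorized)

M = np.eye(n)                            # offset matrix (identity: no offset, assumed)
B = np.eye(n)                            # blurring / PSF matrix (identity here, assumed)
D = np.zeros((m, n))                     # down-sampling: keep every 4th sample
D[np.arange(m), np.arange(m) * 4] = 1.0
P = np.eye(m)                            # projection onto the 2-D sensor (assumed)

e = 0.01 * rng.standard_normal(m)        # additive noise

i = P @ D @ B @ M @ l_h + e              # image acquired by the sensor unit
print(i.shape)                           # → (16,)
```

The sketch makes the resolution loss concrete: the down-sampling matrix D maps the 64-element target to only 16 sensor measurements, which is why reconstruction is needed to recover spatial resolution.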
However, because there is a large error between the theoretical value of the PSF and the actual situation, this method is poor in precision and is of limited effect in improving the spatial resolution.
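The PSF-estimation-plus-de-blurring approach described above can be sketched with a Wiener deconvolution, a common de-blurring technique; this choice, the 1-D signal, and the PSF values are assumptions for illustration, not the specific algorithm of any existing camera.

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Recover an estimate of the original signal from `blurred`, given an
    estimated point spread function `psf` and regularization constant `k`."""
    H = np.fft.fft(psf, n=blurred.size)       # frequency response of the PSF
    G = np.fft.fft(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k) regularizes near-zero frequencies
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft(F_hat))

signal = np.zeros(128)
signal[40] = 1.0                              # a single sharp feature
psf = np.array([0.25, 0.5, 0.25])             # estimated (assumed) PSF
blurred = np.convolve(signal, psf, mode="full")[:128]

restored = wiener_deblur(blurred, psf)
print(restored.argmax())                      # peak returns near index 40
```

If the estimated PSF deviates from the true blur of the optical system, the filter divides by the wrong frequency response, which is exactly the precision problem noted above: the larger the PSF error, the worse the reconstruction.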