Depth sensing technology can be used to determine a person's location in relation to nearby objects or to generate an image of a person's immediate environment in three dimensions (3D). One application in which depth sensing technology may be used is in head-mounted display (HMD) devices and other types of near-eye display (NED) devices. Depth sensing technology can employ a stereo vision, structured light, structured stereo, or time-of-flight (ToF) depth camera. With structured light based depth sensing technology, a light source emits a light pattern onto nearby objects, and a camera captures the light after it reflects off surfaces of the objects. The camera observes the geometric deformation of the illumination pattern, and a processor then calculates a 3D map of the scene. Such a measurement can be processed with other similar measurements to create a map of physical surfaces in the user's environment (called a depth image or depth map) and, if desired, to render a 3D image of the user's environment.
A depth imaging system (also referred to as a depth sensing system) can include a light source for providing structured light. Such a system is referred to as a structured light depth imaging system. Structured light is a process of using a projector (which can be part of the light source) to project a known pattern of light onto a scene. The light is reflected by the scene and captured by an imaging camera. The light pattern captured by the imaging camera differs from the projected known pattern because of the scene geometry. Based on the differences between the deformed pattern and the projected known pattern, the depth imaging system can calculate the depth information of the scene.
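The depth calculation described above is typically a triangulation: for a rectified projector-camera pair, the horizontal shift (disparity) between where a pattern feature was projected and where the camera observes it determines depth. The following sketch is illustrative only and is not from the source; the function name and the numeric values (focal length, baseline) are hypothetical.

```python
# Illustrative sketch of structured-light triangulation, assuming a
# rectified projector-camera pair (pattern shift is purely horizontal)
# and a pinhole camera model.

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth (meters) from the pixel disparity between the projected
    pattern position and where the camera observes it: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the pair")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: 600 px focal length, 7.5 cm projector-camera baseline.
# A pattern feature projected at column 320 but observed at column 350
# has a 30 px disparity.
z = depth_from_disparity(350 - 320, 600.0, 0.075)
print(z)  # 1.5 meters
```

Note that depth is inversely proportional to disparity, so the same one-pixel disparity error produces a much larger depth error for distant surfaces than for near ones.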
However, the accuracy of the depth information depends on a precise geometric alignment between the projector and the camera. In particular, to produce accurate depth information, the depth imaging system needs to maintain the 3D positions and orientations of the projector and the camera relative to each other as they were at the time of camera calibration. The information on the 3D positions and orientations of the projector and the camera is collectively referred to as extrinsic calibration information. Even a very small physical deformation of the depth imaging system can lead to inaccurate depth information. For example, a deformation can be caused by a change in ambient temperature, a user dropping the depth imaging system, or even mounting strain during camera assembly. The deformation changes the extrinsic calibration information (e.g., the distance or orientation of the projector and the camera relative to each other). Because the system continues to use the original extrinsic calibration information to generate the depth information, the depth information is no longer accurate.
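The sensitivity to deformation can be made concrete with a small numerical example. The sketch below is an assumption-laden illustration, not from the source: it models a tiny projector rotation as an apparent disparity shift of roughly f * dtheta pixels, which a system still using its original (stale) calibration folds into the measured disparity. All names and numbers are hypothetical.

```python
import math

# Illustrative sketch: depth error caused by a small, uncalibrated
# projector rotation. A rotation of dtheta radians shifts the observed
# pattern by approximately f * dtheta pixels (small-angle approximation);
# the stale calibration misreads that shift as extra disparity.

def depth_error_from_rotation(true_depth_m, focal_length_px, baseline_m, dtheta_rad):
    true_disparity = focal_length_px * baseline_m / true_depth_m
    # Stale extrinsics: rotation-induced shift is folded into disparity.
    corrupted_disparity = true_disparity + focal_length_px * dtheta_rad
    measured_depth = focal_length_px * baseline_m / corrupted_disparity
    return measured_depth - true_depth_m

# Hypothetical numbers: 600 px focal length, 7.5 cm baseline, surface at 2 m.
# A projector tilt of only 0.05 degrees corrupts the depth estimate.
err = depth_error_from_rotation(2.0, 600.0, 0.075, math.radians(0.05))
print(abs(err))
```

With these assumed parameters, a 0.05-degree tilt produces a depth error on the order of a few centimeters at 2 m, which illustrates why even mounting strain or a temperature change can invalidate the factory calibration.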