Depth sensing technology can be used to determine a person's location relative to nearby objects or to generate a three-dimensional (3D) image of a person's immediate environment. One application of depth (distance) sensing technology is in head-mounted display (HMD) devices and other types of near-eye display (NED) devices. Depth sensing technology can employ a time-of-flight (ToF) depth camera or a structured light depth camera.

With ToF-based depth sensing, a light source emits light into the nearby environment, and a ToF camera captures the light after it reflects off nearby objects. The time taken for the light to travel from the light source to an object and back to the ToF camera can be converted, using the known speed of light, into a depth measurement (i.e., the distance to the object). Alternatively, the phase of the detected return signal can be measured and used to calculate the depth. Many such measurements taken together form a map of the physical surfaces in the environment (called a depth image or depth map) and, if desired, can be used to render a 3D image of the environment.

A structured light depth camera instead projects a known light pattern onto the environment. The 3D geometry of the environment makes the pattern appear distorted when observed from a different perspective, and that distortion can be triangulated into a depth for each point in the pattern. The difference in perspective is caused by the physical spacing (also called the “baseline”) between the illuminator and the camera imager.
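The two ToF conversions described above can be sketched in a few lines. This is an illustrative calculation only, not any device's actual API; the function names and the modulation-frequency parameter are assumptions. Note that in both cases the light travels a round trip, so the one-way depth is half the total path.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip_time(t_seconds: float) -> float:
    """Direct ToF: the measured time covers the trip to the object
    and back, so the one-way distance is half the round-trip path."""
    return C * t_seconds / 2.0

def depth_from_phase(phase_radians: float, modulation_hz: float) -> float:
    """Indirect (phase-based) ToF: the phase shift of an amplitude-
    modulated signal encodes the round-trip delay, modulo one
    modulation period (so depth is only unambiguous within a range)."""
    round_trip_seconds = (phase_radians / (2.0 * math.pi)) / modulation_hz
    return C * round_trip_seconds / 2.0
```

For example, a round-trip time of 20 ns corresponds to a depth of roughly 3 m, which is why ToF sensors must resolve timing at the nanosecond scale.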
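The baseline relationship behind structured light sensing can also be sketched. Under the common simplifying assumption of a rectified geometry (illuminator and imager separated horizontally, measured pattern shift expressed as a pixel disparity), depth follows the standard triangulation formula Z = f·b/d; the function name and parameters here are illustrative, not from any specific sensor.

```python
def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulation: depth is inversely proportional to the observed
    disparity, Z = f * b / d. A larger baseline or focal length gives
    finer depth resolution; a disparity of zero would place the point
    at infinity."""
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px
```

This inverse relationship is why the distortion of the projected pattern shrinks for distant surfaces, limiting the useful range of a given baseline.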