The present disclosure generally relates to virtual or augmented reality systems, and more specifically relates to depth camera assemblies that obtain depth information of a local area using various patterns of structured light.
Virtual reality (VR) and augmented reality (AR) systems can leverage the capture of the environment surrounding a user in three dimensions (3D). However, traditional depth camera imaging architectures are comparatively large, heavy, and consume significant amounts of power. Common depth camera imaging architectures for obtaining 3D information of a scene include: time-of-flight (both direct-detect pulses and encoded waveforms), structured light (SL), and stereo vision. Different depth camera imaging architectures have different strengths and weaknesses, so certain architectures may provide better performance than others under different operating conditions. However, because of the relatively large size of conventional depth camera imaging architectures, many systems that include a depth camera typically use a single type of depth camera imaging architecture configured for a particular use case. In addition, most depth camera imaging architectures are fixed and cannot adapt to the sensing needs of the environment or the data products being captured, whether due to limitations in the architecture or to design decisions around size, weight, power, and stability. As head-mounted systems are increasingly used to perform a broader range of functions in varied operating conditions and environments, including over a large range of depths, selecting a single or fixed-functionality depth camera imaging architecture to obtain depth information of the area surrounding the head-mounted system and its user may impair the user experience.
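As an illustration of the direct-detect time-of-flight principle mentioned above (and not of any particular implementation in this disclosure), depth can be estimated as half the round-trip distance traveled by a light pulse, d = c·t/2. The function name below is purely illustrative.

```python
# Illustrative sketch of direct-detect time-of-flight depth estimation.
# Depth is half the round-trip distance of an emitted light pulse:
#   d = c * t / 2
C = 299_792_458.0  # speed of light in vacuum, meters per second

def tof_depth_m(round_trip_s: float) -> float:
    """Return depth in meters from a measured pulse round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth,
# which is on the order of room-scale AR/VR operating ranges.
print(tof_depth_m(10e-9))
```

This back-of-the-envelope relation also hints at why time-of-flight sensing is demanding at short range: resolving centimeters of depth requires timing resolution on the order of tens of picoseconds.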