It is known to use depth sensing devices for various applications. One example of a depth sensing application is 3D modeling, where a real life object is scanned with a depth sensing device, and the depth information is used by a computer to construct a computerized three dimensional model of the real life object. It is also known to use active depth sensing devices for capturing depth information about a scene. Active depth sensing devices transmit or emit light onto the scene, and a sensor is used to capture a reflected portion of the projected light. A decoder decodes the received signal and extracts depth information. Typically, a single sample captures only a portion of the object. For example, a depth sensing device such as the ones described in U.S. Pat. Nos. 8,090,194 and 8,538,166, both of which are incorporated herein by reference, has a certain field of view (FOV), and each frame that is captured by the depth sensing device covers the device's FOV. Several frames can be combined (registered) with one another to obtain extended coverage of an imaged object.
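Registration of this kind can be illustrated with a minimal sketch: assuming each frame's 3D points are expressed in the capturing device's own coordinate system and the relative pose between the two captures is known, points from one frame are transformed into the other frame's coordinate system and the two point sets are merged. The function names and the simple translation-only pose below are hypothetical stand-ins for illustration, not the method of the patents cited above.

```python
def transform_point(point, rotation, translation):
    """Apply a 3x3 rotation matrix and a translation vector to one 3D point."""
    x, y, z = point
    return tuple(
        rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z + translation[i]
        for i in range(3)
    )

def register_frames(frame_a, frame_b, rotation, translation):
    """Merge two depth frames (lists of 3D points) into one point set,
    expressing frame_b's points in frame_a's coordinate system."""
    merged = list(frame_a)
    merged.extend(transform_point(p, rotation, translation) for p in frame_b)
    return merged

# Two frames whose FOVs only partially overlap; the second capture is
# shifted one unit along x relative to the first (hypothetical pose).
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
frame_a = [(0.0, 0.0, 2.0), (0.5, 0.0, 2.0)]
frame_b = [(0.0, 0.0, 2.0)]
combined = register_frames(frame_a, frame_b, identity, (1.0, 0.0, 0.0))
```

In a real system the relative pose would itself be estimated from overlapping content in the frames; here it is simply given.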
Using two or more depth sensing devices (e.g., two, three, . . . , n 3D cameras) simultaneously has the advantage of providing broader coverage while avoiding the need to resort to dynamic registration of frames that were captured at different times. It can also shorten scanning time. Two people shooting 3D scenes together can also be a fun social endeavor. However, many depth sensing devices shoot rapid pulses of spatially or temporally coded light at the scene and process the reflected portion of the projected light to extract depth information, and when two (or more) active depth sensing devices operate simultaneously, the pulses from the devices can overlap in time and corrupt one another. In the field of structured light projection, this sort of corruption is sometimes referred to as “shadowing”.
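The timing conflict described above can be illustrated with a toy model: each device emits fixed-width pulses on its own unsynchronized schedule, and any instant at which both pulse trains are active at the sensor corrupts the coded signal. All numbers, names, and the collision test below are hypothetical, for illustration only.

```python
def pulse_intervals(start, period, width, count):
    """Return (begin, end) time intervals for a train of projected light pulses."""
    return [(start + k * period, start + k * period + width) for k in range(count)]

def overlapping_pulses(train_a, train_b):
    """Count pulse pairs from the two devices that overlap in time."""
    return sum(
        1
        for a0, a1 in train_a
        for b0, b1 in train_b
        if a0 < b1 and b0 < a1  # standard open-interval overlap test
    )

# Device B runs at a slightly different period, so its pulses drift
# in and out of collision with device A's pulses.
train_a = pulse_intervals(start=0.0, period=10.0, width=2.0, count=5)
train_b = pulse_intervals(start=1.0, period=9.0, width=2.0, count=5)
collisions = overlapping_pulses(train_a, train_b)
```

Even this small drift produces several colliding pulse pairs over five frames, which is the corruption the paragraph above refers to.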
US Patent Application Publication No. 2013/0120636 to Baer discloses a method for capturing an image with an image capture device, such as a camera or mobile electronic device. The method includes initiating a master-slave relationship between the image capture device and at least one secondary device. Once the master-slave relationship is initiated, at least one light source of the at least one secondary device is remotely activated. As the light source is activated, the image capture device captures a test image of a scene illuminated by the at least one light source. The test image is then analyzed to determine whether the illumination of the scene should be adjusted and, if so, a control signal is provided to the at least one secondary device, the control signal including at least one of a position instruction, an intensity level, or timing data.
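The master-slave flow summarized above can be sketched, under stated assumptions, as a simple control loop: the master activates a slave light source, captures a test image, analyzes its illumination, and sends a corrective control signal. The class names, the brightness target, and the mean-brightness analysis below are hypothetical stand-ins, not the actual implementation of the Baer publication.

```python
class SecondaryDevice:
    """Hypothetical slave device holding one remotely controllable light source."""

    def __init__(self):
        self.intensity = 0.2  # normalized light output, 0.0-1.0

    def activate_light(self):
        return self.intensity

    def apply_control_signal(self, intensity):
        # The control signal here carries only an intensity level; the
        # publication also contemplates position instructions and timing data.
        self.intensity = intensity


def capture_test_image(light_output):
    """Stand-in for the master's sensor: pixel brightness scales with light output."""
    return [light_output * 0.9] * 4  # four sample pixels


def adjust_illumination(master_target, slave, max_rounds=5):
    """Master-slave loop: activate, capture a test image, analyze it, and
    send corrections until the mean brightness reaches the target level."""
    for _ in range(max_rounds):
        light = slave.activate_light()
        image = capture_test_image(light)
        brightness = sum(image) / len(image)
        if brightness >= master_target:
            return brightness
        slave.apply_control_signal(min(1.0, slave.intensity + 0.2))
    return brightness


slave = SecondaryDevice()
final_brightness = adjust_illumination(master_target=0.5, slave=slave)
```

The loop terminates as soon as analysis of a test image indicates no further adjustment is needed, mirroring the analyze-then-signal sequence of the disclosed method.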