Technical Field
This disclosure relates to an apparatus and, in particular but not exclusively, to an apparatus with an array of photosensitive devices.
Description of the Related Art
The use of cameras as networked sensors or networked devices is known. Cameras may, for example, be used as sensors within many applications. For example, a camera or cameras may be used in sensors implemented within the internet of things (IoT) to monitor activity for controlling household devices, in industrial processes for verification of objects, and in security for biometric authentication.
A specific example may be the use of a camera (or multiple cameras) employed as a security sensor for capturing images. The security sensor may be used to control access to an area based on whether the image biometrically identifies the person in front of the camera.
Such uses of cameras as sensors, however, have several issues. Firstly, the camera is typically operated in an always-on mode, which is a high power consumption mode.
Secondly, a single camera may be unable to determine whether the image captured is actually an image of a real object or an image of an image of the object. Thus a printed image of an authorized person may be used to spoof a camera into determining that the authorized person is present and open a controlled door or gate.
Thirdly, the capturing of images for security purposes can produce poor results where there is any transparent surface between the camera and the object being imaged. For example, when a person is located behind a pane of glass, the camera on the other side may not be able to capture an in-focus image to identify the person. Similarly, a pair of glasses may prevent a good quality image of the person's iris from being captured.
One known solution to these problems is to employ devices having multiple cameras to determine a distance between the cameras and the object. Computational camera applications may compare features within the images captured by the cameras and use knowledge of the intrinsic and extrinsic parameters associated with the cameras or camera arrays to determine the distance of the object from the device. Computational camera applications can thus create 3D images with associated 3D depth maps, which may then be used to assist focusing and foreground-background separation.
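The distance determination described above can be illustrated by the standard stereo triangulation relation, in which depth is recovered from the disparity between matched features in two rectified camera images. The following is a minimal sketch, not the disclosed apparatus; the function name and the focal length, baseline and disparity values are illustrative assumptions:

```python
# Sketch of stereo depth computation: for rectified cameras, the intrinsic
# parameter is the focal length f (in pixels) and the extrinsic parameter is
# the baseline B (distance between the two camera centres, in metres).
# A feature matched in both images at horizontal offset d (disparity, in
# pixels) lies at depth Z = f * B / d.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Triangulate the depth of one matched feature (illustrative helper)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: a feature seen at x = 620 px in the left image and x = 600 px in
# the right image has a disparity of 20 px. With f = 800 px and B = 0.1 m:
z = depth_from_disparity(620 - 600, 800, 0.1)  # 4.0 metres
```

Repeating this computation over all matched features yields the 3D depth map referred to above; features with larger disparity are closer to the device, which is what enables the foreground-background separation.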
Accuracy, speed and consistency of the depth computation may be important for the use cases described above. For instance, the device should generate consistent 3D models, which can be used to determine whether the image is in focus or is a ‘proper’ image. Errors in the 3D models can for example lead to incorrect results.
Furthermore, limitations in cameras, algorithms and device production prevent effective correction of all errors, motions and variations. These issues are typically worse in mobile devices because of their limited computation power and battery capacity, and because of movement of the device during capture.