Occupancy sensing means detecting whether persons or objects are present in a space. Well-known examples of occupancy sensing are automatic doors and outdoor light fixtures that switch on when a person walks by. Both typically use a PIR (passive infrared) sensor, often combined with a "radar" diode. Pressure-sensitive carpets or interrupted light beams are sometimes used as well.
In more processing-intensive methods, cameras may be used to detect persons, for example in applications such as border control and pedestrian detection.
Present-day 1D occupancy systems cannot give a proper 2D overview of the scene. Only camera-based methods can derive a rough 2D position of a person from height, ground-plane position and detection angle. A PIR-based sensor, for instance, detects only that something is within its range.
The systems proposed in research and innovative installations for 2D occupancy sensing are based on multi-camera setups, radar, ultrasound, radio beacons or pressure-sensitive carpets. The pressure-sensitive carpet and camera setups are difficult to deploy because they require expensive alterations to the infrastructure and wiring for power and data lines.
In a true 2D occupancy system, the 2D positions of persons or objects in the space are reported. These positions can be drawn on a geographic plan of the scene, which spares the operator from watching video images and is much easier for an intermediate user to interpret.
A camera-based method can use a ceiling-mounted camera with a wide-angle lens, applying background subtraction and foreground detection to obtain the number and locations of persons. A ceiling-based camera poses signal-processing challenges because of the strong distortion of the wide-angle lens that is needed to capture an entire room. Moreover, further analysis of the images, such as face recognition, if so desired, would require additional cameras in the horizontal plane.
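The background-subtraction principle above can be sketched as follows. This is a minimal illustration, not any particular product's algorithm: a static background frame is subtracted from the current frame, pixels that differ by more than a threshold are marked foreground, and 4-connected foreground blobs are counted as persons seen from above. The threshold value and the toy frames are assumptions for the example.

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Background subtraction: pixels that differ from the static
    background by more than `threshold` are marked foreground."""
    return np.abs(frame.astype(int) - background.astype(int)) > threshold

def count_blobs(mask):
    """Count 4-connected foreground components; each component is
    assumed to be one person seen from the ceiling."""
    visited = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                blobs += 1
                stack = [(y, x)]  # flood-fill this component
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and mask[cy, cx] and not visited[cy, cx]):
                        visited[cy, cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return blobs

# Toy 8x8 "ceiling view": empty background, frame with two bright patches.
bg = np.zeros((8, 8), dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200   # person 1
frame[5:7, 4:6] = 200   # person 2
print(count_blobs(foreground_mask(frame, bg)))  # → 2
```

In a real system the background model would have to adapt over time (lighting changes, moved furniture), and the wide-angle lens distortion mentioned above would have to be corrected before blob positions can be mapped to room coordinates.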
A system that uses multiple cameras in the horizontal plane that work together to create a 2D occupancy system is described by M. Morbee, L. Tessens, H. Lee, W. Philips, and H. Aghajan in "Optimal camera selection in vision networks for shape approximation", Proceedings of IEEE International Workshop on Multimedia Signal Processing (MMSP), Cairns, Queensland, Australia, October 2008, pp. 46-51. Such a system is illustrated schematically in FIG. 1, showing a top view of the scene which shows its geometry and the positions of the cameras 10 and persons 11. The system performs background-foreground segmentation and shares "active" angles (angle ranges in which objects are detected) between the cameras in order to create an occupancy map. An added benefit of this method is that facial views are also available in the system if needed.
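The idea of combining "active" angles can be illustrated with a simplified sketch, not the cited authors' actual algorithm: each camera back-projects its active angle ranges as cones into the room, and a grid cell is marked occupied when the cones of all cameras cover it. The camera positions, angle ranges and grid resolution below are assumptions chosen for the example.

```python
import math

# Hypothetical 10 m x 10 m room with two cameras on one wall.
# Each camera reports "active" angle intervals (radians) in which
# foreground was detected; here both see a person near (4, 6).
cameras = [
    {"pos": (0.0, 0.0),
     "active": [(math.atan2(6, 4) - 0.1, math.atan2(6, 4) + 0.1)]},
    {"pos": (10.0, 0.0),
     "active": [(math.atan2(6, -6) - 0.1, math.atan2(6, -6) + 0.1)]},
]

def in_active(cam, cell):
    """True if the ray from the camera to the cell centre falls inside
    one of the camera's active angle ranges."""
    ang = math.atan2(cell[1] - cam["pos"][1], cell[0] - cam["pos"][0])
    return any(a <= ang <= b for a, b in cam["active"])

# Occupancy map on a 1 m grid: a cell is occupied when every camera's
# back-projected foreground cone covers it (intersection of cones).
grid = [[1 if all(in_active(c, (x + 0.5, y + 0.5)) for c in cameras) else 0
         for x in range(10)] for y in range(10)]
print(grid[6][4])  # → 1 (the cell around the person is marked occupied)
```

Because only a handful of angle intervals is exchanged per camera rather than video frames, the communication load of such a scheme stays small, which also helps with the privacy concerns discussed below.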
Camera-based systems suffer from possible privacy breaches, which hinders their adoption in private enterprises and elderly homes.
Occupancy systems only provide input to final applications; in economic terms, their price therefore has to be far below the cost reductions they enable in the main application. This puts a heavy burden on the bill of materials of the complete sensor system. Regular cameras are usually too expensive because of the heavy processing needed to analyse real-time video signals. Their power consumption is also too high for battery operation to be practically feasible, requiring hookup to grid power or Power over Ethernet, which means expensive changes to the infrastructure.
Both ultrasound and radar technologies need extensive noise-cancellation techniques and heavy processing because they rely on direction-of-arrival methods.
For a sensor-based occupancy system to be successful, it needs to be ultra-cheap and very low-power.