Field of the Invention
The present invention relates to a method for providing feedback to a user during a 3D scanning session and for guiding the user to complete the scan.
The device used for 3D scanning is a mobile phone or a tablet equipped with a color camera and a depth sensor, such as [7], [8]. In order to obtain a 3D model of a person or an object, the user has to walk around the scanned object with the mobile device. Tests with users revealed that new users often stop scanning prematurely (without capturing all parts of the scanned entity), which results in inferior scans. Moreover, even experienced users had problems scanning all parts of a complicated object such as a human body or a room. The present invention provides a real-time feedback system that helps the user capture all features of the scanned object.
A common way to provide feedback during scanning is to show the captured parts to the user [1, 2, 3], with KinectFusion [3] as an excellent example. This visualization is useful, but it has a shortcoming: it does not encourage the user to scan parts that have not yet been seen. For example, when scanning a human bust, if the user never scanned the top of the head, the KinectFusion visualization would look perfect from every viewpoint the user encountered, yet the final model would lack measurements corresponding to the top of the head. In this situation, the first component of our system tells the user that scanning coverage is below 100% and the scan is therefore incomplete, and the second component actively guides the user to scan the top of the head. In both cases this feedback is much more valuable than the KinectFusion visualization alone. The approach closest to ours is [4], which aims to provide the user with more valuable feedback than standard systems. However, their feedback shares a shortcoming with the standard systems: a problematic area must be in the camera's field of view to be noticeable to the user. In addition, their feedback is more complicated and less intuitive than that of the present invention.
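The coverage component described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the patented implementation: it assumes the scanned surface is discretized into a fixed number of patches, and it reports coverage as the fraction of patches observed by at least one frame. The function names and the patch representation are illustrative assumptions.

```python
def update_coverage(observed, frame_patches):
    """Mark the patches seen in the current frame as observed (in place)."""
    observed.update(frame_patches)
    return observed

def coverage_percent(observed, total_patches):
    """Coverage = observed patches / total expected patches, in percent."""
    if total_patches == 0:
        return 0.0
    return 100.0 * len(observed) / total_patches

# Example: a bust discretized into 10 patches; the top of the head
# (patch 9) is never seen, so coverage stays below 100% and the
# system can report the scan as incomplete.
observed = set()
for frame in [[0, 1, 2], [2, 3, 4], [5, 6, 7, 8]]:
    update_coverage(observed, frame)
print(coverage_percent(observed, 10))  # 90.0
```

In a real system the "patches" would come from the depth sensor's reconstruction (for example, voxels or mesh faces), and the unobserved patches would also drive the second component, which suggests where the user should point the camera next.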