The present disclosure relates to a localization system and method for determining a position of an imaging system in a region of interest. The system is contemplated for incorporation into an image- or video-based application that can determine the spatial layout of objects in the region of interest. Particularly, the disclosure is contemplated for use in a product facility where the spatial layout of product content is desired, but no limitation is made herein to the application of such a method.
FIG. 1A shows a store profile generation system 10 in the PRIOR ART configured for constructing a store profile indicating locations of products throughout a product facility. The disclosure of co-pending and commonly assigned U.S. Ser. No. 14/303,809, entitled, “STORE SHELF IMAGING SYSTEM”, by Wu et al., which is totally incorporated herein by reference, describes the system 10 as including an image capture assembly 12 mounted on a mobile base 14. The fully- or semi-autonomous mobile base 14 serves to transport at least one image capture device 16 around the product facility and can be responsible for navigating the system 10 to a desired location—such as a retail store display shelf 20 in the illustrated example—with desired facing (orientation), as requested by a control unit 18, and reporting back the actual position and pose, if there is any deviation from the request. The control unit 18 processes images of objects—such as products 22 in the illustrated example—captured by the image capture device 16. Based on the extracted product-related data, the control unit 18 constructs a spatial characterization of the image capture assembly 12, and generates information on the position and pose of the mobile base when the images were acquired.
However, the conventional store profile generation system including this navigation and localization capability may not reach the desired accuracy in cases where products are closely laid out. Using retail store displays as one illustrative example, merchandise can be displayed on wall mounts, hang rail displays, and/or peg hooks that are in such close proximity (e.g., one inch or less apart) that the product location information generated by the assembly can be off by more than the spacing between adjacent items.
In other words, when the existing store profile generation system (“robotic system”) is instructed to move the image capture device to a goal position (xG, yG) and pose θG (“coordinates”) in the product facility, it generates a reported position and pose (xR, yR, θR) after it arrives at the instructed location. In a perfect system, the robot's actual position and pose (xA, yA, θA) would be identical to both the goal position and pose (xG, yG, θG) and the reported position and pose (xR, yR, θR). In practice, the actual position and pose matches neither the goal position and pose nor the reported position and pose; small errors are introduced by the statistical nature of the navigation algorithms. Errors have been observed in the range of +/−3 inches in the reported position (xR, yR) and up to 4 degrees in the pose θR. More accuracy may be achieved in the navigation algorithms by adding very expensive, high-accuracy sensors. However, such sensors can make the unit cost-prohibitive.
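The magnitude of the discrepancy described above can be expressed as a position error (the Euclidean distance between the reported and actual coordinates) and a heading error (the wrapped angular difference between the reported and actual poses). The following sketch, assuming inches for position and degrees for pose, illustrates this arithmetic; the function name `pose_error` is illustrative only and is not part of the disclosed system.

```python
import math

def pose_error(reported, actual):
    """Return (position_error, heading_error_deg) between two (x, y, theta_deg) poses.

    Position error is the Euclidean distance in the x-y plane; heading error
    is the absolute angular difference wrapped into [0, 180] degrees.
    """
    xr, yr, tr = reported
    xa, ya, ta = actual
    pos_err = math.hypot(xr - xa, yr - ya)
    # Wrap the heading difference into (-180, 180] degrees, then take its magnitude.
    heading_err = abs((tr - ta + 180.0) % 360.0 - 180.0)
    return pos_err, heading_err

# Hypothetical poses exhibiting the observed extremes:
# 3 inches of position error and 4 degrees of pose error.
pos_err, heading_err = pose_error((103.0, 50.0, 94.0), (100.0, 50.0, 90.0))
```

With the hypothetical values above, `pos_err` is 3.0 inches and `heading_err` is 4.0 degrees, matching the error range observed in practice for the conventional system.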
In practice, the existing system is not accurate enough. For example, a location error can result when the existing system reports an incorrect coordinate after stopping or moving around an obstacle (i.e., (xR, yR, θR)≠(xA, yA, θA)). A navigation error can also result when the existing system's navigation takes the image capture assembly only to a location near the goal, i.e., (xA, yA, θA)≠(xG, yG, θG), particularly when the navigation requires a reroute calculation to reach the destination. In other words, the existing image capture assembly knows its position and pose only approximately, and that location may not be correct if the navigation calculated a route that ends in proximity to, but not exactly at, the goal coordinates.
Although navigation and localization capabilities are well studied in the field of robotic systems, there are limitations in practice depending on the sensors, processors, response time, etc. The existing image capture assembly can provide its coordinates to a user, but those coordinates may not match the goal. Furthermore, depending on the application, the relative importance of the navigation versus the localization features can be quite different. For the purpose of profiling the layout of a product facility, there exists a need for more accurate localization output. The system may generate errors, in response to which it may choose to weight one requirement more heavily than the other. An algorithm is therefore desired that computes an estimated position and pose (xE, yE, θE) that reflects the actual position of the robotic system with higher accuracy.
That is, the present disclosure further seeks to provide an algorithm that can produce an estimated position and pose (xE, yE, θE) such that the errors between the estimated position and pose (xE, yE, θE) and the actual position and pose (xA, yA, θA) are smaller than those between the reported position and pose (xR, yR, θR) and the actual position and pose (xA, yA, θA) observed in a conventional robotic system.
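The accuracy criterion stated above can be checked directly: the estimated pose improves on the reported pose when it is closer to the actual pose in both position and heading. The sketch below, assuming inches and degrees and using hypothetical pose values, expresses that criterion; the function names and numbers are illustrative and not part of the disclosed algorithm.

```python
import math

def wrap_deg(a):
    """Wrap an angle difference into (-180, 180] degrees."""
    return (a + 180.0) % 360.0 - 180.0

def improves_on_reported(estimated, reported, actual):
    """True when the estimated pose (xE, yE, thetaE) is strictly closer to the
    actual pose (xA, yA, thetaA) than the reported pose (xR, yR, thetaR) is,
    in both position and heading."""
    def errors(pose):
        x, y, t = pose
        xa, ya, ta = actual
        return math.hypot(x - xa, y - ya), abs(wrap_deg(t - ta))

    est_pos, est_head = errors(estimated)
    rep_pos, rep_head = errors(reported)
    return est_pos < rep_pos and est_head < rep_head

# Hypothetical values: the reported pose is off by roughly 3.6 inches and
# 4 degrees; a corrected estimate reduces both errors.
ok = improves_on_reported(
    (100.5, 50.2, 91.0),   # estimated (xE, yE, thetaE)
    (103.0, 48.0, 94.0),   # reported  (xR, yR, thetaR)
    (100.0, 50.0, 90.0),   # actual    (xA, yA, thetaA)
)
```

With these hypothetical values the check succeeds, since the estimate's position error (about 0.54 inches) and heading error (1 degree) are both below the reported pose's errors.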