The present disclosures generally relate to augmented reality environments, and more specifically, to modeling physical environments in real time.
In augmented reality (AR) environments, a user may wish to obtain a model of his physical surroundings to enable AR functions. For example, the user may wish to model key structures of a view of his office, which may include the surfaces of the walls, floor, and ceiling, and the table surfaces of a desk. Current methods of modeling a real-life physical environment may lack the ability to distinguish different surfaces from each other, and instead simply generate a dense reconstruction of points, each indicating a depth from the camera view. Furthermore, this set of points may provide no way to determine which points belong to a wall, which points belong to a desk, and so forth. Without such semantic meaning, interacting with the AR wall or desk surfaces is difficult. Moreover, generating this set of points may be quite processor-intensive and less suitable for real-time use.
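The dense reconstruction described above can be illustrated with a minimal sketch: back-projecting a depth image into a 3D point cloud through a pinhole camera model. All names and intrinsic parameters here are assumptions for illustration, not part of the disclosure; the point is that the output is simply an unordered set of points carrying no labels that tie any point to a wall, floor, or desk surface.

```python
import numpy as np

# Assumed pinhole camera intrinsics (hypothetical values):
# focal lengths (fx, fy) and principal point (cx, cy) in pixels.
fx, fy = 525.0, 525.0
cx, cy = 319.5, 239.5

# Synthetic 480x640 depth image in meters, standing in for a real sensor frame.
depth = np.full((480, 640), 2.0, dtype=np.float32)

# Pixel coordinate grids: v indexes rows, u indexes columns.
v, u = np.mgrid[0:480, 0:640]

# Back-project each pixel (u, v) with its depth z into camera-space (x, y, z).
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy

# Flatten into an N x 3 array of 3D points -- a dense reconstruction with
# per-point depth, but no semantic grouping into walls, floor, or desk.
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

print(points.shape)  # (307200, 3)
```

Every pixel yields one 3D point, so even a single frame produces hundreds of thousands of points, which is why such reconstructions are both processor-intensive and semantically opaque.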
Embodiments of the invention solve this and other problems.