Natural user interface (NUI) devices such as touch screens, depth cameras or three-dimensional cameras are becoming increasingly popular due to their high availability, appealing features and dropping prices.
NUI systems comprise advanced sensors and utilize advanced algorithms for identifying a user, recognizing voice and gestures, and providing reactions. These abilities provide users with the option to communicate with or operate computerized systems using natural gestures, and enable smooth, natural and intuitive interaction. In some embodiments, such as Microsoft Kinect™, NUI devices may be used in an immersive environment, i.e., an environment in which the user feels that he or she is part of the occurrences in the system, whether it is the user's image, voice, movements or another input that is being used. Such environments may benefit significantly from augmenting the person into the system.
However, developing applications that make use of NUI devices is labor intensive and requires complex programming. For example, a public relations (PR) agent may wish to provide a client who is a sunglasses retailer with an application that enables a customer interested in purchasing sunglasses to select a pair from a catalog and, using a camera that captures the customer's face, to show the customer's face with the selected glasses on, without the customer actually trying them on or even having them physically on site. The application may also show the customer in a dynamic manner, for example from different angles, when moving, or the like.
The PR agent may additionally or alternatively wish to provide another client, who is a movie producer, with an application that immerses users into a movie scene, lets them interact with the actors, and makes them part of the trailer cast in a themed environment and custom clothing.
With existing technologies, the PR agent, vendors, or other media creators have to initiate a long, complex and expensive programming effort to generate such applications, which may require substantial programming resources in addition to content-specific knowledge, and may incur costs and create a bottleneck that increases time to market. Media creators are thus in a frustrating position, in which they lack tools to fully utilize the exciting new technology available.
Existing NUI technologies include depth cameras such as Kinect® manufactured by Microsoft of Redmond, Wash., USA, which enables users to play games using bodily gestures without external objects. Developers may use the Kinect Software Development Kit (SDK) for developing Kinect-based applications. Other technologies include cameras such as those manufactured by LeapMotion® of San Francisco, Calif., USA, which may easily interface with any computer and recognize hand gestures; software solutions such as those suggested by XTR of Herzliya, Israel, which extract depth information from 2D cameras; XTION manufactured by ASUS of Beitou District, Taipei, Taiwan, with which developers may use the OpenNI NITE SDK for developing XTION- or Kinect-based applications; sensors and microcontrollers such as Arduino of Ivrea, Italy, with which developers may develop in the open-source Arduino environment; or others.
Such cameras are being integrated into and used in platforms such as smartphones, laptops, smart TVs, or any other computing platforms.
Other NUI devices may include NUI glasses, hand tracking devices and touch displays, such as Microsoft's Digits wrist sensor; devices manufactured by Ringbow of Israel; Google Glass by Google of Menlo Park, Calif., USA; Microsoft PixelSense, made by Samsung of Seoul, South Korea, which enables a display to recognize fingers, hands, and objects placed on the screen; Smart Window, which consists of a transparent touchscreen LCD; Digital Signage Endcap manufactured by Intel® of California, USA; or the like. Projection solutions such as floor projection, window projection, 3D projection or others may also be significantly enhanced with the use of NUI applications. In addition to hardware devices, there are also advanced tracking and recognition algorithms implementing features such as but not limited to: facial detection and recognition; face tracking; eye tracking; body tracking; gesture recognition of hand, facial or body gestures; voice recognition; or others.
The devices and algorithms mentioned above are suitable for and may be integrated into NUI applications; nevertheless, the development of such applications requires programming, and these tools therefore do not enable easy and fast application development.