The present invention relates to the production and control of video content, and in particular, to the production and control of on-demand video content that utilizes modeling support to enable viewing of dynamic scenes from virtual camera perspectives.
In the prior art, video content is broadcast, for example, utilizing over-the-air terrestrial or satellite-based radio frequency signals, via cable and fiber optic networks, or via data networks, such as the Internet. U.S. Pat. No. 5,600,368 to Matthews, III describes conventional video programming as follows:

    Although multiple cameras may have been used to cover the event, the program's producer selects which camera to use at which point in the program so that only one video stream is broadcast to the viewer. For example, when broadcasting sporting events, such as baseball games or tennis matches, the sports network typically employs multiple cameras to adequately cover the action. The multiple cameras enable ready replay of key plays, such as a runner sliding into home plate or a diving backhand volley, from many different angles. The producer relies on his or her creativity and experience to timely select the appropriate camera viewpoint which best conveys the sporting event.

    The viewer, on the other hand, has no control over what he/she is viewing. Conventional broadcast systems are not interactive and thus, the viewer is forced to watch the single video stream compiled by the program's producer. As a result, the viewer cannot independently choose to watch the action from the home plate camera in anticipation of a close call at home plate.

Matthews, III goes on to disclose an interactive system in which a television viewer can control which physical camera's video feed is presented on a primary broadcast channel.
U.S. Patent Application Publication No. 2014/0101549 to Sheeley further discloses that the views provided by a video production system are not limited to those obtained by physical cameras, but can instead include those provided by virtual cameras.
U.S. Patent Application Publication No. 2008/0178232 to Velusamy similarly discloses that virtual cameras' views are computed from frames of physical cameras:

    Specifically, by providing the user with the capability to control the views to be shown on a display screen using the VVCD 109, the user can not only experience a feeling of being within the scene, but will also appear to have the ability to control a "virtual camera," which can be placed and moved anywhere in the coverage area in three-dimensional space, thereby providing the user with a first person view of the event. As the user "moves" through the scene, the VVP 105 ensures the full screen action for the user, either by seamlessly providing parts of the area covered by a single camera, or by interpolating (or "stitching") frames to provide a smooth transition between cameras, or by generating frames based on inputs from one or more cameras, in response to the user's actions to view the event in a desired way.
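The transition-by-interpolation idea Velusamy describes can be illustrated with a minimal sketch: a linear cross-fade between two camera frames, assuming both frames have already been registered (warped) to a common image plane and resolution. The function name and parameters below are illustrative, not drawn from the cited publication.

```python
import numpy as np

def crossfade(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Blend two registered camera frames.

    t = 0.0 yields frame_a, t = 1.0 yields frame_b; intermediate values
    produce a smooth transition between the two camera viewpoints.
    """
    if frame_a.shape != frame_b.shape:
        raise ValueError("frames must be registered to the same resolution")
    t = min(max(t, 0.0), 1.0)  # clamp the interpolation parameter
    blended = (1.0 - t) * frame_a.astype(np.float64) + t * frame_b.astype(np.float64)
    return blended.astype(frame_a.dtype)
```

In practice, production systems interpolate along estimated scene geometry rather than blending raw pixels, but the per-frame blend above conveys the basic mechanism of synthesizing an in-between view from two physical camera feeds.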
U.S. Patent Application Publication No. 2006/0244831 to Kraft similarly discloses that the virtual cameras' views are computed through the application of mathematical transformations to the images provided by the physical cameras:

    Each positioned camera, of course, is normally equipped with a lens. While the preferred lens is a fisheye lens or other wide-angle lens, any other lens can be used. Mathematical transformations can be used to combine images from any or all cameras covering an event to produce virtual pan, tilt and zoom and to create virtual camera positions and view angles from many different virtual locations.