At least some known spherical camera systems are able to capture a field of view spanning 360 degrees around the camera. Various options exist for viewing the footage captured by a spherical camera, e.g., Virtual Reality headsets or appropriate software that allows the viewer to monitor a portion of the wider Field Of View video as an undistorted video stream.
Spherical video is projected to become a desired way of producing and distributing video in the upcoming years, not only due to the immersion it offers but also due to the interactivity between the user and the captured environment, as the viewer can now freely select which direction to monitor and is not confined to the narrow, fixed Field of View (FOV) of a "standard" camera.
At least some known video camera manufacturers have developed various types of spherical camera systems following a common approach to producing and distributing spherical video. The fundamental technique used so far to produce spherical video is to combine footage from two or more narrower-FOV cameras assembled together (camera rigs) in such a manner that each one of them captures a small portion of the sphere. Then, through the use of appropriate software, the individual video streams are "stitched" together to form the final sphere. This is the spherical video production stage. The second stage is to distribute the video. During this stage, various techniques are used to isolate a part of the total spherical video and present it as an undistorted narrow-FOV video. As the final spherical video footage is the product of multiple video streams combined into a single file, the end stream is naturally a large file that is inconvenient to stream live over the internet or other conventional networks (e.g., cellular) due to bandwidth limitations.
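As an illustration of the distribution stage described above, the undistorted narrow-FOV view is typically obtained by reprojecting a region of the stitched sphere. A minimal sketch follows, assuming the stitched sphere is stored as an equirectangular frame; the function name and parameters are illustrative, not taken from any particular product:

```python
import numpy as np

def equirect_to_rectilinear(frame, yaw, pitch, fov_deg, out_w, out_h):
    """Sample an undistorted narrow-FOV view out of an equirectangular frame.

    frame: H x W (or H x W x C) equirectangular image, longitude -pi..pi.
    yaw, pitch: viewing direction in radians.
    fov_deg: horizontal field of view of the virtual camera.
    """
    H, W = frame.shape[:2]
    # Virtual pinhole focal length from the requested horizontal FOV.
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)
    # Pixel grid of the virtual camera, centred on the optical axis.
    u, v = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    # Rays in camera space (z forward), rotated by pitch (about x) then yaw (about y).
    x, y, z = u / f, v / f, np.ones_like(u, dtype=float)
    y2 = y * np.cos(pitch) - z * np.sin(pitch)
    z2 = y * np.sin(pitch) + z * np.cos(pitch)
    x3 = x * np.cos(yaw) + z2 * np.sin(yaw)
    z3 = -x * np.sin(yaw) + z2 * np.cos(yaw)
    # Ray direction -> spherical longitude/latitude.
    lon = np.arctan2(x3, z3)
    lat = np.arcsin(y2 / np.sqrt(x3 ** 2 + y2 ** 2 + z3 ** 2))
    # Longitude/latitude -> source pixel in the equirectangular frame.
    src_x = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    src_y = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return frame[src_y, src_x]
```

Nearest-neighbour sampling is used for brevity; a practical player would interpolate and run this reprojection per frame, typically on a GPU.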
At least some known systems generate spherical video that can be transferred only through wide-bandwidth media, e.g., WiFi. This limits the end user to streaming the spherical video footage from their spherical camera to a nearby WiFi-enabled smartphone or tablet device, a.k.a. local wireless streaming. Alternatively, these multi-sensor cameras can record the video on, e.g., an SD card and reproduce it later on. However, none of these known streaming techniques amounts to live streaming over the internet, as a live-streaming camera with multiple sensors would usually require a bandwidth of 50 Mbit/s to 200 Mbit/s. Furthermore, these cameras lack the IP protocol stack interfaces needed to be categorized as IP cameras, since they are usually equipped with high-bandwidth interfaces, e.g., USB or WiFi. In other words, these cameras cannot be connected directly to an IP network unless external processing media are used, e.g., video compression and conversion servers. This process, of course, is far from actual live internet streaming, as the spherical footage currently seen on the internet consists of pre-recorded files, none of which is live.
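To put the quoted bandwidth range in perspective, the aggregate bitrate of a multi-sensor rig is roughly the per-sensor bitrate times the sensor count. A back-of-envelope sketch, where the sensor count and per-sensor bitrates are illustrative assumptions rather than figures for any specific camera:

```python
# Rough check of the 50-200 Mbit/s claim above. The per-sensor bitrates and
# sensor count are hypothetical, chosen only to illustrate the arithmetic.
SENSOR_BITRATE_MBPS = {"low": 8, "high": 33}  # assumed H.264-class rates
NUM_SENSORS = 6                               # assumed 6-lens rig

low = NUM_SENSORS * SENSOR_BITRATE_MBPS["low"]    # 48 Mbit/s
high = NUM_SENSORS * SENSOR_BITRATE_MBPS["high"]  # 198 Mbit/s
print(f"aggregate stream: {low}-{high} Mbit/s")
```

Under assumptions like these, a six-sensor rig lands in the 50-200 Mbit/s range stated above, typically well beyond a cellular uplink.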
The present invention is aimed at one or more of the problems identified above.