Digital imagery has been shown to contain a great deal of useful information that can be utilized in many disparate applications. The challenge has been to provide a way to deliver this wealth of digital image information to an end user in an effective, efficient, and practical manner; one that meets the user's needs and objectives with as little excess data as possible. Digital imagery, both still and motion, includes a great deal of information, since each picture element (or pixel) comprising the image must eventually be individually represented to the end user through the display device as a color, and as many as 16 million possible colors (or more) are required for a true-to-life, full color image. The volume of information required to digitally represent a true-to-life image can become staggeringly large; so large that manipulation of such images on digital devices, such as computers, can become impractical. Transmission of these images between devices becomes even more problematic given the generally limited bandwidth of common transmission methods. Aggravating this issue is the fact that in many cases only a part of the image, sometimes a very small part, contains information that is of interest and value to the end user. Much of the source image is unneeded and/or distracting, taking up valuable transmission bandwidth and complicating the user's interaction with and analysis of the image and of the information held within. Digital image delivery and exploitation is, therefore, hindered by the sheer bulk of the information involved.
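The scale of the problem can be illustrated with simple arithmetic. The sketch below assumes the common 24-bit "true color" representation (8 bits each for red, green, and blue, yielding roughly 16.7 million possible colors per pixel); the function name and frame dimensions are illustrative only.

```python
# Back-of-the-envelope arithmetic for uncompressed image data volume.
# Assumes 24-bit "true color" pixels: 8 bits each for red, green, blue,
# giving 2**24 (about 16.7 million) possible colors per pixel.

def uncompressed_bytes(width, height, bits_per_pixel=24):
    """Raw storage needed for one frame, with no compression at all."""
    return width * height * bits_per_pixel // 8

# A single 1920x1080 frame already needs over 6 MB of raw data:
frame = uncompressed_bytes(1920, 1080)
print(frame)            # 6220800

# Thirty such frames per second of video is ~186 MB/s before compression:
per_second = frame * 30
print(per_second)       # 186624000
```

Figures of this magnitude make clear why raw imagery overwhelms both storage and the bandwidth of common transmission methods.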
In most cases, these problems are addressed by reducing the size of the digital image through a process called image compression, which reduces the amount of data required to store the image. With digital video, compression of the individual digital image frames is usually coupled with some type of frame-to-frame processing and potentially a reduction in frame rate. Whatever the specific technique, the end result is to reduce the amount of data required to represent the image(s), thus enabling faster transmission and, frequently, faster processing.
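The underlying principle of any such compression can be shown with a toy example. The run-length encoder below is merely the simplest possible illustration, not representative of the sophistication of JPEG or MPEG codecs; the idea it shares with them is that redundancy in the image allows fewer bytes to represent the same picture.

```python
# A toy run-length encoder: the simplest illustration of how compression
# shrinks the data needed to represent an image. Real codecs (JPEG, MPEG)
# are far more sophisticated, but the principle is the same: exploit
# redundancy so that fewer symbols represent the same picture.

def rle_encode(pixels):
    """Encode a flat pixel sequence as [value, run_length] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    """Reverse the encoding, recovering the original sequence exactly."""
    return [p for p, n in runs for _ in range(n)]

# A scanline with large flat regions compresses dramatically:
scanline = [255] * 90 + [0] * 10
encoded = rle_encode(scanline)
assert rle_decode(encoded) == scanline   # lossless round trip
print(len(scanline), len(encoded))       # 100 2
```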
Many image and video compression techniques are commonly found in the art, such as the Joint Photographic Experts Group (JPEG) ISO standard 10918 for still images and the Moving Picture Experts Group (MPEG) standards for motion video images, which reduce image data size to facilitate transmission through limited bandwidth connections. Using such compression techniques, digital still and motion video can be manipulated, transmitted across networks, and displayed to an end user in a usable and somewhat effective manner.
However, many limitations and constraints can be found in such systems. Imagery that is delivered to the user across a given connection is limited to a single resolution at a time, although additional requests from the user might cause a new transmission at some new resolution. Storage formats limit each image file to storing only a single resolution, though multiple files may be used to store multiple resolutions of the same source image. Images are transmitted and stored in their entirety, regardless of whether the user or application requesting the image needs the entire image or only a limited region of interest. User interaction with the image information stream is limited to selecting which images are to be transmitted. Even when an image is displayed to the user and zoomed in on a subset area of the same image, the entire image must be transmitted to the user.
On a fundamental level, all such compression methods address only one of the problems found in digital imagery—the size of the images themselves. Although such reductions do greatly alleviate the difficulties of dealing with digital imagery, they offer no way to extract from the imagery only the portions that the user ultimately wants to see. Compression, by itself, will result in a smaller representation of essentially the same data set, and not extraction of the useful regions of interest from the data set.
Prior art does exist that addresses some of these issues. In U.S. Pat. No. 5,768,535 issued Jun. 16, 1998 to Navin Chaddha, et al., and titled “Software-Based Encoder For A Software-Implemented End-To-End Scalable Video Delivery System,” a system is presented that provides a continuous stream of image data from a server (encoder) to clients (decoders), allowing each client side decoder to extract only the resolution(s) applicable to the user. The server in this case streams all resolutions to the clients and allows the clients to process only the desired resolution(s). Within the server's data stream, clients drop packets in response to limited transmission bandwidth. No provision is made for selective transmission from the server to the clients of only a region of interest from the source imagery; nor is there any method described by which a client can specify and request high resolution still frames of individual video frames. The client/server model in this system provides little interaction between the user and the server (outside of initiating and regulating the transmission stream) since the client side decoder is the component that extracts the desired data from the server stream.
U.S. Pat. No. 6,281,874 issued Aug. 28, 2001 to Zohar Sivan, et al., and titled “Method And System For Downloading Graphic Images On The Internet” describes a system in which low resolution still images are first transmitted to the client, who then selects a region of interest on the image; that region is then sent to the client as a high resolution image. This system reduces network bandwidth demands and delivers high resolution information for the area of interest. However, it deals exclusively with still images and does not utilize any progressive resolution display. Furthermore, no image analysis is described for the discovery of image components that might aid in the identification of regions of interest, nor is any client control over the delivered resolution(s) described.
Some of the limitations are inherent to the particular image compression and storage formats. A relatively new image format put forth by the Joint Photographic Experts Group called JPEG 2000 addresses many of these storage and transmission limitations. JPEG 2000 images are stored in a wavelet compressed representation that includes multiple resolutions within a single data stream, allows for extraction of only a specified region of the image, and allows for the embedding of non-image information. The multiple resolutions are represented by a data code stream in which the data for a base, low resolution image can be streamed first, followed by data that enhances the base resolution image to increasingly higher levels of resolution. This data code stream can also be trimmed to include only the data representing a region of interest comprised of a subset of the entire image. An additional JPEG standard, the JPEG 2000 Internet Protocol (JPIP), couples the JPEG 2000 standard with a transmission protocol that permits the JPEG 2000 capabilities to be experienced across a standard transmission protocol.
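The coarsest-first ordering of a JPEG 2000 code stream can be sketched as a resolution pyramid. The code below is an illustration only: the actual standard stores wavelet coefficients and enhancement (difference) data rather than whole downsampled images, but the ordering principle, in which a decoder can stop after any layer and still display a complete lower-resolution picture, is the same.

```python
# Sketch of a progressive, multi-resolution code stream in the spirit of
# JPEG 2000. This is an illustration, not the actual wavelet codec: the
# real format streams a base layer followed by enhancement data, whereas
# this sketch simply orders a downsampled pyramid coarsest-first.

def downsample(img):
    """Halve resolution by averaging 2x2 pixel blocks."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) // 4
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def build_stream(img, levels):
    """Order the resolution pyramid coarsest-first, as a progressive stream would."""
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    return list(reversed(pyramid))   # base (lowest) resolution first

img = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
stream = build_stream(img, levels=2)
# Layer 0 is a 2x2 thumbnail; the final layer is the full 8x8 image.
print(len(stream[0]), len(stream[-1]))   # 2 8
```

A decoder receiving such a stream can render the thumbnail immediately and refine it as later layers arrive; trimming the stream to a region of interest amounts to sending only the data covering that region at each level.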
Although relatively new, JPEG 2000 technologies and standards can be commonly found in the art. Said technologies do offer the JPEG 2000 advantages of multiple, selectable resolutions and selectable regions of interest. In U.S. Patent Publication No. 2002/0051583 A1, by Craig Matthew Brown, et al., published May 2, 2002, and titled “Image Transfer Optimisation,” a system is described in which multiple resolutions of wavelet based (i.e. JPEG 2000) images are transmitted to a user in an optimized manner. Furthermore, the system provides for the specification of a region of interest and the selective transmission of only that region's information to the user. However, only still frame images are described, both as the low resolution “thumbnail” initial representation, and as the higher resolution delivered image. No provision is made for utilizing video representations, at any resolution or of any portion of the source images, in the presentation of the image data or selection of the region of interest. Additionally, no provisions are described for region of interest selection based upon automated feature recognition or other image analysis methods.
In the World Intellectual Property Organization international publication number WO 00/65838 by Tony Richard King, et al., published Nov. 2, 2000 titled “Conversion Of A Media File Into A Scalable Format For Progressive Transmission,” a system is described wherein multiple source image formats are converted to a universal layered file format, supporting incremental resolution transmission, as well as selection and transmission of regions of interest. Although motion video formats such as MPEG are addressed, the focus of the system is towards the creation of a generalized bit stream transmission that can be decoded by generalized tools regardless of the original image format. This system does not utilize low resolution motion video imagery in the selection of higher resolution motion and still imagery, nor does it specify a method of delivering high resolution still images based upon selections made against low resolution motion video.
In the World Intellectual Property Organization international publication number WO 00/49571 by Meng Wang, et al., published Aug. 24, 2000, and titled “Method And System Of Region-Based Image Coding With Dynamic Streaming Of Code Blocks,” methods are presented for encoding JPEG 2000 images such that regions of interest can be efficiently specified and transmitted. The methods apply to the creation and storage of JPEG 2000 images and as such are focused on only a small part of the present invention. No provisions are made for the selection of regions of interest and/or resolutions based upon the content of low resolution motion video.
All such prior art focuses on the presentation of existing, static imagery to multiple users in an efficient or bandwidth-optimized fashion. Although the amount of binary information that is transmitted is reduced, no attempt is made to reduce the amount of visual information to which the user is exposed and which the user must analyze. Region of interest and image resolution selections are limited to a few predetermined ranges, mostly as a reflection of the static nature of the source imagery. In the prior art, the user interaction with the imagery is minimal at best and the user is forced into limited image choices.
Real users typically utilize digital imagery in the context of some objective—some condition the user wishes to detect or monitor. Objectives generally are the visual observation of some scene, detection of specific changes and/or objects in that scene, visual analysis of the parts of the scene that contain the changes and/or objects, and the arrival at some decision or determination regarding those changes and/or objects in particular and regarding the scene in general. Only portions of the complete imagery will typically contain information that is useful within the context of a given objective, and resolutions less than the source's true resolution will typically be sufficient to achieve such objectives. As a user visually analyzes a scene, the user might want to obtain a higher resolution view of part of an image for further analysis, deciding at times that an even higher resolution view of part of the image is required, while at other times being able to reach the objective concerning the scene with the already delivered imagery. By interacting with the delivered imagery, a user would be able to select areas upon which to focus attention, further refine those areas, make decisions or determinations, and then move to other parts of the image holding information of interest.
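The iterative "zoom and refine" workflow described above can be sketched as a simple loop: the user repeatedly narrows a region of interest and requests more resolution until the objective is met. All names, region tuples, and stopping conditions below are illustrative assumptions, not drawn from any particular system.

```python
# Sketch of an iterative region-of-interest refinement loop: request a
# sub-region at progressively higher resolution until the user's
# objective is satisfied. Regions are hypothetical (x, y, width, height)
# tuples; resolution is an abstract level, doubling with each request.

def refine(region, resolution, objective_met, select_subregion, max_resolution):
    """Record the sequence of (region, resolution) requests a user makes."""
    requests = [(region, resolution)]
    while not objective_met(region, resolution) and resolution < max_resolution:
        resolution *= 2                      # ask for the next resolution level
        region = select_subregion(region)    # user narrows the area of interest
        requests.append((region, resolution))
    return requests

# Example: the user halves the region each step and is satisfied once the
# delivered resolution reaches level 8.
halve = lambda r: (r[0], r[1], r[2] // 2, r[3] // 2)
done = lambda region, res: res >= 8
steps = refine((0, 0, 1024, 1024), 1, done, halve, max_resolution=16)
print(steps[-1])   # ((0, 0, 128, 128), 8)
```

Each iteration transmits only the narrowed region at the newly requested resolution, which is precisely the interaction pattern the prior art discussed above does not support.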
No prior art allows a user to get just enough information in the form of digital imagery for the user to satisfy such objectives. No prior art allows a user to perform a repeated, iterative selection of image areas and resolutions in search of just enough information to make a decision. No prior art allows for imagery to be analyzed before being sent so that only imagery containing information of interest to the user need be transmitted. These capabilities remain unavailable to users today. Furthermore, no prior system provides for the delivery to an end user of high resolution JPEG 2000 images of a region selected from a video sequence, at selected resolutions.