High resolution images are beneficial for showing detailed views of significant landmarks and structures. Resolution in this case refers to the ground distance represented by a single screen pixel. The viewable resolution depends upon the overall ground distance shown on the screen. In other words, if the viewable width of the image displayed on the screen is 3000 miles, the resolution would be about 2000 meters; at 500 miles, the resolution improves to about 300 meters; and so on.
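The relationship between viewable width and resolution can be sketched as follows. This is an illustration only: the 2400-pixel screen width is an assumption chosen so the results approximate the figures above, not a value taken from the source.

```python
# Illustrative only: resolution (meters per pixel) as a function of the
# ground width shown on screen. The screen width in pixels is an assumption.

METERS_PER_MILE = 1609.344
SCREEN_WIDTH_PX = 2400  # assumed display width in pixels (hypothetical)

def resolution_m_per_px(viewable_width_miles: float) -> float:
    """Ground distance represented by one screen pixel."""
    return viewable_width_miles * METERS_PER_MILE / SCREEN_WIDTH_PX

print(round(resolution_m_per_px(3000)))  # about 2012 meters per pixel
print(round(resolution_m_per_px(500)))   # about 335 meters per pixel
```

Under these assumptions the resolution scales linearly with the viewable width, which is consistent with the order-of-magnitude figures given above.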
A shortcoming in the current art is that such high resolution images, in digital form, require a large amount of computer-readable memory. An image is typically comprised of a plurality of “tiles,” each of which includes an image file (e.g., bmp, jpeg, or the like) of a portion of the image and a meta-data file which stores identifying information about the tile, such as geographic coordinates and a sequence number defining its place in the overall image. As resolution increases, so does the number of tiles required to display an image. For example, a 2000 meter resolution image may comprise twenty to thirty tiles, a 30 meter resolution image may consist of thousands of tiles, and a high resolution image may consist of hundreds of thousands of tiles. Since each tile comprises an image file of about 256×256 pixels, a large memory capacity is required to store and display high resolution images.
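The growth in tile count can be made concrete with a rough estimate. The figures below are illustrative assumptions (256×256-pixel tiles at 3 bytes per pixel over a hypothetical square area), not values from the source; the point is that tile count grows roughly quadratically as resolution improves.

```python
import math

# Illustrative assumptions, not from the source: 256x256-pixel tiles at
# 3 bytes per pixel (uncompressed 24-bit color), covering a square area.

TILE_SIDE_PX = 256
BYTES_PER_PIXEL = 3  # assumed uncompressed 24-bit color

def tiles_needed(width_m: float, height_m: float,
                 resolution_m_per_px: float) -> int:
    """Number of 256x256 tiles needed to cover an area at a given resolution."""
    px_w = width_m / resolution_m_per_px
    px_h = height_m / resolution_m_per_px
    return math.ceil(px_w / TILE_SIDE_PX) * math.ceil(px_h / TILE_SIDE_PX)

def uncompressed_bytes(num_tiles: int) -> int:
    """Raw storage for the tiles' pixel data, ignoring file-format overhead."""
    return num_tiles * TILE_SIDE_PX * TILE_SIDE_PX * BYTES_PER_PIXEL

# Same hypothetical area (a 2560 km square) at two resolutions:
print(tiles_needed(2_560_000, 2_560_000, 2000))  # 25 tiles
print(tiles_needed(2_560_000, 2_560_000, 30))    # 111,556 tiles
```

Improving the resolution by a factor of N multiplies the tile count, and hence the uncompressed storage, by roughly N², which is why high resolution imagery quickly outgrows available memory.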
A typical one meter resolution image used in weather displays covers about a 60 to 100 square mile area and represents an uncompressed file size of about five gigabytes of storage space. This is roughly the limit of the area capable of being accessed and displayed, because current display techniques often require reading the entire image before displaying any of it. If a different location is to be viewed, another five gigabyte file must be read.
Television stations typically serve a viewing area covering hundreds or thousands of square miles and either cannot store a significant number of files of high resolution images covering their entire viewing area, or cannot quickly display selected areas of images within their viewing areas to show relevant weather events. When an image is accessed for display, the tiles comprising the image are loaded sequentially according to the sequence number in the meta-data file. Since weather events occur over a large area, a plurality of threats may be imminent at any given time over the entire viewing area, but loading and displaying the high resolution images of the various points would take a significant amount of crucial time. Moreover, weather events advance over ground in a manner that is likely inconsistent with the sequencing scheme of the tiles. Therefore, to track weather events across multiple non-sequentially related tiles, the system must remember the position of a first tile and then calculate its relationship to the next tile desired to be within the view, which is a cumbersome technique. Finally, when panning across an area, whole tiles must be dropped from view and new tiles added, again, sequentially. If the panning does not follow the tile sequence scheme, the access and loading time is lengthy.
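The sequential-loading limitation can be sketched in miniature. All names and structures below are hypothetical illustrations, not from the source: a loader keyed only by sequence number must walk tiles in order, whereas indexing the same meta-data by grid coordinates lets a viewport fetch exactly the tiles it needs, in any order.

```python
# Hypothetical sketch of the tile meta-data described above: each tile
# carries an image reference plus grid coordinates and a sequence number.
from dataclasses import dataclass

@dataclass(frozen=True)
class TileMeta:
    seq: int         # sequence number defining the tile's place in the image
    row: int         # grid row within the overall image
    col: int         # grid column within the overall image
    image_path: str  # e.g. a .bmp or .jpeg file for this 256x256 portion

def load_sequentially(tiles, upto_seq):
    """Current-art style: tiles become available only in sequence order."""
    return [t for t in sorted(tiles, key=lambda t: t.seq) if t.seq <= upto_seq]

def tiles_for_viewport(tiles, row0, col0, rows, cols):
    """Coordinate-keyed access: fetch only the tiles intersecting the
    viewport, regardless of sequence number (what panning would prefer)."""
    index = {(t.row, t.col): t for t in tiles}
    return [index[(r, c)]
            for r in range(row0, row0 + rows)
            for c in range(col0, col0 + cols)
            if (r, c) in index]

# Example: a 4x4 tiled image stored in row-major sequence order.
tiles = [TileMeta(seq=r * 4 + c, row=r, col=c, image_path=f"tile_{r}_{c}.bmp")
         for r in range(4) for c in range(4)]

# A 2x2 viewport at grid position (1, 1) needs tiles with seq 5, 6, 9, 10 --
# not a contiguous run, so sequence-order loading wastes time on 0-4, 7, 8.
view = tiles_for_viewport(tiles, 1, 1, 2, 2)
print(sorted(t.seq for t in view))  # [5, 6, 9, 10]
```

As the example shows, even a small pan touches tiles whose sequence numbers are non-contiguous, so a purely sequential loader must read and discard intervening tiles before the desired view is complete.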
Because of this limitation, the current art may access and display an image affected by a weather event occurring within the area represented by the image, but it fails to allow display, without significant processing time, of an image covering an area beyond the scope of the first image. Consequently, typical commercial weather information providers, such as television stations, only display high resolution images of densely populated areas. However, weather systems, for example severe storms, often occur over larger geographic areas.