1. Field of the Invention
The present invention relates to methods and systems for monitoring, decoding, transmitting, and archiving closed caption text from television broadcasts. More particularly, the present invention relates to methods and systems for the automatic collection and conditioning of closed caption text originating from multiple geographic locations, and the databases produced thereby.
2. Description of the Prior Art
In the United States, television stations currently create more than 12,000 hours of local news programming every week. Network and cable news organizations broadcast an additional 1,400+ hours. Because every newscast contains references to specific persons, organizations, and events, an entire industry has grown up to monitor newscast content on behalf of newsmakers. The traditional monitoring approach required workers to videotape, view, and summarize the content of TV newscasts. Using such a traditional method, however, it is very difficult to monitor every newscast on every channel on a timely basis. Thus, a need exists for newsmakers and other interested parties to have comprehensive, cost-effective, real-time access to a database of newscast content.
Closed captioning, which is mandated by the Federal Government for most television programs, is a textual representation of the audio portion of a television program. Originally devised as a means for making program dialogue accessible to the deaf and hearing impaired, closed captioning is now often displayed for the convenience of hearing persons in environments where television audio is not practical, such as noisy restaurants and airport kiosks. Closed captioning is encoded into the vertical blanking interval (VBI), which is part of the video component of a conventional television signal. In the United States, line 21 of the VBI is reserved for carrying closed captioning.
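As an illustrative sketch (not drawn from any particular patent or product), line 21 of the VBI carries closed captions as pairs of bytes, each holding seven data bits plus an odd-parity bit, per the EIA-608 format. A decoder might validate and unpack one byte pair as follows; the function name and return conventions are assumptions for illustration only:

```python
def decode_cc_byte_pair(b1: int, b2: int):
    """Decode one EIA-608 closed-caption byte pair from VBI line 21.

    Each byte carries 7 data bits; bit 7 is an odd-parity bit.
    Returns decoded text, "" for null (filler) bytes, or None for
    parity errors and non-printing control codes.
    (Illustrative sketch; names and conventions are assumptions.)
    """
    def odd_parity_ok(b: int) -> bool:
        # A valid byte has an odd number of 1 bits across all 8 bits.
        return bin(b & 0xFF).count("1") % 2 == 1

    if not (odd_parity_ok(b1) and odd_parity_ok(b2)):
        return None  # transmission error: drop the pair

    d1, d2 = b1 & 0x7F, b2 & 0x7F  # strip the parity bits
    if d1 == 0x00:
        return ""      # null (filler) pair between caption data
    if d1 < 0x20:
        return None    # control code (caption position, color, etc.)
    # Printable characters; the second byte may itself be a filler.
    return chr(d1) + (chr(d2) if d2 >= 0x20 else "")
```

For example, the letter "H" (0x48) has even parity over its seven data bits, so it is transmitted with the parity bit set, as 0xC8; a byte arriving as plain 0x48 would be rejected as a parity error.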
One approach to monitoring television broadcasts by using closed caption text is disclosed in U.S. Pat. No. 5,481,296, issued Jan. 2, 1996, to Cragun et al., and titled APPARATUS AND METHOD FOR SELECTIVELY VIEWING VIDEO INFORMATION. The Cragun et al. system provides a closed caption decoder that extracts the closed caption text from a television broadcast. A viewer specifies one or more keywords to be used as search parameters and a digital processor executing a control program scans the closed caption text for words or phrases matching the search parameters. The corresponding complete video recording of the television broadcast may then be displayed, edited, or saved. In one mode of operation, the Cragun et al. system may be used to scan one or more television channels unattended and save items that may be of interest to the viewer. In another mode of operation, the Cragun et al. system may be used to assist in quickly locating previously stored video recordings. One clear disadvantage of the Cragun et al. system is that extremely large amounts of memory are required to store the video segments.
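The keyword-scanning step of the kind described above might be implemented along the following lines. This is a minimal sketch, not code from the Cragun et al. patent; the function name, data structures, and matching policy (whole-word, case-insensitive) are assumptions for illustration:

```python
import re

def scan_captions(caption_lines, keywords):
    """Scan decoded closed caption text for viewer-specified keywords.

    Returns a list of (line_index, keyword, caption_line) hits, which
    a monitoring system could use to flag the corresponding video
    segment for saving, editing, or later retrieval.
    (Illustrative sketch; not the patented implementation.)
    """
    # Compile one whole-word, case-insensitive pattern per keyword.
    patterns = {kw: re.compile(r"\b" + re.escape(kw) + r"\b", re.IGNORECASE)
                for kw in keywords}
    hits = []
    for i, line in enumerate(caption_lines):
        for kw, pat in patterns.items():
            if pat.search(line):
                hits.append((i, kw, line))
    return hits
```

A keyword hit identifies only the matching caption line; a system like Cragun et al.'s must still store the full video segment associated with that line, which is the source of the large memory requirement noted above.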
Another approach to monitoring television broadcasts by using closed caption text is disclosed in U.S. Pat. No. 5,809,471, issued Sep. 15, 1998, to Brodsky et al., and titled RETRIEVAL OF ADDITIONAL INFORMATION NOT FOUND IN INTERACTIVE TV OR TELEPHONY SIGNAL BY APPLICATION USING DYNAMICALLY EXTRACTED VOCABULARY. Significant limitations of the Brodsky et al. system are that it lacks server-based features and monitors closed caption data from only a single geographic site, as opposed to the broadly dispersed geographic sites of the present application. As such, the present design has features and benefits that are not found in the Brodsky et al. design.