Video streams, such as television broadcasts or video streamed over a network, often contain embedded text data along with video display data. The embedded text data, which is usually transmitted during vertical blanking intervals, can include news, sports information, weather information, or subtitles based on the dialog of the video program. As a result of the wealth of information provided by the embedded text data, a number of software and hardware applications have been developed to process and/or analyze the embedded text. For example, applications have been developed that search Closed Captioning text for keywords and then generate a transcript based on the text surrounding the keyword. Other applications have been developed to display subtitle text in a separate window so as not to interfere with the display of the video. Additionally, many displays, such as televisions, can display subtitle text in conjunction with the video display.
However, the functionality of these displays and applications is limited due to the variety of formats of the embedded text. The two most widely used formats are the Teletext format and the EIA-608, or Closed Captioning, format. These two formats are generally incompatible as a result of the difference in the location(s) of the text data during the vertical blanking interval, the difference in the number of characters per subtitle line, and/or the difference in data/character transmission rate. This incompatibility between the Teletext and Closed Captioning formats renders applications and displays developed for one format useless when presented with text data in the other format. For example, televisions designed to process embedded text according to a Closed Captioning format are generally incapable of handling video streams having embedded text with a Teletext format, and vice versa. Likewise, Teletext-enabled video broadcasts often cannot be analyzed since applications to search Teletext data for keywords in the subtitles have not yet been developed. Accordingly, the embedded text must either go unutilized, or the video stream must carry embedded text in both formats, a process that is practically impossible as the two specifications define the use of the vertical blanking interval (VBI) data in different and generally incompatible ways.
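The format differences described above can be made concrete with a short sketch. The following is an illustrative Python fragment, not part of any specification: the VBI line numbers, row widths, and data rates are simplified approximations of the published EIA-608 and ETS 300 706 (Teletext) parameters, and the `guess_format` helper is a hypothetical classifier used only to show why the two formats are hard to reconcile.

```python
# Illustrative sketch only. The parameters below are simplified from the
# EIA-608 (NTSC Closed Captioning) and ETS 300 706 (Teletext) specifications
# and are assumptions for demonstration, not authoritative values.

from dataclasses import dataclass


@dataclass(frozen=True)
class CaptionFormat:
    name: str
    vbi_lines: frozenset   # VBI scan lines that may carry the text data
    chars_per_row: int     # maximum characters per subtitle row
    bit_rate_bps: int      # approximate payload data rate, bits per second


CLOSED_CAPTIONING = CaptionFormat(
    name="EIA-608 Closed Captioning",
    vbi_lines=frozenset({21}),          # line 21 of each field (NTSC)
    chars_per_row=32,
    bit_rate_bps=960,                   # ~2 bytes per field x ~60 fields/s
)

TELETEXT = CaptionFormat(
    name="Teletext (ETS 300 706)",
    vbi_lines=frozenset(range(7, 23)),  # lines 7-22 of each field (PAL)
    chars_per_row=40,
    bit_rate_bps=6_937_500,             # 6.9375 Mbit/s line data rate
)


def guess_format(active_vbi_line: int) -> str:
    """Classify embedded text by which VBI line carries it.

    Note that the line ranges of the two standards overlap (line 21 is
    legal in both), so the check order below is arbitrary; in practice
    the video standard (NTSC vs. PAL) disambiguates.
    """
    if active_vbi_line in CLOSED_CAPTIONING.vbi_lines:
        return CLOSED_CAPTIONING.name
    if active_vbi_line in TELETEXT.vbi_lines:
        return TELETEXT.name
    return "unknown"
```

As the overlapping `vbi_lines` sets and the widely differing `bit_rate_bps` values suggest, a decoder built around one set of parameters cannot simply be pointed at the other format, which is the incompatibility the present discussion addresses.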
Given these limitations, it is apparent that a system and/or method addressing these shortcomings of the prior art would be advantageous.