For years, providers of video entertainment have included closed captioning information in order to provide a satisfying viewing experience for viewers with hearing difficulties or for those watching video in noisy environments. Existing methods of including closed captioning information in analog television, such as the EIA-608 standard, take advantage of portions of the analog NTSC broadcast signal that are not displayed in order to include ASCII-encoded characters with each broadcast frame. The EIA-608 standard is also known as the “Line 21” standard, so named because closed captioning information is carried in line 21 of the vertical blanking interval.
While EIA-608 provides one form of closed captioning, the technique can encode only two bytes of data per frame, and thus offers little control over the characters in a caption or their placement. Additionally, it is necessarily constrained by the NTSC broadcast television standard. As such, EIA-608 is tied to the rate at which NTSC broadcasts, namely 59.94 interlaced video fields (29.97 frames) per second.
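As an illustrative calculation, the two bytes per frame at the NTSC frame rate stated above imply a caption channel of roughly 60 bytes per second. A brief sketch (the variable names are our own; the rates follow from the NTSC figures above):

```python
from fractions import Fraction

# NTSC field rate is 60000/1001 ≈ 59.94 fields/sec; two interlaced
# fields make one frame, so the frame rate is 30000/1001 ≈ 29.97 fps.
FIELD_RATE = Fraction(60000, 1001)
FRAME_RATE = FIELD_RATE / 2

# EIA-608 carries exactly two bytes of caption data per frame.
BYTES_PER_FRAME = 2

capacity = BYTES_PER_FRAME * FRAME_RATE  # caption bytes per second
print(float(capacity))                   # ≈ 59.94 bytes/sec
```

Exact rational arithmetic is used here because the NTSC rate is not a whole number, so rounding errors would otherwise accumulate over long durations.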
With the advent of digital video technologies, the vertical blanking interval was no longer available, and a new standard was therefore required. One solution for closed captioning in digital environments is found in the EIA-708 standard, which encapsulates data encoded according to EIA-608, allowing this NTSC-based data to be tunneled in the user data of a digital video signal.
However, although systems exist to insert EIA-608 data into digital video, this standard is, at its heart, based on NTSC parameters. Increased complexity therefore arises when captions must be added to video that differs from the NTSC standard in one or more aspects. Examples of such aspects include, but are not limited to, the frame rate, the use of interlaced versus progressive video, the use of 3:2 pulldown to convert content shot at 24 frames/sec to the regular NTSC frame rate of 29.97 frames/sec, and the manner in which frames are encoded. Thus, for example, a video sample that runs at a higher frame rate will need additional bytes of closed captioning data, in order that sufficient data exists to allow a caption to be displayed for a given amount of time. What is needed is a system for examining and accounting for these aspects and creating closed captioning that is properly configured for the digital video to which it is being added.
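The frame-rate aspect described above can be sketched concretely. Assuming the goal is to preserve EIA-608's effective caption data rate (two bytes per frame at 30000/1001 fps) while targeting an arbitrary frame rate, a Bresenham-style accumulator can assign each frame a whole number of caption bytes without drifting from that rate. The function below is an illustrative sketch under that assumption, not a procedure taken from either standard:

```python
from fractions import Fraction

def allocate_caption_bytes(caption_rate, frame_rate, n_frames):
    """Distribute caption_rate (bytes/sec) over n_frames of video at
    frame_rate (frames/sec), giving each frame a whole number of bytes
    while keeping the cumulative total on pace with the exact rate."""
    per_frame = Fraction(caption_rate) / Fraction(frame_rate)
    sent = 0
    allocation = []
    for i in range(1, n_frames + 1):
        due = int(per_frame * i)       # bytes owed through frame i (floored)
        allocation.append(due - sent)  # this frame carries the difference
        sent = due
    return allocation

# EIA-608's effective rate: 2 bytes/frame at 30000/1001 fps ≈ 59.94 B/s.
rate = 2 * Fraction(30000, 1001)

# At exactly the NTSC frame rate, every frame gets its familiar 2 bytes.
assert allocate_caption_bytes(rate, Fraction(30000, 1001), 5) == [2, 2, 2, 2, 2]

# Film-rate (24 fps) video needs ~2.5 bytes/frame, so the allocator
# alternates between 2 and 3 bytes to sustain the same data rate.
print(allocate_caption_bytes(rate, 24, 8))  # → [2, 2, 3, 2, 3, 2, 3, 2]
```

The same pattern generalizes to the other aspects listed above: the caption inserter examines the target video's parameters and repacks the caption payload accordingly, rather than assuming NTSC timing.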