The present invention relates to an information processing apparatus, an information processing method, and a computer program for processing information and, in particular, to an information processing apparatus, an information processing method, and a computer program appropriate for performing an editing process.
Known recording and playback apparatuses include one of a hard disk drive (HDD) and a digital versatile disk (DVD) drive to record content data to and play back content data from one of a hard disk (HD) and a DVD. The recording and playback apparatus records, onto an HD, one of television broadcast data and input data supplied from the outside. A portion of the data recorded on the hard disk, desired by a user, can be recorded onto a DVD (dubbed onto a DVD) for storage purposes. The user typically extracts a desired scene from the recorded data to be dubbed onto the DVD. To satisfy such a user requirement, the recording and playback apparatus is designed to allow only a desired scene to be extracted from the data recorded on the HD in response to an operational input from a user. The edited and generated data is then dubbed onto the DVD.
The edited data is not only dubbed onto the DVD but also stored onto a different recording area of the HD.
If the data is broadcast program data, a data title can be attached to each unit of program data. If the data is picked up by a camcorder, a data title can be attached to each unit of data input to a recording and playback apparatus. In the case of a DVD camcorder, a data title can be attached on a per DVD disk basis. The user searches a plurality of recorded data units for desired data with a title as a search key.
Widely used editing methods include a chapter method and an in and out point method. In the chapter method, data is divided into small units referred to as chapters for editing. In the in and out point method, the user clips a scene of the data by determining a start point (in point) and an end point (out point). A scene change, where the picture changes significantly, is detected as a breakpoint of a chapter or a scene. In one scene change detection method, a difference between a prior frame and a subsequent frame of the moving image is calculated, and the variance of the differences over a predetermined number of consecutive frames is calculated. Using the calculated variance, a deviation of the differences of the frames contained in the predetermined number of frames is evaluated to detect a scene change. Such a technique is disclosed in Japanese Unexamined Patent Application Publication No. 2003-299000.
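The variance-based detection described above can be sketched as follows. This is a simplified illustration only, not the method of the cited publication; the flat pixel-list frame model, the window length, and the deviation threshold `k` are all assumptions made for the sketch.

```python
# Sketch of scene change detection based on inter-frame differences.
# Assumptions: frames are flat lists of pixel values; window length and
# the deviation threshold k are illustrative, not from the cited patent.
import statistics

def frame_difference(prev, curr):
    """Sum of absolute pixel differences between two consecutive frames."""
    return sum(abs(p - c) for p, c in zip(prev, curr))

def detect_scene_changes(frames, window=10, k=3.0):
    """Flag frames whose difference deviates strongly from the recent window.

    A difference is treated as a scene change when it exceeds the mean of
    the preceding `window` differences by more than k standard deviations.
    """
    diffs = [frame_difference(a, b) for a, b in zip(frames, frames[1:])]
    changes = []
    for i in range(len(diffs)):
        recent = diffs[max(0, i - window):i]
        if len(recent) < 2:
            continue  # not enough history to estimate a deviation
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent)
        if stdev > 0 and (diffs[i] - mean) > k * stdev:
            changes.append(i + 1)  # index of the first frame after the cut
    return changes
```

In practice the difference would be computed over decoded luminance blocks rather than raw pixel lists, but the statistical test is the same.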
Japanese Patent No. 3502579 corresponding to Japanese Unexamined Patent Application Publication No. 2001-60381 discloses another technique. According to the disclosure, the user plays back a content to specify a copy position of the content recorded on a primary recording medium as editing points, such as a chapter breakpoint and in and out points. When the playback operation reaches one of a desired start point and a desired end point of a chapter, the user indicates one of a start point and an end point using a display.
Known methods of specifying the editing points are described below with reference to FIGS. 1 and 2.
FIG. 1 is a block diagram illustrating a known recording and playback apparatus 1.
A central processing unit (CPU) 11 generally controls the recording and playback apparatus 1. In response to an operational input entered via an operation input unit 12 by a user, the CPU 11 reads a predetermined application program from a read-only memory (ROM) 13, and then loads the application program to a random-access memory (RAM) 14. A flash ROM 15 stores information, which needs to be continuously stored even when power is removed, out of information required by the CPU 11 in operation.
An antenna 16 receives broadcast signals and supplies the received signals to a tuner 17. The tuner 17 selects one broadcast signal on a channel desired by the user, and supplies the selected broadcast program data to a switch 19. An input terminal 18 receives broadcast program data, such as cable television broadcast data, and supplies the received data to the switch 19. The switch 19 supplies, to an NTSC (National Television System Committee) decoder 20, one of the broadcast program data selected by the tuner 17 and the broadcast program data input via the input terminal 18. An electronic program guide (EPG) can also be supplied to the input terminal 18. The supplied EPG data is then transferred to the CPU 11 via the switch 19, the NTSC decoder 20, and an MPEG encoder 21 (data processing is performed when necessary). The EPG data can be used to schedule recording of broadcast program data.
The NTSC decoder 20 decodes the supplied signal using the NTSC system, and supplies the decoded data to the MPEG encoder 21. The MPEG encoder 21 performs an encoding process in accordance with one of MPEG standards (MPEG2, MPEG4, etc.), and then supplies encoded (compressed) data to a drive controller 22 to record the data onto one of a hard disk of a HDD 23 and a DVD loaded on a DVD drive 24.
The drive controller 22, under the control of the CPU 11, supplies the received signal to one of the HDD 23 and the DVD drive 24 to record the signal onto one of the hard disk and the DVD. The drive controller 22 also drives one of the HDD 23 and the DVD drive 24 to read data desired by the user from one of the hard disk and the DVD.
The HDD 23 drives the hard disk to record onto the hard disk the data supplied from the drive controller 22, and to read data from the hard disk and supply the read data to the drive controller 22.
The DVD drive 24 drives the DVD loaded thereon to record onto the DVD the data supplied from the drive controller 22, and to read data from the DVD and supply the read data to the drive controller 22.
An MPEG decoder 25 receives playback data from the drive controller 22, decodes the received data in accordance with one of the MPEG standards (MPEG2, MPEG4, etc.), and then supplies a resulting video signal to a video signal processor 26 and a resulting audio signal to an audio signal processor 28.
Upon receiving the video data decoded by the MPEG decoder 25, the video signal processor 26 converts the video data into an NTSC format signal or digital-to-analog converts the video data, and supplies a display controller 27 with the converted data. The display controller 27, under the control of the CPU 11, controls the displaying of the supplied video data on one of a television receiver and an external monitor.
Upon receiving the audio data decoded by the MPEG decoder 25, the audio signal processor 28 performs predetermined processes, including noise removal, amplification, and digital-to-analog conversion, on the audio data, and outputs a resulting audio signal to an audio output controller 29. The audio output controller 29, under the control of the CPU 11, controls the outputting of the audio signal through one of a television receiver and an external loudspeaker device.
A known breakpoint determination and dubbing process of the recording and playback apparatus 1 discussed with reference to FIG. 1 is described below with reference to a flowchart of FIG. 2.
In step S1, the CPU 11 determines in response to the signal supplied from the operation input unit 12 whether a dubbing start command has been received from the user. If it is determined in step S1 that no dubbing start command has been received, the process in step S1 is repeated until it is determined that a dubbing start command has been received.
If it is determined in step S1 that a dubbing start signal has been received from the user, the CPU 11 controls the drive controller 22 in step S2 to read, from one of the HDD 23 and the DVD drive 24, a content (data of an original program) to be dubbed, and supplies the read content to the MPEG decoder 25. The MPEG decoder 25 decodes the supplied data, and then supplies the video data to the video signal processor 26 and the audio data to the audio signal processor 28. The video signal processor 26 performs predetermined processes on the decoded video data, and then supplies the processed data to the display controller 27. The audio signal processor 28 performs predetermined processes on the decoded audio data, and then supplies the processed data to the audio output controller 29.
The CPU 11 determines in step S3, in response to the signal supplied from the operation input unit 12, whether a high-speed playback command (in one of a forward direction and a reverse direction) has been received from the user.
If it is determined in step S3 that a high-speed playback command has been received, the CPU 11 controls the display controller 27 and the audio output controller 29, thereby starting a high-speed playback operation in step S4.
If it is determined in step S3 that no high-speed playback command has been received from the user, the CPU 11 determines in step S5 in response to a signal supplied from the operation input unit 12 whether a frame playback command (in one of a forward direction and a reverse direction) has been received from the user.
If it is determined in step S5 that a frame playback command has been received from the user, the CPU 11 controls the display controller 27 and the audio output controller 29 in step S6, thereby starting the frame playback.
If it is determined in step S5 that no frame playback command has been received from the user, or subsequent to step S6, the CPU 11 determines in step S7 in response to a signal supplied from the operation input unit 12 whether a standard-speed playback command (in one of a forward direction and a reverse direction) has been received from the user.
If it is determined in step S7 that a standard-speed playback command has been received from the user, the CPU 11 controls the display controller 27 and the audio output controller 29 in step S8, thereby performing a playback operation at a standard speed.
If it is determined in step S7 that no standard-speed playback command has been received from the user, or subsequent to step S8, the CPU 11 determines in step S9 in response to a signal supplied from the operation input unit 12 whether an operational input specifying a breakpoint has been received from the user.
If it is determined in step S9 that an operational input specifying a breakpoint has been received from the user, the CPU 11 stores information concerning the breakpoint onto the RAM 14 in step S10.
If it is determined in step S9 that no operational input specifying the breakpoint has been received from the user, or subsequent to step S10, the CPU 11 determines in step S11 whether a content to be dubbed has been played back to the end thereof. If it is determined in step S11 that the content has not been played back to the end thereof, processing returns to step S3 to repeat step S3 and subsequent steps.
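The loop of steps S3 through S11 can be sketched as a simple command dispatch loop. The command names, the frame-based content model, and the one-frame-per-iteration advance are illustrative assumptions, not part of the apparatus described above.

```python
def breakpoint_determination(commands, content_length):
    """Sketch of the loop of steps S3-S11: play back the content while
    dispatching user commands until the end of the content is reached.

    commands -- mapping from frame position to a command string (the
                command vocabulary is an assumption for this sketch)
    Returns the list of breakpoints stored per step S10.
    """
    speed = 1.0
    breakpoints = []
    position = 0
    while position < content_length:      # step S11: end of content?
        command = commands.get(position)
        if command == "high_speed":       # steps S3-S4
            speed = 8.0
        elif command == "frame":          # steps S5-S6
            speed = 0.2
        elif command == "standard":       # steps S7-S8
            speed = 1.0
        elif command == "mark":           # steps S9-S10
            breakpoints.append(position)
        position += 1  # advancement simplified to one frame per iteration
    return breakpoints
```

In the real apparatus the tracked speed would drive the display controller 27 and audio output controller 29; here it is merely recorded to mirror the flowchart's branches.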
If it is determined in step S11 that the content to be dubbed has been played back to the end thereof, the CPU 11 performs in step S12 a dubbing process based on the information concerning the breakpoints recorded on the RAM 14. More specifically, a content recorded on one of the hard disk in the HDD 23 and the DVD loaded on the DVD drive 24 is partitioned based on the information relating to the breakpoints recorded on the RAM 14. Only a portion of the content desired by the user is recorded on a different recording area of one of the hard disk in the HDD 23 and the DVD loaded on the DVD drive 24. The dubbing process is thus completed.
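The dubbing process of step S12 amounts to partitioning the content at the stored breakpoints and copying only the segments the user selected. A minimal sketch follows; the frame-sequence content model and the `keep` selection list are assumptions for illustration.

```python
def dub_selected_segments(content, breakpoints, keep):
    """Partition content at the breakpoints and copy the kept segments.

    content     -- the recorded data, modeled here as a sequence of frames
    breakpoints -- sorted frame indices stored in step S10
    keep        -- indices of the resulting segments the user wants dubbed
    """
    bounds = [0] + sorted(breakpoints) + [len(content)]
    segments = [content[bounds[i]:bounds[i + 1]]
                for i in range(len(bounds) - 1)]
    dubbed = []
    for i in keep:
        dubbed.extend(segments[i])  # copy only the desired portions
    return dubbed
```

For example, two breakpoints partition the content into three segments, and keeping the first and last segments drops the middle one from the dubbed copy.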
The user plays back at a desired speed the content to be dubbed or rewinds the content. The user thus determines the breakpoint (namely, in and out points and a chapter breakpoint) to partition the content recorded on the hard disk in the HDD 23 into a portion desired to be dubbed and a portion not to be dubbed.
In each of the chapter method and the in and out point method, the user looks for the edit breakpoint while playing back the content. More specifically, in the editing operation to search for the breakpoint, the user plays back the content at a high speed where a breakpoint is unlikely to be present. While monitoring the video, the user reduces the playback speed, performs a frame playback operation, or suspends the playback operation as the playback position comes close to a breakpoint. The user thus checks the location of the breakpoint, and may rewind the content slightly after passing the breakpoint for confirmation. The breakpoint is verified in this way. In the known editing operation, operational inputs become complex, and the user can miss the timing for modifying the playback speed. The point to be set as a breakpoint can be passed at a high speed, and an unnecessary portion can be dubbed.
To facilitate the editing process, a scene change can be detected and used as a breakpoint. However, the detected position of the scene change does not always match the breakpoint desired by the user. The user thus needs to enter complex operational inputs to play back the content in the vicinity of a detected scene change point of the content to be dubbed. The user needs to verify and modify the breakpoint as necessary.
In the chapter method, a breakpoint is likely to be present near the front of a chapter to be specified as a range of dubbing. When the front section is played back, the user slows the playback speed or performs a frame playback. The user enters an operational input to rewind the content slightly as necessary after the playback of the breakpoint (namely, after verifying the breakpoint), and then fixes the breakpoint. More specifically, in the editing process to search for a breakpoint based on the chapter, the user starts playing back from the front of the chapter to be dubbed, returns to a preceding chapter to determine whether the breakpoint is correct, and then continuously plays back to the selected chapter. If the user desires to modify the breakpoint, the content is rewound to the location of the desired point. While performing the frame playback, the user checks the modified breakpoint and then sets the breakpoint. If the user enters the operational inputs to play back the content in the vicinity of the breakpoint in the chapter method, the operational inputs become complex. It is likely that the user overlooks another breakpoint in an area other than the front section of the chapter.
As described above, a data title can be attached to each unit of broadcast program data, to each unit of data input from a camcorder to a recording and playback apparatus, or, in the case of a DVD camcorder, on a per DVD disk basis. If the user desires to attach a plurality of titles, one to each scene, the content needs to be partitioned into data units.
For example, in the data recorded on a DVD with a DVD camcorder, data codes such as the date of photographing, photographing actions, and camera settings are recorded together with the video. In the process previously discussed with reference to FIG. 2, a DVD having the video recorded thereon with the DVD camcorder is loaded onto the DVD drive 24 to record the content onto the hard disk in the HDD 23. The date of photographing and the photographing actions (including a video capturing recording position) may serve as a search key. These parameters and newly set breakpoints can be specified as a playback start point, but cannot be individually annotated with titles.
It is thus desirable to simplify the editing process and reduce user operational input time by predetermining a breakpoint candidate and automatically slowing the playback speed in the vicinity of the breakpoint candidate.