A camera-integrated recorder (hereinafter referred to as a "camcorder" as appropriate) or a recorder that encodes and records a high-resolution image signal together with a low-resolution image signal has been in common use. The low-resolution encoded data is called a proxy, in contrast to the high-resolution main line data. Because the proxy data has a lower resolution and a lower encoding rate than the main line data, decoding processing and transmission processing at the time of confirming the content after recording are carried out with ease. Accordingly, there is an advantage in recording the proxy simultaneously with the main line data.
For example, an HDTV signal of 1920 horizontal pixels × 1080 vertical lines is encoded in the MPEG-2 mode and recorded as the main line at a data rate of approximately 60 Mbps together with audio data and so on, and is also encoded in the MPEG-4 AVC mode at a resolution of 352 horizontal pixels × 240 vertical lines (source input format (SIF)) and recorded as the proxy at a data rate of approximately 1.5 Mbps together with audio data and so on.
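The benefit of the proxy for confirmation and transmission can be made concrete with a rough storage calculation. The sketch below uses the example rates stated above (approximately 60 Mbps for the main line, approximately 1.5 Mbps for the proxy); the one-hour recording duration is an assumed illustration, not a figure from the text.

```python
# Rough footprint comparison of main line data vs. proxy data.
# Rates are from the example in the text; the duration is an assumption.

def recording_size_mb(rate_mbps: float, seconds: float) -> float:
    """Approximate recording size in megabytes for a given data rate."""
    return rate_mbps * seconds / 8.0  # megabits -> megabytes

ONE_HOUR = 3600  # seconds (illustrative duration)

main_line_mb = recording_size_mb(60.0, ONE_HOUR)  # -> 27000.0 MB
proxy_mb = recording_size_mb(1.5, ONE_HOUR)       # -> 675.0 MB

print(f"main line: {main_line_mb:.0f} MB, proxy: {proxy_mb:.0f} MB")
```

At roughly one fortieth of the main line size, the proxy is far lighter to decode and to transfer over a network, which is the advantage the text describes.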
Incidentally, with the spread of HDTV, it has become common not only in camcorders for broadcasting or professional use but also in camcorders for home use to record and store HDTV content as data encoded in the MPEG-2 mode, the MPEG-4 AVC mode, or the like, and an interlaced signal of 59.94i or 50i is encoded in most cases.
Meanwhile, in the future, content at higher resolutions such as 4K and 8K will be created, and it is assumed that a camera-integrated recorder or a recorder configured such that the main line image is in 4K or 8K while the proxy image uses the HDTV resolution may become mainstream. However, imaging by way of interlaced scanning is not expected to be used at the 4K and 8K resolutions.
Actually, all 4K camera-integrated recorders currently on the market use progressive scanning for imaging and encode a progressive signal. Some of these 4K camera-integrated recorders can also record the low-resolution proxy at the same time. However, even in a case where the proxy is at the HDTV resolution, both the main line and the proxy are encoded as progressive signals when the signal source is a progressive signal such as 1920×1080/59.94p or 1920×1080/29.97p.
Then, even though high resolutions such as 4K will become usable with relative ease in the future for imaging, recording, and storing, assuming as described above that the HDTV currently serves as the main resolution for content, an affinity with the existing content creation process for the HDTV can be enhanced by imaging content at a high resolution such as 4K and simultaneously imaging at the HDTV resolution, additionally encoding and recording the HDTV-resolution image in the MPEG-2 or MPEG-4 AVC mode.
Editing work is part of the content creation process, and recently so-called non-linear editing, in which the encoded data is cut and pasted on a PC, has been carried out in many cases. At this time, when certain conditions such as matching resolutions and matching frame rates are satisfied, a method called smart rendering can be employed between materials encoded in, for example, the MPEG-2 mode, in which efficient editing is achieved by re-encoding only the vicinity of an editing point even when cutting and pasting are carried out (for example, refer to Patent Document 1). However, whether a signal is interlaced or progressive also acts as a condition for determining whether smart rendering is available, and thus smart rendering cannot be used in editing where both types of signals are mixed. In order to unify the materials into one scanning mode, one of the materials needs to be encoded once more.
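The eligibility conditions for smart rendering described above can be sketched as a simple compatibility check. This is a hypothetical illustration, not the API of any actual editing software; the `Material` fields and the requirement that resolution, frame rate, and scanning mode all match are assumptions drawn from the conditions named in the text.

```python
# Hypothetical smart-rendering eligibility check: two materials can be
# cut and pasted without wide-range re-encoding only if their resolution,
# frame rate, and scanning mode (interlaced vs. progressive) all match.
from dataclasses import dataclass


@dataclass
class Material:
    width: int
    height: int
    frame_rate: float
    interlaced: bool  # True for interlaced, False for progressive


def can_smart_render(a: Material, b: Material) -> bool:
    """Return True if editing can re-encode only the editing-point vicinity."""
    return (
        (a.width, a.height) == (b.width, b.height)
        and a.frame_rate == b.frame_rate
        and a.interlaced == b.interlaced
    )


hdtv_i = Material(1920, 1080, 29.97, interlaced=True)   # existing 59.94i material
hdtv_p = Material(1920, 1080, 29.97, interlaced=False)  # progressive proxy

print(can_smart_render(hdtv_i, hdtv_p))  # False: scanning modes differ
```

Even with identical resolution and frame rate, the interlaced/progressive mismatch in the last check makes the function return `False`, which corresponds to the case in the text where one of the materials must be re-encoded.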
In consideration of such a workflow, as long as a large number of interlaced materials for the HDTV are still in use, even in a case where encoded data at the HDTV resolution can be generated as the proxy, if that encoded data can be generated only as a progressive signal, re-encoding across a wide range is required during an editing process in which the existing interlaced material is mixed, even in an encoding mode such as the MPEG-2 or the MPEG-4 AVC. As a result, degradation in image quality and time loss due to re-encoding occur, which has prevented highly efficient content creation.