The present invention relates to an image processing device, an information storage device, an image processing method, and the like.
When still images are continuously captured in time series at given time intervals, when a spatial object is covered by a number of images, or when a movie is captured and each image included in the movie is used as a still image, for example, a very large number of temporally or spatially continuous images (hereinafter may be referred to as an “image sequence”) is acquired. In such a case, images that are closely situated in the image sequence (i.e., images that are close to each other temporally or spatially) are likely to be similar, and it is rarely necessary to check all of a large number of images in order to determine the captured information. Since the number of images typically reaches tens of thousands or more, checking all of the images places a significant time burden on the user.
Therefore, it has been desired to summarize the original image sequence into an image sequence that includes a smaller number of images by deleting some of the images from the original image sequence. This process is hereinafter referred to as the “image summarization process”. For example, JP-A-2009-5020 discloses an image summarization method that extracts a scene change boundary image included in the image sequence, or an image that represents the image sequence, and allows images from which the information represented by the image sequence can be easily determined to remain.
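The method of JP-A-2009-5020 is not reproduced here, but summarization by retaining scene change boundary images can be sketched as follows. Frames are modeled as flat lists of grayscale pixel values; the mean-absolute-difference measure and the threshold value are illustrative assumptions, not details taken from the cited publication.

```python
# Minimal sketch: summarize an image sequence by keeping scene-change
# boundary frames. Each frame is a flat list of grayscale pixel values.
# mean_abs_diff and the threshold are illustrative assumptions only.

def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equal-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def summarize(frames, threshold=30.0):
    """Keep the first frame, then every frame whose difference from the
    previously kept frame exceeds the threshold (a scene change).
    Returns the indices of the frames allowed to remain."""
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[kept[-1]]) > threshold:
            kept.append(i)
    return kept
```

Frames that closely resemble the most recently kept frame are deleted, so the summary image sequence contains one representative per scene.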
For example, when capturing an in vivo image using an endoscope apparatus, it is considered that the degree of importance of a lesion area included in the in vivo image is higher than that of other areas when performing diagnosis or the like. JP-A-2010-113616 discloses a method that detects a lesion area from an image.
When performing the image summarization process on in vivo images, since a high degree of importance and a high degree of attention are given to a lesion area, the process may be performed so that an image from which a lesion area has been detected using the method disclosed in JP-A-2010-113616 is allowed to remain in the summary image sequence, and an image from which no lesion area has been detected is deleted. However, depending on the disease, a lesion area may be detected from most of the images included in the acquired image sequence, and it may then be inefficient (i.e., the effect of reducing the number of images may be low) to perform the image summarization process based only on whether or not a lesion area has been detected.
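The inefficiency described above can be illustrated with a small sketch: frames flagged by a detector are kept and the rest are deleted. The detector here is a stand-in predicate, not the detection method of JP-A-2010-113616.

```python
# Illustrative sketch of summarization by lesion detection alone: keep
# frames the detector flags, delete the rest. The detector is a stand-in
# predicate, not the method of JP-A-2010-113616.

def summarize_by_detection(frames, has_lesion):
    """Return indices of the frames the detector flags."""
    return [i for i, f in enumerate(frames) if has_lesion(f)]

# When a lesion appears in most frames, the reduction is small: here
# 9 of 10 frames are kept, so almost no summarization is achieved.
frames = ["lesion"] * 9 + ["normal"]
kept = summarize_by_detection(frames, lambda f: f == "lesion")
```

In this example only one frame is deleted, which is the situation in which summarization based solely on lesion detection yields little benefit.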
Therefore, the image summarization process may be performed on images that include a lesion area using the method disclosed in JP-A-2009-5020. In this case, however, when applying the image summarization technique to the medical field (e.g., endoscopic observation), it is necessary to prevent a situation in which a lesion area can no longer be observed due to deletion of an image, so that the disease is not missed.
It may also be necessary to prevent a situation in which an area other than a lesion area can no longer be observed due to deletion of an image. For example, JP-A-2007-313119 discloses a method that detects a bubble area included in an in vivo image, and JP-A-2010-115413 discloses a method that detects a residue area. Since the mucous membrane is covered by bubbles or a residue in a bubble area and a residue area, these areas are not suitable for observation. Specifically, an area that is included in an in vivo image but is not included in a bubble area or a residue area has high observation value as compared with a bubble area and a residue area, and it is necessary to prevent a situation in which such an area can no longer be observed due to deletion of an image.
JP-A-2012-16454 discloses a method that detects a dark area (i.e., an area captured very darkly within an image), and JP-A-2011-234931 discloses a method that detects a halation area (i.e., an area captured very brightly within an image). Since a dark area and a halation area have extreme pixel values (i.e., the visibility of the object is poor), these areas are not suitable for observation. Specifically, an area that is included in an image but is not included in a dark area or a halation area has high observation value as compared with a dark area and a halation area, and it is necessary to prevent a situation in which such an area can no longer be observed due to deletion of an image.
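Because dark areas and halation areas are characterized by extreme pixel values, they can be located by simple thresholding, as sketched below. The threshold values are illustrative assumptions, not those of JP-A-2012-16454 or JP-A-2011-234931.

```python
# Sketch: classify grayscale pixels into dark / halation / observable
# areas by thresholding. The thresholds (20 and 235 on a 0-255 scale)
# are illustrative assumptions, not values from the cited publications.

def classify_pixels(frame, dark_thresh=20, bright_thresh=235):
    """Label each pixel of a flat grayscale frame by area type."""
    labels = []
    for v in frame:
        if v <= dark_thresh:
            labels.append("dark")        # extremely low pixel value
        elif v >= bright_thresh:
            labels.append("halation")    # extremely high pixel value
        else:
            labels.append("observable")  # suitable for observation
    return labels
```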
Specifically, when an area included in an image that has high observation value as compared with other areas (e.g., a lesion area, an area in which the mucous membrane is not covered, or an area in which the visibility of the object is good) is defined as an observation target area, it is necessary to perform an image summarization process that suppresses a situation in which the observation target area can no longer be observed due to deletion of an image.
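Under this definition, the observation target area can be viewed as the set of pixels belonging to none of the unsuitable areas (bubble, residue, dark, or halation). A minimal sketch, assuming the individual area detectors are given and that areas are modeled as sets of pixel indices:

```python
# Sketch: the observation target area as the complement of the union of
# unsuitable areas. Areas are modeled as sets of pixel indices; the
# detectors producing the unsuitable-area masks are assumed given.

def observation_target(all_pixels, unsuitable_masks):
    """Return pixels not covered by any unsuitable-area mask."""
    excluded = set().union(*unsuitable_masks) if unsuitable_masks else set()
    return set(all_pixels) - excluded
```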