Conventionally, as disclosed, for example, in Japanese Laid-Open Patent Publication No. 2000-69404 (hereinafter referred to as “Patent Literature 1”), various proposals have been made for devices that provide an image displayed on a display device with decorations of a user's preference. An image print creation device disclosed in Patent Literature 1 enables a user to put makeup on a captured image. Specifically, in the image print creation device, a makeup palette is prepared that includes various makeup tools (for example, lipsticks of different colors and foundation). The user can put makeup on a captured image displayed on a monitor by operating a stylus on the monitor, using a given makeup tool included in the makeup palette.
In the image print creation device disclosed in Patent Literature 1, however, image processing is performed such that the user directly puts makeup on a still image of the user captured by the device. Accordingly, the user can put makeup on the still image through an operation of directly touching the still image, and can therefore enjoy viewing the made-up still image. The image that the user can enjoy, however, is only the made-up still image, which is temporary.
Thus, it is an object of the present invention to provide a storage medium having stored thereon an image processing program, an image processing apparatus, an image processing system, and an image processing method that can reflect an image corresponding to a user input on an image different from an image used for the user input.
To achieve the above object, the present invention may have the following features, for example.
In a configuration example of a computer-readable storage medium having stored thereon an image processing program according to the present invention, the image processing program is executed by a computer of an image processing apparatus that displays an image on a display device. The image processing program causes the computer to function as first image acquisition means, first feature point extraction means, second image display control means, coordinate input acquisition means, input position data generation means, first image superimposition means, and first image display control means. The first image acquisition means acquires a first image. The first feature point extraction means extracts at least a first feature point from the first image, the first feature point being a feature point having a first feature on the first image. The second image display control means displays a second image on the display device. The coordinate input acquisition means acquires data of a coordinate input provided on the second image displayed on the display device. The input position data generation means generates input position data representing a position of the coordinate input provided on the second image, using the data acquired by the coordinate input acquisition means. The first image superimposition means superimposes a predetermined superimposition image on the first image, at a position on the first image based on the first feature point, the position corresponding to a position, represented by the input position data, on the second image based on a second feature point, the second feature point being a feature point having the first feature on the second image. The first image display control means displays on the display device the first image on which the superimposition has been made by the first image superimposition means. It should be noted that the first image and the second image displayed on the display device may be temporarily the same, or may be invariably different from each other.
Based on the above, it is possible to reflect an image corresponding to a user input on an image different from an image used for the user input.
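The core position-transfer step described above can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the function name `map_input_position` is hypothetical, and the scheme is translation-only (the input is treated as an offset from the second feature point and replayed from the first feature point), whereas the specification also contemplates scale and orientation being taken into account.

```python
def map_input_position(input_pos, second_feature, first_feature):
    """Map a coordinate input provided on the second image to the
    corresponding position on the first image.

    The input position is expressed as an offset from the second
    image's feature point; the same offset is then applied from the
    first image's feature point (translation-only sketch).
    """
    dx = input_pos[0] - second_feature[0]
    dy = input_pos[1] - second_feature[1]
    return (first_feature[0] + dx, first_feature[1] + dy)


# Example: an input at (120, 80), offset (20, 20) from the second
# feature point (100, 60), lands at the same offset from the first
# feature point (200, 150).
print(map_input_position((120, 80), (100, 60), (200, 150)))  # (220, 170)
```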
Further, the image processing program may further cause the computer to function as second image superimposition means. The second image superimposition means, based on the input position data, superimposes on the second image an input image representing the coordinate input. In this case, the second image display control means may display on the display device the second image on which the superimposition has been made by the second image superimposition means. The first image superimposition means may superimpose the superimposition image on the first image, at a position on the first image based on the first feature point, the position corresponding to a position of the input image on the second image based on the second feature point.
Based on the above, it is possible to reflect an image corresponding to a user input on an image used for the user input and also on an image different from the image used for the user input.
Further, the first image superimposition means may superimpose the superimposition image on the first image, at the position on the first image based on the first feature point, the position corresponding to the position of the input image on the second image based on the second feature point, the superimposition image being an image based on the input image.
Based on the above, the image based on the input image input on the second image is also superimposed on the first image. Thus, it is possible to achieve an operation feeling as if the user has input the image on the first image.
Further, the image processing program may further cause the computer to function as second image acquisition means and second feature point extraction means. The second image acquisition means acquires the second image. The second feature point extraction means extracts the second feature point having the first feature from the second image acquired by the second image acquisition means.
Based on the above, it is possible to use the newly acquired second image as an image to be used to provide a coordinate input by the user.
Further, the first image acquisition means may acquire a moving image as the first image. The first feature point extraction means may extract the first feature point from a frame image included in the moving image acquired by the first image acquisition means. The first image superimposition means may superimpose the superimposition image on the frame image included in the moving image, at a position on the frame image based on the first feature point, the position corresponding to the position, represented by the input position data, on the second image based on the second feature point. It should be noted that the first image superimposition means may superimpose the superimposition image on the frame image from which the extraction has been made by the first feature point extraction means, or may superimpose the superimposition image on the subsequent frame image (a frame image different from the frame image from which the extraction has been made). In the first case, the first image superimposition means superimposes the superimposition image on the frame image, at a position on the frame image based on the position of the first feature point on the frame image from which the extraction has been made. In the second case, the first image superimposition means superimposes the superimposition image on the subsequent frame image, at a position based on the assumption that the first feature point is located at the same position on the subsequent frame image as on the frame image from which the extraction has been made.
Based on the above, it is possible to reflect an image corresponding to a user input on a moving image different from an image used for the user input.
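For a moving image, the feature point is re-extracted per frame and the stored input offset is replayed against each frame's own feature point, so the superimposition image follows the object as it moves. A minimal sketch, assuming the (hypothetical) helper `superimpose_positions` receives an extraction callback and a precomputed offset from the second image:

```python
def superimpose_positions(frames, extract_feature, offset):
    """For each frame, re-extract the feature point and compute where
    the superimposition image should be pasted: the stored input
    offset applied from that frame's feature point.

    Returns one position per frame, or None where extraction fails.
    """
    positions = []
    for frame in frames:
        fp = extract_feature(frame)
        positions.append(None if fp is None
                         else (fp[0] + offset[0], fp[1] + offset[1]))
    return positions


# Toy example: the feature point drifts between frames, so the paste
# position drifts with it while keeping the same relative offset.
features = {"f0": (10, 10), "f1": (12, 11)}
print(superimpose_positions(["f0", "f1"], features.get, (5, -2)))
```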
Further, the second image display control means may display a still image as the second image on the display device. The coordinate input acquisition means may acquire data of a coordinate input provided on the still image displayed on the display device. The input position data generation means may generate input position data representing a position of the coordinate input provided on the still image, using the data acquired by the coordinate input acquisition means.
Based on the above, it is possible to use a still image as an image to be used to provide a coordinate input by the user. This makes it easy for the user to provide an input on a desired position on the second image.
Further, the first feature point extraction means may extract the first feature point from each of a plurality of frame images included in the moving image acquired by the first image acquisition means. The first image superimposition means may superimpose the superimposition image on said each of the plurality of frame images included in the moving image, at a position on said each of the plurality of frame images based on the first feature point, the position corresponding to the position, represented by the input position data, on the second image based on the second feature point.
Based on the above, it is possible to reflect an image corresponding to a user input on not only one image but also each of the frame images included in a moving image.
Further, the input position data generation means may include input position data updating means. The input position data updating means sequentially updates the input position data every time the coordinate input acquisition means acquires the coordinate input. The first image superimposition means may superimpose the superimposition image on the corresponding frame image, at a position on the corresponding frame image based on the first feature point, the position corresponding to a position represented by the input position data most recently updated by the input position data updating means.
Based on the above, it is possible to reflect an image corresponding to the most recent user input on an image different from an image used for a user input.
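The sequential-update behaviour can be sketched as a small accumulator that the per-frame superimposition always reads in its most recent state. The class name `InputPositionData` and its methods are illustrative assumptions, not the claimed means:

```python
class InputPositionData:
    """Accumulates coordinate inputs; each new input updates the
    stored data, and the renderer reads the most recent state."""

    def __init__(self):
        self._positions = []

    def update(self, position):
        """Called every time a new coordinate input is acquired."""
        self._positions.append(position)

    def current(self):
        """Most recently updated input data, as read per frame."""
        return list(self._positions)
```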
Further, the image processing program may further cause the computer to function as frame image storage control means. The frame image storage control means temporarily stores in a storage device the frame image included in the moving image acquired by the first image acquisition means. When the first feature point extraction means has extracted the first feature point from the frame image acquired by the first image acquisition means, the first image superimposition means may read from the storage device the frame image from which the extraction has been made, and may superimpose the superimposition image on the frame image, at the position on the frame image based on the first feature point.
Based on the above, even when it takes a long time to perform a process of extracting the first feature point from the first image, it is possible to reflect an image corresponding to a user input on the first image.
Further, the first feature point extraction means may extract at least the first feature point and a third feature point from the first image, the third feature point being a feature point having a second feature on the first image. The first image superimposition means may superimpose the superimposition image on the first image, at a position on the first image based on the first feature point and the third feature point, the position corresponding to a position, represented by the input position data, on the second image based on the second feature point and a fourth feature point, the fourth feature point being a feature point having the second feature on the second image.
Based on the above, the use of the plurality of feature points makes it possible to uniquely determine the position on the first image that corresponds to the position at which a coordinate input has been provided on the second image, and to display the superimposition image at a position corresponding to the position at which the coordinate input has been provided by the user.
Further, in accordance with a relationship between: a distance between the second feature point and the fourth feature point on the second image; and a distance between the first feature point and the third feature point on the first image, the first image superimposition means may set a size of the superimposition image corresponding to the position represented by the input position data, and may superimpose the superimposition image on the first image.
Based on the above, when, for example, the display size of a predetermined object included in the second image is relatively different from the display size of the object included in the first image, it is possible to display the superimposition image in a size corresponding to the difference in display size.
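The size relationship described above amounts to scaling the input offset by the ratio of the two feature-point distances. A sketch under the assumption that feature points are `(x, y)` pixel tuples (`scaled_placement` is a hypothetical name):

```python
import math


def scaled_placement(input_pos, f2, f4, f1, f3):
    """Place the superimposition image on the first image, scaled by
    the ratio of feature-point distances:

        scale = dist(f1, f3) on the first image
              / dist(f2, f4) on the second image

    The input offset from f2 is scaled and replayed from f1.
    """
    scale = math.dist(f1, f3) / math.dist(f2, f4)
    pos = (f1[0] + (input_pos[0] - f2[0]) * scale,
           f1[1] + (input_pos[1] - f2[1]) * scale)
    return pos, scale


# Example: the first image's face is twice as large (20 px between
# feature points vs. 10 px), so the offset and size double.
print(scaled_placement((5, 5), (0, 0), (10, 0), (0, 0), (20, 0)))
```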
Further, the first image superimposition means may determine a position and an orientation of the superimposition image to be superimposed on the first image, such that a relative positional relationship between: the first feature point and the third feature point on the first image; and the superimposition image, is the same as a relative positional relationship between: the second feature point and the fourth feature point on the second image; and the position represented by the input position data.
Based on the above, when, for example, the display orientation of a predetermined object included in the second image is relatively different from the display orientation of the object included in the first image, it is possible to superimpose the superimposition image in an orientation that reflects the difference in display orientation, and to display the superimposed result.
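Preserving the relative positional relationship with respect to a pair of feature points is a similarity transform. One compact way to sketch it is with complex numbers as 2-D vectors: express the input position in the basis defined by the second image's feature-point pair, then replay the same relative coordinates in the first image's basis (so rotation and scale both carry over). The function name is hypothetical:

```python
def transfer_point(p, f2, f4, f1, f3):
    """Transfer point p so that its relative position and orientation
    with respect to (f1, f3) on the first image equal those of p with
    respect to (f2, f4) on the second image.

    Complex numbers serve as 2-D vectors; dividing by the basis vector
    expresses p in basis coordinates, multiplying replays them.
    """
    a = complex(*f4) - complex(*f2)        # basis vector, second image
    b = complex(*f3) - complex(*f1)        # basis vector, first image
    rel = (complex(*p) - complex(*f2)) / a  # p in basis coordinates
    q = complex(*f1) + rel * b              # same relative placement
    return (q.real, q.imag)


# Example: the first image's feature-point pair is rotated 90 degrees
# relative to the second image's, so the transferred point rotates too.
print(transfer_point((5, 0), (0, 0), (10, 0), (0, 0), (0, 10)))
```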
Further, the first image superimposition means may include image extraction means and extracted-image superimposition means. The image extraction means extracts from the second image the input image included in a region determined based on the second feature point. The extracted-image superimposition means superimposes the superimposition image on the first image based on the first feature point, the superimposition image being the input image extracted by the image extraction means.
Based on the above, it is possible to easily generate the superimposition image by copying the input image.
Further, the image extraction means may extract from the second image the input image included in a region surrounded by three or more feature points, each having a unique feature on the second image. The first feature point extraction means may extract three or more feature points from the first image, the three or more feature points having the unique features and corresponding to the three or more feature points on the second image. The extracted-image superimposition means may superimpose the input image extracted by the image extraction means on the first image, in a region surrounded by the three or more feature points extracted by the first feature point extraction means.
Based on the above, it is possible to generate the superimposition image by copying the input image included in a region to a region in the first image corresponding to the region.
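Copying a region bounded by three feature points to the corresponding region on the first image is, per point, a barycentric-coordinate transfer: a pixel keeps its barycentric coordinates within the source triangle and is placed at the same coordinates within the destination triangle. A sketch with hypothetical helper names (a real implementation would iterate this over all pixels of the region):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of p in triangle (a, b, c),
    with p = w*a + u*b + v*c."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    u = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    v = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return u, v, 1.0 - u - v


def transfer_triangle_point(p, tri_second, tri_first):
    """Map p inside the second image's triangle to the point with the
    same barycentric coordinates in the first image's triangle."""
    u, v, w = barycentric(p, *tri_second)
    a, b, c = tri_first
    return (w * a[0] + u * b[0] + v * c[0],
            w * a[1] + u * b[1] + v * c[1])


# Example: the destination triangle is twice the size, so the point
# lands at the proportionally scaled location.
print(transfer_triangle_point((5, 5),
                              ((0, 0), (10, 0), (0, 10)),
                              ((0, 0), (20, 0), (0, 20))))
```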
Further, the first image superimposition means may include texture generation means. The texture generation means generates a texture representing the input image superimposed on the second image. In this case, the first image superimposition means may set texture coordinates of a predetermined polygon corresponding to the texture based on the second feature point on the second image, may map the texture on the polygon, may place the polygon on the first image based on the first feature point on the first image, and may superimpose the superimposition image on the first image.
Based on the above, it is possible to generate the superimposition image by mapping the texture of the input image on a polygon.
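Setting texture coordinates based on the second feature point amounts to converting polygon vertices, given as pixel offsets from that feature point, into normalized [0, 1] texture space. A minimal sketch of that coordinate conversion only (the actual rasterization/mapping would be done by a graphics API; the function name and layout are assumptions):

```python
def polygon_texcoords(vertex_offsets, second_feature, tex_size):
    """Assign texture coordinates to polygon vertices.

    Each vertex is given as a pixel offset from the second image's
    feature point; its texture coordinate is the corresponding
    absolute pixel position on the texture (which holds the input
    image drawn on the second image), normalized by the texture size.
    """
    w, h = tex_size
    return [((second_feature[0] + dx) / w, (second_feature[1] + dy) / h)
            for dx, dy in vertex_offsets]


# Example: a quad spanning the whole 64x64 texture around a feature
# point at its center maps to texture coordinates (0,0)..(1,1).
print(polygon_texcoords([(-32, -32), (32, 32)], (32, 32), (64, 64)))
```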
Further, the first image and the second image may each be an image including a face image representing a person's face. The first feature point extraction means may extract the first feature point from the face image of a person's face recognized in a face recognition process performed on the first image acquired by the first image acquisition means, the first feature point being a point having the first feature in accordance with the face image. The first image superimposition means may superimpose the superimposition image on the first image, the second feature point being a point having the first feature in accordance with the face image included in the second image.
Further, the first image and the second image may each be an image including a face image representing a person's face. The second feature point extraction means may extract the second feature point from the face image of a person's face recognized in a face recognition process performed on the second image acquired by the second image acquisition means, the second feature point being a point having the first feature in accordance with the face image. The first feature point extraction means may extract the first feature point from the face image of a person's face recognized in a face recognition process performed on the first image acquired by the first image acquisition means, the first feature point being a point having the first feature in accordance with the face image.
Based on the above, it is possible to superimpose the superimposition image corresponding to a user input on the face image of a person's face recognized in an image, and to display the first image.
Further, the first image acquisition means may acquire, as the first image, a moving image of a real world captured in real time by a real camera available to the image processing apparatus.
Based on the above, it is possible to superimpose the superimposition image corresponding to a user input on a moving image of the real world captured in real time by a real camera, and to display the superimposed result.
Further, the image processing program may further cause the computer to function as second image acquisition means and second feature point extraction means. The second image acquisition means acquires the second image. The second feature point extraction means: extracts a plurality of feature points from an image representing an object recognized in a process of recognizing a predetermined object performed on the second image acquired by the second image acquisition means, each feature point having a unique feature on the image representing the object; and further extracts a plurality of out-of-region points provided outside the image representing the object, in a radial manner from the corresponding feature points. In this case, the first feature point extraction means may extract a plurality of feature points from an image representing an object recognized in a process of recognizing the predetermined object performed on the first image acquired by the first image acquisition means, the plurality of feature points having the unique features and corresponding to the plurality of feature points extracted by the second feature point extraction means, and may further extract a plurality of out-of-region points provided outside the image representing the object, in a radial manner, so as to correspond to the plurality of out-of-region points extracted by the second feature point extraction means. Furthermore, the first image superimposition means may include image extraction means and extracted-image superimposition means. The image extraction means extracts from the second image the input image included in a region surrounded by three or more of the feature points and/or the out-of-region points set by the second feature point extraction means. 
The extracted-image superimposition means superimposes the superimposition image on the first image, in a region surrounded by three or more of the feature points and/or the out-of-region points corresponding to the region from which the input image has been extracted, the superimposition image being the input image extracted by the image extraction means.
Based on the above, when a predetermined object is recognized in each of the first image and the second image, and the superimposition image corresponding to a user input is superimposed on the object and the superimposed result is displayed, it is possible to generate the superimposition image even for a user input provided outside the region of the object, and to display the superimposed result.
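One simple way to place out-of-region points "in a radial manner from the corresponding feature points" is to push each feature point outward from the region's centroid by a fixed factor, yielding one outside point per feature point. This is an illustrative sketch only; the function name and the expansion factor are assumptions, not the claimed method:

```python
def out_of_region_points(feature_points, factor=1.5):
    """For each feature point, place an out-of-region point on the ray
    from the centroid of all feature points through that feature
    point, at `factor` times the centroid-to-point distance."""
    n = len(feature_points)
    cx = sum(x for x, _ in feature_points) / n
    cy = sum(y for _, y in feature_points) / n
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in feature_points]


# Example: a square of feature points around centroid (5, 5) expands
# radially into a larger square when factor = 2.
print(out_of_region_points([(0, 0), (10, 0), (0, 10), (10, 10)], 2.0))
```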
Further, the display device may include at least a first display screen and a second display screen. In this case, the second image display control means may display the second image on the second display screen. The first image display control means may display on the first display screen the first image on which the superimposition has been made by the first image superimposition means.
Based on the above, it is possible to simultaneously display the first image and the second image on the first display screen and the second display screen, respectively.
Further, the display device may include a touch panel that covers the second display screen. In this case, the coordinate input acquisition means may acquire, as the data of the coordinate input, data representing a touch position of a touch performed on the touch panel. The input position data generation means may generate, as the input position data, data representing a position on the second image that overlaps the touch position.
Based on the above, it is possible to superimpose the superimposition image corresponding to a touch input on the first image, and to display the superimposed result.
Further, the first display screen may be capable of displaying a stereoscopically visible image, using a left-eye image and a right-eye image. In this case, the first image acquisition means may acquire, as the first image, a stereoscopically visible image including a left-eye image and a right-eye image. The first feature point extraction means may extract the first feature point from each of the left-eye image and the right-eye image of the first image. The first image superimposition means may superimpose the superimposition image on the first image, at a position of the left-eye image and a position of the right-eye image on the first image based on the first feature point, each position corresponding to the position, represented by the input position data, on the second image based on the second feature point. The first image display control means may display the stereoscopically visible image on the first display screen, using the left-eye image and the right-eye image of the first image on which the superimposition has been made by the first image superimposition means.
Based on the above, it is possible to superimpose the superimposition image corresponding to a user input on a stereoscopically visible image, and to display the superimposed result.
Further, the present invention may be carried out in the form of an image processing apparatus and an image processing system that include the above means, and may be carried out in the form of an image processing method including steps performed by the above means.
Further, in the image processing system, at least a first apparatus and a second apparatus may be configured to communicate with each other. In this case, the first apparatus may include the second image display control means, the coordinate input acquisition means, the input position data generation means, second image superimposition means, and data transmission means. The second image superimposition means, based on the input position data, superimposes on the second image an input image representing the coordinate input. The data transmission means transmits data of the input image to the second apparatus. In this case, the second image display control means may display, on a display device available to the first apparatus, the second image on which the superimposition has been made by the second image superimposition means. Furthermore, the second apparatus may include data reception means, the first image acquisition means, the first feature point extraction means, the first image superimposition means, and the first image display control means. The data reception means receives the data of the input image from the first apparatus. In this case, based on the data of the input image received by the data reception means, the first image superimposition means may superimpose the superimposition image on the first image, at the position on the first image based on the first feature point, the position corresponding to a position of the input image on the second image based on the second feature point, the superimposition image being an image based on the input image. The first image display control means may display, on a display device available to the second apparatus, the first image on which the superimposition has been made by the first image superimposition means.
Based on the above, the transmission and reception of the data of the input image make it possible to superimpose on the first image the superimposition image corresponding to an input provided by another user, and to display the superimposed result.
Based on the present invention, it is possible to reflect an image corresponding to a user input on an image different from an image used for the user input.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.