An application executed on a large-scale computer sometimes outputs enormous data, exceeding terabytes or even petabytes. Such output data expresses a wide variety of phenomena, such as physical or chemical phenomena, and in order to observe those phenomena, graphical illustration (in other words, visualization) of the output data is required. However, when all of the data outputted from a simulator in fields such as biology, nanotechnology or the environment is simply visualized, there are cases in which, because of the complex arrangement of the data in space, a portion to be paid attention to cannot be observed due to data lying behind that area of interest. For example, as illustrated in FIG. 1, when vectors representing physical quantities obtained through numerical calculation (which generally include simple calculated values that are not physical quantities) are arranged at plural points in 3D space without any design or plan, it is not easy to recognize what kind of phenomenon or condition is portrayed.
In such a situation, a typical conventional method is to observe the 3D space after deleting unnecessary areas. However, as the amount of data increases on a large scale, the task itself of setting the unnecessary areas before searching for and determining the areas of interest becomes difficult. Moreover, because the data structure differs depending on the respective simulation method, it is difficult to separate out the areas of interest. There has been no effective method for solving these problems.
Incidentally, there is a technique for reproducing, in graphics, blur similar to that of real-world cameras. More specifically, for a depth point to be focused on, which was selected according to the desire of an operator operating a graphics terminal, a calculation processing unit finds the Z-axis distance of that depth point from 3D data of an object stored in a data storage unit, further calculates the degree of blur of that object according to the calculated Z-axis distance by using a "blur function", and provides the degree of blur to a display processing unit together with the object data. The display processing unit displays the object on a display screen according to that object data and the blur function. By doing so, in the virtual space whose screen center is the depth point, objects at a larger absolute distance along the Z axis are displayed with more blur. However, processing that is not normally performed on the 3D data of the object is required, and when there is an enormous amount of 3D data, the processing load is large.
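The camera-style blur described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the function name `blur_degree` and the linear "blur function" with parameters `scale` and `max_blur` are assumptions chosen for clarity.

```python
def blur_degree(z_object, z_focus, scale=0.1, max_blur=1.0):
    """Map the Z-axis distance between an object and the selected focal
    depth to a blur amount in [0, max_blur] (a simple linear blur function)."""
    return min(max_blur, scale * abs(z_object - z_focus))

# An object at the focal depth stays sharp; objects farther along the
# Z axis (in either direction) receive a larger degree of blur.
print(blur_degree(5.0, 5.0))   # at the focal plane: no blur
print(blur_degree(8.0, 5.0))   # 3 units away: roughly 0.3
print(blur_degree(50.0, 5.0))  # far away: clamped to max_blur
```

Note that this per-object calculation must be repeated over all objects, which is why the processing load grows with the amount of 3D data.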
Moreover, a technique also exists in which, in order to display a stereo image that is closer to an object in real space, blur is generated in areas of a display image other than the point of interest, according to the distance from the display element that is the object of attention, or the like. More specifically, the position of the point of interest is acquired, and a blurred image is generated based on 3D information relating to the display image. When generating the blurred image, the distance from the viewpoint of the display element arranged at the position corresponding to the point of interest is calculated as a reference distance, and display elements whose difference from the reference distance exceeds a predetermined threshold value become targets of the blurred-image generation processing. In this kind of processing as well, special blurred-image generation processing must be carried out for each of the target display elements, and thus the processing load is high.
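The reference-distance selection in this technique can be sketched as below. This is an illustrative sketch under simplifying assumptions; the name `select_blur_targets` and the representation of display elements as bare viewpoint distances are hypothetical.

```python
def select_blur_targets(distances, focus_index, threshold):
    """Return the indices of display elements whose distance from the
    viewpoint differs from the reference distance (the distance of the
    element at the point of interest) by more than the threshold."""
    reference = distances[focus_index]
    return [i for i, d in enumerate(distances)
            if abs(d - reference) > threshold]

# Distance of each display element from the viewpoint; the element at
# index 1 sits at the point of interest, so 5.0 is the reference distance.
distances = [2.0, 5.0, 5.5, 9.0]
print(select_blur_targets(distances, focus_index=1, threshold=1.0))  # -> [0, 3]
```

Each selected element then undergoes its own blurred-image generation, which is the source of the high processing load noted above.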
Furthermore, there is also a technique for displaying blurred polygons in at least one of the front direction and the rear direction of a point of interest in a virtual space. More specifically, based on 3D coordinates in the virtual space, the Z value of each polygon in the depth direction from the viewpoint position is calculated, and a first image is generated by carrying out a perspective transformation of only the polygons having small Z values onto a screen plane. The perspective transformation is processing to carry out a coordinate transformation based on a screen transformation matrix M. Then, a second image is generated by deteriorating the image quality of the first image and making the size of the second image equal to that of the first image. In order to display the polygons of the first image at and around the depth position of the point of interest, and to display the polygons of the second image toward the rear of the point of interest, processing is carried out to make the first image semitransparent based on a transparency factor α, which is calculated from the Z values of the respective polygons, and the processed first image is synthesized with the second image to generate the display image. According to this technique, special processing is required in which, after generating the special first image, the second image corresponding to the first image is generated.
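The α-based synthesis of the two images can be sketched per pixel as follows. This is a hedged sketch, not the technique's actual formula: the linear falloff in `transparency` and the names `transparency` and `composite` are assumptions introduced for illustration.

```python
def transparency(z, z_focus, falloff=4.0):
    """Transparency factor alpha computed from a polygon's Z value:
    1.0 (the sharp first image fully visible) at and in front of the
    point of interest, decreasing toward 0.0 (the degraded second
    image showing through) toward the rear."""
    if z <= z_focus:
        return 1.0
    return max(0.0, 1.0 - (z - z_focus) / falloff)

def composite(sharp_pixel, blurred_pixel, alpha):
    """Blend the semitransparent first image over the second image."""
    return alpha * sharp_pixel + (1.0 - alpha) * blurred_pixel

a = transparency(6.0, z_focus=4.0)  # 2 units behind the focus -> alpha 0.5
print(composite(200.0, 100.0, a))   # -> 150.0
```

At the point of interest the sharp image dominates; farther back, the synthesized pixel approaches the deteriorated second image, producing the rear-direction blur.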
Moreover, a technique is known in which, in the process of determining the positions of objects and generating 2D images of the objects, a 2D image having a simplified shape is generated for objects located far from the viewpoint position, and, for objects located even farther away, the generation of the 2D image itself is omitted so that those objects are not displayed. When this kind of technique is adopted, as the viewpoint position gradually moves away from a certain object and reaches a point at a certain distance from that object, the object, which was drawn in detail up to that point, suddenly changes to a rough drawing; conversely, as the viewpoint position gradually moves closer from far away and reaches a point at a certain distance from the object, the object, which was not displayed up to that point, suddenly appears on the screen. Therefore, there is a problem in that a person viewing the graphic image feels that something is strange at the point where the precision of the drawing changes, or at the point where the object switches between being displayed and not being displayed. In order to solve this kind of problem, there is a technique that provides a graphics apparatus for generating and outputting a 2D image of an object as seen from a designated viewpoint position in a designated view-line direction, based on 3D structure data expressing the structure of the object.
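The distance-based switching just described can be sketched as a discrete level-of-detail choice. This is an illustrative sketch; the name `lod_level`, the two thresholds, and the string labels are assumptions.

```python
def lod_level(distance, detail_limit=10.0, draw_limit=20.0):
    """Discrete level-of-detail selection by viewpoint distance:
    detailed drawing up close, a simplified (rough) drawing farther
    away, and no drawing at all beyond draw_limit."""
    if distance < detail_limit:
        return "detailed"
    if distance < draw_limit:
        return "rough"
    return "hidden"

# The abrupt switches exactly at the thresholds are what a viewer
# perceives as a strange "popping" in the displayed image.
print(lod_level(9.99), lod_level(10.01))   # detailed -> rough
print(lod_level(19.99), lod_level(20.01))  # rough -> hidden
```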
This graphics apparatus includes a semitransparent plane positioning means for determining a position at which a semitransparent plane is arranged with respect to the viewpoint position, for the purpose of defocusing objects located on the far side in the view-line direction; and an image defocusing means for performing a defocusing operation on a 2D image, or a part of the 2D image, corresponding to an object or part of an object that is located behind the position of the semitransparent plane arranged by the semitransparent plane positioning means, when viewed from the viewpoint in the view-line direction. However, the semitransparent plane is a means for defocusing, to some degree, objects that are far from the viewpoint and originally can hardly be seen, and is not something used by a user to intentionally designate a desired range to be viewed.
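The role of the semitransparent plane can be sketched as a simple depth test along the view line. This is a hypothetical sketch; the name `defocus_flags` and the representation of objects as scalar depths are assumptions, and the actual apparatus operates on 2D images of the objects.

```python
def defocus_flags(object_depths, plane_depth):
    """Mark for the defocusing operation every object lying behind the
    semitransparent plane, as seen from the viewpoint along the view line."""
    return [depth > plane_depth for depth in object_depths]

depths = [3.0, 7.0, 12.0]                      # object depths along the view line
print(defocus_flags(depths, plane_depth=6.0))  # -> [False, True, True]
```

Because the plane only partitions objects into "in front" and "behind", it cannot express a user's intentional choice of a bounded range of interest, which is the limitation noted above.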
As described above, there is no technique for displaying a region to which the user wants to pay attention in a form that makes the region easier to grasp, when a large amount of data, such as numerical calculation results, is visualized.