1. Field of the Invention
The present invention relates to a virtualized endoscope system, which is used to simulate an endoscopic examination and to facilitate an operator's control over the tip of an endoscope of the type disclosed by Mori et al., “Virtualized Endoscope System - An Application of Virtual Reality Technology to Diagnostic Aid,” IEICE Trans. Inf. and Syst., Vol. E79-D, No. 6, pp. 809-819, June 1996, which is incorporated by reference herein.
2. Discussion of the Background
FIG. 1 shows an electronic endoscope system 1. An endoscope 2 has an endoscope tip 2a which is inserted into the body of a subject 3a on a treatment table 3b. An actual image based on the image signals from the endoscope 2 is displayed on a monitor 5 of the main unit 4. An operator uses the endoscope operating member 2b to maneuver the endoscope tip 2a while observing the actual image on the monitor 5. Thus, different images corresponding to different viewpoint positions and view directions (lines of sight) can be obtained by moving the endoscope tip 2a within the body of the subject 3a.
It is desirable for the endoscope tip 2a to reach the area of interest quickly and accurately without damaging the inside of a tubular cavity inside the subject 3a. For this reason, an image processing device 6 is installed near the electronic endoscope system 1 as shown in FIG. 1. The image processing device 6 stores, for example, a three-dimensional image (3D organ image) providing an external view of the organ to be examined. The 3D organ image is displayed on a monitor 7 while an operator operates the endoscope 2.
The operator uses the 3D organ image in the monitor 7 of the image processing device 6 as a guiding image to advance the endoscope tip 2a by comparing the guiding image with the actual image displayed on the monitor 5.
If the actual image on the monitor 5 of the electronic endoscope system 1 displays, for example, a branched tubular cavity, the operator decides which direction the endoscope tip 2a is to be inserted by looking at the 3D organ image on the monitor 7.
Further, if segmented images, X-ray photographs, etc., of the subject 3a have been obtained in advance by using CT, MR devices, etc., the operator decides which direction the endoscope tip 2a is to be advanced by looking at the segmented images, X-ray photographs, etc.
It is difficult for the operator to advance the endoscope tip 2a based only on the images from the monitors 5 and 7. Further, the operator may have trouble discerning the orientation of the endoscope tip 2a relative to the posture of the subject 3a from the display on the monitor 5. Although the operator can change the view direction freely, it is more important that the operator be able to discern the current view direction and viewpoint position.
Since the guiding image is an external view of the organ, it is difficult for an operator in an endoscopic examination room to determine the direction in which the endoscope tip 2a is facing inside a tubular cavity, the orientation of the endoscope tip 2a relative to the body of the subject 3a, and the direction in which the endoscope tip 2a is moving relative to the body of the subject 3a. Consequently, the operator advances the endoscope tip 2a by trial and error while viewing the monitor 5 of the electronic endoscope system 1.
Even if the relative position of the endoscope tip 2a is known from segmented images or photographs, the depths of tubular cavities are difficult to discern. Therefore, the operator has to refer to multiple segmented images or photographs of a particular tubular cavity to estimate the depth of the tubular cavity before advancing the endoscope tip 2a. 
Accordingly, one object of this invention is to provide a novel virtualized endoscope system that makes it easier to recognize and control the direction of the tip of a virtual endoscope or an actual endoscope inserted into a subject to be examined.
It is yet another object of this invention to enable an operator to quickly and easily obtain guiding images for guiding the tip of a virtual endoscope or an actual endoscope, even when the guiding images are generated in a distant simulation room.
It is still yet another object of this invention to provide an image processing system capable of being used in preoperative simulations and in training to simulate the control of an actual endoscope.
These and other objects are achieved according to the present invention by providing a new and improved method, system, and computer product wherein a three-dimensional object is displayed from a desired viewpoint position and view direction. A three-dimensional model preparation unit prepares a three-dimensional model of the object and an endoscopic image preparation unit prepares a first endoscopic image of portions of the three-dimensional model as viewed from the viewpoint position and view direction. The first endoscopic image is then displayed in a first display.
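By way of illustration only (and not as the disclosed embodiment), preparing an endoscopic image of the three-dimensional model from a given viewpoint position and view direction can be sketched as a pinhole perspective projection of model surface points; the helper names and axis conventions below are assumptions.

```python
import numpy as np

def look_at_matrix(viewpoint, view_dir, up=(0.0, 0.0, 1.0)):
    """Build a 3x3 world-to-camera rotation whose -z axis is the view
    direction. (Hypothetical helper; axis conventions vary by renderer.)"""
    f = np.asarray(view_dir, float)
    f /= np.linalg.norm(f)
    r = np.cross(f, np.asarray(up, float))
    r /= np.linalg.norm(r)
    u = np.cross(r, f)
    return np.stack([r, u, -f])  # rows: right, up, backward

def project_points(points, viewpoint, view_dir, focal=1.0):
    """Project 3-D model points to 2-D image coordinates (pinhole model)."""
    R = look_at_matrix(viewpoint, view_dir)
    cam = (np.asarray(points, float) - np.asarray(viewpoint, float)) @ R.T
    depth = -cam[:, 2]                      # distance along the view direction
    xy = focal * cam[:, :2] / depth[:, None]
    return xy, depth
```

A point lying directly along the view direction projects to the image center, and its depth is its distance from the viewpoint.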
A rear image floating unit prepares for display a second endoscopic image of second portions of the three-dimensional model which are behind the first portions relative to the viewpoint position and otherwise hidden from view behind the first portions. The second endoscopic image is displayed in the first display such that the first and second endoscopic images are superimposed spatially in relation to the viewpoint position and view direction.
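One simple way to picture the rear image floating operation (a sketch under assumed parameters, not the patent's implementation) is semi-transparent compositing: wherever a rear surface exists behind the front surface, its rendering is blended into the first endoscopic image so that hidden structures appear to float through.

```python
import numpy as np

def superimpose_rear(front, rear, rear_mask, rear_alpha=0.35):
    """Blend an otherwise hidden rear-surface rendering into the front
    endoscopic image. front, rear: float images in [0, 1] of the same
    shape; rear_mask marks pixels where a rear surface was rendered.
    (Illustrative only; the blend weight rear_alpha is an assumption.)"""
    front = np.asarray(front, float)
    rear = np.asarray(rear, float)
    out = front.copy()
    out[rear_mask] = ((1.0 - rear_alpha) * front[rear_mask]
                      + rear_alpha * rear[rear_mask])
    return out
```

Because both layers are rendered from the same viewpoint position and view direction, the blended pixels remain spatially registered with the first image.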
An endoscope tip state computation unit calculates the current viewpoint position and view direction and prepares an outer shape image of the three-dimensional object which is an image of the three-dimensional model with the inner surfaces of the three-dimensional model hidden. The outer shape image is displayed in a second display. The endoscope tip state computation unit also prepares an indicator image which indicates the viewpoint position, view direction, and a reference direction.
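The tip state computation can be illustrated (the function name and the up-hint convention are assumptions, not the disclosed method) by deriving the current viewpoint position and view direction from the two most recent points of the insertion path, together with a reference direction kept orthogonal to the view direction.

```python
import numpy as np

def tip_state(path_points, up_hint=(0.0, 0.0, 1.0)):
    """Estimate the virtual tip's viewpoint position and view direction
    from the last two points of the insertion path, plus a reference
    direction orthogonal to the view direction. (Illustrative sketch.)"""
    p = np.asarray(path_points, float)
    viewpoint = p[-1]
    d = p[-1] - p[-2]
    view_dir = d / np.linalg.norm(d)
    # Project the up hint onto the plane orthogonal to the view direction.
    u = np.asarray(up_hint, float)
    u = u - np.dot(u, view_dir) * view_dir
    reference = u / np.linalg.norm(u)
    return viewpoint, view_dir, reference
```

The three returned quantities are exactly what an indicator image needs to display: where the tip is, where it is looking, and which way is "up."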
An object preparation unit displays a focus mark at a location relative to the outer shape image to mark a condition in a coordinate system which is inclusive of the location of the condition in the three-dimensional object. The focus mark may be used to mark the location of a particular area of interest such as a tumor inside a virtual human body.
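As a minimal sketch (class and function names are hypothetical), a focus mark can be stored in the three-dimensional model's own coordinate system so that it stays attached to the anatomy regardless of the current viewpoint, and marks near a given location can be looked up when rendering the outer shape image.

```python
import math

class FocusMark:
    """A labeled marker stored in the 3-D model's coordinate system,
    e.g. marking a tumor site inside a virtual human body."""
    def __init__(self, label, position):
        self.label = label
        self.position = tuple(float(c) for c in position)

def marks_within(marks, center, radius):
    """Return the marks whose stored location lies within `radius`
    of `center`, e.g. to highlight areas of interest near the tip."""
    return [m for m in marks if math.dist(m.position, center) <= radius]
```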
A compass image preparation unit generates a compass image and displays the compass image over the first image. The compass reveals the orientation of the image obtained from a virtual endoscope tip relative to the posture of the subject being examined.
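One way to sketch the compass image computation (the axis labels and world-frame conventions below are assumptions for illustration) is to project the subject's anatomical axes into the camera frame of the virtual tip; the on-screen components of each projected axis give the needle directions drawn over the first image.

```python
import numpy as np

def compass_needles(camera_rotation, body_axes=None):
    """Project the subject's anatomical axes into the tip's camera frame.
    camera_rotation: 3x3 world-to-camera rotation; body_axes maps a label
    to a world-space unit vector. (Illustrative conventions only.)"""
    if body_axes is None:
        body_axes = {
            "head": (0.0, 0.0, 1.0),     # toward the subject's head
            "ventral": (0.0, 1.0, 0.0),  # toward the subject's front
            "left": (1.0, 0.0, 0.0),     # toward the subject's left
        }
    R = np.asarray(camera_rotation, float)
    needles = {}
    for label, axis in body_axes.items():
        v = R @ np.asarray(axis, float)
        needles[label] = v[:2]  # on-screen 2-D needle direction
    return needles
```

An axis pointing straight along the view direction projects to a zero-length needle, which itself tells the operator the tip is aligned with that body axis.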
A guiding marker preparation unit displays guiding markers in the first display. The guiding markers are used to guide the insertion of an actual or a virtual endoscope into a subject. These markers can also be transferred, via a first data sender and receiver, to a second image processing system in an examination room. The first image, the compass image, and the outer shape images may also be transferred to the second image processing system. The second image processing system in the examination room may request, via a second data sender and receiver, that endoscopic guide data corresponding to a particular subject be sent from the first data sender and receiver.
An image moving and rotating unit changes the first image based on the operation of a toolbox. Keys on the toolbox allow an operator to change the view direction or rotate the first image. Another toolbox controls the operation path history control unit which can record and reproduce a sequence of past images corresponding to a sequence of past viewpoint positions. The sequence of past images can be played continuously or advanced frame-by-frame.
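The operation path history control described above can be sketched as follows (a simplified illustration; the class and method names are assumptions): viewpoint states are recorded as the tip advances, and can later be replayed either continuously or frame by frame.

```python
class PathHistory:
    """Records the sequence of viewpoint states (position, view direction)
    so past endoscopic images can be regenerated and replayed."""
    def __init__(self):
        self._states = []
        self._cursor = 0

    def record(self, viewpoint, view_dir):
        """Append the current tip state to the history."""
        self._states.append((viewpoint, view_dir))

    def step(self):
        """Frame-by-frame playback: return the next recorded state,
        or None when the history is exhausted."""
        if self._cursor >= len(self._states):
            return None
        state = self._states[self._cursor]
        self._cursor += 1
        return state

    def replay(self):
        """Continuous playback: yield every recorded state in order."""
        yield from self._states
```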
A virtual operation device having the look and feel of an actual endoscope operating member is provided to realistically simulate the control of an actual endoscope. The virtual operation device includes an operation member having a grip and a dial which are used to control movement of a virtual endoscope tip within a virtual human body.
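To illustrate how a dial on the virtual operation member could steer the virtual tip (the linear mapping and the angle limits below are assumptions for illustration, not the patent's values), the dial rotation can be mapped to a bending angle of the tip, clamped to the device's mechanical range.

```python
def dial_to_bend(dial_angle_deg, max_dial_deg=180.0, max_bend_deg=120.0):
    """Map a dial rotation on the virtual operating member to a bending
    angle of the virtual endoscope tip. Clamps the dial input to its
    assumed mechanical range, then scales linearly."""
    clamped = max(-max_dial_deg, min(max_dial_deg, dial_angle_deg))
    return clamped / max_dial_deg * max_bend_deg
```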