For navigating the medical instruments required during a medical intervention, 2D x-ray images of the sites being treated and of the areas of internal tissue surrounding them in the patient undergoing treatment are evaluated, the images being generated in realtime by means of fluoroscopic imaging. Although such two-dimensional x-ray images show no spatial depth information, in contrast to 3D views reconstructed from the two-dimensional projection data of a number of axial 2D cross-sectional images combined into one volume dataset obtained by means of, for example, spiral CT or rotation angiography, they are available in realtime and enable both the doctor's and the patient's exposure to radiation to be minimized because the patient needs to be x-rayed just once, from a single irradiating direction.
The spatial depth information is conventionally retrieved by merging 2D x-ray images recorded using fluoroscopic imaging with 3D reconstructions of preoperatively generated CT or MRT cross-sectional images of the regions of the body being treated and the areas of tissue surrounding them or, as the case may be, with intraoperatively recorded 3D rotation angiograms, and registering them jointly therewith. Merging therein involves an image-processing procedure whereby three-dimensionally reproduced image objects are rendered congruent with the relevant image objects in recorded 2D x-ray images and additively superimposed thereon. The image objects that have been three-dimensionally reconstructed or recorded in three-dimensional form using rotation angiography are thus placed under the two-dimensional fluoroscopic x-ray images, whose image data is then stored (co-registered) in an image archive along with the image data of those three-dimensionally reconstructed or, as the case may be, three-dimensionally recorded image objects. Combining co-registered 2D layer recordings and three-dimensionally reconstructed image objects in this way makes it easier for the doctors providing treatment to find their bearings within the volume area under consideration.
Registering and visualizing the merged 2D and 3D image data are therein usually performed in two separate steps: It must first be ascertained from which direction the volume area to be imaged has to be projected so it can be rendered congruent with a 2D x-ray image recorded by means of, for example, an angiography system, and registered jointly with said image. Various approaches are possible for that purpose; since they have no relevance to the subject matter of the present invention, they can be left out of account here. During visualizing, the co-registered image data must be displayed in a merged 2D/3D representation, which is to say in a joint representation of a recorded 2D x-ray image F (referred to below also as a “fluoroscopy image”) and of a 3D reconstruction M projected into the representation plane Exy (projection plane) of the relevant 2D x-ray image, which projected reconstruction will then of course also be two-dimensional.
A standard method for jointly graphically visualizing the image data of two or more initial images is what is termed “overlaying”. The respective initial images are therein rendered mutually congruent and overlaid (“cross-mixed”) to form an aggregate image by means of alpha blending—a digital image- or video-processing technique—taking account of the individual pixels' respective color and transparency information. For various graphic formats (for example PNG, PSD, TGA, or TIFF), what is termed an alpha channel is provided therefor in which, besides the actual image data's coded color information, transparency information is stored using m-bit coding in up to 2^m gradations, indicated by means of an opacity value α (blending factor) in the range between zero (totally transparent) and one (totally opaque). A merged 2D aggregate image B created by means of alpha blending when the two two-dimensional images F and M are overlaid can be described in mathematical terms as a three-dimensional field (meaning as a third-order tensor) with components having the form (nx, ny, IB(nx, ny)), which is to say as triplets, with nx and ny being the x and y coordinates of the individual pixel locations in the image plane Exy of the merged aggregate image B and IB(nx, ny) being the gray-scale or, as the case may be, RGB color values of the pixels at the relevant pixel locations of said image. In the former instance, a special one-dimensional case, IB(nx, ny) is a scalar quantity indicating the intensity at the site of the respective pixel (nx, ny); in the latter instance, IB(nx, ny) is a three-dimensional color vector whose components describe the luminance values of the individual primary colors red, green, and blue of the merged aggregate image B at the site of a pixel (nx, ny).
Said vector can therein be calculated using the formula

    IB(nx, ny) := α·IM(nx, ny) + (1 − α)·IF(nx, ny)  ∀ (nx, ny), where 0 < α < 1,  (1)

with IF(nx, ny) and IM(nx, ny) likewise being vector quantities indicating the color values of the pixels at the relevant pixel locations (nx, ny) of the two images and with the scalar factor α indicating the opacity value used (referred to below also as the “blending factor”). That is a special form of linear combining, known as “conical affine combining”, where all coefficients are greater than zero and add up to one (convex combining). The blending factor α is therein a parameter describing what percentage of the gray-scale values of the individual pixels of the merged aggregate image B each of the two overlaid initial images F and M is to occupy.
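The per-pixel blending of equation (1) can be sketched as follows in Python/NumPy; intensities are assumed to be normalized to [0, 1], and the function name and sample values are illustrative only:

```python
import numpy as np

def alpha_blend(overlay, fluoro, alpha):
    """Blend per equation (1): I_B = alpha * I_M + (1 - alpha) * I_F.

    overlay (M) and fluoro (F) are arrays of identical shape, either
    gray-scale (H x W) or RGB (H x W x 3), with values in [0, 1].
    """
    if not 0.0 < alpha < 1.0:
        raise ValueError("blending factor alpha must lie strictly between 0 and 1")
    overlay = np.asarray(overlay, dtype=float)
    fluoro = np.asarray(fluoro, dtype=float)
    # Convex combination: coefficients alpha and (1 - alpha) sum to one.
    return alpha * overlay + (1.0 - alpha) * fluoro

# A single RGB pixel from each image, cross-mixed half-and-half:
pixel_m = np.array([0.9, 0.1, 0.1])   # overlay image M
pixel_f = np.array([0.1, 0.1, 0.9])   # fluoroscopy image F
blended = alpha_blend(pixel_m, pixel_f, 0.5)  # -> [0.5, 0.1, 0.5]
```

The same call handles the scalar gray-scale case and the RGB vector case of equation (1), since NumPy broadcasts the scalar factor α over the color components.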
What is disadvantageous about that method, though, is that image objects shown with a low contrast definition in the fluoroscopy image F (such as the ends of catheters, cardiovascular stent implants, etc.) will be virtually obscured when a 3D reconstruction M having a high contrast definition, projected into the projection plane Exy of the relevant fluoroscopy image F, is mixed over them using a blending factor close to one. The image contrast KB, which can be shown as a function of α, is in the one-dimensional, scalar instance therein defined by the formula
    KB(α) := (IBH(α) − IBV(α)) / IBH(α),  (2)

with IBV being the image intensity of an image object BO in the foreground BV of the merged aggregate image B and IBH being the image intensity of the object background BH, obscured by the relevant image object BO, in said image. If the image object BO shown in the foreground MV of the overlay image M can be segmented from the background MH of said overlay image M (which can as a rule be easily achieved by way of a threshold decision), it is customarily provided for only the segmented image object BO to be overlaid on the fluoroscopy image F. It is thereby ensured that in the merged aggregate image B the contrast definition of the fluoroscopy image F will be retained at least in the background region BH of the mixed-in segmented image object BO. That does not, though, apply to the foreground region BV of the merged aggregate image B, because the contrast definition is reduced there owing to the overlaying of the two initial images F and M.
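The contrast loss in the foreground region can be illustrated numerically with equation (2). The following sketch assumes normalized intensities chosen purely for illustration (a dark instrument tip of intensity 0.2 on a 0.8 background in F, covered by a bright segmented overlay object of intensity 0.9):

```python
def contrast(i_bh, i_bv):
    """Image contrast per equation (2): K_B = (I_BH - I_BV) / I_BH."""
    return (i_bh - i_bv) / i_bh

# Illustrative intensities (assumed, not taken from the document):
i_fh, i_fv = 0.8, 0.2   # fluoroscopy F: object background vs. instrument tip
i_m = 0.9               # bright segmented overlay object covering the tip

k_fluoro = contrast(i_fh, i_fv)   # contrast in F alone: 0.75

def foreground_contrast(alpha):
    # Only the segmented object is mixed in, so background pixels keep
    # I_FH; foreground pixels are blended per equation (1).
    i_bv = alpha * i_m + (1.0 - alpha) * i_fv
    return contrast(i_fh, i_bv)
```

Evaluating `foreground_contrast` shows the effect described above: the contrast definition in the foreground region BV degrades as the blending factor α approaches one, while the background region BH keeps the original contrast of F.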
A known way to retain the contrast definition also in the foreground region BV of the merged aggregate image B, defined by the area of the image object BO, is to mix only the outline of the image object BO segmented from the background MH of the overlay image M into the fluoroscopy image F. That, though, is expedient for only a few applications. Moreover, with that method the 3D impression of the segmented and mixed-in image object BO is lost, as is the information indicating that the segmented image object BO of the overlay image M is to form the foreground BV of the merged aggregate image B and that the areas of tissue, implants, or medical instruments (for example aspirating needles, catheters, surgical implements, etc.) shown in the fluoroscopy image F are to form the background BH of the merged aggregate image B (or vice versa). Another method provides for displaying the two initial images F and M not one above the other but laterally mutually displaced. That, though, has the disadvantage in some applications that the information indicating the spatial relationship between the segmented image object BO in the foreground BV of the merged aggregate image B and the areas of tissue and objects shown in the image background BH of the merged aggregate image B can be lost.
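Mixing in only the outline of the segmented object can be sketched as an erosion residue of the binary object mask; the implementation below is a minimal NumPy-only illustration (function names assumed, not from the document):

```python
import numpy as np

def mask_outline(mask):
    """One-pixel outline of a binary object mask: mask pixels having at
    least one 4-neighbour outside the mask (a simple erosion residue)."""
    m = np.pad(np.asarray(mask, dtype=bool), 1)  # zero-pad the border
    interior = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                & m[1:-1, :-2] & m[1:-1, 2:])
    return np.asarray(mask, dtype=bool) & ~interior

def mix_outline(fluoro, mask, outline_value=1.0):
    """Mix only the outline of the segmented object BO into F, leaving
    the fluoroscopic image contrast intact inside the object area."""
    out = np.array(fluoro, dtype=float)
    out[mask_outline(mask)] = outline_value
    return out

# A 3 x 3 object in a 5 x 5 mask: the outline is its 8 border pixels.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
outline = mask_outline(mask)
```

Because only the one-pixel contour is written into F, the contrast definition inside BV is preserved, at the cost of the 3D impression of BO noted above.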
Another way to retain the contrast definition in the foreground region BV of the merged aggregate image B, defined by the area of the image object BO, is to segment the implants or medical instruments shown in the foreground FV of the fluoroscopy image F and to overlay only said objects on the 3D view M projected into the projection plane Exy of the relevant fluoroscopy image F and on the image object BO shown in said view. Because with that method the background FH of the fluoroscopy image F is subtracted as a mask once segmenting has been performed, the areas of tissue imaged therein can no longer be shown in the merged aggregate image B. With a few exceptions that is very disadvantageous in most applications because the information about the exact spatial positioning of the shown implants or medical instruments in relation to the surrounding areas of tissue is lost. That method will, moreover, fail if the spatial positioning of the implants or medical instruments shown in the foreground FV of the fluoroscopy image F changes relative to the position of the areas of tissue shown in said image's background FH, for example because the mobile C-arm of a multidirectional C-arm x-ray system or the table on which the patient being examined is lying has been moved or, as the case may be, owing to the patient's moving or breathing or because his/her moving organs (such as the lungs or heart) have moved through pulsation. Standard alpha blending will remain the only option in such cases.
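This last approach, segmenting the strongly absorbing (and hence dark) instrument pixels from F by a threshold decision and showing only those on the projected 3D view, can be sketched as follows; the threshold value and function name are illustrative assumptions:

```python
import numpy as np

def overlay_instruments_on_3d(fluoro, projected_3d, threshold=0.3):
    """Show only the dark instrument pixels segmented from the foreground
    FV of F on top of the projected 3D view M; the fluoroscopic tissue
    background FH is masked out and discarded, which is precisely the
    drawback discussed above."""
    fluoro = np.asarray(fluoro, dtype=float)
    instrument_mask = fluoro < threshold   # simple threshold decision
    return np.where(instrument_mask, fluoro, projected_3d)

# One instrument pixel (0.1) next to a tissue pixel (0.8); the tissue
# pixel is replaced by the projected 3D view (0.6):
f = np.array([[0.1, 0.8]])
m = np.array([[0.6, 0.6]])
merged = overlay_instruments_on_3d(f, m)  # -> [[0.1, 0.6]]
```

The sketch makes the failure mode evident: the mask is valid only as long as the instruments stay where they were at segmenting time, so any C-arm, table, or patient motion invalidates it and forces a fall-back to standard alpha blending.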