Several medical image acquisition techniques and systems exist today that render a digital representation of a medical image, e.g. a radiographic image.
One example of such a system is a computed radiography system wherein a radiation image is recorded on a temporary storage medium, more particularly a photostimulable phosphor screen. In such a system a digital image representation is obtained by scanning the screen with radiation of (a) wavelength(s) within the stimulating wavelength range of the phosphor and by detecting the light emitted by the phosphor upon stimulation.
Other examples of such systems are direct radiography systems, i.e. systems wherein a radiographic image is recorded in a solid-state sensor comprising a radiation-sensitive layer and a layer of electronic read-out circuitry.
Still another example is a system wherein a radiographic image is recorded on a conventional X-ray film, which is developed and subsequently subjected to image scanning.
Still other systems such as a tomography system may be envisaged.
The digital image representation of the medical image acquired by one of the above systems can then be used for generating a visible image on which the diagnosis can be performed. For this purpose the digital image representation is applied to a hard copy recorder or to a display device.
Commonly the digital image representation is subjected to image processing prior to hard copy recording or display.
In order to convert the digital image information optimally into a visible image on a medium on which the diagnosis is performed, a multiscale image processing method (also called multiresolution image processing method) has been developed by means of which the contrast of an image is enhanced.
According to this multiscale image processing method an image represented by an array of pixel values is processed by applying the following steps. First the original image is decomposed into a sequence of detail images at multiple scales and occasionally a residual image. Next, the pixel values of the detail images are modified by applying to these pixel values at least one nonlinear monotonically increasing odd conversion function with a gradient that gradually decreases with increasing argument values. Finally, a processed image is computed by applying a reconstruction algorithm to the residual image and the modified detail images, the reconstruction algorithm being the inverse of the above decomposition process.
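These three steps can be sketched as follows. The box-blur pyramid and the power-law conversion function below are illustrative assumptions, not the patented implementation; the key properties are that the conversion function is nonlinear, monotonically increasing and odd, with a gradient that decreases for increasing argument values, and that reconstruction exactly inverts the decomposition.

```python
import numpy as np

def decompose(image, levels=4):
    """Decompose an image into detail images at multiple scales plus a residual.
    Illustrative pyramid: each detail image is the difference between the
    current image and a smoothed copy (simple 3x3 box blur)."""
    details = []
    current = image.astype(float)
    for _ in range(levels):
        padded = np.pad(current, 1, mode="edge")
        smooth = sum(
            padded[dy:dy + current.shape[0], dx:dx + current.shape[1]]
            for dy in range(3) for dx in range(3)
        ) / 9.0
        details.append(current - smooth)   # detail image at this scale
        current = smooth
    return details, current                # residual image

def modify(detail, p=0.7):
    """Nonlinear, monotonically increasing, odd conversion function whose
    gradient decreases with increasing argument: sign-preserving power law."""
    return np.sign(detail) * np.abs(detail) ** p

def reconstruct(details, residual):
    """Inverse of the decomposition above: sum the residual and the details."""
    return residual + sum(details)

img = np.linspace(0.0, 255.0, 64).reshape(8, 8)
details, residual = decompose(img)
processed = reconstruct([modify(d) for d in details], residual)
```

With unmodified detail images, `reconstruct` returns the original image exactly; contrast enhancement comes solely from applying `modify`, which boosts small detail amplitudes relative to large ones.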
The above image processing technique has been described extensively in European patent EP 527 525, the processing being referred to as MUSICA image processing (MUSICA is a registered trade name of Agfa-Gevaert N.V.).
The described method is advantageous over conventional image processing techniques such as unsharp masking etc. because it increases the visibility of subtle details in the image and because it increases the faithfulness of the image reproduction without introducing artefacts.
Prior to being applied to a hard copy recorder or to a display device, the grey value image is converted pixelwise into a digital image representing the density of the visible image.
The conversion of grey value pixels into density values suitable for reproduction or display comprises the selection of a relevant subrange of the grey value pixel data and the conversion of the data in this subrange according to a specific gradation function. Commonly, the gradation function is defined by means of a lookup table, which, for each grey value, stores the corresponding density value.
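The selection of a subrange and the lookup-table-based gradation can be sketched as below. The 12-bit grey value range, the subrange limits, the density range and the gamma-shaped gradation are all hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical 12-bit grey value range and a chosen relevant subrange.
GREY_MAX = 4095
lo, hi = 500, 3000            # relevant subrange selected from the pixel data
d_min, d_max = 0.2, 3.0       # reproducible density range of the output medium

# Build a lookup table: for each grey value, the corresponding density value.
grey = np.arange(GREY_MAX + 1)
t = np.clip((grey - lo) / (hi - lo), 0.0, 1.0)   # 0..1 inside the subrange
gamma = 2.2                                      # illustrative gradation shape
lut = d_min + (d_max - d_min) * t ** gamma

def to_density(grey_image):
    """Pixelwise conversion of grey values to density via the lookup table."""
    return lut[grey_image]

sample = np.array([[400, 1200], [2400, 3600]])
densities = to_density(sample)
```

Grey values below the subrange clip to the minimum density and values above it to the maximum density; only data inside the subrange is spread over the density range according to the gradation function.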
Preferably the relevant subrange and the gradation function to be applied are adapted to the object and to the examination type so that optimal and constant image quality can be guaranteed.
The shape of the gradation function is critical. It determines how the subintervals of the density range of the visible image are associated with subranges of grey values, in a monotonic but mostly nonlinear way.
In those intervals where the function is steep, a narrow subrange of grey values is mapped onto the available output density interval. On the other hand, in those intervals where the function has a gentle gradient, the available output density interval is shared by a wide subrange of grey values. If the gradation function has a gentle gradient in the low density half and evolves to steeper behaviour in the high density portion, then most of the grey values are mapped to low density, and the overall appearance of the resulting image will be bright. Conversely, if the gradation function takes off steeply and evolves towards high density with decreasing gradient, then most of the grey values are mapped to high density, yielding a dark, greyish look.
In this way, it is possible to determine how the density intervals are distributed across the range of grey values by manipulating the shape of the gradation function. As a general rule, grey value subranges that are densely populated (i.e. peaks in the grey value histogram) should be mapped onto a wide output density interval. Conversely, intervals of grey values that occur infrequently in the image should be concentrated on narrow density intervals. This paradigm, known as histogram equalization, leads to enhanced differentiation of grey value regions in an image.
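A gradation function following this paradigm can be derived directly from the normalised cumulative histogram of the image, as sketched below (the grey value and density ranges are illustrative assumptions):

```python
import numpy as np

def equalizing_gradation(grey_image, grey_max=4095, d_min=0.2, d_max=3.0):
    """Derive a gradation lookup table whose gradient follows the grey value
    histogram: densely populated grey value subranges (histogram peaks) receive
    a wide density interval, sparsely populated ones a narrow interval."""
    hist, _ = np.histogram(grey_image, bins=grey_max + 1,
                           range=(0, grey_max + 1))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                      # normalised cumulative histogram, 0..1
    return d_min + (d_max - d_min) * cdf

img = np.random.default_rng(0).integers(1000, 3000, size=(64, 64))
lut = equalizing_gradation(img)
density = lut[img]
```

Because the cumulative histogram rises fastest where the histogram peaks, exactly those grey value subranges are spread over the widest density intervals.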
The density of pixels and image regions is determined by the corresponding ordinate value of the gradation function. The contrast amplification of pixels and image regions, on the other hand, is determined by the corresponding derivative value (i.e. the gradient) of the gradation function. As a consequence, if the shape of the gradation function is adjusted to accommodate a large subrange of grey values within a specified density interval, i.e. if the interval has to cope with a wide latitude, then at the same time the contrast in that density interval will drop. On the other hand, if a density interval is assigned to only a narrow grey value subrange, then that interval will provide enhanced contrast. If requirements with respect to density and contrast amplification are conflicting, which is often the case, then a compromise is unavoidable.
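The latitude-versus-contrast trade-off can be made concrete numerically. The sketch below compares two hypothetical gradations over the same density range: one accommodating the full grey value range, and one mapping only a narrow subrange; the narrow one is proportionally steeper, i.e. it amplifies contrast more.

```python
import numpy as np

d_min, d_max = 0.2, 3.0
grey = np.linspace(0.0, 1.0, 1001)      # normalised grey value axis

# Wide latitude: the whole grey value range shares the density range.
wide = d_min + (d_max - d_min) * grey

# Narrow latitude: only grey values 0.4..0.6 are mapped; the rest clips.
ramp = np.clip((grey - 0.4) / 0.2, 0.0, 1.0)
narrow = d_min + (d_max - d_min) * ramp

# Contrast amplification is the derivative of the gradation function.
g_wide = np.gradient(wide, grey)        # constant slope of 2.8
g_narrow = np.gradient(narrow, grey)    # slope of 14.0 inside the subrange
```

The narrow-latitude gradation is five times steeper where it is active: five times the contrast amplification, at the cost of clipping every grey value outside the 0.4..0.6 subrange.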
In one embodiment of the multiscale image processing method as described in the above-mentioned European patent EP 527 525, the gradation function is applied after the reconstruction process, which is the inverse of the multiscale decomposition. The gradation function is applied to the final scale of reconstruction. As a consequence, the contrast-to-grey value relationship, which is specified by the derivative of the gradation function, is identical at all scales.
In some cases, however, it is favourable to differentiate contrast adjustment depending on grey value and scale simultaneously. For example, in chest images it is important to have high contrast in the smaller scales (i.e. small-scale contrast) at high grey values to enhance the conspicuity of pneumothorax, but only moderate small-scale contrast in the low grey value areas such as the mediastinum. At the same time, large-scale contrast in the lower and mid grey values must be appropriate to visualise, for example, pleural masses.
In some embodiments disclosed in the above-mentioned European patent EP 527 525, scale-dependent boosting or suppression of the contribution of detail information is applied.
Two different implementations have been described.
In a first implementation the modified detail images are pixelwise multiplied by a coefficient in the last stages of the reconstruction process. The value of such a coefficient depends on the brightness of the pixels of the partially reconstructed image.
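A sketch of this first implementation is given below. The linear ramp from a suppression factor to a boosting factor as a function of brightness is a hypothetical choice; the patent only requires that the coefficient depend on the brightness of the partially reconstructed image.

```python
import numpy as np

def boost_detail(detail, partial, low=0.5, high=2.0, grey_max=4095.0):
    """Pixelwise multiply a (modified) detail image by a coefficient whose
    value depends on the brightness of the partially reconstructed image.
    Hypothetical choice: a linear ramp from `low` (dark) to `high` (bright)."""
    brightness = np.clip(partial / grey_max, 0.0, 1.0)
    coeff = low + (high - low) * brightness
    return detail * coeff

detail = np.array([[1.0, -2.0], [3.0, -1.0]])          # detail image pixels
partial = np.array([[0.0, 2047.5], [4095.0, 1000.0]])  # partial reconstruction
boosted = boost_detail(detail, partial)
```

With these parameters, detail amplitudes are halved in the darkest regions and doubled in the brightest ones, giving grey-value-dependent contrast at the scales where the coefficient is applied.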
In a second implementation, a partially reconstructed image is converted according to a monotonically increasing conversion function with gradually decreasing slope, for example a power function. Then the reconstruction process is continued until a full-size reconstructed image is obtained. Finally, the resulting image is converted according to a curve that is the inverse of the afore-mentioned conversion curve.
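This second implementation can be sketched as below, assuming a square-root power function and non-negative pixel values (both illustrative choices). Because the range is compressed before the remaining fine detail is added and expanded again afterwards, the same detail amplitude produces a larger excursion at high grey values than at low ones.

```python
import numpy as np

P = 0.5   # exponent of the power function (illustrative choice)

def forward(x):
    """Monotonically increasing conversion with gradually decreasing slope."""
    return x ** P            # assumes non-negative partially reconstructed values

def inverse(y):
    """Exact inverse of the conversion above."""
    return y ** (1.0 / P)

partial = np.array([[100.0, 400.0], [900.0, 1600.0]])   # partial reconstruction
fine = np.array([[1.0, -2.0], [3.0, -1.0]])             # remaining fine detail

converted = forward(partial)     # compress the range before adding fine detail
full = converted + fine          # continue reconstruction to full size
result = inverse(full)          # convert back along the inverse curve
```

For instance, adding a fine-detail amplitude of 1.0 at a partial value of 100.0 yields `inverse(10.0 + 1.0) = 121.0`, a swing of 21 grey values, while the same amplitude at 1600.0 yields a swing of 81: the boost of fine detail grows with brightness.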
Although this disclosure describes scale-dependent suppression or boosting of the contribution of detail information, it does not describe how an envisaged density or contrast amplification as a function of grey value can be obtained.
It is an aspect of the present invention to provide a method of modifying at least one of contrast and density of pixels of a processed image.
It is another aspect of the present invention to provide a user interface for such methods.
Further aspects will become apparent from the description given below.