A variety of technologies can be used to investigate biological processes and anatomy. The following examples are types of scan that may be used to provide medical images: X-Ray; Computed Tomography (CT); Ultrasound (US); Magnetic Resonance Imaging (MRI); Single Photon Emission Computed Tomography (SPECT); and Positron Emission Tomography (PET). Each type of scan is referred to as an imaging modality.
Typically, a medical scan provides a ‘dataset’. The dataset comprises digital information about the value of a variable at each of many points. The points are different spatial locations that are spread throughout 3 physical dimensions, i.e. each point is at a particular location on a three dimensional grid. The variable may typically be an intensity measurement. The intensity may be, for example, an indication of the X-Ray attenuation of the tissue at each particular point.
In such a three dimensional dataset, the element of the scan image located at a particular spatial location may be referred to as a ‘voxel’. A voxel is therefore analogous to a ‘pixel’ of a conventional 2-Dimensional image.
Although the dataset of the medical scan is 3-Dimensional, it is typically displayed to a user as a two dimensional image on a medical imaging workstation. An image slice from a 3-D dataset is simply a 2-D representation, consisting of those data points that lie on a particular 2-D plane through the 3-D image. A typical 3-D dataset, such as one from an MRI scan, will have a matrix of regularly spaced data points. As a non-limiting example, the MRI scan may have data points whose centres are spaced by 1 millimeter in the x- and y-directions across any plane of the scan. Consecutive planes may, for example, be parallel and separated by 7 millimeters.
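As an illustration of the regular grid described above, the centre of each voxel can be mapped to a physical location simply by multiplying its integer index by the grid spacing. The following is a minimal sketch, assuming the 1 mm in-plane spacing and 7 mm slice separation of the example above; the function name `voxel_to_world` is illustrative and not taken from any particular library:

```python
import numpy as np

# Hypothetical grid spacing (mm), matching the example in the text:
# 1 mm in x and y within each plane, 7 mm between consecutive planes.
spacing = np.array([1.0, 1.0, 7.0])

def voxel_to_world(index, spacing, origin=np.zeros(3)):
    """Map an (i, j, k) voxel index to the physical location (in mm)
    of that voxel's centre on the regular 3-D grid."""
    return origin + np.asarray(index) * spacing

# The centre of voxel (10, 20, 3) in physical space:
world = voxel_to_world((10, 20, 3), spacing)
print(world)  # [10. 20. 21.]
```

The reverse mapping (physical position to nearest voxel index) is obtained by dividing by the spacing and rounding.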
The 3-D scan may therefore be divided up into tens or hundreds of parallel 2-D images, for display purposes. The user of a workstation can then flick through the images in sequence, for example, thereby allowing a view of successive cross sections of the tissue that was scanned.
Typical workstations allow the 2-D slices to be viewed individually, or sequentially in successive steps. The view may typically be along a selected one of three perpendicular directions. For a human subject lying down, the three perpendicular viewing directions may, for example, run along the ‘long axis’ of the body, from the front of the body to the back, and ‘across’ the body from one side to the other. The corresponding cross-sections are conventionally referred to as:    (i) ‘axial’, for a cross-section perpendicular to the long axis of the body;    (ii) ‘coronal’, for a cross-section perpendicular to the front-to-back direction, dividing the front of the body from the back; and    (iii) ‘sagittal’, for a cross-section perpendicular to the side-to-side direction, dividing one side of the body from the other.
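For a volume stored as a regular 3-D array, extracting a slice for each of the three conventional viewing directions amounts to fixing one array index. The sketch below assumes a hypothetical volume indexed as (side-to-side, front-to-back, long-axis); the actual axis-to-orientation mapping varies between scanners and file formats, so this is illustrative only:

```python
import numpy as np

# Hypothetical volume, indexed as (x, y, z) =
# (side-to-side, front-to-back, long axis of the body).
volume = np.random.rand(256, 256, 40)

def get_slice(volume, orientation, index):
    """Return the 2-D slice at `index` along the named viewing axis."""
    if orientation == "sagittal":    # fix the side-to-side index
        return volume[index, :, :]
    elif orientation == "coronal":   # fix the front-to-back index
        return volume[:, index, :]
    elif orientation == "axial":     # fix the long-axis index
        return volume[:, :, index]
    raise ValueError(f"unknown orientation: {orientation}")

axial = get_slice(volume, "axial", 20)
print(axial.shape)  # (256, 256)
```

Stepping `index` through its range produces the sequence of cross-sections that the user flicks through on the workstation.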
Henceforth, the term ‘dataset’ should be construed as meaning a three dimensional dataset that results from performing a medical scan. However, when the scan image is displayed, only a two dimensional slice of the dataset may be on view at any one time as an image.
Medical scan images may include information about a wide variety of anatomical features and structures. For example, a scan image may show various types of healthy tissue, such as bone and organs within the body. A scan image may also show abnormal tissues. The purpose of obtaining a medical scan image is often to detect abnormal tissue. So, a typical example of an application of medical imaging is in the identification and ‘staging’ of cancerous tumours.
‘Multiple modalities’ may be used to provide medical scan images. This approach involves obtaining scan images of the same region of tissue by more than one modality. For example, the same region of tissue may be imaged using both a PET scan and a CT scan. Scanners that can carry out multiple mode scans are referred to as ‘hybrid scanners’. Typically, a hybrid scanner allows the subject to be scanned by both modalities in the same sitting.
The usual prior art approach to images that are not in the same frame of reference is to align them, using a more complex transformation than is needed for images that are already in the same frame of reference. This process of aligning images is known as image registration. One aim of image registration may simply be to correct for differences in patient position.
There are three well known image registration methods. These are termed ‘rigid’, ‘affine’ and ‘deformable’ registration. FIGS. 1-3 illustrate each of these registration methods:    (i) FIG. 1 shows a rigid alignment method of image registration.    (ii) FIG. 2 shows an affine alignment method of image registration.    (iii) FIG. 3 shows a deformable alignment method of image registration.
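The difference between the first two methods can be illustrated by how each transforms point coordinates: a rigid transform applies only rotation and translation, preserving all distances, whereas an affine transform additionally allows scaling and shearing. A deformable registration would instead apply a spatially varying displacement field to each point. A minimal 2-D sketch follows; it illustrates the transform families only, not a registration algorithm itself:

```python
import numpy as np

def rigid_transform(points, angle_rad, translation):
    """Rigid alignment: rotation plus translation (distances preserved)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s],
                  [s,  c]])
    return points @ R.T + translation

def affine_transform(points, matrix, translation):
    """Affine alignment: any linear map plus translation, so scaling
    and shearing are also permitted (distances need not be preserved)."""
    return points @ np.asarray(matrix).T + translation

pts = np.array([[0.0, 0.0], [1.0, 0.0]])

# Rotate 90 degrees and shift by (5, 0): the two points stay 1 unit apart.
moved = rigid_transform(pts, np.pi / 2, np.array([5.0, 0.0]))

# An affine transform with a scaling matrix stretches the gap to 2 units.
scaled = affine_transform(pts, [[2.0, 0.0], [0.0, 1.0]], np.zeros(2))
```

A deformable transform would replace the single shared matrix with a per-point displacement, warping each location independently.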
There are a number of techniques in the prior art which allow a user to delineate regions using multiple imaging volumes.
One approach presents a first image as a base layer, over which one or more semi-transparent overlays are displayed. This approach is known as a ‘fused view’ in medical imaging. This approach enables the user to view one image, whilst being able to view and use information from overlying images that are derived from another dataset.
However, various datasets may be acquired at different orientations and resolutions. So either a rigid or non-rigid transformation is usually required to produce each overlay image. As a consequence, the image data shown to the user in the overlay image(s) is not the originally captured image data for that image. The data has been warped or rotated, or in some other way resampled, in order to create the overlay image.
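The resampling step described above can be sketched with a standard image-processing routine. Here a hypothetical overlay slice is simply rotated, as a stand-in for whatever alignment transformation the registration step produced; the point is that the result contains interpolated values rather than the originally captured data:

```python
import numpy as np
from scipy import ndimage

# Hypothetical overlay slice to be aligned with a base image:
# a bright square on a dark background.
overlay = np.zeros((64, 64))
overlay[20:40, 20:40] = 1.0

# Apply a stand-in alignment transform: rotate by 10 degrees about the
# slice centre, with linear interpolation, keeping the original shape.
rotated = ndimage.rotate(overlay, angle=10.0, reshape=False, order=1)

# The displayed overlay is now *resampled* data: every pixel is an
# interpolated value, not an originally captured intensity.
```

Any warp, rotation, or change of grid spacing involves the same kind of interpolation, which is why the overlay no longer shows the data exactly as acquired.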
This may be problematic. The resolution of the original image may be insufficient to produce resampled overlay images of adequate quality. For example, MR images are typically highly anisotropic, which means that the voxels are not cubic. The voxels may typically measure 3 mm×0.3 mm×8 mm. Such images are best viewed in their original orientation, and do not produce clear images if rotated or warped. This is a major constraint on known imaging systems.
The present invention therefore relates to display logic and/or image processing steps required to produce convenient displays of multiple images which have been acquired in a set.
Medical image display software typically renders 3D scans as 2D cuts through the 3D volume. A processing step known as ‘volume reconstruction’ is used to create a 3D volume from the stack of 2D images produced by the scanner. The resulting displays are called ‘Multi-Planar Reconstructions’, or MPRs for short. For example, it is conventional to show 3D medical images in 3 planes: axial—head to foot slices, coronal—front to back slices and sagittal—left to right slices. Some software provides the user with the ability to adjust the orientation of the view. Each view has an orientation, position and extent, which determine exactly which part of the 3D image is shown. In some advanced visualisation software it is possible to define views where the cut is not a plane, but a curved surface through the 3D volume.
Some of the above medical images may be acquired in groups with little or no patient motion between acquisitions. For example, MR images are very typically acquired using:    (i) multiple pulse sequences, to generate different image appearances;    (ii) gated image sequences, where images are acquired at different points of the breathing or cardiac cycle;    (iii) dynamic sequences, where the uptake of an image contrast agent is observed using multiple images.
Similarly, CT and PET images may be gated against some physiological process such as breathing, or acquired dynamically to capture a biological process of the subject as a function of time. In other situations, multiple acquisitions may be made in the same sitting of different parts of the body.
It is typical to consider each such group of images as forming a single group or set, for storage, transmission, display and manipulation purposes.
The groups may be given different names, according to the context and type of acquisition. For example:    (i) Multiple MRI scans are referred to as ‘multi-sequence’ MR;    (ii) In CT, multiple static scans are referred to as ‘multi-phase’ CT. Multiple static scans may, for example, be taken in order to capture the progress of an injected contrast medium through the organ. For cases where multiple CT scans have been acquired of different areas of the body in the same sitting, the set is called ‘multi-series’.
In the remainder of this document, such datasets will be referred to, in the general case, as ‘multi-volume datasets’.
Many known medical image workstations provide good tools to display and manipulate single 3D volumes. More challenging is the problem of display and manipulation of multiple 3D volumes. For example, the user may wish to load and visualise 3D scans acquired from different scanners, e.g. a CT and MRI scan of the same patient. Alternatively, the user may wish to compare the same type of scan taken at different points in time, for example to assess the change in disease over time, or to measure response to therapy. One or both scans may comprise multi-volume datasets.
Considering first datasets that are not multi-volume, one requirement is to align the images. These are pairs of images taken from two different datasets, for example an MRI scan and a CT scan of the same patient.
The process of aligning images is known as ‘image registration’. The rigid, affine and deformable image registration methods shown in FIGS. 1-3 can be used to correct for differences, to various extents. Use of these image registration techniques therefore makes the assessment of aligned images an easier process.
Known methods for aligning pairs of images are shown for example in references [1] and [2]. Commercial software is available to perform such alignment automatically. For example, Mirada XD3 available from Mirada Medical Ltd. is one such software application.
Another attribute of such software packages is the ability to create both ‘fused’ and ‘side-by-side’ displays of the aligned images. In the fused display, one image is shown in a view and another image shown as a semi-transparent overlay on the same view. In this case, in known systems, the overlay image is typically transformed and resampled according to the alignment calculated by the registration method. Side-by-side displays show the aligned datasets in non-fused views. However, they “bind” the scroll and zoom controls of the displays, such that they are always displayed in alignment. Some software tools also place a cursor or cross-hair on each display, and keep these in alignment as the user adjusts them.
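The fused display described above amounts to alpha-blending the (already aligned and resampled) overlay onto the base image. A minimal sketch, assuming both images have been brought onto the same grid and normalised to the range [0, 1]:

```python
import numpy as np

def fuse(base, overlay, alpha=0.5):
    """Blend a semi-transparent overlay onto a base image.

    Assumes both images are already registered, resampled to the same
    grid, and normalised to [0, 1]. `alpha` is the overlay opacity.
    """
    return (1.0 - alpha) * base + alpha * overlay

# Toy example: a uniform base and a uniform overlay.
base = np.full((4, 4), 0.2)
overlay = np.full((4, 4), 0.8)

fused = fuse(base, overlay, alpha=0.5)
print(fused[0, 0])  # 0.5
```

In practice each image would first be passed through its own colour map and window/level settings before blending; the opacity `alpha` is the control the user adjusts to fade the overlay in and out.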
Most modern workstations allow the user to configure the size and the position of the MPR views in a manner of their choosing. Such configured layouts of views may also be referred to as “hanging protocols” in the field of medical imaging.
When two or more datasets are loaded and are to be displayed simultaneously, it is useful to bind the display parameters such that they are correlated or synchronised. For example, when viewing multiple CTs taken over time, the two or more datasets each comprise a set of CT images taken at one sitting. It is useful to bind the zoom and pan settings of the views, such that the corresponding anatomical locations in the CTs can be visualised simultaneously. Display controls such as Window and Level, analogous to brightness and contrast controls, are also useful to bind under certain situations.
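Binding display parameters can be sketched as a simple broadcast of one shared setting to every registered view. The class and attribute names below are illustrative, not taken from any actual workstation software:

```python
class View:
    """A minimal stand-in for one display view with its own settings."""
    def __init__(self):
        self.zoom = 1.0
        self.pan = (0.0, 0.0)

class BoundViews:
    """Keep zoom and pan settings synchronised across a group of views."""
    def __init__(self):
        self.views = []

    def register(self, view):
        self.views.append(view)

    def set_zoom(self, zoom):
        # Pushing one shared setting to every bound view keeps the
        # corresponding anatomical locations visible simultaneously.
        for view in self.views:
            view.zoom = zoom

    def set_pan(self, pan):
        for view in self.views:
            view.pan = pan

binding = BoundViews()
a, b = View(), View()
binding.register(a)
binding.register(b)
binding.set_zoom(2.0)
print(a.zoom, b.zoom)  # 2.0 2.0
```

Window and Level controls could be bound in exactly the same way; a real implementation would also translate the shared setting through each dataset's registration transform rather than copying it verbatim.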
The challenges of display and manipulation of single and multiple ‘multi-volume’ datasets are greater still.
A typical example might be where one dataset is a multi-sequence MRI scan, consisting of images acquired in at least two different orientations. A second dataset to be displayed at the same time as the first might be a CT scan taken of the same subject. Such a multi-sequence MRI dataset may use different imaging parameters for the different sequences, or different MRI sequences within the dataset may typically relate to different parts of the body. Conventional approaches to the display and manipulation of one view from the MRI scan and one from the CT scan will not work well.
In general, where at least one of the images in the view is from a multi-volume dataset, known approaches often result in sub-optimal views being displayed. This limits the information that may be derivable from the displayed images.
For example, consider the case where a multi-sequence MRI dataset is to be shown in a fused display with an overlay of a CT scan. Each image in the MRI set may have a different orientation. Displaying them only in a single pre-defined orientation will produce a poor quality display. Common MRI protocols used in diagnostic practice typically acquire a set of thickly sliced images. These images have a much greater spacing between slices than the voxel size within each image slice. Such images would be best viewed in their original orientation. However, since each image may have a different orientation, the view on known displays cannot be configured to produce good quality displays.
The side-by-side views available in conventional software offer an alternative display of the datasets. However, once again, the user of known systems is required to pre-define or pre-set the orientation before loading the data. As the user then selects different MRI sequences for display, the orientation of the MRI display and that of the CT image will neither be optimal nor correspond.
The consequence of these shortcomings is that the user will spend a great deal of time adjusting the zoom, pan and orientation of the display in order to try to visualise their data properly.