Pathological diagnosis often involves sectioning a tissue sample (e.g. a biopsy) into thin slices, mounting each slice on its own slide, and staining the slices with different methods and reagents. For example, one slice may be stained with hematoxylin and eosin (H&E) to visualize the histological structures of the sample, while an adjacent slice may be stained immunohistochemically (IHC) with a disease-specific antibody. Pathologists commonly perform an initial diagnosis on H&E-stained samples and then order IHC staining from the same biopsy block for validation and prognosis.
With the trend toward digitization, specimen slides are often scanned into digital images (virtual slides) for later viewing on monitors. To make a final diagnosis, pathologists need to simultaneously examine a region of interest on an H&E image and its corresponding area on one or more IHC images from the same biopsy block. Those stain images therefore need to be aligned on the monitor(s) such that simultaneous, synchronized viewing can be achieved across the images and the corresponding views remain accurate regardless of magnification (FIG. 1).
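As a minimal sketch of what magnification-independent synchronized viewing entails, a viewer can store one alignment transform between the two slides in base-resolution (level-0) pixel coordinates and route every viewport coordinate through it. The function name, the affine transform representation, and the `downsample` parameter below are illustrative assumptions, not details from the source.

```python
def map_viewport_point(x, y, downsample, transform):
    """Map a viewport point on one stain image to the matching point on
    another, assuming a single precomputed affine transform expressed in
    base-resolution (level-0) pixel coordinates.

    transform = (a, b, tx, c, d, ty), meaning at base resolution:
        x' = a*x + b*y + tx
        y' = c*x + d*y + ty
    downsample is the current viewing level's downsample factor
    (e.g. 4.0 when viewing at 1/4 of full resolution).
    """
    # Lift the point from viewer coordinates to base-resolution coordinates.
    bx, by = x * downsample, y * downsample
    a, b, tx, c, d, ty = transform
    # Apply the affine mapping at base resolution.
    mx = a * bx + b * by + tx
    my = c * bx + d * by + ty
    # Project back down to the viewer's current magnification level.
    return mx / downsample, my / downsample
```

Because the transform is stored at base resolution, the same mapping serves every zoom level; only the scaling in and out of viewer coordinates changes as the pathologist zooms.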
Aligning such stain images is challenging, since there is often a great difference in appearance between two adjacent sample slices stained by different methods, and various local deformations are involved. Adjacent slices are often not related by a simple transformation, and structural changes are unpredictable across adjacent slices and at different magnifications. For example, two images obtained from adjacent but different parts of a tissue block may have ill-defined structural correspondences (see FIG. 2). The stain images may also have weak structures that need to be made explicit in order to align the whole images (FIG. 3). Furthermore, because tissue slices may be stretched or deformed during sample handling, different parts of an image may transform differently from other parts of the same image (FIG. 4).
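One way to make the last point concrete is that a single global transform cannot capture handling-induced deformation, so alignment can instead be estimated per tile. The deliberately naive helper below searches integer shifts minimizing the sum of squared differences between a fixed tile and a moving tile; it is a hypothetical sketch for illustration only, not the method described here, and practical systems would use a proper non-rigid registration.

```python
def best_local_shift(fixed, moving, max_shift=2):
    """Return the integer (dy, dx) shift, within +/- max_shift, that best
    aligns `moving` to `fixed` under a mean squared-difference criterion.

    fixed and moving are equally sized 2-D lists of grayscale intensities.
    Applied independently per tile, this can yield different shifts for
    different parts of the same image, reflecting local deformation.
    """
    h, w = len(fixed), len(fixed[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        d = fixed[y][x] - moving[yy][xx]
                        err += d * d
                        n += 1
            if n == 0:
                continue  # no overlap at this shift
            err /= n
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

Running this over a grid of tiles produces a displacement field rather than one transform, which is the essential difference between rigid alignment and the locally varying alignment these images require.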
Because of the problems discussed above, existing systems for image alignment and navigation require the user to manually locate corresponding areas on the virtual slides (images). This process must be redone when viewing regions away from the alignment region or at different resolutions. For very large images (e.g. 100 k×100 k pixels), the process becomes tedious and impractical. In addition, when the images are examined locally at high resolution, the appearance of corresponding regions diverges rapidly and it becomes difficult to find matching points.
Therefore, there is a need for methods and systems that automatically align images which are similar in global appearance but have local deformations, for example, large images of tissue samples stained by different methods.