It is common practice to include annotations (e.g., metadata) with an image (e.g., a digital image) to convey information in a human-readable format as well as structured information that can be consumed by an external system. More specifically, annotations such as regions of interest, straight lines, arrows, measurement results, graphical patterns, characters, and symbols may be displayed in or near a visible image.
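As an illustration only, the pairing of a human-readable annotation with machine-readable structured data might be sketched as follows; the record structure, field names, and helper function are hypothetical and are not drawn from any particular standard:

```python
# Minimal sketch (hypothetical structure, not from any standard) of an
# annotation that carries both a human-readable label and machine-readable
# structured data that an external system could consume.
import json


def make_annotation(kind, label, coords, value=None, unit=None):
    """Build a hypothetical annotation record for an image region."""
    record = {
        "kind": kind,          # e.g. "region_of_interest", "arrow", "measurement"
        "label": label,        # human-readable text displayed in or near the image
        "coordinates": coords, # pixel coordinates within the image
    }
    if value is not None:
        record["measurement"] = {"value": value, "unit": unit}
    return record


# A region of interest with a measurement result attached.
roi = make_annotation(
    kind="region_of_interest",
    label="Lesion A",
    coords=[[120, 80], [200, 80], [200, 150], [120, 150]],
    value=12.4,
    unit="mm",
)

# The same record serializes to JSON so that an external system can parse it.
serialized = json.dumps(roi)
```

Here the "label" field serves the human reader, while the "coordinates" and "measurement" fields carry the structured information referred to above.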
Conventional imaging systems are complicated because data resides in the disparate systems in which it is generated, often without context, especially when the images are accessed by geographically disconnected users. Conventionally, images and related text from a single specialty (e.g., a single modality) are maintained in separate systems rather than in a single collection for use in decision-making. Tools for conveying important information and for linking images from one modality to images from a different modality do not exist. Moreover, systems and methods for joining specific areas of interest with annotated regions of interest from individual images across the different modalities do not exist.
While standards exist that define how images are generated for individual specialties and modalities, no standards aggregate and maintain relationships between such images and related textual information in a human- and/or machine-readable format.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one example technology area where some embodiments described herein may be practiced.