Embodiments described herein generally relate to schemes for cursor control, and in particular to controlling a cursor in a computer system comprising more than one display region, for example a computer system for displaying different images derived from medical image data on different display screens.
A well-known aspect of user interfaces commonly associated with computer systems is the provision of pointing devices for controlling a cursor/pointer on a display. Typically a pointing device will be a physical input device, such as a mouse, track pad, joystick or stylus-based digitizer, which responds to movements by a user to cause a cursor displayed on a display to move in a corresponding manner. This allows a user to intuitively move the cursor around the display, for example to indicate selections relating to what is displayed on the screen by “pointing” and “clicking”.
One field in which computer systems are frequently used to view image data is the medical imaging field. For example, a user may wish to view images derived from studying a patient with an imaging modality such as X-ray, computed tomography (CT), magnetic resonance (MR), ultrasound or positron emission tomography (PET). In particular a user may wish to view displays of medical image data to analyze various aspects of the data, for example by taking measurements therefrom. Measurements may be obtained, for example, from locations in displayed images which are selected by a user moving a display cursor to identify elements of displayed images that are of interest.
It is common for users to review medical image data using a computer system comprising more than one display screen. For example, a first display screen may be used to display a first medical image (or images) and a second display screen may be used to display a second medical image (or images). Providing for multiple displays in this way can help a user readily compare different images on different displays. For example, a user may often wish to compare images from studies of a patient from different times, or to compare images from a patient with a corresponding reference image.
In circumstances where a computer system provides a single pointing device to allow a user to interact with multiple display regions, the cursor will typically be moved from one screen to the other by the user controlling movement of the cursor so the cursor in effect leaves one display region through one of its edges and enters the other display region at a corresponding position on one of its edges. Conveniently, the entry and exit edges of the respective display screens will be physically adjacent to provide the user with the impression of a single continuous movement of the cursor from one display screen to the other.
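The edge-crossing handoff described above can be sketched in simplified form as follows. This is an illustrative sketch only, assuming two horizontally adjacent display regions of known pixel dimensions; the class and function names are hypothetical and do not correspond to any particular windowing system API.

```python
class DisplayRegion:
    """A display region with a pixel width and height (hypothetical helper)."""
    def __init__(self, name, width, height):
        self.name = name
        self.width = width
        self.height = height


def move_cursor(regions, active, x, y, dx, dy):
    """Apply a pointer movement (dx, dy) to cursor position (x, y) in the
    region indexed by `active`. If the cursor would leave through the left
    or right edge, hand it off to the adjacent region at the corresponding
    vertical position, mimicking a single continuous movement."""
    x += dx
    y += dy
    region = regions[active]
    # Keep vertical motion within the current region.
    y = max(0, min(region.height - 1, y))
    if x >= region.width and active + 1 < len(regions):
        # Exit through the right edge: enter the next region at its left edge.
        x -= region.width
        active += 1
    elif x < 0 and active > 0:
        # Exit through the left edge: enter the previous region at its right edge.
        active -= 1
        x += regions[active].width
    else:
        # No adjacent region to enter: clamp to the current region's bounds.
        x = max(0, min(region.width - 1, x))
    return active, x, y


# Example: a rightward move near the right edge of the first screen
# carries the cursor onto the second screen.
left = DisplayRegion("left", 1920, 1080)
right = DisplayRegion("right", 1920, 1080)
active, x, y = move_cursor([left, right], 0, 1900, 500, 50, 0)
```

In this sketch the cursor at (1900, 500) moved by (+50, 0) exits the first 1920-pixel-wide region and re-enters the second region at x = 30, at the same vertical position, which is the behavior the passage above describes.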
The inventor has recognized that a drawback with this approach is a potential need for a user to undertake repeated large-scale movements, which might be termed “long moves”, in order to move a display cursor (pointer) between regions of interest in images represented on different displays, or between user interface elements in a medical imaging application that are presented on only one of a plurality of displays. Not only can this become tedious and time-consuming for the user, it can increase the probability of repetitive-strain types of injury.
Previous proposals for assisting cursor control in computer systems have involved schemes in which a cursor is moved in response to eye or head movements of a user using vision tracking techniques. However, in some situations, in particular in medical imaging applications, there can be a requirement to maintain precise cursor control in conjunction with the ability to make large cursor movements, for example to move between different displays, and this is not generally possible with current vision tracking techniques.
With this in mind there is a desire to provide improved schemes for controlling cursors in computer systems employing multiple display regions, for example in the field of medical imaging.