Systems for image-guided surgery are known and commercially available. Such systems comprise at least one display device, such as a monitor or a screen, for displaying information which aids the surgeon during surgery. Some of these systems comprise two display devices. The present invention relates to generating images to be displayed by such display devices.
The present invention is directed to a system, in particular for image-guided surgery, comprising at least one and in particular at least two display devices. In particular, the system comprises a position determinator for determining the position, in particular the relative position, of the display device(s). In particular, the system also comprises an image generator for generating images, which are to be displayed by the display device(s), in accordance with the determined position, in particular the relative position. Generating images in accordance with the determined position means that the generated images depend on the determined position. This means in particular that the image generator is capable of generating independent images for each of the display devices.
The image generator can for example generate the same image for all the display devices, the same image for some of the display devices while the remaining display devices display different images, or a different image for each display device. In particular, the decision as to which image is to be generated for a particular display device depends on the position, in particular the relative position, of the display devices. If more than two display devices are provided, the term “relative position” comprises a set of relative positions which comprises at least one position of each of the display devices relative to another of the display devices.
In this document, the term “position” means a combination of location and alignment. The location means the point in space at which an object is located in up to three spatial or translational dimensions. The alignment or orientation means the rotational angle at which an object is positioned in up to three rotational dimensions. The term “relative position” means the relative spatial and/or rotational displacement, each in up to three dimensions, of two objects such as display devices. The relative position between two objects can be determined either directly or indirectly. Indirectly determining it means for example determining the positions of two objects relative to a common reference and determining the relative position between the objects from the relative positions of the objects and the reference.
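The indirect determination described above can be sketched in Python. This is a minimal illustration under simplifying assumptions: poses are reduced to two translational dimensions and one rotational dimension, and the function name `relative_pose` is chosen for illustration only; a real system would use full 3D rigid transforms.

```python
import math

def relative_pose(pose_a, pose_b):
    """Pose of object B expressed in object A's frame, given both poses
    relative to a common reference. Each pose is (x, y, theta), theta in
    radians. Illustrative 2D sketch of indirect relative-position
    determination; not the full 3D case described in the text."""
    ax, ay, at = pose_a
    bx, by, bt = pose_b
    dx, dy = bx - ax, by - ay
    # Rotate the displacement vector into A's frame of reference.
    rx = math.cos(-at) * dx - math.sin(-at) * dy
    ry = math.sin(-at) * dx + math.cos(-at) * dy
    return (rx, ry, bt - at)
```

For example, two display devices whose poses in the reference frame are known yield their relative position directly from this composition, without either device observing the other.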
In accordance with the invention, a method of generating images for at least one and in particular at least two display devices (for instance, in a system for image-guided surgery) comprises the steps of determining the position, in particular the relative position, of the display device(s) and generating images, which are to be displayed by the display device(s), in accordance with the determined position, in particular the relative position.
In this document, the expression “observing a display device” means observing the image displayed by the display device, hence if a person can see, view or observe a display device, this means that this person can see, view or observe the image displayed by the display device.
In one arrangement, a viewer can see several or all of the display devices. In this case, it is advantageous to display different images on each of the display devices which can be seen. An enlarged view of an object or a graphical user interface of an application is for example spread over several display devices. In another example, graphical user interfaces of multiple applications are spread over several display devices. It is of course also possible to duplicate the same image on several display devices.
In another configuration, each display device or sub-group of display devices can be viewed by a different person or group of persons. In this case, the preferred scenario is to duplicate the same image on several display devices. It is also of course still possible to generate different images for the display devices, for example if different persons are to be provided with different information.
In the present invention, the content displayed on at least two display devices, i.e. exactly two or more than two (for instance, three or four) display devices, is automatically adjusted in accordance with the relative position of the display devices. An image displayed on one display device is for example enlarged and displayed across two or more display devices if one or more other display devices are placed next to the first display device at a distance which is less than a threshold value. If the distance between the first display device and the other display device(s) is increased above the threshold value, then the image is no longer displayed in enlarged form, and another image, such as the graphical user interface of an application, is displayed on the other display device(s).
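The threshold decision described above can be sketched as follows. The function name `display_mode`, the 2D locations and the mode labels are illustrative assumptions, not terms fixed by the description.

```python
import math

def display_mode(pos_first, pos_other, threshold):
    """Return 'spanned' if the other display device sits closer to the
    first than the threshold distance (enlarged image spread across both),
    else 'independent' (each device shows its own image). Positions are
    (x, y) locations in a common frame; units are illustrative."""
    dist = math.dist(pos_first, pos_other)
    return "spanned" if dist < threshold else "independent"
```

Moving the second display device beyond the threshold thus automatically switches it back to its own content, for example an application's graphical user interface.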
In one embodiment, the system comprises an adjustable mounting for a display device, the mounting consisting of multiple elements, wherein two adjoining elements are connected via an adjustable joint. Typical examples of such mountings are arms or carrier arms. The system also comprises at least one sensor for detecting the state of at least one joint. Preferably, one sensor is provided for each joint. The state of a joint represents the relative position of the elements connected by said joint. If the joint is a pivot bearing, then the sensor output is an angle. If the joint is a bearing which allows a translational movement, then the sensor output is a distance. The position determinator can calculate the position of the display device, in particular the relative position of the display device relative to a reference such as the base of a mounting, from the states of all the joints of the mounting and the structure of the mounting. If this information is known for more than one display device, then the relative position of these display devices can be determined. In general, the relative position is determined from the state of at least one joint which connects adjoining elements of a mounting device for a display device.
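The calculation from joint states and mounting structure can be sketched as planar forward kinematics. This is a 2D simplification assuming pivot joints only; the function name `display_position` and the parameters are illustrative, not part of the described system.

```python
import math

def display_position(joint_angles, link_lengths):
    """Location and heading of the display device at the end of a carrier
    arm, computed from the pivot-joint angles (radians, as reported by the
    joint sensors) and the lengths of the arm elements. Each joint angle is
    relative to the preceding element, so headings accumulate."""
    x = y = heading = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y, heading
```

With the base of each mounting in a known position, evaluating this for two mountings yields the relative position of the two display devices.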
In another embodiment, a marker device is attached to at least one display device, wherein the position determinator is configured to determine the relative position from the position of the marker device. In terms of a method, the relative position is determined from the positions of marker devices attached to the display devices. The position determinator can determine the relative position of the display devices from the relative position of the marker devices attached to the display devices and the known relative positions of the display devices and the respectively attached marker devices.
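Applying the known marker-to-display offset to a tracked marker pose can be sketched as a pose composition. Again a 2D simplification; the helper name `compose` is a hypothetical choice for illustration.

```python
import math

def compose(pose, offset):
    """Apply a known rigid offset (e.g. the stored marker-to-display
    relationship) to a measured pose. Both are (x, y, theta) in radians;
    the offset is expressed in the frame of the measured pose."""
    x, y, t = pose
    ox, oy, ot = offset
    return (x + math.cos(t) * ox - math.sin(t) * oy,
            y + math.sin(t) * ox + math.cos(t) * oy,
            t + ot)
```

Composing each tracked marker pose with its device's stored offset yields the pose of each display device, from which the relative position of the display devices follows.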
A marker device can for example be a reference star or a pointer or one or more (individual) markers which are in a predetermined spatial relationship. This predetermined spatial relationship is in particular known to a navigation system and for example stored in a computer of the navigation system.
It is the function of a marker to be detected by a marker detection device (for example, a camera or an ultrasound receiver), such that its spatial position (i.e. its spatial location and/or alignment) can be ascertained. The detection device is in particular part of a navigation system. The markers can be active markers. An active marker can for example emit electromagnetic radiation and/or waves, wherein said radiation can be in the infrared, visible and/or ultraviolet spectral range. The marker can also however be passive, i.e. can for example reflect electromagnetic radiation in the infrared, visible and/or ultraviolet spectral range. To this end, the marker can be provided with a surface which has corresponding reflective properties. It is also possible for a marker to reflect and/or emit electromagnetic radiation and/or waves in the radio frequency range or at ultrasound wavelengths. A marker preferably has a spherical and/or spheroid shape and can therefore be referred to as a marker sphere; markers can also, however, exhibit a cornered—for example, cubic—shape.
In another embodiment, the system comprises at least one camera which observes the display devices, wherein the position determinator is configured to determine the relative position from the output image of the at least one camera. In terms of the method, the relative position is determined from at least one output image of at least one camera, wherein the output image shows the display devices. The camera captures an image which shows the display devices. The relative position of the display devices can be calculated using image analysis. Preferably, all the display devices are within the field of view of a single camera. Alternatively, different display devices can be observed by different cameras. The relative position of the display devices can then be determined from the output images of multiple cameras. The camera can be a 2D camera or a stereoscopic camera, such as for example a stereoscopic camera of a medical navigation system. A stereoscopic camera can also be used as a marker detection device.
In another embodiment of the present invention, the system comprises a camera which is attached to a display device, wherein the position determinator is configured to determine the relative position from the output image of the camera. In terms of the method, the relative position is determined from at least one output image of a camera which is attached to a display device. Such a camera observes the surroundings of the display device. The position of the camera can be calculated using image analysis. In one example, a 3D model of the surroundings is provided to a position determinator, and a virtual image is rendered for a virtual location and a virtual perspective, i.e. a virtual camera position. If the rendered image matches the camera output image, then the position of the camera matches the virtual position.
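The render-and-match principle described above can be sketched as a search over candidate virtual camera positions. This is a toy illustration: `render` stands in for a real renderer of the 3D model, images are flattened to pixel lists, and an exhaustive search replaces the optimisation a real system would use.

```python
def estimate_camera_position(observed, render, candidates):
    """Return the candidate virtual camera position whose rendered virtual
    image best matches the observed camera output image. 'observed' and the
    rendered images are flat lists of pixel intensities; the match score is
    the sum of absolute pixel differences (lower is better)."""
    def mismatch(img_a, img_b):
        return sum(abs(a - b) for a, b in zip(img_a, img_b))
    return min(candidates, key=lambda pos: mismatch(render(pos), observed))
```

If the rendered image for some candidate matches the camera output exactly, the mismatch is zero and that candidate is returned, corresponding to the statement that the camera position then matches the virtual position.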
Using the camera which is attached to the display device, it is possible to detect light incident upon the camera and therefore light incident upon the display device. Unwanted reflections of the incident light can be determined from the detection result. The effects of these reflections can be reduced by adapting the displayed image and/or by repositioning the display device. In general, any other device which is suited to detecting electromagnetic waves in the visible spectrum can be used instead of a camera. In particular, a single such device or a multitude of such devices, each receiving waves from a defined solid angle, can be used. An example of such a device is a photodetector or photoresistor provided with a lens which defines the solid angle from which incident light is detected.
In another embodiment according to this invention or an additional invention, the system comprises at least one or at least two display devices and a viewer detector, such as an RFID reader or camera, for detecting a viewer who is viewing a display device, wherein the image generator is configured to generate an image for this display device in accordance with the determined viewer. In this embodiment, the position determinator is optional. A viewer detector can be configured and positioned to determine the viewer or viewers of one or more display devices. In a preferred embodiment, a dedicated viewer detector is assigned to each display device for which the viewer is to be detected. In terms of the method, a viewer who is viewing a display device is determined, and the image to be displayed by this display device is generated in accordance with the determined viewer. In general, a viewer who is viewing one, two or more than two display devices can be identified, and/or one or more viewers of a display device can be identified.
Each potential viewer for example carries an RFID chip having a unique ID which can be read out by the RFID reader. The viewer who is viewing the display device is thus identified. In particular, a directed antenna is used to detect RFID chips only in the area from which the corresponding display device can be viewed. Additionally or alternatively, a camera—for example, a camera which is attached to a display device—captures an image of the viewer, and the viewer is then identified by image analysis, for example by comparing the image of the viewer with reference images which are in particular stored in a reference image database or by face recognition. In face recognition, a possible approach is to detect individual facial components and/or features of the person to be identified, such as the distance between the eyes or the distance between an eye and the nose, and so on.
Once the viewer has been determined, the image generator generates one or more images which are adapted to the needs of the determined viewer who is viewing the display device or display devices. For example, the display device or devices being viewed by a surgeon can then show information for navigating a medical instrument or can show medical images such as x-ray, CT or MRI images, while the display device or devices being viewed by other operating room personnel can show medical information such as the heart rate or pulse of the patient or an image of a microscope. In addition to image generation depending on the viewer, or as an alternative, it is possible to configure a touch screen functionality depending on the identified viewer. For example, the touch screen functionality is only provided to a person or group of persons which is allowed to input or amend data. In general, the touch screen functionality of a display device can be enabled, disabled or configured depending on the detected viewer, in particular in combination with the generation of the graphical user interface.
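The viewer-dependent image selection and touch-screen configuration described above can be sketched as a lookup. The role names, content labels and the function name `configure_display` are illustrative assumptions, not terms fixed by the description.

```python
def configure_display(viewer_role):
    """For a detected viewer role, return (content to display, whether the
    touch screen functionality is enabled). Only roles permitted to input
    or amend data receive touch input; unknown viewers get a neutral
    default view with input disabled."""
    settings = {
        "surgeon": ("instrument navigation view", True),
        "or_staff": ("patient vital signs", False),
    }
    return settings.get(viewer_role, ("standard view", False))
```

A display device viewed by the surgeon would thus show navigation information with touch input enabled, while a device viewed by other operating room personnel shows vital signs with input disabled.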
As an option, eye tracking can be performed on the output image of the camera, in particular in combination with the viewer identification by image analysis. In a particular embodiment, the result of the eye tracking can be used to determine whether or not a person who is in a position from which he or she could view the display device actually does so. If the person could view more than one display device, it can be determined which of the display devices is actually being viewed. Some data can for example always be displayed on the display device which is actually being viewed by the viewer.
In a specific embodiment, the system comprises: an adjustable mounting which consists of multiple elements, wherein two adjoining elements are connected via an adjustable joint; and at least one actuator for adjusting the state of at least one joint. Accordingly, the method comprises the step of generating a drive signal for driving at least one actuator in order to adjust the state of at least one joint which connects two adjoining elements of an adjustable mounting which holds a display device. A “drive signal” can also be an instruction to generate a drive signal, in particular if the method is implemented by software which instructs a suitable means to generate the drive signal. The mounting can be adjusted using the actuators in order to move the corresponding display device to a desired position. This desired position is for example a position in which no reflections occur. As mentioned above, the reflections are for example detected by a camera which is directed towards the display. The images generated by the camera are analysed for reflections, and the position of the display is varied so as to minimise the reflections. The actuators can also be driven in such a way that the display device follows the movement of a viewer, such that it can always be viewed by the viewer. The position of the viewer can be detected by one or more cameras which can be mounted on the display.
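The reflection-minimising repositioning loop can be sketched as a greedy search. This is a 1D toy under stated assumptions: `measure` stands in for the image analysis of the camera frames, the position is a single coordinate, and the function name and step scheme are illustrative, not a prescribed control law.

```python
def reposition_to_reduce_reflection(position, measure, step=0.1, iters=20):
    """Greedy adjustment sketch: nudge the display position by +/-step and
    keep any move that reduces the reflection score reported by 'measure'
    (lower score = fewer reflections). Returns the final position, to which
    a drive signal for the mounting's actuators would correspond."""
    best = measure(position)
    for _ in range(iters):
        for delta in (-step, step):
            candidate = position + delta
            score = measure(candidate)
            if score < best:
                position, best = candidate, score
    return position
```

A real system would translate each accepted move into drive signals for the joints of the adjustable mounting rather than moving a scalar coordinate.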
The system in accordance with the invention is in particular a navigation system. A navigation system, in particular a surgical navigation system, is understood to mean a system which can comprise: at least one marker device; a transmitter which emits electromagnetic waves and/or radiation and/or ultrasound waves; a receiver which receives electromagnetic waves and/or radiation and/or ultrasound waves; and an electronic data processing device which is connected to the receiver and/or the transmitter, wherein the data processing device (for example, a computer) in particular comprises a processor (CPU), a working memory, advantageously an indicating device for issuing an indication signal (for example a visual indicating device such as a monitor and/or an audio indicating device such as a loudspeaker and/or a tactile indicating device such as a vibrator) and advantageously a permanent data memory, wherein the data processing device processes navigation data forwarded to it by the receiver and can advantageously output guidance information to a user via the indicating device. The navigation data can be stored in the permanent data memory and for example compared with data which have been stored in said memory beforehand.
In one embodiment, at least one of the display devices comprises a touch-sensitive surface. This touch-sensitive surface can exhibit a functionality which depends on the person viewing the display device and/or the relative position of the display devices.
The present invention also relates to a program which, when running on a computer or when loaded onto a computer, causes the computer to perform the method as described above, and/or to a program storage medium on which the program is stored (in particular in a non-transitory form), and/or to a computer on which the program is running or into the memory of which the program is loaded, and/or to a signal wave, in particular a digital signal wave, carrying information which represents the program, wherein the aforementioned program in particular comprises code means which are adapted to perform all the steps of the method as described above.
Within the framework of the invention, computer program elements can be embodied by hardware and/or software (this also includes firmware, resident software, micro-code, etc.). Within the framework of the invention, computer program elements can take the form of a computer program product which can be embodied by a computer-usable or computer-readable storage medium comprising computer-usable or computer-readable program instructions, “code” or a “computer program” embodied in said medium for use on or in connection with the instruction-executing system. Such a system can be a computer; a computer can be a data processing device comprising means for executing the computer program elements and/or the program in accordance with the invention. Within the framework of this invention, a computer-usable or computer-readable medium can be any medium which can include, store, communicate, propagate or transport the program for use on or in connection with the instruction-executing system, apparatus or device. The computer-usable or computer-readable medium can for example be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device or a medium of propagation such as for example the Internet. The computer-usable or computer-readable medium could even for example be paper or another suitable medium onto which the program is printed, since the program could be electronically captured, for example by optically scanning the paper or other suitable medium, and then compiled, interpreted or otherwise processed in a suitable manner. The computer program product and any software and/or hardware described here form the various means for performing the functions of the invention in the example embodiments. The computer and/or data processing device can in particular include a guidance information device which includes means for outputting guidance information. 
The guidance information can be outputted, for example to a user, visually by a visual indicating means (for example, a monitor and/or a lamp) and/or acoustically by an acoustic indicating means (for example, a loudspeaker and/or a digital speech output device) and/or tactilely by a tactile indicating means (for example, a vibrating element or vibration element incorporated into an instrument).
It is within the scope of the present invention to extract one or more features of different embodiments or options to form a new embodiment or to omit features which are not essential to the present invention from an embodiment. In particular, images can be generated in accordance with the identity of the viewer in a system comprising one display device only and/or independently of the relative position between two or more display devices.