In the field of medical imaging, imaging modalities keep producing ever larger volume data sets. In the last couple of years, multi-slice computed tomography (CT) scanners have substantially increased their acquisition resolution, and a single 3D volume data set can now exceed one gigabyte (GB) of data.
The speed of capturing a single volume scan has also increased dramatically, enabling the capture of several volumes of a patient at time intervals of a few seconds. This results in a time sequence of three-dimensional (3D) data sets forming a four-dimensional (4D) data set, with three spatial dimensions and one time dimension. Such 4D data sets can be used to examine, for example, blood flow via contrast fluid perfusion or the mechanical behavior of the heart valves, and they can be multiple gigabytes in size.
In the usual hospital workflow, the volumes scanned by the modalities are sent to the Picture Archiving and Communication System (PACS) server. Radiologists working in their reading rooms retrieve the volumes from the PACS server onto their workstations. Once a volume has been sent from the PACS server to the workstation, the radiologist can start examining the volume data set with specialized viewing software running on the workstation.
As the amount of volume data has grown to gigabytes, retrieving the data from the PACS server to the radiologist's workstation can take a substantial amount of time, i.e., several minutes. The aging Digital Imaging and Communications in Medicine (DICOM) communication protocol standard is also not very efficient at transferring large amounts of data from the PACS server to the workstation. Depending on the network bandwidth, the transfer can take even longer, especially over networks of 100 Mbit/s and slower. All this time the radiologist has to sit idle, waiting for the volume to become available at the workstation.
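As an illustration of the transfer times involved, the following back-of-the-envelope calculation (using assumed but representative figures) estimates how long a one-gigabyte volume takes to cross a 100 Mbit/s hospital network link:

```python
# Illustrative transfer-time estimate; the volume size and link speed
# are assumed figures, and protocol overhead is ignored.
volume_bytes = 1 * 1024**3                        # a 1 GB volume data set
link_mbit_per_s = 100                             # hospital network bandwidth
link_bytes_per_s = link_mbit_per_s * 1_000_000 / 8
transfer_seconds = volume_bytes / link_bytes_per_s
print(f"{transfer_seconds:.0f} s")                # roughly a minute and a half
```

On slower links, or with the extra overhead of the DICOM protocol, the wait grows accordingly, consistent with the multi-minute delays described above.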
Because of the sheer amount of data to be processed, the radiologist's workstation has to be very powerful and may require specialized hardware to accelerate interactive viewing. As hospitals do not upgrade their workstations very often, the radiologist's workstation may actually be underpowered for viewing such large data sets efficiently.
People in other departments of the hospital, such as the referring physician or the operating surgeon, may be interested in reviewing these large volume images as well. They may have even lower-bandwidth network connections to the PACS server, and their reviewing computers may have low computational specifications. This can result in very long waiting times before the volume data arrives at the reviewing computer, or, because of the low computational power, viewing may not be possible at all.
To circumvent the long transfer time of the volume from the PACS server to the viewing computer and the high-end computational requirements of the viewing computer, a render server can be added to the network. In this configuration the volume to be viewed is first sent from the PACS server to the render server over a high-speed network link, reducing the long network transfer time. Alternatively, the scanning modality can be configured to send the volume directly to the render server, so that the volume is already available on the render server, ready to be viewed.
Instead of being rendered on the viewing computer, the volume is now rendered on the render server. The viewing computer, called the thin client, instructs the render server what to do via commands sent over the network. The viewing computer tells the render server which volume to render and how to render it, for example the 3D viewing position of the volume and the size of the image to render. Once the render server has received all rendering information, an image is rendered on the render server and sent to the viewing computer, which then displays the received rendered image on its screen.
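The command exchange described above can be sketched as follows. The message format, field names, and stub server are illustrative assumptions only, not an actual render-server API:

```python
# Hypothetical sketch of the thin-client -> render-server command exchange.
# All message fields are assumptions for illustration.
import json

def make_render_command(volume_id, rotation_deg, width, height):
    """Build one render request the thin client sends to the render server."""
    return json.dumps({
        "volume": volume_id,            # which volume to render
        "rotation": rotation_deg,       # 3D viewing position (Euler angles)
        "image_size": [width, height],  # size of the image to render back
    })

def server_handle(command_json):
    """Stub render server: decode the command and return a placeholder image."""
    cmd = json.loads(command_json)
    width, height = cmd["image_size"]
    return bytes(3 * width * height)    # dummy RGB image of the requested size

# One round trip: client builds a command, server returns a rendered image.
image = server_handle(make_render_command("CT_2024_001", [30, 0, 45], 512, 512))
print(len(image))  # 786432 bytes for a 512x512 RGB image
```

In a real deployment the command and image would travel over a network socket rather than a function call, but the division of labor is the same: parameters go from client to server, pixels come back.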
For each image to be rendered, this communication and rendering chain is repeated. Thus, for each viewing interaction of the user with the rendered volume, such as 3D rotation, a new image is rendered, generating a stream of images. For interactive display, typically more than 10 images per second need to be rendered and transferred over the network from the render server to the viewing computer. A typical rendered image can be 1 million pixels, resulting in a data size of 3 MB for a color red-green-blue (RGB) image. The rendered image stream thus generates a network load of typically 30 MB per second, or 240 Mbit/s. This amount of data is far too much to pass over typical hospital networks, which have no more than 100 Mbit/s of bandwidth. To reduce the amount of rendered image data, images are compressed lossily, for example with JPEG, typically reducing the data by a factor of 10. As 24 Mbit/s is still too much data to pass over heavily used hospital networks, the amount of data can be further reduced by, for example, rendering quarter-size images and upscaling the rendered image at the client side. This compression and downscaling is typically only performed on the image stream generated during interactive volume manipulation such as rotation. When the viewing user stops interacting with the volume, a full-resolution image is rendered and this still frame is sent losslessly from the server to the client.
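The bandwidth figures quoted above follow from a short calculation, reproduced here with the values stated in the text (assuming "quarter-size" means a quarter of the pixels):

```python
# Reproducing the bandwidth arithmetic from the text.
pixels = 1_000_000           # ~1 megapixel rendered image
bytes_per_pixel = 3          # RGB color image -> 3 MB per frame
fps = 10                     # interactive frame rate

raw_mbit_s = pixels * bytes_per_pixel * fps * 8 / 1_000_000
print(raw_mbit_s)            # 240.0 Mbit/s uncompressed image stream

jpeg_factor = 10             # typical lossy JPEG reduction
after_jpeg = raw_mbit_s / jpeg_factor
print(after_jpeg)            # 24.0 Mbit/s after compression

after_downscale = after_jpeg / 4   # quarter-size rendering (quarter the pixels)
print(after_downscale)       # 6.0 Mbit/s, feasible on a busy 100 Mbit/s network
```

This shows why both lossy compression and downscaling are needed during interaction: neither measure alone brings the stream comfortably below the capacity of a shared 100 Mbit/s hospital network.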
The render server can serve several viewing clients simultaneously, which causes the required network bandwidth to increase in proportion to the number of viewing clients.
The concept of the render server has been known in prior art such as US 2007/0188488.
The rendered images may be treated as a video sequence. A system implementing such a rendered-video streaming technique in the context of 3D gaming has been described; the StreamMyGame system is one example.
This system allows a game running on a first computer (the server) to be viewed and controlled on a single second computer. The first computer generates a continuous video stream even if the viewer on the second computer does not interact with the viewed scene. For game viewing this is required, as the rendered scene can change by itself, for example when the player stands still on a street watching a moving car. This continuous video stream puts a continuous load on the network.
Furthermore, this system requires the application (the game), including its entire user interface, to run on the first computer (the server). The complete user interface has to be rendered on the first computer and sent to the second computer via video compression.