This invention relates to radiographic and other similar systems used to create an image of an opaque specimen by sensing the intensity of a beam of electronic radiation passed therethrough. In particular, the invention relates to a scanning slit electronic radiographic system employing multi-linear arrays of electronic radiation detectors.
Broadly speaking, radiography is defined as the technique of producing a photographic image of an opaque specimen by transmitting a beam of electronic radiation through the specimen onto an adjacent photographic film. An image results because the variations in thickness, density, and chemical composition of the specimen block or absorb some of the radiation energy, thereby causing the intensity of the radiation that does strike the photographic film (or other sensor) to be a function of the specimen through which it has passed. Radiography is primarily used in the fields of medicine and industry.
Electronic radiographic systems employ electronic detectors rather than photographic film to sense the amount of electronic radiation that passes through the opaque specimen. Signals generated by the electronic detectors are then processed to form an image which may be displayed on an appropriate electronic device, such as a cathode ray tube. This process of using electronic detectors is broadly referred to as electronic image detection.
Electronic image detection has had a revolutionary impact on radiography in recent years. This is due, in large part, to the many and varied mathematical and analytical tools available for processing the data generated by the electronic radiation detectors. These analytical tools are easily and economically applied by means of modern-day computers, which make the handling and processing of large amounts of data a relatively easy task.
In the prior art of radiographic systems, three specific areas have emerged which have had a significant impact on electronic image detection. These areas are: (1) fluoroscopy, (2) computed radiography, and (3) computed tomography. Each of these systems uses different approaches in gathering radiographic image information and combining it to form a desired image.
Fluoroscopy is a term that historically relates to the use of a fluoroscope for X-ray examination. A fluoroscope was a fluorescent screen, or a screen covered with phosphors, designed for use with an X-ray tube or other suitable source of radiation. Radiation striking the fluoroscope would cause the phosphors to emit light, thereby permitting a direct visual observation of X-ray shadow images of objects interposed between the X-ray tube and the screen. Because fluoroscopy allowed an entire image to be displayed at one time, the term has more recently come to mean a radiographic system displaying an image representing a relatively large area of the opaque specimen. Typically, fluoroscopy involves the use of some sort of image intensifier and video system to allow an entire image to be viewed at one time.
Computed fluoroscopy (hereinafter CF) refers to a combination of an image intensifier and video system plus a high speed digital image processor. The purpose of the processor is to convert the fluoroscopic image to a matrix of appropriate digital signals that can be stored and linearly processed, and eventually displayed.
The most successful use of CF to date has been in the area of time dependent image subtraction. That is, if a low image contrast is present in the fluoroscopic image (such as might exist when iodine is selectively inserted into the opaque specimen so as to provide known attenuation properties of the electronic radiation), CF can be used to enhance the contrast and allow visualization of many internal features of the opaque specimen that were previously not clearly visualized.
Because CF requires the use of a large area image intensifier as well as a video system, the limitations of CF are primarily those of its constituent elements. In particular, the image intensifier limits CF in three ways. First, the field size is presently limited to about 7" diameter (in the opaque specimen) by the currently available 9" image intensifier. As larger image intensifiers are developed (such as a 14" image intensifier being marketed by Phillips Corporation at a cost of over $100,000), larger field sizes will be possible at a significant increase in cost. Besides being very expensive, such systems are bulky and heavy, and therefore require elaborate suspension systems in order to be maneuvered at all. Moreover, even these larger image intensifiers are not capable of imaging the 14" by 17" field size typically used in chest radiography in the field of medicine.
A second limitation of the image intensifier is the problem of scattered radiation. This is a common problem shared by all prior art large area detectors, and it is particularly noticeable for large field sizes and thick specimens. Scattered radiation not only reduces image contrast, but also reduces dose efficiency. That is, the patient (or other opaque specimen) requires an increased exposure of radiation in order to prevent degradation of the image quality. While there are techniques to increase dose efficiency, such as conventional scatter grids, they are not without their cost. For example, conventional scatter grids absorb significant fractions of primary radiation (typically about 40%), thereby reducing the power efficiency of the system. And while other scatter reduction devices have been found which provide little or no attenuation, such as scanning slits or multiple slots, the use of such devices increases the required imaging time.
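The dose penalty of a conventional scatter grid can be illustrated with a short calculation. This is a simplified sketch: the 40% absorption figure is the typical value quoted above, and the model assumes exposure must be scaled up simply to restore the lost primary photons, ignoring any image-quality benefit from scatter rejection.

```python
def exposure_scale_factor(primary_absorbed_fraction):
    """Simplified model: if a grid absorbs a fraction f of the primary
    radiation, exposure must be scaled by 1 / (1 - f) to deliver the
    same number of primary photons to the detector."""
    if not 0.0 <= primary_absorbed_fraction < 1.0:
        raise ValueError("fraction must be in [0, 1)")
    return 1.0 / (1.0 - primary_absorbed_fraction)

# A grid absorbing about 40% of primary radiation requires roughly a
# 1.67x increase in patient exposure.
print(round(exposure_scale_factor(0.40), 2))  # 1.67
```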
A third limitation associated with large image intensifiers is the presence of "veiling glare" in the image formation process. Veiling glare results from both electron scatter within the image intensifier as well as light scatter from the input and output phosphors that have been used therein. The presence of veiling glare degrades image quality in much the same way as does the detection of scattered radiation. The amount of glare also increases with field size. For example, in modern day image intensifiers the veiling glare may be anywhere from 10% to 40% depending upon the field size and type of image intensifier employed. It would therefore be an improvement in the art if a large field size, or equivalent, could be obtained without the attendant problems of scattered radiation and veiling glare.
A second prior art technique or method that has evolved in recent years is that of computerized radiography (hereinafter CR). Computerized radiography eliminates the need to use large area detectors by incorporating a fan beam of radiation used in connection with a linear array of detectors. The fan beam of radiation, as its name implies, is a long, but narrow, beam of radiation that falls upon a small linear region of the opaque object at any one time. The width of the fan beam is typically 1 to 3 mm. A large image is formed by passing the opaque object through the fan beam of radiation at a constant velocity, with the X-rays (or other radiation) being pulsed once for each fan beam width of travel of the opaque object. Thus, a two-dimensional image is gradually built up one line at a time. This image has the resolution of the width of the fan beam, which, as mentioned, is typically 1 to 3 mm.
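The line-by-line image formation just described can be sketched as a minimal simulation. All particulars here are illustrative assumptions, not drawn from the text: the detector and line counts, the attenuation values, and the use of a simple Beer-Lambert attenuation law for the pulsed beam.

```python
import math

def scan_fan_beam(specimen, beam_intensity=1000.0):
    """Build a two-dimensional image one line at a time, as in CR.

    `specimen` is a list of rows of attenuation values; each row plays
    the role of one fan-beam-width strip of the opaque object, exposed
    by one radiation pulse as the object moves through the beam.
    """
    image = []
    for strip in specimen:  # one radiation pulse per strip of travel
        # Beer-Lambert style attenuation: detected = I0 * exp(-attenuation)
        image.append([beam_intensity * math.exp(-a) for a in strip])
    return image

# Illustrative 4-line, 5-detector specimen with a dense central feature
specimen = [[0.0] * 5 for _ in range(4)]
specimen[1][2] = specimen[2][2] = 2.0
img = scan_fan_beam(specimen)
# Fewer photons are detected behind the dense feature, forming its shadow
```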
The advantages of CR are many. First, it offers excellent radiation scatter rejection in that the radiation is limited to a very narrow area. Secondly, there is little or no primary attenuation associated with CR because the use of conventional scatter grids is not required. Thirdly, as large an image as is required can be obtained simply by scanning the area over which the image is to be formed until the desired image is built up line by line. Fourthly, veiling glare, or lateral communication of the image information, is minimized because of the limited detector area.
Computerized radiography, or CR, is not without its disadvantages, however. One main disadvantage is the poor image resolution that is achieved, typically being 1 to 3 mm. Secondly, the imaging time is quite long. Typically, the opaque specimen can only travel at a speed of from 2 to 6 centimeters per second because each image element must be exposed for a minimum time. Typically, a large number of photons must be detected for each image element in order to have a useful image. However, the number of photons, or photon flux, that is available from the radiation source (such as X-ray tubes) is limited by heat loading constraints. Thus, the number of photons striking the imaging element must be controlled by the speed of the opaque specimen. The total imaging time then becomes the product of the number of image lines (which is usually around 250 for a typical radiography image) and the exposure time per line. In contrast, the imaging time for computerized fluoroscopy is much shorter because all 250 lines (or whatever number of lines are employed) are formed simultaneously.
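The imaging-time relationship stated above (total time equals the number of image lines multiplied by the exposure time per line) can be expressed directly. In this sketch the per-line exposure time is taken as the time for the specimen to travel one fan-beam width at the constant scan speed; the specific inputs (250 lines, 2 mm beam, 2 cm/s) are drawn from the ranges quoted in the text.

```python
def cr_imaging_time(num_lines, beam_width_mm, scan_speed_mm_per_s):
    """Total CR imaging time: number of lines times exposure per line.

    The exposure (dwell) time per line is the time for the specimen to
    travel one fan-beam width at the given constant scan speed.
    """
    exposure_per_line_s = beam_width_mm / scan_speed_mm_per_s
    return num_lines * exposure_per_line_s

# 250 lines, 2 mm beam width, specimen moving at 2 cm/s (20 mm/s)
print(cr_imaging_time(250, 2.0, 20.0))  # 25.0 seconds
```

This makes the contrast with computerized fluoroscopy concrete: because CF forms all lines simultaneously, its imaging time does not carry the factor of 250.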
Some prior art techniques have been used in order to decrease the imaging time associated with CR. For example, it is possible to design the source of radiation so that it may operate at a higher voltage thereby increasing the flux density as well as the tissue penetration. However, the disadvantage of such higher voltages is a loss of contrast for certain types of popular imaging substances that are selectively inserted into the patient or other opaque specimen. This is particularly true with iodine which is a commonly used substance injected into patients so as to highlight certain systems within their bodies.
It would therefore be desirable to develop a system that provided the advantages of computerized radiography while at the same time improving the image resolution and the imaging time. A radiographic system achieving this desired goal is described herein.