Modern implanted medical devices such as pacemakers, defibrillators, neurostimulators and the like are microcontroller-based and characterized by ultra-low power consumption (<100 microwatts) and relatively low processing demands. The typical lifetime for such devices is on the order of 3-10 years of continuous operation using lithium-compound batteries with stored energy on the order of 2-8 ampere-hours, corresponding to a nominal average current consumption in the range of 25 to 300 microamperes. For these applications, “performance” has not only a “clocks-per-instruction” component but also a “power consumption” component; the design goal typically becomes “adequate performance” for “minimum power.” Throughout the medical device industry, these applications have become known as “ultra-low power” technologies and have begun to attract interest in the broader commercial sector with the explosion of portable, hand-held computing applications.
Remarkably, one of the primary approaches to achieving ultra-low power consumption in modern medical devices is to utilize techniques more commonly found in “high speed” supercomputers. By employing advanced, high-performance architectural mechanisms to improve the processing throughput of the microcontroller and then retarding the processor clock, we are able to significantly reduce the overall power consumption of the processor. Ignoring static current drain, the dynamic current consumed by a CMOS processor is largely linear in the processor clock rate and can be closely approximated as I = C·V·F, where I is the total dynamic current consumed, C is the switched circuit capacitance, V is the supply voltage for the processor, and F is the clock frequency. Present ultra-low power circuit construction techniques minimize the capacitance and run at minimal supply voltages of 1.8 to 2 volts. For any given design, it may be assumed that the C and V components are already minimal with present technologies; therefore, reducing the total circuit complexity (and its corresponding capacitance) and reducing the clock frequency are the only design parameters left to the system architect.
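The relation I = C·V·F can be checked against the figures quoted later in this document (30 uA at a 100 KHz clock with a roughly 2 V supply). A minimal sketch, backing out the implied effective switched capacitance; all derived numbers are illustrative:

```python
# Sketch: dynamic current of a CMOS processor, I = C * V * F.
# The 30 uA / 100 kHz / 2 V figures come from this document; the
# effective capacitance is simply solved from the same equation.

def dynamic_current(c_farads, v_volts, f_hertz):
    """Approximate dynamic supply current of a CMOS circuit."""
    return c_farads * v_volts * f_hertz

# Solve I = C*V*F for C with I = 30 uA, V = 2 V, F = 100 kHz.
c_eff = 30e-6 / (2.0 * 100e3)   # 1.5e-10 F = 150 pF effective

# Current scales linearly with clock: halving F halves I.
i_full = dynamic_current(c_eff, 2.0, 100e3)   # 30 uA
i_half = dynamic_current(c_eff, 2.0, 50e3)    # 15 uA
```

The linear dependence on F is what makes clock retardation the lever of choice once C and V are already minimal.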
Since reducing clock frequency is the primary means of reducing current consumption, a very high-performance (in terms of instructions-per-clock) processor can simply be clocked down to the point at which “adequate performance” is achieved, minimizing power consumption while maintaining sufficient processor bandwidth to handle the real-time processing needs.
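This trade can be sketched numerically. Assuming a hypothetical real-time workload of W instructions per second, a core retiring ipc instructions per clock needs a clock of at least W / ipc; combined with I = C·V·F, a higher-IPC core runs a proportionally slower clock for the same delivered performance. The workload figure below is invented for illustration:

```python
# Sketch: slowest "adequate" clock for a given real-time workload.
# W (instructions/second) is hypothetical, not a device specification.

def min_clock_hz(workload_ips, ipc):
    """Slowest clock that still meets the real-time instruction budget."""
    return workload_ips / ipc

W = 80_000                         # hypothetical real-time workload
f_scalar = min_clock_hz(W, 1.0)    # 1-IPC core needs an 80 kHz clock
f_wide   = min_clock_hz(W, 4.0)    # 4-IPC core needs only 20 kHz

# Since dynamic current is I = C*V*F, the 4x higher-IPC core draws
# roughly 4x less dynamic current for the same delivered throughput.
```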
One might note that the input clock could be maintained at a high frequency, with the processor simply running less often; however, due to the latency of starting and stopping the clock and transistor-level efficiencies, this method is less efficient. It has proven more effective to utilize as close to 100% of the processor bandwidth as possible, using a continuous, “slow” clock (on the order of 100 KHz for present-generation devices).
The demand for increasingly complex features and more sophisticated signal processing in these devices is nearing a threshold at which current architectural methods will not yield adequate processing bandwidth. Specifically, the number of input signal sources is increasing from 1 or 2 to 8-16 and more, along with the demand that each be processed in real time using increasingly complex algorithms. An example of one such “complex” filtering algorithm employs a median filter in which a 256-sample median must be maintained for each of 8 separate input channels. The primary function of the filter is to return the median of the most recent 256 samples on a sample-by-sample basis, a task that requires a fairly sophisticated algorithm and is generally impractical to implement in discrete logic. Similar applications are being considered for digitally sampled inputs of up to 16 channels.
The current-generation microcontroller is fabricated in 0.6 micron CMOS and consumes 30 microamps (uA) at a 100 KHz clock rate. The die is approximately 300 mils per side and contains approximately 40,000 transistors (or approximately 10,000 gates). One obvious option for increasing performance without increasing power consumption is to use smaller-geometry fabrication processes. As the channel length shrinks, the dynamic current decreases and transmission times also decrease, yielding a faster circuit. However, the drawback for ultra-low power applications in shrinking geometries is the impact on static current drain. Using present technology (with non-insulating substrates), as the device size shrinks, the total static current drain (due to substrate losses and parasitic capacitances) increases. It is presently estimated that the lower limit for geometry-based current consumption improvement in CMOS processors is approximately 0.15 microns, at which point the increase in static current drain begins to outweigh the reduction in dynamic current and the total current consumption starts to increase. Therefore, it is likely that we can realistically improve processor performance only by a factor of 4-5 using smaller-geometry fabrication processes. This is clearly not sufficient to provide the order-of-magnitude performance improvement needed to handle the next generation of applications.
Since geometry shrinking will yield only a 4-5 times improvement, we must consider more advanced architectural solutions if the next-generation demands are to be met. Recent advances in public-domain microprocessor architecture have focused on multiple-issue superscalar techniques with deep pipelines, out-of-order instruction execution, complex non-blocking cache structures, and sophisticated branch prediction schemes to improve the pure processing performance of the computing platform. Such techniques clearly improve the issue rate of the processor, but do so at great expense in terms of complexity and increased circuitry.
These mechanisms come at a high cost in device complexity at the transistor level. Beyond the simplest techniques, the cost quickly outgrows the benefit in terms of power consumption: a quadratic increase in die area (and in the number of active components) quickly proves unacceptable for ultra-low power applications. A solution that minimizes complexity with less circuitry is generally preferable.
Biological signal data provided by multiple, independent sensors demand high-speed processing of large streams of low-precision integer data, and these workloads generally share three key characteristics. First, the operations on one stream are largely independent of the others. Second, every stream element is read exactly once, resulting in poor cache performance. Third, they are computationally intensive, often performing 100-200 arithmetic operations for each element read from memory. The essential points are that 1) there is a very low level of data interdependence and 2) there is significant coarse-grained thread-level parallelism to be exploited. Recent developments in chip-scale multiprocessors, in which multiple “simple” computing elements are arrayed on a single die to form a single-chip multiprocessor, hold significant promise as a method for handling the processing needs of “stream”-based applications.
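The three characteristics above can be made concrete with a small sketch. The per-element kernel here is a hypothetical stand-in, not an actual device algorithm; it simply exhibits the described shape: independent channels, single-read elements, and many arithmetic operations per element:

```python
# Sketch of the three stream characteristics: per-channel independence,
# each element read exactly once, and compute-heavy per-element work.

def kernel(x):
    # Hypothetical compute-heavy per-element step: ~100 low-precision
    # integer operations per element read, as described in the text.
    y = x
    for _ in range(100):
        y = (y * 3 + 7) & 0xFFFF
    return y

def process_stream(samples):
    # Each element is consumed exactly once -- no reuse, poor caching.
    return [kernel(x) for x in samples]

# Channels are independent: each is processed with no reference to
# the others (coarse-grained thread-level parallelism).
channels = {ch: list(range(8)) for ch in range(8)}
results = {ch: process_stream(s) for ch, s in channels.items()}
```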
General approaches to chip-scale multiprocessing have historically sought to leverage thread-level parallelism in a general sense. The STAMPede project at Carnegie Mellon University has focused much attention on discovering thread-level parallelism at the compiler level and providing a CMP architecture to support the execution of this code. Similarly, the Hydra and M-Machine projects also seek to exploit both fine- and coarse-grained thread-level parallelism in a general-purpose sense. All three share a common architectural approach in which a single integrated circuit contains multiple copies of a simple processing element (ALU) with differing degrees of interconnectivity. Reminiscent of early RISC history, this approach seeks to utilize the additional circuit capacity by leveraging a simple hardware design and relying on compiler technology to efficiently exploit the multiple processing paths in the processor. Although these techniques are generally applicable to the implanted medical device architecture, the need for general processing does not exist when processing data streams: the application program, once loaded, operates throughout the life of the device. Therefore, “discovering” and exploiting thread-level parallelism at run time is not an issue for the medical device application, and we can take advantage of this to simplify the architecture.
In contrast to these methods, a stream processor employs a co-processor approach in which a single control processor interfaces directly to the stream processor through a simple interface. The stream processor contains 8 “copies” of a simple ALU, each optimized for data-processing algorithms. Also on-chip is an interface to independent memory banks, which are connected to each stream processor through a stream register file. Each ALU executes a small program, referred to as a “kernel,” in which the specific data/signal-processing algorithm is implemented. This simple architecture holds promise for the next generation of implantable device applications. For the foregoing reasons, there is a need for an implantable stream processor that provides high-bandwidth processing while retaining the ultra-low power characteristics demanded by the filtering application.
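The organization just described can be sketched as a behavioral model: a control processor installs one kernel per ALU, and each of the 8 ALUs pulls operands from its bank of the stream register file. The class and method names here are illustrative, not the actual device interface:

```python
# Behavioral sketch (not hardware) of the stream co-processor model:
# 8 simple ALUs, each running a small "kernel" over its own bank of
# a stream register file, loaded and fed by a control processor.

NUM_ALUS = 8

class StreamProcessor:
    def __init__(self):
        self.srf = [[] for _ in range(NUM_ALUS)]   # per-ALU SRF banks
        self.kernels = [None] * NUM_ALUS

    def load_kernel(self, alu, fn):
        # The control processor installs a small per-ALU program.
        self.kernels[alu] = fn

    def feed(self, alu, samples):
        # Data arrives from independent memory banks via the SRF.
        self.srf[alu].extend(samples)

    def run(self):
        # Each ALU independently applies its kernel to its stream.
        out = []
        for alu in range(NUM_ALUS):
            k = self.kernels[alu]
            out.append([k(x) for x in self.srf[alu]] if k else [])
        return out

sp = StreamProcessor()
for alu in range(NUM_ALUS):
    sp.load_kernel(alu, lambda x: x * 2)   # trivial stand-in kernel
    sp.feed(alu, [1, 2, 3])
outputs = sp.run()
```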
Several proposed medical device applications involve the use of increasingly sophisticated filtering techniques applied to continuously digitized input signals. One such technique employs a median filter. A median filter of size n is a method which, given a new sample, z, from a continuous digitized stream of samples, includes z with the preceding n−1 samples and returns the median value of the n total samples in the filter. For each successive z in the input stream, the median filter returns the median value for z plus the n−1 preceding values at the same rate as the input data.
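The semantics just defined can be stated in a few lines of code. This naive sketch re-sorts the n-sample window on every new sample, which is good only for illustrating what the filter returns, not for meeting the device's performance constraints:

```python
# Naive sketch of the median-filter semantics: for each new sample z,
# return the median of z and the n-1 preceding samples. Re-sorting the
# window on every sample is O(n log n) per sample -- illustration only.
from collections import deque

def median_filter(stream, n):
    window = deque(maxlen=n)      # the most recent n samples
    for z in stream:
        window.append(z)
        s = sorted(window)
        yield s[len(s) // 2]      # upper median for even window sizes

out = list(median_filter([5, 1, 9, 3, 7], 3))
```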
Prototype median filtering methods have been based on variants of insertion-sort, in which the new sample z is inserted into a sorted list of the preceding n samples and the “middle” value of the sorted list is returned as the median. These methods generally take O(n) time per sample and currently require the use of a non-implantable computer to implement. Present and proposed implanted device architectures are not suited to this approach. An example of a median filter that uses a comparison algorithm is shown in U.S. Pat. No. 5,144,568, “Fast Median Filter,” by Glover (Sep. 1, 1992).
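A minimal sketch of this insertion-sort approach (a generic O(n)-per-sample variant, not the patented Glover method): the window is kept both in arrival order, so the expiring sample is known, and in a sorted list into which each new sample is inserted:

```python
# Sketch of the insertion-sort median filter: maintain the last n
# samples in arrival order (to know which sample expires) and in a
# sorted list (to read the median). O(n) work per sample.
from bisect import insort, bisect_left
from collections import deque

def insertion_median_filter(stream, n):
    order = deque()               # samples in arrival order
    ranked = []                   # the same samples, kept sorted
    for z in stream:
        if len(order) == n:       # window full: evict oldest sample
            old = order.popleft()
            ranked.pop(bisect_left(ranked, old))
        order.append(z)
        insort(ranked, z)         # O(n) insertion into the sorted list
        yield ranked[len(ranked) // 2]   # "middle" value = median
```

The O(n) insertion and deletion steps are what make this variant too slow for an n = 256 window per channel on an ultra-low power clock, motivating the stream-processor approach described above.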