Linear signal amplification represents a core enabling function in most communication circuits. For example, wireless communication transceivers employ linear signal amplification at various stages in their transmit and receive signal processing paths. More particularly, radiofrequency (RF) based communication systems rely on linear amplification in frequency mixing circuits, low-noise amplification circuits, power amplification circuits, and the like, to maintain signal fidelity and to limit the generation of unwanted harmonic frequencies. However, the non-linear current-voltage (IV) behavior of semiconductor transistors, such as bipolar or MOS transistors, represents a fundamental source of signal non-linearity in communication circuits, which rely heavily on the use of such transistors.
Important transistor-related parameters for most analog RF building blocks include transconductance, noise, and output conductance. In particular, transistor transconductance, which is the derivative of the drain/collector current with respect to the gate-source/base-emitter voltage, represents a fundamental measure of transistor linearity: the more nearly constant the transconductance over the input signal swing, the more linear the amplification. Because the transconductance depends directly on the transistor threshold voltage, i.e., the “turn-on” voltage, variation in transistor threshold voltage results in signal amplification non-linearity.
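The bias dependence of transconductance can be illustrated with a simple long-channel square-law MOS model. This is a sketch only; the square-law form and all parameter values (`k`, `vth`) are illustrative assumptions, not taken from any particular process:

```python
# Illustrative sketch: long-channel square-law MOS model with assumed
# parameters (k, vth). Real devices require far more detailed models.

def drain_current(vgs, vth=0.5, k=2e-3):
    """Saturation-region square law: Id = (k/2) * (Vgs - Vth)^2 above threshold."""
    vov = vgs - vth
    return 0.5 * k * vov * vov if vov > 0.0 else 0.0

def transconductance(vgs, vth=0.5, k=2e-3, dv=1e-6):
    """gm = dId/dVgs, estimated by a central finite difference."""
    return (drain_current(vgs + dv, vth, k)
            - drain_current(vgs - dv, vth, k)) / (2.0 * dv)

# gm changes with the operating point, so amplification is non-linear:
gm_low = transconductance(0.8)       # analytically gm = k * (0.8 - 0.5) = 0.6 mS
gm_high = transconductance(1.0)      # analytically gm = k * (1.0 - 0.5) = 1.0 mS

# A shift in threshold voltage shifts gm directly:
gm_shifted = transconductance(0.8, vth=0.6)  # analytically 0.4 mS
```

Because `gm` here is proportional to the overdrive voltage `Vgs - Vth`, any threshold-voltage variation appears directly as a gain variation, which is the non-linearity mechanism discussed above.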
Various circuit techniques offer compensation for transistor device non-linearity. For example, negative feedback loops provide compensation, at the expense of reduced bandwidth and increased circuit complexity. “Pre-distortion” techniques offer another compensation mechanism, wherein offsetting distortion applied to the signal of interest tends to cancel the expected non-linearity distortions arising from transistor device non-linearity. Of course, pre-distortion increases circuit complexity, and depends on accurate characterization of non-linear distortion.
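The pre-distortion idea can be sketched with a toy memoryless amplifier model. The unity linear gain, the cubic coefficient `A3`, and the signal level are all hypothetical choices for illustration, not a characterization of any real amplifier:

```python
# Hypothetical memoryless amplifier: unity linear gain plus an assumed
# third-order distortion term (A3 chosen arbitrarily for illustration).
A3 = 0.1

def amplifier(x):
    return x + A3 * x ** 3

def predistort(x):
    """First-order inverse of the cubic term: pre-subtract the expected distortion."""
    return x - A3 * x ** 3

x = 0.5                                       # arbitrary input sample
raw_error = abs(amplifier(x) - x)             # distortion without pre-distortion
pd_error = abs(amplifier(predistort(x)) - x)  # residual after pre-distortion
```

The residual error is an order of magnitude smaller than the uncorrected distortion, but the cancellation relies on knowing `A3` accurately, which reflects the point above that pre-distortion depends on accurate characterization of the non-linearity.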
More fundamental non-linearity compensation mechanisms exist apart from, or in conjunction with, the above compensation techniques. For example, a more linear composite transistor device can be formed by placing two or more transistors with different threshold voltages in parallel. Of course, making the composite transistor device exhibit better linearity depends on selecting the appropriate threshold voltage values for the parallel devices, in both the absolute and relative senses.
Various techniques exist for causing different ones of the parallel transistor elements to have different thresholds. For example, different biasing levels for different ones of the paralleled transistors result in different threshold voltages. More fundamentally, different transistor sizes and/or different dopant concentrations and distribution profiles for different ones of the paralleled transistors result in different threshold voltages. In some respects, the use of different transistor sizes to obtain different threshold voltages represents a preferred approach, particularly in integrated circuit applications.
As one example, the same basic transistor layout in a given process technology may be scaled to two different geometries, resulting in differently sized parallel transistors having different threshold voltages because of their different dimensions. More particularly, for a given process technology, transistor threshold voltage varies as a function of transistor channel length. Thus, paralleling transistors of different (diffusion) channel lengths represents one approach to achieving a composite transistor device with improved linearity.
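The composite-device idea can be sketched numerically. Here each device's transconductance turn-on is modeled as a smooth logistic step, an illustrative stand-in rather than a physical device model, and all thresholds, magnitudes, and the bias window are assumed values. Splitting one full-size device into two half-size devices with staggered thresholds flattens the composite transconductance around the mid-bias point:

```python
import math

def gm_device(vgs, vth, gmax, s=0.05):
    """Illustrative smooth transconductance turn-on (logistic step, assumed shape)."""
    return gmax / (1.0 + math.exp(-(vgs - vth) / s))

def ripple(gm_fn, lo=0.45, hi=0.55, n=101):
    """Max-min spread of gm over a bias window, sampled on a uniform grid."""
    samples = [gm_fn(lo + (hi - lo) * i / (n - 1)) for i in range(n)]
    return max(samples) - min(samples)

# Single device: threshold 0.5 V, full-size gmax:
single = lambda v: gm_device(v, vth=0.5, gmax=2.0)

# Composite: two half-size parallel devices, thresholds staggered at 0.4 V and 0.6 V:
composite = lambda v: gm_device(v, 0.4, 1.0) + gm_device(v, 0.6, 1.0)

# Over the mid-bias window the composite gm varies less than the single device's:
r_single = ripple(single)
r_composite = ripple(composite)
```

Under these assumptions the composite's gm ripple over the window is roughly half the single device's, illustrating why the absolute and relative threshold values of the paralleled devices must be chosen deliberately: the flattening depends on the threshold spacing relative to the turn-on width.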
Complexities arise in the context of shrinking channel lengths. For example, transistor threshold voltage tends to decrease gradually with decreasing transistor channel length, but begins falling off rapidly below a certain minimum channel length. The rapid threshold voltage fall-off is one of several transistor behavioral changes that often are referred to as “short channel effects.” Because of the relatively steep slope of the threshold voltage function at or below the minimum channel length, the threshold voltage of a transistor having a channel length below that minimum exhibits significant sensitivity to the channel length variations inherent in the manufacturing process.
So-called “reverse short channel effects” (RSCE) further complicate the use of short-channel transistors. In the context of transistor threshold voltage, RSCE manifests itself as a reversal in the fall-off of threshold voltage with decreasing channel length. More particularly, in certain transistor process technologies, such as deep submicron MOS transistors implemented at 0.1 micron channel lengths or less, the threshold voltage increases as the channel length decreases toward the minimum channel length, but then begins decreasing at or around the minimum channel length. This RSCE behavior thus results in a peak (maximum) threshold voltage at or around the minimum channel length, with a steep roll-off toward zero to the left of the peak, and a falling transition into the conventional asymptotic trend line to the right of the peak.
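The qualitative threshold-versus-length shape described above, a conventional roll-off plus a reverse-short-channel bump that together produce a peak near the minimum channel length, can be captured with a simple empirical two-exponential curve. The functional form and every coefficient below are illustrative assumptions, not fitted to any process:

```python
import math

def vth(l_um, v_long=0.3, a=0.2, l_rsce=0.2, b=0.4, l_sce=0.05):
    """Empirical illustration with assumed coefficients:
    + a * exp(-L / l_rsce): RSCE term, raises Vth as L shrinks;
    - b * exp(-L / l_sce):  short-channel roll-off, dominates at very short L.
    Lengths in microns, voltages in volts."""
    return v_long + a * math.exp(-l_um / l_rsce) - b * math.exp(-l_um / l_sce)

v_long_ch = vth(0.5)   # long channel: slightly above the 0.3 V baseline
v_peak = vth(0.15)     # near the peak around the (assumed) minimum channel length
v_short = vth(0.03)    # well below the peak: Vth has fallen off steeply

# Sensitivity |dVth/dL| is far larger below the peak than well above it, which
# is why sub-minimum channel lengths are so sensitive to process variation:
slope_short = abs(vth(0.05) - vth(0.04)) / 0.01
slope_long = abs(vth(0.45) - vth(0.44)) / 0.01
```

In this sketch the curve rises as L decreases toward the peak, then rolls off steeply to the left of it, matching the peaked shape described above; the slope comparison illustrates the manufacturing-sensitivity point from the preceding paragraph.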
With the above complications in mind, conventional approaches to implementing short-channel transistor circuits, including linearized parallel transistor elements, maintain channel lengths above the minimum, thereby avoiding pronounced short channel and reverse short channel effects. Another known approach to short-channel transistor design relies on fabrication techniques that minimize or at least reduce short channel and reverse short channel effects. For example, because short-channel phenomena arise in part from the pronounced influence of dopant distribution defects in deep sub-micron channels, certain dopant distribution techniques can be used to compensate for short-channel effects.