Lithium battery chargers are typically either a single, fixed-function piece of silicon (a chip) or microprocessor-based hardware that uses digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) to set and measure the charge voltage and charge current.
Li-ion cannot absorb overcharge, so when the battery is fully charged the charge current must be cut off. A continuous trickle charge would cause plating of metallic lithium, which could compromise safety. To minimize stress, the lithium-ion battery is kept at the 4.20 V/cell peak voltage for as short a time as possible.
Another issue is that lithium capacity and open-cell voltage change as the battery ages, due to phase changes in the anode and cathode, metal dissolution, and oxidation of the electrolyte and electrodes. Each of these effects reduces both the cell voltage and the capacity of the battery.
Lithium chargers typically use a three- or four-stage charge cycle consisting of constant current, constant voltage, and/or equalization, followed by charge termination.
The constant-current charge ends when the cell voltage reaches 4.2 V, at which point the constant-voltage stage begins. Charging is typically terminated by one of two methods: a minimum charge current or a timer. The minimum-current approach monitors the charge current during the constant-voltage stage and terminates the charge when the current diminishes into the range of 0.02 C to 0.07 C.
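The stages and minimum-current termination described above can be sketched as a simple stage selector. This is an illustration only: the function names are assumptions, and the 0.05 C taper threshold is an assumed value within the 0.02 C to 0.07 C range cited.

```python
# Hypothetical sketch of the CC/CV charge stages with minimum-current
# termination. Thresholds are illustrative, not from a specific charger.

def charge_stage(cell_voltage, charge_current, capacity_ah,
                 full_voltage=4.20, taper_fraction=0.05):
    """Return the charge stage for the given cell measurements.

    taper_fraction: terminate when the charge current falls below this
    fraction of C (assumed 0.05 C, within the cited 0.02 C to 0.07 C range).
    """
    if cell_voltage < full_voltage:
        return "constant_current"    # hold current until 4.2 V is reached
    if charge_current > taper_fraction * capacity_ah:
        return "constant_voltage"    # hold 4.2 V while the current tapers
    return "terminate"               # current has tapered; stop charging
```

For a 15 Ah battery, `charge_stage(3.9, 1.0, 15)` would report constant current, while at 4.2 V the stage depends on whether the current has tapered below 0.75 A (0.05 C).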
The second method instead notes when the constant-voltage stage is invoked; charging continues for an additional 2 hours and is then terminated. Charging in this manner replenishes a deeply depleted battery in roughly 2.5 to 3 hours.
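The timer-based termination can be reduced to a single comparison once the time of entry into the constant-voltage stage is recorded. The 2-hour limit is from the text above; the names are assumptions.

```python
# Minimal sketch of timer-based termination: terminate a fixed 2 hours
# after the constant-voltage stage begins (per the description above).

CV_TIME_LIMIT_S = 2 * 3600  # 2-hour window in the constant-voltage stage

def timer_terminate(seconds_in_cv_stage):
    """True once the battery has spent the full 2 hours at constant voltage."""
    return seconds_in_cv_stage >= CV_TIME_LIMIT_S
```

A charger would sample this periodically; combined with the constant-current stage, total charge time for a deeply depleted battery comes out near the 2.5 to 3 hours quoted above.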
Another method of determining capacity and charge cutoff, instead of monitoring current, is reading the open-cell voltage (OCV) of the battery. This requires periodically disconnecting the battery from the charge circuit and reading the voltage of the cell(s). It also demands high-accuracy A/D converters, precision-tolerance components, and careful layout of the printed circuit board.
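The disconnect-and-measure sequence might look like the sketch below. The hardware hooks (`set_charge_switch`, `adc_read_counts`), the settling delay, and the ADC scaling are all assumptions standing in for whatever the real charger provides.

```python
import time

# Hypothetical open-cell-voltage (OCV) measurement: isolate the cell,
# let it settle, read the ADC, then resume charging.

def read_ocv(adc_read_counts, set_charge_switch,
             adc_bits=12, vref=5.0, settle_s=0.5):
    """Disconnect the charger, let the cell voltage settle, then read it.

    adc_read_counts: callable returning raw ADC counts (0 .. 2**adc_bits - 1)
    set_charge_switch: callable taking True (charging) or False (disconnected)
    """
    set_charge_switch(False)        # isolate the cell from the charge path
    time.sleep(settle_s)            # allow the surface charge to relax
    counts = adc_read_counts()
    set_charge_switch(True)         # resume charging
    return counts * vref / (2**adc_bits - 1)
```

With a 12-bit ADC and a 5 V reference, a reading of 3440 counts corresponds to about 4.20 V, which shows why ADC accuracy and component tolerance matter: one count is roughly 1.2 mV.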
Safely charging a battery to an open-cell voltage or a cutoff current assumes a fresh battery. As a battery ages, the cutoff current and the end-of-charge open-cell voltage decrease. As a result, a fixed timer intended to guarantee that the charger stops operating may eventually induce failure in a lithium battery by continuing to apply voltage across a battery that, in its present condition, is already at the maximum capacity for a cell of that age.
For example, a fixed timer may allow 20 hours of total charge time for a fully depleted 15 Amp-hour battery charged at 1 Amp. Fixed timers make no distinction between empty, partially charged, and fully charged batteries. Suppose one has a five-year-old 15 Amp-hour battery currently at 75% of charge. Due to cell aging, it can no longer charge to 100% of rated capacity, but only to 90%. If charging is attempted beyond 90%, the cell starts to heat because of the battery's increased series resistance (from aging). Current chargers will nevertheless attempt to charge this battery to full capacity for the entire timer period.
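The arithmetic behind this example can be worked through directly, using only the figures stated above (15 Ah capacity, 1 A charge current, 20-hour timer, 75% state of charge, 90% usable capacity after aging):

```python
# Worked numbers from the fixed-timer example above. All values are
# taken from the text; none are measured from a real battery.

CAPACITY_AH = 15.0       # rated capacity
CHARGE_CURRENT_A = 1.0   # charge current
TIMER_H = 20.0           # fixed charge-timer allowance

nominal_full_charge_h = CAPACITY_AH / CHARGE_CURRENT_A       # 15 h from empty
aged_usable_ah = 0.90 * CAPACITY_AH                          # 13.5 Ah usable
already_stored_ah = 0.75 * CAPACITY_AH                       # 11.25 Ah stored
needed_h = (aged_usable_ah - already_stored_ah) / CHARGE_CURRENT_A  # 2.25 h
excess_h = TIMER_H - needed_h                                # 17.75 h surplus
```

The aged, partially charged battery actually needs only about 2.25 hours of charge, yet the fixed timer permits voltage to be applied for nearly 18 additional hours.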
In processor-driven battery chargers, the charger voltage may be derived from a bandgap or other voltage reference together with either a fixed-gain amplifier or a digital-to-analog converter. In the DAC approach, the processor writes a digital value corresponding to the desired charger output voltage. However, due to silicon variation, component tolerances, and PCB layout, the actual voltage supplied by the charger may be offset from the expected value.
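The DAC approach amounts to scaling the target voltage into a digital code, and the offsets just described are why a practical design includes calibration terms. The resolution, reference voltage, and gain/offset parameters below are assumptions for illustration.

```python
# Sketch of the DAC approach: compute the code the processor writes for a
# target output voltage, with assumed gain/offset calibration terms that
# compensate for silicon, component-tolerance, and layout offsets.

def dac_code_for_voltage(v_target, vref=5.0, bits=10,
                         gain_cal=1.0, offset_cal_v=0.0):
    """Return the DAC code for v_target, applying calibration first."""
    v_corrected = (v_target - offset_cal_v) / gain_cal
    code = round(v_corrected / vref * (2**bits - 1))
    return max(0, min(2**bits - 1, code))   # clamp to the DAC's range
```

With a 10-bit DAC and a 5 V reference, a 4.2 V target maps to code 859; a measured +50 mV offset in the output path would be corrected by passing `offset_cal_v=0.05`, which shifts the code down to 849.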
Another safety feature is monitoring the temperature of the lithium battery. Some circuits stop charging when the battery overheats (and reset the timer), while others merely decrease the voltage applied to the battery. Either method may allow continual charging of a failing battery.
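The stop-and-reset variant of this temperature check can be sketched as follows. The threshold, hysteresis, and action names are assumptions, not values from any particular charger IC.

```python
# Hedged sketch of the temperature safety check: pause charging (and
# reset the charge timer) on overtemperature, resume once the pack cools.
# Thresholds are assumed for illustration.

OVERTEMP_C = 45.0   # assumed pause threshold
RESUME_C = 40.0     # assumed lower resume threshold (hysteresis)

def temperature_action(temp_c, currently_charging):
    """Return 'pause' (charger also resets its timer), 'resume', or 'continue'."""
    if currently_charging and temp_c >= OVERTEMP_C:
        return "pause"
    if not currently_charging and temp_c <= RESUME_C:
        return "resume"
    return "continue"
```

Note the hysteresis gap between the pause and resume thresholds: without it, a battery hovering near the limit would cause the charger to toggle rapidly, and a failing battery that repeatedly cools and reheats could still be charged indefinitely, as the text observes.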