Dimmer circuits are used to control the power provided to a load, such as a light or electric motor, from a power source such as mains power. Such circuits often use a technique referred to as phase-controlled dimming. This allows the power provided to the load to be controlled by varying the amount of time that a switch connecting the load to the power source is conducting during a given cycle.
For example, if the voltage provided by the power source can be represented by a sine wave, then maximum power is provided to the load if the switch connecting the load to the power source is on at all times. In this way, the total energy of the power source is transferred to the load. If the switch is turned off for a portion of each cycle (both positive and negative), then a proportional amount of the sine wave is effectively isolated from the load, thus reducing the average energy provided to the load. For example, if the switch is turned on and off half way through each half-cycle, then only half of the power will be transferred to the load. The overall effect, in the case of a light for example, is a smooth dimming action that controls the luminosity of the light.
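The proportionality described above can be checked numerically. The sketch below, which is illustrative and not part of the circuit described here, integrates the square of the sine wave (proportional to instantaneous power into a resistive load) over the conducting portion of each half-cycle, for a firing angle measured from the zero crossing:

```python
import math

def power_fraction(alpha_deg, n=100_000):
    """Fraction of full power delivered when the switch turns on at
    firing angle alpha (degrees) into each half-cycle of a sine wave.
    Power into a resistive load is proportional to the integral of
    sin^2(theta) over the conducting interval [alpha, pi]."""
    alpha = math.radians(alpha_deg)
    step = (math.pi - alpha) / n
    # midpoint-rule integration of sin^2 over the conduction interval
    conducting = sum(
        math.sin(alpha + (i + 0.5) * step) ** 2 for i in range(n)
    ) * step
    full = math.pi / 2  # integral of sin^2 over the whole half-cycle [0, pi]
    return conducting / full

# Cutting each half-cycle at 90 degrees delivers half the power:
print(round(power_fraction(90), 3))  # -> 0.5
```

As expected, a firing angle of 0 degrees gives full power and 90 degrees gives half power, matching the half-cycle example in the text.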
Since the load is connected to a high-voltage or high-current source such as mains power, a defect in the circuit, such as a short circuit, can lead to a sudden surge of high current, which can damage the load and any circuitry connected to the load. It is therefore useful for the dimmer circuit to be able to detect the presence of such high-current, or overcurrent, conditions and to act so as to disconnect the load and/or connected circuitry from the high-current source.
A number of techniques exist that allow detection of overcurrent conditions within a dimmer circuit; however, these suffer from various drawbacks, including added circuit complexity, excessive power dissipation and a lack of robustness.
A typical MOSFET- or IGBT-based phase-control dimmer circuit comprises two devices connected in an opposing-polarity series arrangement to facilitate the conduction of alternating current through a series-connected load. For a given line-voltage polarity, only one device is responsible for determining the flow of load current, since the anti-parallel diode associated with the remaining device will be forward biased during that polarity.
One commonly used technique for fault or overcurrent sensing uses a fixed-resistance arrangement as shown in FIG. 1. A fixed resistance (R1 and R2) is placed in series with the “common” output terminal of one or both switching devices (IGBT1 and IGBT2, with corresponding co-packed anti-parallel diodes D1 and D2). Voltage comparators (not shown) are used to sense the voltage developed across the resistor(s) due to the flow of load current. The appropriate switching device is turned off at the instant a predetermined current threshold is exceeded.
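The comparator decision in this arrangement reduces to Ohm's law: the sense voltage is the load current multiplied by the sense resistance, and the device is turned off when that voltage exceeds a fixed reference. The following sketch illustrates the logic; the resistance, reference voltage and current values are assumed for illustration and are not taken from FIG. 1:

```python
def overcurrent_tripped(load_current_a, r_sense_ohm=0.05, v_ref=0.5):
    """Sense-resistor overcurrent check, as in the FIG. 1 arrangement:
    the comparator trips when the voltage developed across the series
    resistor (V = I * R) exceeds a fixed reference voltage.
    Component values here are illustrative assumptions."""
    v_sense = load_current_a * r_sense_ohm
    return v_sense > v_ref

print(overcurrent_tripped(5.0))   # 0.25 V across 50 mOhm -> False
print(overcurrent_tripped(15.0))  # 0.75 V -> True
```

With these assumed values the effective trip current is v_ref / r_sense_ohm = 10 A; the dissipation drawback noted below follows from the same resistance, since the resistor burns I squared times R during normal operation.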
The main disadvantage of this technique is that the current sense resistor(s) must be sufficiently robust to withstand high peak currents. There is also a power loss associated with the resistance during normal dimmer operation.
Another known technique is MOSFET conduction-voltage sensing, as shown in FIG. 2. In this arrangement the switches (in this example MOSFET1 and MOSFET2, with corresponding intrinsic anti-parallel diodes D1 and D2) exhibit a resistive V/I characteristic once the gate drive magnitude is fully established. The on-state conduction voltage (Vds) of a MOSFET is then directly proportional to the load current magnitude (notwithstanding variation due to temperature), and can therefore be monitored as a means of detecting excessive load current.
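In the resistive region this proportionality is Vds = I x Rds(on), so the load current can be inferred from the measured conduction voltage and compared against a limit. A minimal sketch of this inference follows; the Rds(on) and current-limit figures are assumed for illustration, and, as the text notes, Rds(on) varies with temperature in practice:

```python
def estimate_load_current(vds_v, rds_on_ohm=0.1):
    """Infer load current from the MOSFET on-state drain-source voltage
    using the resistive-region relation Vds = I * Rds(on).
    Rds(on) is an assumed figure and drifts with temperature."""
    return vds_v / rds_on_ohm

def vds_overcurrent(vds_v, rds_on_ohm=0.1, i_limit_a=10.0):
    # Trip when the inferred current exceeds the assumed limit.
    return estimate_load_current(vds_v, rds_on_ohm) > i_limit_a

print(vds_overcurrent(0.5))  # ~5 A  -> False
print(vds_overcurrent(1.5))  # ~15 A -> True
```

Note that this inference is only valid while the gate drive is fully established, which is precisely the limitation discussed next.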
A significant disadvantage of this technique, however, is that during the MOSFET turn-on or turn-off transition, where the gate voltage is traversing the gate threshold voltage region, the device does not exhibit a resistive V/I characteristic. The method of monitoring the Vds magnitude is therefore not applicable during such times, negating the ability to detect associated overcurrent events.