Constraints on buck regulators for battery powered applications include size, efficiency, cost, and load range. A high performance solution is based on a 2MHz buck converter powered from two Li-Ion cells, providing 1.8V at up to 500mA.

Buck converters are the most widely used type of power converter in battery powered applications. Why? They tend to be more efficient than other types, such as boost or flyback. However, one constraint placed on these converters is size.

Products like cellular phones, PDAs, and digital cameras are packed with features, with little room left for power. The converter must occupy the minimum amount of space possible. The single biggest size factor is operating or switching frequency. At higher frequencies, the energy storage elements (filter inductor and output bulk capacitor) do not have to store as much energy, because it's being replenished more often, and decreased energy storage requirements mean smaller devices.

When marketing battery-powered products, the ability to say product "A" runs longer than product "B" is a huge advantage. The difference does not have to be overly significant; the perception of the advantage in the consumer's mind is what influences buying decisions. What does this mean to the power converter? Simply, if a converter uses less energy, then more is available for the product to use and the batteries will last longer between recharge cycles. To maximize runtime, maximize converter efficiency.

Although runtime is important, the product isn't likely to need full functionality at all times - so, conserving power is essential. Typically, these devices enter a standby state where power consumption falls to extremely low levels. This presents a problem for fixed frequency buck converters. Efficiency is a function of load, and at light loads, a fixed frequency converter is nowhere near peak efficiency (Fig. 1). As load decreases beyond a certain point, efficiency falls rapidly. This is because gate drive losses in the converter are essentially independent of load, and because the converter control circuitry must consume power to operate. As load power decreases, converter power becomes an ever-increasing portion of the total power consumed.

Consider a product with a standby current of 1mA at 3.3V and whose battery is rated at 1 watt-hour (WH) capacity. In an ideal world with perfect power converters, this product would have a possible standby time of 303 hr. If a fixed frequency power converter with 15% efficiency at this operating point is used, the standby time drops to 45 hr - quite a performance impact. Fig. 1 also shows the efficiency versus load characteristics for the same power converter, if it's allowed to go into a variable frequency mode of operation at light loads. In this context, variable frequency refers to brief periods of switching activity followed by a period of inactivity - as opposed to a constant "on time" or constant "off time" scheme. From Fig. 1, this converter is about 73% efficient at this loading. If used in the product, the standby time is about 227 hr, quite a difference from the fixed frequency converter. Constant "on time" or constant "off time" converters improve on this situation somewhat, but leave the control chip powered at all times. At light loads, the converter control chip can consume more power than anything else in the system.
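The standby-time arithmetic above can be verified with a short script. The battery capacity, load, and efficiencies are taken from the text; note that the article's ~227 hr figure corresponds to roughly 75% efficiency rather than the quoted 73%:

```python
# Standby-time estimate for a 1 Wh battery feeding a 1 mA / 3.3 V load
battery_wh = 1.0          # battery capacity, watt-hours
load_w = 0.001 * 3.3      # standby load: 1 mA at 3.3 V = 3.3 mW

def standby_hours(efficiency):
    """Hours of standby given converter efficiency (0..1)."""
    return battery_wh * efficiency / load_w

print(round(standby_hours(1.00)))  # ideal converter: 303 hr
print(round(standby_hours(0.15)))  # fixed-frequency at light load: 45 hr
print(round(standby_hours(0.73)))  # PFM mode at 73%: ~221 hr (article's ~227 hr implies ~75%)
```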

The converter must be efficient at light and heavy loads, as well as being small and inexpensive. That's a pretty tall order, because some of these things are mutually exclusive. For example, you can achieve a small size by using small inductors working to their design limits of current (dc and ac). Unfortunately, there will be higher losses in these small inductors than in a larger inductor driven more conservatively. These higher losses translate into reduced efficiency and shorter run times. Similarly, higher frequency will allow reduced component size for the energy storage components. This comes at the expense of increased losses in the switching transistors and again means lower efficiency. Also, as a general but not absolute rule: small high-performance components cost more than their larger cousins.

So what to do? A converter must meet tough specifications. One solution for a high-performance buck converter is based on the TPS62103. This is a 2MHz (can be synchronized to 2.5MHz) synchronous buck converter that can automatically transition from constant frequency to PFM or variable frequency mode as described above, or can be made to transition by external command. What follows is a walk through a buck converter design methodology, with notes about issues particular to the TPS62103 - and buck converters in general.

The first things that must be known are the input and output voltages, and output current requirements. Assume the power source is two Li-Ion cells (5.0V to 8.5V) and the output is 1.8V. The load can range from 1mA to 500mA. Assume that the load dI/dt can be controlled. This allows minimal output filtering, resulting in lower parts count and board space. Next, the ripple current has to be examined to determine the inductor value. In a buck converter, the ripple current is given by:

I_RIPPLE = (V_DD - V_OUT) × T_ON / L

Where:
I_RIPPLE = ripple current
V_DD = converter input voltage
V_OUT = converter output voltage
L = inductance in henries
T_ON = converter on time

For a normal fixed frequency buck, the ripple current would be assigned a value 10% to 20% of the full load current. With the TPS62103, a different consideration comes into play. It has to do with the way the chip decides when to enter its PFM mode of operation, and what the chip does in this mode. There are two key factors to consider. First, the chip will enter PFM mode when it senses that the inductor is about to enter discontinuous conduction (the current in the inductor falls to zero between switching cycles). Second, in this variable frequency mode the TPS62103 essentially behaves as a voltage clamped 80mA current source, and transitions back to constant frequency mode when the 80mA can't maintain output voltage - i.e. the load becomes greater than 80mA. For a smooth automatic transition from constant frequency to PFM mode and back again, the ripple current must be chosen so that on the verge of discontinuous conduction, the average output current is at most 80mA. For this to occur, the ripple current must be at most 160mA (at the discontinuous conduction boundary, the average current is half the peak-to-peak ripple). T_ON is the time the buck switch is on, and is simply:

T_ON = V_OUT / (V_DD × f)

where f is the switching frequency.

With the nominal frequency of 2MHz, a maximum V_DD of 8.5V and an output of 1.8V, the inductor must be at least 4.4µH. If "off the shelf" components are to be used, then a 6.8µH inductor is called for (typical inductor tolerances are ±20%, and a 4.7µH standard value cannot be guaranteed to meet the minimum inductance).
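Those numbers can be reproduced directly from the article's figures; the 160mA ripple limit comes from the 80mA PFM threshold discussed above:

```python
# Minimum buck inductance from the ripple-current limit:
# I_RIPPLE = (V_DD - V_OUT) * T_ON / L, with T_ON = V_OUT / (V_DD * f)
v_dd = 8.5        # maximum input voltage, V (two Li-Ion cells)
v_out = 1.8       # output voltage, V
f_sw = 2e6        # nominal switching frequency, Hz
i_ripple = 0.16   # max ripple so the DCM-boundary average is <= 80 mA, A

t_on = v_out / (v_dd * f_sw)                  # buck switch on time, s
l_min = (v_dd - v_out) * t_on / i_ripple      # minimum inductance, H

print(f"T_ON = {t_on*1e9:.1f} ns")            # ~105.9 ns
print(f"L_min = {l_min*1e6:.1f} uH")          # ~4.4 uH
```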

Next, the output filter capacitor must be determined. The choice here is more open. The operating frequency is high enough that a single 10µF ceramic capacitor may be used if load transients are not too severe (high dI/dt and large absolute step size). Ceramic capacitors come in various types of dielectrics. For an output capacitor, the Y5V dielectric should not be used; its capacitance varies too much (both in initial tolerance and with voltage) and is difficult to compensate. For this type of service, use an X7R or X5R dielectric. For applications that must tolerate severe load transients, more output capacitance will be required. This can be provided by either several ceramic capacitors in parallel or a tantalum capacitor in parallel with a ceramic. At this switching frequency, an electrolytic output filter benefits from having a ceramic capacitor paralleled with it; the ceramic capacitor's low ESR and ESL will reduce output ripple voltage at the switching frequency.
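As a rough sanity check, the standard buck output-ripple relations predict very small ripple with a 10µF ceramic at this frequency. The ESR figure below is an assumed illustrative value, not one taken from the article:

```python
# Approximate output ripple voltage on a 10 uF ceramic output capacitor.
# Capacitive term: dV = I_ripple / (8 * f * C); ESR term: dV = I_ripple * ESR
i_ripple = 0.16     # inductor ripple current, A
f_sw = 2e6          # switching frequency, Hz
c_out = 10e-6       # output capacitance, F
esr = 0.005         # assumed ceramic ESR, ohms (illustrative value)

v_cap = i_ripple / (8 * f_sw * c_out)   # ripple across the capacitance
v_esr = i_ripple * esr                  # ripple across the ESR
print(f"capacitive ripple ~ {v_cap*1e3:.1f} mV, ESR ripple ~ {v_esr*1e3:.1f} mV")
```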

With the filter components chosen, the next step is to design the feedback compensation network. The feedback model can be broken into several parts: the pulse width modulator, output filter and error amplifier (see Fig. 2, on page 22). The pulse width modulator is a gain block. The gain provided is a function of the ramp voltage swing at the modulator and the converter input voltage, V_DD. To visualize the gain function, think of the fraction of the input voltage applied to the output filter for each volt applied to the reference input of the modulator. If the ramp voltage is 1V peak-to-peak, then a 1V change in the signal at the PWM reference results in a 100% change in duty cycle - that is, the time averaged voltage applied to the filter circuit swings by the full input voltage.

For a buck converter that does not have a transformer, the PWM gain can be written as:

A_PWM = V_DD / V_RAMP

where V_RAMP is the peak-to-peak ramp voltage at the modulator.
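Plugging in the article's maximum input voltage and an assumed 1V peak-to-peak ramp (the actual ramp amplitude of the TPS62103 is not given here), the modulator gain works out as follows:

```python
import math

# PWM modulator gain for a transformerless buck: A_PWM = V_DD / V_RAMP
v_dd = 8.5      # converter input voltage, V
v_ramp = 1.0    # assumed peak-to-peak ramp voltage, V (illustrative)

a_pwm = v_dd / v_ramp
print(f"A_PWM = {a_pwm:.1f} ({20*math.log10(a_pwm):.1f} dB)")  # 8.5, ~18.6 dB
```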

The next element of the feedback model is the filter, comprised of the inductor, output bulk capacitor and load. The response of the filter in a voltage mode converter is a double pole occurring at the L-C resonant frequency. Depending on the amount of load placed on the filter (if modeled as a resistance), the response may or may not show gain peaking and rapid phase change near the double pole frequency. The details are beyond the scope of this article.
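With the 6.8µH inductor and a 10µF ceramic output capacitor chosen earlier, the filter's double pole lands near 20kHz - a quick check:

```python
import math

# Double-pole (resonant) frequency of the buck output filter: 1 / (2*pi*sqrt(L*C))
l_out = 6.8e-6   # chosen inductor, H
c_out = 10e-6    # ceramic output capacitor, F

f_lc = 1 / (2 * math.pi * math.sqrt(l_out * c_out))
print(f"filter double pole at ~{f_lc/1e3:.1f} kHz")  # ~19.3 kHz
```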

Finally comes the error amplifier. This is where the feedback loop is made into a stable negative feedback system. There is a traditional approach to compensating a voltage mode buck converter when the output filter capacitor has a very high frequency zero (as the ceramic output capacitor does): build an error amplifier response that has a double zero below the desired crossover frequency, and a double pole above it [1].

The shape of the Bode plot for this type of compensation is shown in Fig. 2, on page 22, below the error amplifier block. The error amp zeroes and poles are typically spaced about one decade below and above the crossover frequency, respectively. The overall gain is then adjusted so the open loop response of the whole system is forced to 0dB at the selected crossover frequency. The loop crossover frequency is typically chosen to be 1/5 to 1/20 of the converter switching frequency. This technique works well if the error amplifier is capable of providing the required gain at the required frequencies - i.e. has a high enough GBWP (gain-bandwidth product).

To determine the required error amplifier GBWP, refer to Fig. 3, on page 22. It shows the filter response, and the required error amplifier response to build a traditional feedback loop with a 100kHz crossover frequency. To do this, the error amplifier must supply 10dB of gain at 100kHz. As a minimum, the double-pole frequency of the error amp must be 500kHz. The error amp gain required at 500kHz is 24dB, minimum. That corresponds to a minimum GBWP of 8MHz. Obviously, that's not achievable - especially in a low-power design. The first thing to do in this situation is to determine the actual frequency response of the error amplifier. The TPS62103 error amplifier has a guaranteed minimum GBWP of 2MHz, and can be modeled as a single, dominant pole response. This constraint will be used in the compensation network design.
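The 8MHz figure can be checked directly from the 24dB requirement at the 500kHz double pole:

```python
# Minimum GBWP for the traditional compensation: the error amp must still
# provide 24 dB of gain at its 500 kHz double-pole frequency
gain_db = 24.0    # required gain at the double pole, dB
f_pole = 500e3    # double-pole frequency, Hz

gbwp = 10 ** (gain_db / 20) * f_pole
print(f"required GBWP ~ {gbwp/1e6:.1f} MHz")  # ~7.9 MHz (the article rounds to 8 MHz)
```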

First, decide what the error amp circuit is going to look like. From Fig. 4, on page 24, the components that make up the error amplifier and compensation circuit are R1 through R4, C2 and C3. This configuration will give a pole-zero-zero-pole response as shown in Fig. 5, on page 25. For the loop to cross over above the 20kHz corner of the filter, the error amp circuit must shift the total open loop phase response to less than 180° at the crossover frequency, preferably to 135° or less. The first step is to place a zero somewhere about a decade below the desired crossover frequency. To reduce high frequency gain requirements, place a second zero somewhere near the desired crossover frequency. In this example, the pole was placed beyond the open loop response of the error amplifier, and the high frequency rolloff is provided by the amp itself.
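The zero placement rule can be sketched numerically. The R-C values below are hypothetical illustrations, not the actual Fig. 4 component values:

```python
import math

# An R-C compensation zero sits at f_z = 1 / (2 * pi * R * C)
def zero_hz(r_ohms, c_farads):
    return 1 / (2 * math.pi * r_ohms * c_farads)

# Hypothetical values: first zero about a decade below a 100 kHz crossover,
# second zero near the crossover itself
print(f"{zero_hz(16e3, 1e-9)/1e3:.1f} kHz")   # ~9.9 kHz
print(f"{zero_hz(1.6e3, 1e-9)/1e3:.1f} kHz")  # ~99.5 kHz
```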

At this point it is best to use a simulator such as one of the Spice's or Saber types to fine tune the compensation network. Adjust the gain between the zeros so that the loop crossover frequency occurs on the phase response "bump" above the double pole frequency of the filter circuit. The band limited error amplifier can be modeled in the frequency domain as a controlled voltage source feeding an R-C filter, which in turn feeds another controlled voltage source (Fig. 6). Note: this model performs poorly for time domain simulations. The theoretical open loop gain and phase response of the total converter is shown in Fig. 7. The sudden drop in gain and phase just beyond the crossover frequency is due to the limited response of the error amplifier. This approach works well when pushing the limits of error amplifier performance.
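The band limited error amplifier can also be sketched as a single dominant pole model. The 80dB DC gain below is an assumed value for illustration; only the 2MHz GBWP comes from the figure quoted earlier:

```python
import math

# Single-pole op-amp model: |A(f)| = A0 / sqrt(1 + (f / f_p)^2)
a0 = 10 ** (80 / 20)     # assumed DC gain: 80 dB (illustrative)
gbwp = 2e6               # guaranteed minimum GBWP, Hz
f_p = gbwp / a0          # dominant pole frequency, Hz

def gain_db(f):
    """Amplifier gain magnitude in dB at frequency f (Hz)."""
    return 20 * math.log10(a0 / math.sqrt(1 + (f / f_p) ** 2))

print(f"dominant pole at {f_p:.0f} Hz")       # 200 Hz
print(f"gain at 100 kHz: {gain_db(1e5):.1f} dB")  # ~26.0 dB
```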

With the circuit design now complete on paper, it's time to build it. As with any switching converter, give proper consideration to layout for the circuit to operate properly. With this converter, many of these issues are already handled because the power switching and current sensing elements are integrated into the IC. What needs attention is the ground current path in the circuit's power section. The power currents should not be allowed to return to the input bypass capacitor along the same path as the signal currents. That means that PGND and GND should have essentially separate return paths, and any commonality in these two paths should be minimized. If a choice must be made, give preference to the power section for the shortest return path.

When this circuit is in its PFM mode of operation, the COMP pin becomes a high impedance. It is likely that the voltage divider will be high impedance as well, to minimize losses. This can lead to erratic operation if the FB node is close to the SW node: high dV/dt swings at SW can couple into FB and cause incorrect behavior. There are two solutions to this problem. First, do not bring the FB node (or any of the other feedback circuit nodes) into close proximity with the SW node. Second, isolate the FB node from the SW node with a piece of copper connected to ground. Power circuit return current should not flow in this shielding copper if possible. An Evaluation Module is available for this part that illustrates these principles.

The 2MHz design presented here achieved a peak efficiency of 75% with an input voltage of 7.2V. While this is not as efficient as the 1MHz version presented at the beginning of the article, it illustrates the tradeoffs that have to be made. It also shows that lower output voltages (1.8V versus 3.3V) and higher input voltages (7.2V versus 3.6V) mean lower efficiency. Besides the TPS62103, three companion parts operate at lower switching frequencies of 1MHz (TPS62102), 600kHz (TPS62101) and 300kHz (TPS62100).

Reference:
1. Pressman, Abraham I., Switching Power Supply Design, McGraw-Hill, New York, 1991.