Xiaofan Chen wrote:
>> There is nothing in basic physics that says this needs to be true,
>> which is why it isn't. This may be true for some types of
>> converters with particular properties, but to say this broadly
>> applies is just wrong.
>
> There is. I've done theoretical steady state analysis, albeit
> simplified, to show that at very low and very high duty ratio, the
> buck converter efficiency will suffer. And it is in line with my
> past experiences since I have developed two universal input sensor
> (20V-240V DC and AC) using a buck converter which converts
> the high voltage to
>
> And quite some PWM controllers actually limit the minimum
> and maximum duty ratio.

Let's look at where energy gets wasted in a simple buck converter. The input voltage is connected thru a switch to the inductor, with a diode from that junction to ground. The other end of the inductor goes to the output voltage. The main losses are due to the voltage across the switch when it is on, the I^2*R loss in the inductor, and the voltage drop across the diode when it is on. There is also current loss thru the switch and the diode during the diode's reverse recovery time if the converter is run in continuous mode.

Let's start by analyzing the ideal zero-cross mode case, where a new pulse is started immediately after the previous one has finished. The inductor current is therefore a sawtooth, with the rising slope proportional to the input minus output voltage and the falling slope proportional to the output voltage (plus the diode drop).

The fact that there is a high ratio between the two slopes is not of itself a cause for inefficiency. As the input voltage is changed with everything else constant, the only difference is the rising slope of the inductor current. The average, peak, and RMS inductor currents stay the same, so the I^2*R losses in the inductor stay the same. Since the diode only conducts during the falling slope, the loss due to its forward voltage stays the same.
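The invariance argument above can be checked with a quick numeric sketch. This is only an illustration with made-up component values (100uH inductor, 1A peak, 5V out), not anything from the actual design discussed here; it assumes ideal zero-cross (boundary) operation and ignores the diode drop in the falling slope:

```python
# In zero-cross mode the inductor current is a triangle from 0 to Ipk.
# Changing Vin only changes the rising slope (and hence the on-time);
# the average and RMS currents are set by Ipk alone.
import math

def sawtooth_currents(vin, vout, L, ipk):
    """Return (t_on, t_off, i_avg, i_rms) for one boundary-mode cycle."""
    t_on  = L * ipk / (vin - vout)   # rising slope  = (Vin - Vout)/L
    t_off = L * ipk / vout           # falling slope = Vout/L (diode drop ignored)
    # For a triangle from 0 to Ipk: average = Ipk/2, RMS = Ipk/sqrt(3),
    # independent of the slopes, and therefore independent of Vin.
    i_avg = ipk / 2.0
    i_rms = ipk / math.sqrt(3.0)
    return t_on, t_off, i_avg, i_rms

for vin in (12.0, 24.0, 48.0):                  # sweep the input voltage
    t_on, t_off, i_avg, i_rms = sawtooth_currents(vin, 5.0, 100e-6, 1.0)
    print(f"Vin={vin:5.1f}V  t_on={t_on*1e6:6.2f}us  "
          f"Iavg={i_avg:.3f}A  Irms={i_rms:.3f}A")
```

Only the on-time changes as Vin is swept; the average and RMS currents are identical in every row, so the I^2*R loss in the inductor, and the diode conduction loss (which occurs only during the falling slope), don't change with Vin.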
The losses in the switch do vary as a function of the input voltage. If the switch is ideal except for a fixed voltage drop, then the losses actually go down as input voltage is increased. If the switch is ideal except for a fixed on resistance, then the losses also go down with higher input voltage, although the equation is a bit different. So looking at the first approximation case, higher input voltage would seem to be better.

The approximation breaks down as real world properties of available switches deviate from ideal. The gain of bipolar transistors goes down as they are designed to withstand more voltage. The on resistance of FETs goes up as they are designed to withstand more voltage. To reduce ripple and to allow for smaller and cheaper inductors, you have to increase switching frequency. Switching losses start to dominate as the switching transition time becomes a significant fraction of the switch on time per pulse. Higher voltages require shorter pulses with everything else held constant for some configurations. I think this is probably your main objection.

However, as with most design issues, you can design for some tradeoffs at the expense of others. One obvious solution is to enforce a minimum pulse size. This is in fact what I often do. That means some efficiency is sacrificed at low currents, since more frequent shorter pulses would have less I^2*R loss. But with minimum pulse sizes you also get tolerance to wider input voltages and can access some of the benefits of higher input voltages.

I have no doubt that you have experience with power supplies that are less efficient at higher input voltages. Design tradeoffs can certainly be made where that would be the case. But you have to stop and realize this general tradeoff is not a universal rule, but rather a characteristic of specific (perhaps even common) designs.

The main criterion to allow for efficient switching power supply design is predictable input and output voltages and currents.
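The first-approximation claim that switch losses fall with input voltage can also be sketched numerically. Again these are made-up illustrative numbers (0.5V fixed drop, 0.1 ohm on resistance), assuming boundary mode where the switch carries a 0-to-Ipk triangle for a fraction D = Vout/Vin of the cycle:

```python
# Both loss models scale with the duty ratio D = Vout/Vin, so both fall
# as Vin rises -- same trend, slightly different equations.
def switch_losses(vin, vout, ipk, vsw=0.5, ron=0.1):
    d = vout / vin                                  # ideal duty ratio
    loss_fixed_drop = vsw * (ipk / 2.0) * d         # Vsw * Iavg, on-time only
    loss_on_res     = ron * (ipk**2 / 3.0) * d      # Ron * Irms^2, on-time only
    return loss_fixed_drop, loss_on_res

for vin in (12.0, 24.0, 48.0):
    lf, lr = switch_losses(vin, 5.0, 1.0)
    print(f"Vin={vin:5.1f}V  fixed-drop loss={lf*1000:6.1f}mW  "
          f"Ron loss={lr*1000:6.1f}mW")
```

Both columns shrink as Vin increases, which is the first-approximation result above: with an ideal switch, higher input voltage looks better, and the real-world deviations (device ratings, switching transition time) are what pull the other way.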
If you know those, you can design around the input to output voltage ratio. The more latitude you have to allow in the input voltage, the more some other parameters will be compromised. Something will give, like a larger inductor, more ripple voltage, or a lower switching frequency, but it doesn't inherently need to be efficiency.

As I said before, the power supply I designed for a particular unit was designed to produce a roughly regulated 5.6V from 20-60V input. Final efficiency was pretty flat across the full input voltage range. My supply probably had more ripple than the ones you are thinking of, but that was fine for what it needed to be. In other applications tight output voltage control and low ripple may be more important, and then efficiency might vary across a 3:1 input voltage range. It depends on what is important in each case.

By the way, this particular supply achieved reasonably constant efficiency across the 3:1 input voltage range and over a 10:1 input to output voltage ratio by using a pulse on demand system in discontinuous mode. The switching transitions were guaranteed to be negligible in all cases compared to the switch on time. The algorithm did not use a fixed pulse size, but adjusted it "slowly" to avoid large pulses with large times between them, while always keeping to a certain minimum pulse size regardless. This kind of control algorithm is one of the nice things you can do with a programmable processor that is difficult to achieve with analog systems. In this case the controller was a PIC 10F204.

********************************************************************
Embed Inc, Littleton Massachusetts, http://www.embedinc.com/products
(978) 742-9014.  Gold level PIC consultants since 2000.
--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist