Atomic I/O letters column #86
Originally published 2008, in Atomic: Maximum Power Computing. Last modified 16-Jan-2015.

Not justification for liquid-nitrogen cooling
I was just reading an article comparing CPU coolers over at Tom's Hardware when I came across this:
"The worse the cooler performs, the more energy the CPU will consume, as it will be operating at a higher core temperature."
The bold type isn't mine; they apparently felt the need to bold this statement, seeking to enlighten even those who might otherwise absentmindedly skim past this shining pearl of wisdom.
Now, before I dig myself any deeper into a potential grave, is there any truth to this statement? Do the physical properties of the transistors in a CPU change with temperature in a way that would make it need to draw more power to operate at higher temperatures? Maybe there's more gate leakage for some reason? Or something?
This sounds stupid to me even as I type it (What? I have one of those programs that reads to you as you type), but I guess that with the size of transistors nowadays, there could be some kind of quantum thingamajiggery going on that I don't know about. Or non-quantum thingamajiggery, for that matter. I'm no expert on this stuff. But, if this were to be true, wouldn't thermal runaway be a constant problem? The CPU starts working, producing enough heat to reach, say, 50 degrees, at which temperature it uses (and therefore outputs as heat) a little more power than it did before, which raises its temperature to 51 degrees, at which temperature it uses a little more power, etc, etc?
It's quite likely that this is all really obvious stuff and that I've beaten the poor ex-horse into a bloody pulp by now, but part of me can't help thinking that the people who wrote this are professionals, and they wouldn't write something like this, much less bold it, if they weren't absolutely sure of it, especially since it seems to go against common sense. Another, much larger, part of me can't help but think that the THG guys just figured that since the CPU is hotter, it must be using more power. But that's too daft to be reasonable. Isn't it?
Anyway, I just figured you could set me (and maybe the THG guys) straight on this, rather than leave me to duke it out in the forums, where useful conclusions are rare. Like unicorns. Thanks.
Erik
This piece of steel wire is 0.6 ohms cold, and 3.3 ohms when it's this hot. If you're getting into the precision resistor business, I suggest you not make them out of steel.
Answer:
Yes, what the Tom's piece says is actually true. It's not a large effect, though. I don't know exactly what the numbers are for current CPUs, but I think you're looking at less than a 1.3-times power increase for the entire normal operating temperature range of the CPU. So if you increase the core temperature from 20 degrees C to 80 degrees C, you should only expect power draw (and thus heat dissipation) to increase by around 20%.
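If you want to see how little that is per degree, here's a quick Python back-of-the-envelope. The 20% figure and the 60-degree span come from above; the assumption that the increase compounds smoothly per degree is mine, purely for illustration:

    # Back-of-envelope: if a 60 degree C rise costs ~20% extra power,
    # what's the implied power increase per degree?
    total_factor = 1.20          # ~20% more power over the whole range (from above)
    span_c = 80 - 20             # 20 to 80 degrees C
    per_degree = total_factor ** (1 / span_c)
    print(f"~{(per_degree - 1) * 100:.2f}% extra power per degree C")
    # -> ~0.30% extra power per degree C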
This is not entirely surprising, because it's normal for electrical components to change their behaviour when their temperature changes. The magnitude and direction of this effect can be expressed as the temperature coefficient of resistance.
Ordinary resistive devices, for instance, typically increase in resistance when they get hotter. If you measure the resistance of a light bulb, you'll get a much smaller number than you'd expect, given its power rating and supply voltage. But when the filament's glowing white hot, its resistance is much higher.
The same applies, to a greater or lesser extent, to almost all materials. It takes special materials to make resistors that're stable over a wide temperature range (and there are a few materials, like carbon, that actually DROP in resistance when they get hot - I talk more about this in my old PSU review here).
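To put numbers on that, here's a Python sketch using the standard linear approximation R(T) = R0 × (1 + α × ΔT) and the steel wire pictured above. The coefficient for iron/steel here is an assumed ballpark figure, and the linear formula is only rough over a temperature span this big:

    # Linear temperature-coefficient-of-resistance approximation:
    #   R(T) = R0 * (1 + alpha * (T - T0))
    # Using the steel wire pictured above: 0.6 ohms cold, 3.3 ohms glowing hot.
    alpha_steel = 0.005   # per degree C; assumed ballpark figure for iron/steel
    r_cold, r_hot = 0.6, 3.3
    delta_t = (r_hot / r_cold - 1) / alpha_steel
    print(f"Implied temperature rise: ~{delta_t:.0f} degrees C")
    # -> ~900 degrees C, which is about right for visibly glowing steel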
Things get more complicated when you're talking about semiconductors, but once again changed behaviour with changed temperature is normal. Discrete transistors can easily exhibit the thermal runaway you mention; their gain increases as they get warmer, which causes them to pass more current and dissipate more heat, which increases the gain even more - bingo, one fried transistor. Several other semiconductor devices have this problem too - LEDs, for instance. Unwise LED-array design can easily leave you with a bunch of LEDs competing to see who can commit suicide first.
In CPUs, I think higher temperatures do indeed increase the amount of current leakage from the tiny transistors, and you then have to feed the processor more power to make up the difference - but don't quote me on that, because I'm no kind of expert. It's theoretically possible for a thermal runaway effect to happen here, too - almost certainly terminating in the CPU overheating and shutting down, rather than any actual damage to the chip - but CPUs are generally well enough cooled that the relatively small power increase with temperature isn't enough to get positive feedback happening.
So instead of a 1-degree temperature increase causing enough power gain to create another 1-or-more-degree temperature increase, which'd put your CPU right on the thermal runaway train, a 1-degree increase is enough to cause another, say, 0.5-degree increase, which in turn causes another 0.25, and the effect peters out in a convergent series.
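That's just a geometric series, and you can watch it converge (or not) in a few lines of Python; the 0.5 ratio is the made-up figure from above:

    # Each degree of temperature rise causes r more degrees via extra power.
    # If r < 1 the series converges; if r >= 1, hello thermal runaway.
    def total_rise(initial_rise, r, steps=60):
        rise, term = 0.0, initial_rise
        for _ in range(steps):
            rise += term
            term *= r
        return rise

    print(total_rise(1.0, 0.5))   # ~2.0 degrees: converges to initial/(1 - r)
    print(total_rise(1.0, 1.1))   # ~3000 degrees and climbing: runaway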
Parallel power
I bought two 40A, 5V switching power supplies for a Peltier experiment I plan on conducting. They each have three channels that provide 13.3A of current. They may or may not be genuinely split, but they say I can connect each channel in parallel.
Would it be safe to connect channels from both units together in either a parallel or (more importantly) series fashion? That way I could test the efficiency/performance of the thermo-electric coolers (TECs) at both 5V and 10V.
Supposedly, a 16V TEC at around 5V actually moves about 3-4 watts per watt it "uses" to make it happen. This is almost at the level of phase-change efficiency. Four high-powered TECs operating at 31% voltage might allow for some low-powered, high-cooling processor fun.
But hey, it's the fun in the doing that matters. I just hope the doing doesn't end with blue smoke and a melting credit card.
Geoff
Answer:
Connecting many kinds of power supply in parallel is possible (deem all of my usual disclaimers about how it's not my fault if you electrocute yourself to have been included here), but it's easy to end up with one PSU actually supplying all of the current and going into overload while the other one's barely working. I think this PDF and this other PDF explain this pretty well. There shouldn't be any risk of the PSUs just plain blowing up when you connect them in parallel, though.
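You can see how lopsided the sharing can get with a toy model: treat each supply as an ideal voltage source with a small output ("droop") resistance. All of the numbers in this Python sketch are invented, but they're the right sort of magnitude:

    # Two supplies in parallel, modelled as ideal sources with small
    # output ("droop") resistances, sharing one load. All numbers invented.
    v1, v2 = 5.05, 5.00      # setpoints: 50 millivolts apart, well within spec
    r1 = r2 = 0.005          # 5 milliohm effective output resistance each
    i_load = 20.0            # amps drawn by the load

    # Solve for the common bus voltage, then each supply's share.
    v_bus = (v1 / r1 + v2 / r2 - i_load) / (1 / r1 + 1 / r2)
    i1 = (v1 - v_bus) / r1
    i2 = (v2 - v_bus) / r2
    print(f"Supply 1: {i1:.1f} A, Supply 2: {i2:.1f} A")
    # -> Supply 1: 15.0 A, Supply 2: 5.0 A. A 50-millivolt setpoint
    #    difference makes one unit do three quarters of the work.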
It's also possible to wire some types of power supply in series, but I don't know much about it. My knowledge there pretty much stops at "it's definitely a bad idea to try this with PC PSUs, on account of how their DC ground is probably the same as their AC ground...".
But people definitely do it, so it might work with the PSUs you've got. Don't come smoking to me, though.
If 12V is acceptable for testing, you could get it at high current for a reasonable period of time from a cheap car battery. You could also bodge the output voltage down to 10V-ish with a low-value, high-wattage power resistor of some sort. Such a device can be improvised from some fencing or coat-hanger wire wrapped around a brick.
(Bonus points will be awarded if you manage to incorporate the same power resistor in your finished PC.)
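Sizing that improvised resistor is just Ohm's law. Here's the arithmetic in Python, assuming a hypothetical TEC that draws about six amps at ten volts; substitute your own TEC's figures:

    # Sizing a series dropper to knock 12V down to ~10V.
    # Assumed load: a hypothetical TEC drawing about 6 amps at 10 volts.
    v_supply, v_target, i_tec = 12.0, 10.0, 6.0
    r_dropper = (v_supply - v_target) / i_tec    # Ohm's law: R = V / I
    p_dropper = (v_supply - v_target) * i_tec    # power burned in the wire
    print(f"{r_dropper:.2f} ohms, dissipating {p_dropper:.0f} watts")
    # -> 0.33 ohms, dissipating 12 watts. And remember that the resistance
    #    (and thus the voltage drop) will creep up as the wire heats.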
Note that plain old iron wire is one of those materials whose resistance, as I said above, changes with temperature. If you make a power resistor out of steel fencing or coat-hanger wire, the resistance will increase as the wire gets hotter. This can act to resist thermal runaway, but if you want the resistance to stay steady, you'll have to sit the resistor in a bucket of water or something to keep it at a relatively steady temperature.
Paradoxically, though, if you hook up a piece of steel wire across the outputs of a reasonably beefy bench power supply, it'll seem to go into classic thermal runaway. It'll start out passing, say, ten amps, then pass more and more current as it heats up and starts to glow, and then finally melt. This final glow-then-melt phase can take very little time; I just gave it a go with three inches of fencing wire, and the total time from start-of-test to the wire melting was about twenty seconds, but the time from the wire starting to visibly glow to the moment when it glowed brightly and melted was less than four seconds.
This odd, and entertaining, phenomenon happens because of the bench supply's overload protection.
Let's presume that you have a power supply that can deliver as much as 10 amps, and as much as 15 volts, and you've turned the voltage and current limiter knobs (if it even has current limiting - cheap high-current bench supplies generally don't) up to max.
When you connect the cold wire across the terminals and turn the power supply on, the cold wire is basically a dead short, which causes the PSU to output its ten-amp maximum current at a relatively low voltage - probably only about one volt.
(Actually, bench supplies usually can't deliver full current at very low voltages, but never mind that for now.)
Ten amps at one volt is only ten watts, which takes a while to heat up the wire. As the wire heats up and its resistance rises, though, the bench supply will continue to deliver as much current as it can manage, but also a higher voltage, because now it's got a bit of an actual load to push against. So you'll have ten amps at two volts for 20 watts, ten amps at three volts for 30 watts, ten amps at four volts... and rather soon, a few inches of steel wire will be melting.
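Here's that process as a toy Python simulation. Every thermal constant in it (heat capacity, cooling rate, melting point) is invented; the point is just that with the current pinned at ten amps, power rises as the wire's resistance does:

    # Toy model of the constant-current case: resistance rises with
    # temperature, so at a fixed ten amps, power (P = I*I*R) rises too.
    # All the thermal constants here are invented for illustration.
    i = 10.0                    # amps: the supply's current limit
    r0, alpha = 0.10, 0.005     # cold resistance; steel-ish TCR per degree C
    t = t_ambient = 20.0        # degrees C
    heat_capacity = 1.0         # joules per degree for the wire (made up)
    cooling = 0.02              # watts shed per degree above ambient (made up)

    for second in range(120):
        r = r0 * (1 + alpha * (t - t_ambient))
        power = i * i * r
        t += (power - cooling * (t - t_ambient)) / heat_capacity
        if t > 1400:            # roughly where steel melts
            print(f"Wire melts after ~{second} seconds, drawing {power:.0f} W")
            break
    # -> melts after roughly a minute in this toy model, with power having
    #    climbed from 10 watts to around 80.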
This confusing and seemingly impossible apparent thermal runaway happens because of the characteristics of the power supply, not those of the load. If you put the same wire across a power supply with extremely high current capacity and no overload protection - a car battery, say - then for the very brief period before the wire turns into a puff of sparks, the current flow will actually behave itself as you'd expect. It'll start extremely high, then fall as the wire heats up.
You'd need some sort of (rugged...) data-logging device to actually see this happening, though, since the very small resistance of a short cold steel wire will allow a 12V car battery to blow an easy 50 to a hundred amps through it, for a power dissipation of several hundred watts. This will cause the wire to go away rather suddenly.
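For comparison, here are the car-battery sums, with the battery's internal resistance and the wire's resistance at each temperature both assumed:

    # Same wire on a stiff constant-voltage source (a car battery).
    # Here current and power FALL as the wire heats up. Numbers assumed.
    v = 12.0
    r_battery = 0.01                   # assumed internal resistance, ohms
    for r_wire in (0.15, 0.3, 0.6):    # cold, warming, hot (assumed values)
        i = v / (r_battery + r_wire)
        p = i * i * r_wire             # power dissipated in the wire itself
        print(f"wire at {r_wire} ohms: {i:.0f} A, {p:.0f} W in the wire")
    # -> ~75 A and ~840 W cold, falling as the wire's resistance climbs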
I talk more about the slippery concept of volts and amps in my old piece, "Avoiding Electrocution".
You'll need an Amiga RF modulator and three coat hangers...
Do you know if it's possible to convert the component output from my original-model Xbox 360 to HDMI? My big fancy schmancy TV does not have component input, so my only choice for decent res is VGA, and I want to use that for a media PC.
My TV has 2 HDMI, 1 VGA and 2 SCART inputs.
Do you have any ideas?
Marc
Answer:
The cheap option, here, is to just use a monitor switchbox to turn the single VGA input on your TV into two. This should give you considerably better picture quality than a cheap VGA-to-HDMI adapter box. Such adapters can be had on eBay for less than $70 delivered, but will probably do horrible things to your signal.
Component-to-HDMI adapters exist, but they can't put resolution into a signal that wasn't there in the first place, and one with good image quality will probably cost quite a lot more than a new Xbox 360. Because HDMI has Digital Rights Management encrustations and earlier standards (including "VGA" PC analogue video) don't, it's technically impossible to do this job properly.
All versions of the 360 after the original edition that you've got, though, have HDMI output. So if you must have HDMI, you might as well just trade up to a newer 360.
(After I put this page up, a reader pointed out that if the TV can accept RGB input via the SCART connector, a standard 360 SCART cable ought to work fine. I originally presumed that the "does not have component" part meant no RGB over SCART either, but Marc could have just meant no YPbPr, which is usually what "component video" means. The 360 SCART cable can't, by the way, deliver YPbPr to anything; that kind of component over SCART has been done, but there's no standard for it.)
Cold Cathode Fragile Lamps
I am trying to locate four-inch cold cathode fluorescent lamps (CCFLs) that don't have acrylic covers on them. Have you ever come across anything like that? I am using the lights to build lightboxes, and the acrylic covers make them too big to install.
Chris
The black tube is 3mm wide; the return wire's tiny. (It's a spiral in this CCFL, because this lamp is actually already broken.)
Answer:
The plastic tube around standard CCFLs for PC or car illumination is there for protection, because the cold-cathode lamps themselves are very fragile, and the hair-thin return wire that runs next to the lamp itself is even more fragile. The return wire has to be there because these kinds of lamps have their two power connections on only one end, but the physical tube has one terminal on each end.
If you want "nude", non-encapsulated white CCFLs without a return wire, you might like to check out replacement backlights for laptops and LCD monitors. They'll have a wire coming out of each end, and no protective tube. They also won't come with a handy-dandy 12V inverter, but I think they ought to run fine from any inverter that'll run an encapsulated lamp of similar length. Expect to pay $US20 or less for one tube, delivered, from an eBay dealer.
The problem with backlight lamps, of course, is that few laptops have a screen so small that it needs only a four-inch backlight, and you can't exactly cut these things to length.
If you absolutely must use a tube length (or colour) that you can't find as a bare lamp, you could try cutting the protective tube off the usual kind of lamp. This might not be as hard as you'd think, since as far as I can see, the lamps are pretty much just sitting loose inside the protective tube.
The caps on the ends of the protective tube are likely to be glued in place very solidly, but if you clamp those ends in place and cut around the tube in the middle - perhaps with a hot-knife or tubing cutter rather than a hacksaw, to minimise trauma for the lamp inside - you should be able to just pull the halved protective tube off either end of the CCFL inside. You'll still need to cut the supply wires to completely remove one half of the protector, but you can always solder them together again afterward.