Atomic I/O letters column #55
Originally published in Atomic: Maximum Power Computing
Reprinted here March 2006.
Last modified 16-Jan-2015.
Recently my PC has undergone some upgrades in the cooling department. I have installed an Active Cool Peltier CPU cooler, and modded my case to hold two rather hefty fans salvaged from a server rack. The fans are 150mm Comair Rotron branded. Now my old 3.06GHz Northwood CPU runs at a chilly 41 degrees Celsius fully loaded.
My problem is that I've noticed the 12V rail on my PSU drops to just over 11.3 volts. The PSU is a Thermaltake Purepower 560W. The question is, to what level should I allow the voltage to drop before I have an excuse to buy a new PSU?
PC gear generally expects a 12V rail that varies by no more than plus or minus 10 per cent - so, from 10.8 to 13.2 volts. Few PSUs are actually that bad, though. Usually, that big a variation indicates something else amiss (possibly in the motherboard's hardware monitoring - the voltages may be just fine, but the mobo may be reporting them wrong).
P4 systems lean hard on the 12V rail, but are relatively insensitive to low voltage on that rail. This is because most of their 12V load comes from regulating the 12V rail down to feed the CPU; even ten volts still gives the switching regulators plenty of voltage differential to work with. Actually, the regulators will run a bit cooler from a lower input voltage (though other 12V devices, like hard drives, may not be happy).
Assuming your PSU's output is being accurately reported (get in there with a multimeter if you're curious), it's only about 5% low, which is no big deal. If nothing in your computer's misbehaving particularly at the moment - and it shouldn't be - then there's no reason to swap out the PSU.
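If you feel like sanity-checking a reading yourself, the arithmetic is trivial; here's a quick Python sketch using the figures from this letter (the plus-or-minus-10-per-cent band is the loose rule of thumb mentioned above, not a quote from any spec):

```python
# Check a PSU rail reading against a tolerance band.
# Numbers are from this letter; the 10% band is a rule of thumb.

def rail_status(nominal_v, measured_v, tolerance=0.10):
    """Return the deviation as a fraction, and whether it's in tolerance."""
    deviation = (measured_v - nominal_v) / nominal_v
    return deviation, abs(deviation) <= tolerance

dev, ok = rail_status(12.0, 11.3)
print(f"12V rail at 11.3V: {dev:+.1%}, {'OK' if ok else 'out of spec'}")
# 11.3 volts is about 5.8% low - close enough to the "about 5%" above,
# and comfortably inside the 10.8-to-13.2 volt window.
```

A mobo-reported 11.3 volts that a multimeter says is really 11.8 is, of course, the more common situation.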
[Adam got back to me, and said that his multimeter reported that the 12V rail was only 0.2V down. Yep, the motherboard was lying.]
I recently needed to replace a DVI cable connecting my shiny Samsung LCD to my box. A trip to the local friendly electronics store and 10 bucks later I was about to connect said display device to my video card's DVI socket when misfortune befell me.
The cable itself was fine, but the socket on the monitor seemed to be non-standard. The flat horizontal part of the "cross" was too narrow and the monitor socket was missing holes for the flat vertical bit and the four surrounding pins. A few minutes of Googling revealed that a proper digital connection shouldn't need the signals these wires carried, so in total disregard for the warranty, out came my trusty angle cutters and pocket knife. A short while later, minus a few pins, I had some display happening, but only after I'd booted into Windows proper. While the BIOS and Windows loading screens should have been showing, I got nada.
I've since bought a new cable and the missing bits now show up fine, but I was wondering why the signal failed to display until Windows bumped up the resolution/refresh rate - a situation I've also witnessed on other PCs using non-hacked DVI cables?
The connector on your monitor isn't non-standard; it's a perfectly normal DVI-D connector, lacking the pins on the right side of the digital-plus-analogue DVI-I socket that carry the analogue signal. You can plug a DVI-D cable into a DVI-I socket, but the extra pins on a DVI-I cable won't fit, as you noticed, into a DVI-D socket.
I was initially mystified by your black-screen problem, but then it dawned on me that your video card is probably defaulting to analogue mode, possibly because it can see that it's got a DVI-I cable plugged in, and so assumes there's an analogue-capable device on the other end of it. Once Windows starts loading, it defaults to digital mode, and bing, there's your picture.
I intend to buy a couple of white lights to rig in my computer case, so when I open it a switch is triggered and the lights turn on for easy visibility without a torch and without moving my PC to where there's a good light. My first idea was to connect the lights to the hard drive power connectors, however these aren't powered when my computer is off, which sort of defeats the whole point. Is there any way of getting enough power to do this from an ordinary power supply even when the computer is off?
[Picture caption: Most PSUs can run all of these from the standby rail.]
Yes, it can be done; modern PCs with ATX PSUs have a "+5VSB" (five volt standby) rail, which is powered up whenever the PSU's plugged in (and turned on, if the PSU has a physical switch on the back). It's what lets the computer power up in response to a keypress, modem ring, Wake-On-LAN packet, or what have you. It goes to pin 9 on the ATX connector; it's the only purple wire.
The standby rail's good for at least an amp, so it'll be happy to run several ordinary 20mA, 3.6V-ish white LEDs. You won't be able to put even two LEDs in series (not enough volts), but you can wire lots up in parallel; individual 3.6V 20mA LEDs will run happily from 5V through a 68 ohm resistor, or you can try your luck with thermal runaway and go for 36 ohms for two LEDs in parallel, 24 ohms for three, or 18 ohms for four.
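Those resistor values fall out of simple Ohm's-law arithmetic - resistance equals the voltage you need to drop, divided by the current. Here's a sketch in Python, using the nominal 3.6V/20mA white-LED figures from above (real LEDs vary, so treat these as ballpark numbers):

```python
# Series resistor for n white LEDs wired in parallel, run from 5VSB.
# Ohm's law: R = (supply voltage - LED forward voltage) / total current.
# 3.6V at 20mA per LED is a nominal figure, as mentioned above.

def led_resistor(supply_v=5.0, led_v=3.6, led_ma=20.0, n_leds=1):
    """Ideal series resistance in ohms for n_leds LEDs in parallel."""
    total_amps = n_leds * led_ma / 1000.0
    return (supply_v - led_v) / total_amps

for n in range(1, 5):
    print(f"{n} in parallel: {led_resistor(n_leds=n):.1f} ohms ideal")
# The ideal values work out to 70.0, 35.0, 23.3 and 17.5 ohms, which
# round to the sensible standard values above: 68, 36, 24 and 18.
```

Going a bit over the ideal resistance, as with the single-LED 68-ohm case, just dims the LEDs slightly; going well under is what invites the thermal runaway mentioned above.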
For more on LED arrays, check out my old caselight piece.
I recently had a chuckle at a forum post where someone warned not to set up the NaturalPoint TrackIR (old version reviewed here) pointing directly into your eyes, because the IR output used for tracking would be harmful. I dismissed this as tinfoil-hat-talk, and didn't give it another thought until two things happened:
1) I ordered a TrackIR3, and
2) you said here that you "certainly can burn your eyes with ultraviolet or infrared light".
I don't know how many IR LEDs there are inside the TrackIR3, or how much power they're putting out, but is there any risk at all? Is a CRT or LCD monitor already putting out enough light to ensure that the pupils are doing their job, or would it depend on the images being displayed (*cough*Doom3*cough*)?
There's no chance of eye damage from a TrackIR; the output's far too dim. It uses a few remote-control-type IR LEDs. Your eyesight's in no danger if ten people point remote controls at you; the same applies to the TrackIR.
A really big near-infrared light (say, one of the large illuminators you sometimes see near security cameras, which appear to glow dimly red to the naked eye) close to your eyes could give you eye damage, but to my knowledge this sort of thing only actually commonly happens with ultraviolet, or to people who're careless with non-visible lasers.
Fortunately, even quite high levels of acute UV exposure don't cause permanent eye damage. People who get UV burns from viewing welding arcs directly, for instance, often just suffer several days of feeling as if there's sand in their eyes - which is as unpleasant as it sounds - then get better.
You can also, by the way, get UV damage if you wear lousy sunglasses that don't block UV, but do block some visible light. Your eyes think the light's dimmer, your pupils expand, lots of unblocked invisible UV reaches the retinas, and your eyesight fades rather faster than it otherwise would. Chronic UV exposure also increases the risk of cataracts.
I don't know much about thermal goop, but I want to play with a thermoelectric cooler. I was wondering if the goop transfers cold as well as it does heat?
Straight answer: Yes. Ordinary cheap silicone thermal compound commonly has a rated temperature range that extends well below freezing. It'll probably get pretty stiff and sticky when it's very cold, but it'll still work. Some of the more exotic non-silicone-based compounds may not be as good, but most of them are actually better at low temperatures than silicone goop, I think.
Smart-aleck answer: It's not transferring cold, it's still transferring heat. The heat's just going into the cold side of the Peltier.
I've been told by a friend who has been working in the IT industry for a number of years that, when building a machine, it's best to use both thermal grease and the thermal pad that comes with most stock CPU heat sinks. In effect, a thermal sandwich, which (according to my friend) is supposed to increase thermal transfer and thus make it run cooler.
That said, I've got another friend who says that using grease and a pad in the same application actually decreases the efficiency of the thermal transfer. He launched into a rather complex explanation, but it basically involved the molecules of the two greases not meshing properly, thus decreasing effectiveness, etc etc.
Can you shed some light on this issue? I'm quite content to use a thermal pad for stock heatsinks and leave it at that, but I'd be quite curious to know which of the above two conjectures is correct.
The grease-plus-pad idea might work if the pad's not very good, but I doubt it. There are several kinds of pad out there, and the ones that incorporate "chewing gum" goop (by itself, or on either side of a foil patch) definitely don't need anything extra. That stuff conforms to the CPU and heat sink surfaces pretty much perfectly.
Various other soft thermal pad materials are, as far as I can see, just as good. You'll get a better thermal contact if you scrape the pad right off and use only grease, but adding grease to the pad will do nothing at best and add thermal resistance at worst.
A really lousy, stiff, non-conforming pad would definitely benefit from a bit of grease, but you'd do much better to remove it altogether, too.