Atomic I/O letters column #101
Originally published 2009, in Atomic: Maximum Power Computing
Reprinted here February 3, 2010. Last modified 16-Jan-2015.
I've been busy, and so has my boss.
Long story short, he was beta-testing a product and got a comped trip to Vegas to see Winter CES and the adult video awards (which gives you an idea of the sort of product it is: an IPTV box for porn movies).
I spotted him the cash for the flight, and he came back with enough scratch to get a new computer, and left me with the old one to cover the trip... and the thing failed catastrophically, with an apparently completely dead motherboard.
I have the cards, the RAM, the processor, the removable-media drives and the presumably dead mainboard; he got the fixed disks back in drive boxes to recover his data. I installed them in the boxes, and they were fine.
What I want to know is this: Is it possible that the cards carry some damage from whatever caused the mysterious fault in the machine, and if they are damaged will they in turn damage whatever machine they are put into?
I understand there is always some risk of this with any hardware of any provenance, but is this risk significant enough that I should avoid putting them into my main machine and test them somehow in a controlled environment?
The cards in question are an ATI Radeon (a great whacking beast that consumes two slots and has its own cooler - it's an X850 XT PE 256M, which I presume refers to its available memory. I have an Nvidia GeForce 8500 GT with half a gig... is one of these cards clearly the other's better?), and an Indonesian TV tuner card called a Hauppauge WinTV HVR-1600, both ATSC and NTSC compatible. The RAM is DDR2, I believe. The processor is, I think, a dual-core, and if it is it might have a place in a new system when I get 'round to building one, assuming it too survived.
Yes, the components may be damaged. And it won't necessarily be obvious.
There's a surprisingly wide grey area between "A-OK" and "completely dead" for a lot of computer hardware, especially if it's been handled without proper electrostatic discharge precautions. (I remember hearing about how a friend learned about this in university - the professor powered up a computer with an oscilloscope hooked up to the video card, and showed off the nice "row of mesas" waveform, from the pixel clock or something. Then he removed the card from the computer and gave it to the class, to hand from student to student all through the room. Then he plugged it back in, and it still worked, but now the mesas were all rounded and irregular...)
Mechanical damage to hardware is also possible. If power-supply smoothing caps have been knocked off a board or have dried out or otherwise died, for instance, then you can end up with a card that's fine in one computer (with a strong PSU) but flaky in another (with a weak PSU).
It's also possible, but not likely, that the cards will damage a computer you put them in. I think the only remotely probable way in which this can happen is if a card's got a short from a power pin to ground, or possibly to something that's not ground but will still make Bad Things happen if +12V suddenly appears on it.
The usual way in which people manage to do this sort of thing isn't by plugging damaged cards into a slot, though - it's by jamming cards or memory modules into the wrong sort of slot, or into the right slot backwards. Not everybody twigs to the fact that a component that can only be "installed" with the help of a rubber mallet might not be in the right place.
Hardware that can kill other hardware certainly does exist, though. Look at this piece of mine from years ago, for instance: I had a computer die in such a way that its CPU killed the next motherboard I put it in.
There's also the "mechanical virus" situation, in which a damaged cable socket damages the cable you plug into it, which will then damage the next socket you try to plug it into, which will then damage any future plugs, et cetera. That ought to only happen with pin connectors, but some genius has probably found a way to do it to an edge connector as well.
I wouldn't be worried about putting pretty much any component from a dead computer into a new computer, though, even if I wasn't quite sure if that component still worked. This obviously doesn't apply if the component was emitting smoke and/or flame the last time you saw it powered up, but if it looks OK, I wouldn't be frightened about trying it.
(I remember I once absent-mindedly plugged a card into an Amiga 2000 while the computer was still powered up. Blew a nice little divot out of a chip on the card, that did. The A2000 was fine, though; it just rebooted.)
Your hefty Radeon X850 card was a high-end product in 2005, competing against the 6000-series GeForces (and generally winning, but not by miles). Your GeForce 8500 is a couple of years younger than the X850, but was cut down to 6000-series-like specs. So it depends on the game, but the X850 should be significantly, but not amazingly, faster for most tests. The 8500 does have a useful amount of extra memory, though, and should consume, oh, maybe 50 watts less power in 3D mode.
So the X850 isn't _useless_, but it's a lousy choice for 3D gaming today. It'd be fine in a computer that doesn't use 3D mode very often.
The TV tuner card, in contrast, is definitely still a useful product, if of course you actually have the appropriate kinds of digital TV available where you live. (I think it'd be perfectly useless to anyone here in Australia.)
The DDR2 RAM is probably fine, too. It's no big deal if it turns out to be flaky or dead, though, since DDR2 has been almost free for some time, now.
One of the computer rooms at our school is suffering intermittent power outages. That is, occasionally the circuit breaker for that room trips. There's no obvious reason for it, so I'm starting at stage 1 - determining what pattern exists, if any. To do that I need to find some way to log when the power goes off and back on again. The commercial systems available to do this are well out of the reach of our (my) small budget. So I'm looking for alternative methods. I was wondering if you had any suggestions.
One thought I had was to look into using a laptop and logging the change from AC to battery power. I know our laptops have that datum because the power icon will change to reflect the current power source. I'm just not sure how to get access to the event for logging purposes.
Another thought was to get an old UPS (APC Smart-UPS 1400) and to use it in conjunction with its monitoring app ("PowerChute", I think). However I don't have any laptops that have a serial port, and monitoring the UPS with a desktop PC may run afoul of the whole power problem we're trying to trace (the UPS batteries are pretty well useless).
Do you have any elegant (and low-cost) suggestions?
As you have already discovered, there are many ways to do this by spending lots of money. There are, for instance, dedicated hardware power monitors, which are just the ticket for whole medium-to-large organisations plagued by dodgy power, but not for one room.
Alarm panels can do it, too. An autodialer could even phone you in the middle of the night to tell you the power was out! (Or, less entertainingly, just leave a message on your voice-mail, or even send you an e-mail.)
Different UPS manufacturers have different software, but pretty much all of them should give you at least a simple text-file log of when the UPS was active. Better ones ought to give you proper logging in Event Viewer (Windows) or the system log (Mac).
I don't think Windows by itself logs battery/mains power switches by default, but people have come up with code to do it, which would only need a little extra coding to make it do the job you need done.
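If you want to roll your own version of that, the guts of it are pretty simple. Here's a sketch; the transition-spotting logic is plain Python, and the commented-out poll loop assumes the third-party "psutil" library (which isn't part of the standard library, and obviously only reports anything useful on a machine with a battery):

```python
# Turn a series of (timestamp, on_mains) readings into a list of the
# moments the power source changed. Names here are mine, not from any
# particular utility.
def find_transitions(samples):
    events = []
    previous = None
    for stamp, on_mains in samples:
        if previous is not None and on_mains != previous:
            events.append((stamp, "mains restored" if on_mains else "on battery"))
        previous = on_mains
    return events

# On the laptop itself, the poll loop might look something like:
#
# import time, datetime, psutil
# previous = None
# while True:
#     on_mains = psutil.sensors_battery().power_plugged
#     if previous is not None and on_mains != previous:
#         with open("power.log", "a") as f:
#             f.write("%s %s\n" % (datetime.datetime.now().isoformat(),
#                     "mains restored" if on_mains else "ON BATTERY"))
#     previous = on_mains
#     time.sleep(10)
```

Feed find_transitions a morning's worth of readings and it hands back just the interesting moments, which is all you need for spotting a pattern.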
And here's a standalone Mac utility that does what you want, if of course you've got an Apple laptop you can use.
If, like me, you're not up to proper programming but can bodge together a DOS batch file, you could just make something that appends a timestamp to the end of a log file every X seconds. All "log.bat" needs is a loop that waits for X seconds, then appends the date and time to a file:
@echo off
:loop
date /T >>log.txt
time /T >>log.txt
rem Old Windows has no standard "sleep" command; pinging localhost
rem 61 times takes about 60 seconds and does the same job.
ping -n 61 127.0.0.1 >nul
goto loop
Around and around it'll go, until the computer loses power and log.txt is left with the last date and time on the end of the file. If the power goes out when it's actually appending to the file then there might be a problem; you could perhaps avoid that by writing to a file on a mapped-to-a-letter network drive, rather than the local computer. Even if the file's munged, though, the last-updated date ought to tell you when it was last touched.
Simple batch-file logging like this won't be able to tell you when the power comes back on again, but if it's a breaker being tripped you know when the power comes back on, on account of how you're the one flipping the switch.
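Reading the evidence afterwards is even easier than collecting it. A sketch, assuming the log format the batch file above produces (the file name "log.txt" is just carried over from that example):

```python
# The last non-blank line of the log should be the time of the final
# successful append before the power died.
def last_timestamp(log_text):
    lines = [line.strip() for line in log_text.splitlines() if line.strip()]
    return lines[-1] if lines else None

# On the machine itself you'd pair this with the file's own
# last-modified time, as a cross-check in case the final append
# was mangled:
#
# import os
# with open("log.txt") as f:
#     print(last_timestamp(f.read()))
# print(os.path.getmtime("log.txt"))
```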
After I'd thought about this for a bit, though, I realised there's an easier way. Well, there is if you don't mind connecting a few $5 eBay items together, anyway.
There's a plugpack in a wall socket, see, on the circuit that keeps losing power. The output of that plugpack goes to a normally-open ("off") relay, which the plugpack voltage is holding closed ("on"). A relay is an electrically-actuated switch, the predecessor of the transistor; a relay's switching contacts can turn a high-powered load on or off, when you feed a small amount of power to the separate coil contacts.
(The plugpack you use would, of course, have to be of an appropriate voltage to hold the relay closed without setting the little solenoid coil inside on fire. Remember that old heavy linear plugpacks will, when lightly loaded by something tiny like a relay coil, output something like root-two times their rated voltage. Modern lightweight switchmode plugpacks will pretty much always give you the voltage on the sticker.)
This relay is not being used to switch a large load. Instead, it's passing power from an AA battery, which is running a cheap wall clock set to the correct time (note that a digital clock will... give unsatisfying results). When the power fails, the relay clicks open, and the clock stops. Voila! And no mains-voltage wiring needed, either.
(Or, if you prefer, when the power goes out, the fan turns off, and no longer blows the tissue-paper parachute up the hill, so the steel ball the parachute's attached to rolls down and touches the two nails at the bottom of the hill, completing the circuit between the old car battery and the sealed-beam headlamp that's pointing at the solar panel, and so on and so forth.)
Things get a bit more complicated if you have to monitor power cuts in a situation where the power goes out, then comes back on again. The simple relay-and-clock arrangement won't work for this, because if the power goes out at 1AM and comes back on at 3AM, all you'll see when you arrive is that the clock is two hours slow. That indicates two hours total without power, but gives you no way to tell when the power cut was, or whether there was more than one outage.
If you want to monitor multiple power cuts then there's really no way to do it without some proper data-acquisition setup. A fancy multimeter hooked up to a laptop would do, as would a few breadboarded components connected to the parallel port of an old laptop.
But if all you need to do is see when the power went out, even if it came back on before you returned, all you'd need is a bit of old-fashioned electromechanical logic that's almost easier to build than it is to describe.
What you'd want is one regular relay - which is in one "normal" state when its coil isn't energised, and another state when it is - and one "latching" relay, which (usually) has two coils.
Latching relays are like electrically-operated light switches. One coil turns the switch on, and the other turns it off. When neither coil is energised, a latching relay stays in whatever state you last switched it to; energising the same coil again, no matter how many times you do it, just leaves the relay in that state.
(The basic kind of latching relay has two sets of input contacts, one to turn it on and one to turn it off. There are also "polarised" latching relays, where you connect positive to terminal 1 and negative to terminal 2 to turn the relay on, and positive to terminal 2 and negative to terminal 1 to turn it off.)
So if you have a normally-closed relay that's held open while the mains is up - its coil energised, again, by a plugpack plugged into the mains - then you can use that relay to switch power to the "turn off" pins of a latching relay, which starts out turned on, and is again switching the power supply of a clock.
Mains goes out, normally-closed relay clicks back to its un-energised closed state, through which power is now supplied to the "off" coil in the latching relay, which goes click and cuts the power to the clock, stopping it. If the power comes back on, then the plugpack-connected relay will click back and stop supplying power to the latching relay's "turn off" pins, but that won't change the state of the latching relay; it's already off, and won't turn back on until someone applies power to its "turn on" pins.
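If you'd like to convince yourself that this electromechanical logic actually works before you solder anything, here's a toy simulation of the two-relay arrangement. The variable names are mine, not standard part designations:

```python
# Feed in a sequence of mains on/off states; return whether the clock
# is still running at the end. The normally-closed relay is held open
# while the mains is up; the latching relay starts "on", and the clock
# runs only while the latching relay is on.
def simulate(mains_states):
    latching_on = True
    for mains_up in mains_states:
        nc_relay_closed = not mains_up
        if nc_relay_closed and latching_on:
            # The closed NC relay powers the latching relay's "off"
            # coil, which clicks it off and kills the clock.
            latching_on = False
        # When the mains comes back and the NC relay opens again,
        # nothing re-energises the latching relay's "on" coil, so it
        # stays off - which is the whole point.
    return latching_on
```

simulate([True, False, True]) comes back False: one outage stops the clock, and the power returning doesn't restart it, so the stopped clock faithfully records the time of the first cut.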
There are a variety of Heath-Robinson/Rube-Goldberg ways to implement this same latching setup, some of which could actually be quite practical. Like, you again start with a normally-closed relay being held open by a mains-powered plugpack, but when the power goes out and the relay goes closed, it now allows power to flow from an old Lego battery pack to an old Lego motor, which turns a winch which physically yanks the AA battery out of the back of the same cheap wall clock. (And then, I think, should probably go on to yank out its own Lego power plug, so it doesn't keep flapping the battery around on the end of the string or straining at a jammed winch or whatever for the rest of the night.)
You could set up something much more capable than this with Lego Mindstorms gear, but again, that's not cheap. Lots of people, in contrast, have a bunch of random Lego including one #107 Motor Set. You can do some quite sophisticated things with simple motors, mechanical linkages and very inexpensive electronics, and even 30-year-old Technic Lego can be surprisingly useful.
Plus, you can't help but feel studly when you solve a real-world problem with Lego.
UPDATE: After this page went up, several readers came up with further suggestions. I'm disappointed by the sensibleness of these options, but I suppose they may have some value if you just want to solve the problem quickly, even though none of these ideas involve playing with Lego.
1: Get a Windows laptop (with a decent amount of battery life), plug it into a network switch that's powered by the unreliable electricity, and the Windows system log will contain timestamped entries for when the network connection goes down, because the switch isn't on any more, and when it comes back.
2: Just replace the batteries in the clapped-out UPS. With new batteries (official branded replacements, generic SLA bricks, or field-expedient upgrades), the standard PowerChute software running on a PC powered by the UPS will do the job.
3: The Windows Event Log may actually log the time of a power-loss event quite accurately - it just won't write that data to the log until the computer powers up again. (There's a Microsoft utility, inventively titled "Uptime", that should also help.) Most, if not all, PCs can also be told in the BIOS setup to turn themselves on when the power comes back on after an interruption. Look for an option called something like "Restore On AC Power Loss".
4: As more than one reader reminded me, the total power consumption of the computer room has probably crept higher and higher as the years went by and more powerful computers were installed. So the breaker could be tripping because it... works.
To figure out if this is the case, audit the room with one or another kind of power meter and/or measure consumption at the breaker box, and figure out if the total draw is close to the rating of the circuit breaker. If it is, then the breaker is just doing its job, and you need to run some of the computer room from a different, and probably newly installed, circuit.
If the total current flow is substantially less than the rating of the circuit breaker, then the breaker may just be defective; get an electrician to replace it with a new one of the same rating, and see if the problem goes away.
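The arithmetic for that audit is trivially simple, but worth writing down. A sketch, with placeholder wattages and the Australian 230V mains figure; substitute your own measured numbers:

```python
# Rough current draw of a roomful of devices, given their wattages.
def total_amps(watts_list, mains_volts=230.0):
    return sum(watts_list) / mains_volts

# e.g. twenty 300-watt PC-plus-monitor pairs:
# total_amps([300] * 20) works out to about 26 amps, which is enough
# to trouble a 20A breaker before you even add the air conditioner.
```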
(If it turns out to be a "safety switch" that's tripping, then that may indicate a real and dangerous problem, or may be happening just because it's normal for a tiny amount of current to leak from active to ground in all sorts of devices, and if you plug enough such devices into one circuit, they'll add up to enough for the safety switch to pop intermittently.)
I've heard that some organisations, like the military, use thermite to destroy disk drives that contain (or might contain) secret information.
I know thermite burns really hot, but is it actually hot enough to destroy everything in a modern hard drive? Could some bit of the disk survive, and be theoretically readable?
If you don't use enough thermite, then yes, bits of platter could survive and be, at least in theory, legible.
If you cover a whole drive with a layer of thermite only a couple of centimetres thick, though, "toast" is an inadequate word for what that drive is going to be.
Standard iron-oxide/aluminium thermite burns at around 2500°C. The stream of molten metal it creates is cooler, but not by a lot. These are high enough temperatures that you're no longer looking up the melting points of common substances, but instead checking their boiling points, in case they flash to vapour and cause an explosion.
Zinc, for instance, boils at only 907°C, and will not just burn but explode quite violently at thermite temperatures. Numerous compounds will decompose into their elements at these temperatures, like table salt and water (which is worse than useless for putting out a thermite fire...). Thermite temperatures are adequate to melt many types of stone - volcanic lava is almost never hotter than 1200°C.
The chassis and platters of most hard drives are made of aluminium alloys, whose boiling point (around 2470°C for pure aluminium) is right about the burning temperature of standard thermite. More and more drives now have glass platters; different glass formulations have different melting points - plain soda-lime window glass has the soda and lime additives specifically to reduce its melting point - but thermite, once again, burns hotter than the melting point of even pure silicon dioxide ("fused quartz").
The fibreglass, solder, epoxy and plastic components of a drive will of course be obliterated even faster. Any copper wiring may get close to its boiling point of about 2560°C, but probably won't quite get there.
Some of the components of the actual coating on the platters won't even melt - there's likely to be carbon in there, for instance, and that melts (or, actually, sublimes) at 3527°C. But the coating will be all mushed up with the boiled and bubbled other parts of the drive, and very conclusively thermally demagnetised.
(Magnetic materials are demagnetised if you heat them above their "Curie temperature". For hard-drive coatings and magnets, that temperature is likely to be below 1000°C, and definitely far below 2000°C.)
Ceramic drive platters may not melt, but thermal shock will shatter them into little bits, and their magnetic coating will be just as annihilated as the coating on a metal platter.