Atomic I/O letters column #60
Originally published in Atomic: Maximum Power Computing. Reprinted here August 2006.
Last modified 03-Dec-2011.
Eight 74Gb Raptors, an eight-channel RAID adapter, and an appropriate case.
Would you notice any difference in performance between this and RAID 0? What would be the potential pitfalls of such a setup? Would you be able to use a standard power supply?
Please don't shatter my dreams.
Eight 10,000RPM drives is an entirely unremarkable RAID array, by enterprise storage standards. And yes, you can get some pretty impressive speed gains from such an array, provided the controller card(s) are on a fast enough bus. Power supply isn't a problem for a medium-sized bank of modern drives running from a chunky consumer PSU, unless they all try to spin up at once on power-up. Even quite basic drive controllers can prevent that from happening, these days - or you can get a PSU that takes the problem into account.
For desktop computer purposes, though, the speed gains from a monster RAID aren't a big deal. Yes, desktop machines can sometimes want to shift bus-saturating amounts of data, perhaps when you're starting a big application and definitely when you're editing low-compression video. But most people's computers won't do anything important significantly faster if their storage subsystem is vastly upgraded.
The big advantages of RAID for the enterprise are fault tolerance and multi-user access speed.
Fault tolerance is important. When a drive in a proper enterprise RAID array (not a RAID 0 or JBOD array with no redundancy) fails, it's either automatically replaced by a "hot spare", or you yank it out and replace it with a fresh "cold" drive. Either way, the array keeps working, the company loses no money and the users notice no difference, unless you rebuild the array onto the spare disk while people are trying to access it, in which case things may get slow for a while.
Multi-user access speed is the big everyday advantage of RAID. A single consumer hard drive may have enough capacity to hold all of the data files for a company, but it'll flog itself to death if 100 people are simultaneously asking it to save and load data. It's only got one set of heads and can only access one side of one platter at a time, so even with a ton of caching it'll be thrashing away like the swap disk for a 16Mb Win95 machine, and people will be waiting a surprisingly long time to access even small files.
RAID arrays can read or write to as many platter-sides at once as there are disks in the array, so they handle multi-user access much better.
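The single-drive-versus-array difference shows up in simple utilisation arithmetic. A rough sketch, with assumed numbers (about 8ms per random access, giving roughly 125 random operations per second per disk, and 100 users each issuing two small requests per second):

```python
# Offered load versus service capacity, single spindle vs eight.
# All numbers are illustrative assumptions.
ACCESS_MS = 8.0                          # assumed seek + rotational latency
iops_per_disk = 1000.0 / ACCESS_MS       # ~125 random ops/second per disk

USERS = 100
REQS_PER_USER = 2                        # assumed small requests per second
demand = USERS * REQS_PER_USER           # 200 requests/second offered

single_util = demand / iops_per_disk           # 1.6: overloaded, queue grows
array_util = demand / (8 * iops_per_disk)      # 0.2: comfortable

print(single_util, array_util)
```

A utilisation above 1.0 means the drive physically can't keep up and requests pile up without bound, which is exactly the thrashing-swap-disk behaviour described above.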
I recently purchased a Matrox quad-head graphics card, in PCI format. I can run 5 monitors - 4 from the Matrox card and another connected to my AGP card.
Is it possible to SLI two quad-heads in order to run a total of 9 monitors?
SLI (in its old 3dfx Scan Line Interleave version or its modern Nvidia Scalable Link Interface flavour, which latter is not entirely dissimilar from ATI's un-acronymed CrossFire) means running two cards in parallel to drive only one display. But yes, you can run a whole bunch of monitors by adding another Matrox card.
Matrox have settled into a niche as the kings of super-multi-monitor PC computing. Their basic PCI (and, now, PCIe x1) quad-head cards are great value for what you get, although they aren't of course any use for 3D gaming. There can also be problems with some common monitor-wall tasks, like playing video stretched over a three-by-three block of monitors. But for plain old desktop expansion, including displaying the full nose-to-tail instrument suite of an E-3 Sentry AWACS plane - no worries.
Matrox have managed to work around the ten-monitor limit of NT-series Windows flavours (NT, 2000, XP...), but using that many monitors is still pushing the envelope. Just shuffling PCI slots won't necessarily be enough to solve problems with such a system, even if the main graphics card is a Matrox as well.
I bought a 3.5" hard drive enclosure and a 200Gb Maxtor hard drive. I followed all the instructions to fit the drive into the enclosure, plugged it into my PC (which is running Windows XP) and switched it on.
The drive apparently started and the power light came on, but the computer doesn't seem to know there is anything new attached to it. No-one I know has enough knowledge to tell me what I am doing wrong. It's probably that the disk is not formatted, but since the computer doesn't recognise it I can't see how I would format it.
Someone has suggested I need to set the jumpers differently, and said something about needing to set it as a Master or a Slave. The jumpers I think they are referring to have a little diagram which says "No Jumper = DS (Slave)" and then shows a jumper fitted to the second row of connectors which the diagram labels as "CS Enabled." The end row of connectors, labelled on the diagram as "DS (Master)" has no jumper. I have no idea what any of this means.
Should I move the jumper from where it is to the "DS (Master)" connections? Am I likely to damage anything if the jumpers are wrong? And if this allows the computer to recognise the disk, how do I format it?
Unpartitioned, unformatted drives won't show up anywhere except in Disk Management (right-click My Computer, select Manage, click Disk Management). The partitioning and formatting interface there is very simple: find the obvious new empty disk, right-click it, partition it, and say OK to pretty much everything. The days of DOS floppies and FDISK and all that pain are long gone.
The box probably won't work if the drive isn't set to Master. As you say, drives often come set some other way, but there'll always be a jumper on the pins that you can move to the right setting.
"CS", by the way, means Cable Select, where the cable wiring determines the master and slave status. It's not often used.
Wrong jumper settings won't hurt anything. It just won't work.
A while back, some mates and I tried what seemed a simple mod to get power to an external drive case from an internal PSU.
The case demanded both 12V and 5V from its plugpack. We sourced some 3 pin mini-DIN male connectors, some Molex female plugs, drilled out some PCI slot covers and fitted them with grommets, and made our own cables, to pipe the 12 and 5V rails from the PC out of the case. It looked pretty slick; see the enclosed JPG.
The weird thing is, it just didn't work. The drives aren't happy when connected to the internal PSUs of several machines via the modded cable. They're getting power, sound like they're trying to spin up, but will then make some ugly sounds, like a continual "meep/meep/meep", or a tick-tick-tick, possibly with the read-write light coming on at the same time. They don't mount. Unplug, attach to the plugpack, and away they go.
Reminded me of the symptoms you described in the DriveDock review.
I've since gone with a slim case whose power adapter supplies 12V @ 3A only, no 5V. Before I waste my time sourcing another power plug and attempting another mod, is there something fundamentally wrong with the idea? Surely an ATX PSU can supply 3A over each rail? Or am I missing something?
Indeed, it sounds as if the drives aren't getting enough juice, on one or both rails. Higher-RPM drives need more power to spin and seek, and all drives draw more power when they're spinning up. They assume they've got basically unlimited input current, and if they're being fed through a long skinny wire (as in the case of your adapter), they may not make it. The adapter you made would probably run an already-spun-up drive just fine. Unfortunately, hard drives don't come with crank handles.
If I were you, I'd try thicker wire. I suggest medium duty figure-8 speaker cable. That's cheap, should be beefy enough, and should also have robust enough insulation that you won't pop a PSU fuse if you slam the side of the case on the cable.
An online shopping site doesn't like my main computer. I can't buy anything. As soon as I try to add something to the shopping cart, it tells me my session has expired.
My computer at work accesses this site just fine. And stranger yet, my wife's laptop here at home does too!
They're all running fully patched Windows XP. What the heck is going on?
Believe it or not, this can - and in Jamie's case, did - happen because a computer's set to the wrong time zone.
If you set your time zone wrong, your computer will perfectly happily tell you that it's, say, one in the afternoon, the same as a correctly set computer next to it. But the two machines will have quite different absolute Greenwich Mean Time values.
(A wrong-time-zone computer will get the time wrong whenever it synchronises with an Internet time server, but in further correspondence Jamie told me that he has dial-up Internet access and so his computer doesn't get to sync very often. Some users, of course, just keep setting the time back again when it mysteriously "goes wrong".)
The GMT value is what's used for things like cookie expiry. If a site uses a cookie-based shopping cart, and it sets its cookies to last for, say, six hours, and your computer believes it's in a time zone where one in the afternoon is more than six hours later than it actually is, the cookie will expire instantly.
Result: Permanently-expired sessions.
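You can see the mechanism with a few lines of date arithmetic. A sketch of the scenario, with assumed offsets: both machines' wall clocks read one in the afternoon, the real zone is UTC+10 (eastern Australia), but the broken machine is set to UTC-6, so the UTC value it derives is hours ahead of reality:

```python
# Same wall-clock time, different believed UTC, instant cookie expiry.
from datetime import datetime, timedelta

wall = datetime(2006, 8, 1, 13, 0)           # 13:00 on both machines

real_utc = wall - timedelta(hours=10)        # true UTC: 03:00
believed_utc = wall + timedelta(hours=6)     # broken machine's UTC: 19:00

# The site sets a six-hour cookie against true (server) UTC:
cookie_expires = real_utc + timedelta(hours=6)   # 09:00 UTC

# The broken machine checks expiry against its own wrong UTC:
print(believed_utc > cookie_expires)  # True: cookie looks already dead
```

The correctly-zoned machine compares 03:00 against a 09:00 expiry and keeps the cookie; the broken one thinks it's 19:00 and throws the session away the instant it's created.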
This problem doesn't come up much for Australian users, since most of our population is at GMT +9:30 at least, and there's not a lot further to go. But it's still possible.