Atomic I/O letters column #23
Originally published in Atomic: Maximum Power Computing. Reprinted here July 2003. Last modified 16-Jan-2015.
FireWire beats SATA?
About a year ago, I upgraded a laptop HD and placed the old drive in an external FireWire (IEEE 1394, whatever) case; I use it to move data to and from the office. Using W2K and a couple of FireWire cards, this setup works beautifully and is truly hot-pluggable. An added bonus is that as 2.5" drives only require 5 volts, the drive is powered by the FireWire cable.
I have recently seen little "FireWire bridgeboards" that turn an ATA drive into a FireWire device, without needing the entire external case. I'm guessing that a CD writer connected in this fashion would have fewer "issues" that result in slow, or failed, writes, as occasionally occurs with the standard IDE connection.
I'm wondering if a decent 7200RPM ATA133 HD with a 2 - 8MB buffer and a FireWire bridgeboard might prove to be faster, neater and all around better than Serial ATA.
Are my assumptions totally incorrect and deserving only of scorn and ridicule?
Stuart
External FireWire drive boxes are neat gadgets, but FireWire-to-ATA hardware isn't ready for main storage use.
Answer:
FireWire conversion probably won't make a CD writer work any better. It's still running from an ATA interface, because that's all it has; the ATA data's just being translated on the fly to FireWire. If anything, that translation layer will give you more problems. Sure, the CD writer will get to be the only device on its ATA interface, but sharing the channel is unlikely to have been a source of problems in the first place, unless you've been doing some quite serious drive-flogging while you burn your CDs. And even if you are flogging your drives, or doing something else that interrupts the data flow to the CD writer badly enough that its buffer can't save it, modern CD writers can stop and resume writing (via "BURN-Proof" or "Seamless Link" or "JustLink" or whatever your drive manufacturer calls it) without a problem.
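If you'd like a feel for what that pause-and-resume trick actually does, here's a toy sketch of it in Python. Everything in it is invented for illustration - real drives do this in firmware, and the buffer size, write rate and feed rates below are just convenient round numbers - but the logic is the point: the laser keeps burning while the buffer holds out, and pauses instead of ruining the disc when it doesn't.

```python
# Toy model of buffer-underrun protection. Illustrative only; real
# drives implement this in firmware, and these numbers are made up.

BUFFER_SIZE = 2_000_000    # 2MB drive buffer, in bytes
WRITE_PER_TICK = 600_000   # ~40X writing: about 600KB per 100ms tick

def burn(feeds):
    """feeds: bytes delivered by the PC in each 100ms tick."""
    buffered = BUFFER_SIZE                     # start with a full buffer
    for tick, feed in enumerate(feeds):
        buffered = min(buffered + feed, BUFFER_SIZE)
        if buffered >= WRITE_PER_TICK:
            buffered -= WRITE_PER_TICK         # laser keeps burning
        else:
            # A drive without underrun protection would ruin the disc
            # here; an underrun-proof one just pauses, then resumes.
            print(f"tick {tick}: only {buffered} bytes buffered, pausing")

# The host keeps up for a while, then chokes for three ticks (heavy
# drive-flogging, say); the burn pauses instead of failing.
burn([600_000] * 5 + [0] * 3 + [600_000] * 3)
```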
Problems with CD writing, on current hardware, aren't actually very likely to have anything to do with the drive interface.
FireWire-converted ATA drives also can't be faster than the same drives on plain ATA, for two reasons.
First, there's the translation layer between IEEE-1394 and ATA. When all of the data has to be translated from one interface to another quite different one on the fly, it's not going to move any faster as a result. You might perhaps be able to see a small speed improvement with some really weird multi-drive simultaneous access stuff, but it'd depend on the bridge hardware, it'd only work for two or three drives at most, and it'd only be faster compared with those same few drives running two to a cable on ATA.
Then, there's bandwidth. The current theoretical peak bandwidth for FireWire is 400 megabits (not megabytes) per second; that's 50 million bytes per second, which is less than 48 real 1048576-byte megabytes per second (storage manufacturers continue to insist that a megabyte has one million bytes in it, because that makes their products look bigger and faster).
50 million bytes per second is quite unexciting compared with plain old Ultra DMA/66, let alone UDMA/100 or UDMA/133; UDMA/133 can theoretically move 133 million bytes per second (about 127 real megabytes). And then there's "150 megabyte per second" SATA, which is actually going to perform much the same as UDMA/133, if the data chain contains some parallel ATA componentry with SATA bridge hardware on it. Which, as I write this, it often still does.
Large ATA drives these days can manage sustained transfer rates of around 50 and 30 megabytes per second for reads and writes, respectively; peak bandwidth is never the same as actual user data bandwidth, but the large theoretical bandwidth advantage of the top-end ATA standards means that a regular two-connector UDMA/133 controller board or motherboard should be able to shift significantly more data than four 400Mbps FireWire connectors (each connector on a FireWire controller has its own channel).
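To put numbers on all of that, here's the arithmetic as a few lines of Python. These are the theoretical peak figures discussed above, not real-world throughput:

```python
# Peak interface bandwidth, in real (1048576-byte) megabytes per second.

MB = 1048576

firewire = 400_000_000 / 8       # 400 megabits/s = 50,000,000 bytes/s
print(firewire / MB)             # ~47.7MB/s per FireWire channel
print(4 * firewire / MB)         # ~190.7MB/s for four FireWire connectors

udma133 = 133_000_000            # "133MB/s", in decimal marketing-bytes
print(udma133 / MB)              # ~126.8MB/s on ONE UDMA/133 cable
print(2 * udma133 / MB)          # ~253.7MB/s for a two-connector controller
```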
In the real world, none of this is likely to matter; most people don't need super-fast disk storage. Just adding lots of RAM to a PC is a better solution for many tasks.
It should also be noted that if you want to boot a PC from a FireWire drive, you're still pretty much certain not to be able to do it. All you need is a FireWire controller which your motherboard BIOS can recognise as a bootable device, but such controllers are, as I write this, virtually unknown. Generally speaking, Macs can boot from FireWire; PCs can't.
Disc pics
After reading about Mt Rainier compatible drives in Atomic, I saved some dosh and bought a new CD-RW drive. It wasn't the Yamaha drive that Atomic reviewed; it's a LiteOn 48X-12X-48X drive with Mt Rainier support.
Now don't get me wrong, this new CDRW drive is a wonder, but other than forcing the issue with a Stanley knife and a small amount of creative spirit, how do I get it to write those pretty pictures on the spare bits of my CDs?
Mark
Answer:
You don't.
The feature you're thinking of is Yamaha's annoyingly named "DiscT@2", which can indeed burn patterns on the unused portions of a CD, but which isn't available in drives from any other manufacturer (yet). DiscT@2 and Mount Rainier support are two different things.
Lazy drive
I have a Gigabyte GA-8IHXP motherboard (Intel 850E chipset) and it won't cold boot properly. On a cold boot, it says that the Pri Master wasn't detected, and then looks to boot off something else. However, if I then press reset, it will detect the Pri Master and boot properly. Any ideas how I can get it to cold boot properly?
Taggart
Answer:
It sounds as if you've got a hard drive that takes too long to spin up. The motherboard will only wait so long before looking at its drives; drives that haven't spun up yet won't be detected.
Many motherboards have a BIOS option that lets you change the boot delay, to allow for drives that take longer than usual to show up. I've flicked through the GA-8IHXP's manual, though, and I couldn't find such an option. So, unless you want to trade in your hard drive (which doesn't necessarily have anything wrong with it, but which might have a lousy bearing or bad motor), you're going to have to put up with the double-start routine.
How long's a piece of string?
I have a P3 650MHz PC with 128Mb of RAM, an Intel 810 IGP and Win98SE, and I'm wondering whether I should upgrade or not and, if so, to what. And how long will it be before I need to upgrade again?
Andrew
Answer:
Well, gee, I don't know.
Is your computer too slow for what you want to do with it? Then upgrade. Do you keep running out of disk space? Is your hard drive flogging all day because you don't have enough physical RAM for the programs you run? Do you want to play new 3D games that want a bit more CPU power and a lot more 3D graphics speed than you've got? Then upgrade. If you don't, don't.
This is too open-ended a question; I can't give you a more concrete answer.
Tweaking it down
When I built my PC (P4 2.4B, MSI MAX2-BLR i845E mobo, Sapphire Radeon 9700 Pro, Maxtor 80Gb ATA133 7200RPM HDD), I ran 3DMark 2001, and got 12663. Not bad, but I expected more. I updated drivers, 600 point decrease. I tweaked around and got it to about 12150. I expected a lot more out of a system like this.
I asked around, got a lot of different answers, did a BIOS flash, but that did nothing. I then e-mailed the owner of the company from which I bought the parts, and he told me a few Control Panel tweaks (FSAA, vsync, etc...). Don't ask me how, but I ended up getting 10753!
I was shocked; I had no idea what was going on. Sandra 2002 says my "Video Card does not have an interrupt assigned", which the guy who sold me the parts said may mean my AGP slot's IRQ is also assigned to other tasks, which would slow things down. A friend told me to shift around my PCI cards, which I have yet to try. I also have a PCI modem and a Sound Blaster Live Platinum 5.1 sound card.
Manny
Answer:
As you say, 12663 3DMarks isn't a particularly awful result for this system, but a 600 point decrease is around 5%, and therefore unlikely to be just test variance.
The standard deviation of 3DMark 2001 results, it's worth mentioning at this point, is up around half of one per cent. This means that if you run the standard benchmark over and over on a computer that "deserves" a score of exactly 10000 3DMarks, you shouldn't be surprised to see results anywhere in the 9900 to 10100 range, but pretty much nothing should fall outside the 9850 to 10150 range.
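If you'd like that spread spelled out, here it is as back-of-an-envelope Python, using the usual rules of thumb that nearly all results land within two standard deviations of the true value, and practically everything within three:

```python
# 3DMark 2001 test variance, for a machine that "deserves" 10000.

true_score = 10000
sigma = true_score * 0.005       # half-a-per-cent standard deviation

print(true_score - 2 * sigma, true_score + 2 * sigma)   # 9900.0 10100.0
print(true_score - 3 * sigma, true_score + 3 * sigma)   # 9850.0 10150.0
```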
If you're wondering why the numbers vary, it's because the tests aren't exactly the same every time. You can see this quite easily - run the default 3DMark benchmark, and pay careful attention to the barrel, or barrels, bouncing around in the very first "Car Chase - Low Detail" test. Sometimes the truck hits a barrel after missiling the first flying bad guy, sometimes it doesn't. Sometimes the rolling barrel isn't even there for the truck to hit. In the next shot, one, two or no barrels can be rolling and bouncing around as the truck handbrake-turns into the next corner.
But all this should only account for around a couple of percentage points of difference, test-to-test, at most.
So where'd the 5% performance drop come from when you changed drivers?
Well, you don't say which driver version you changed from and to, but you shouldn't expect driver updates to give you better performance in any particular test. New drivers are often a bit faster (and then there's the whole, endless, cheating issue...), but an update might also have fixed rendering bugs that were making the card faster by letting it get things wrong. Failing to render some textures, for instance, or allowing things to show through other things when they shouldn't, or leaving cracks between polygons that're meant to be seamless, can make benchmarks run faster, but uglier, than they should. New drivers that fix such problems (or cheats...) can give lower performance numbers.
Your original score wasn't too bad, anyway. A 2.4GHz i845E P4 with a Radeon 9700 Pro, with everything at stock speeds, is unlikely to beat 14000 3DMarks. You managed about nine-tenths of that. A general rule of thumb is that you're not going to notice anything that makes a system less than 10% faster or slower for a given task, so you're probably only barely going to be able to pick the difference between a 12663-3DMark box and a 14000-3DMark one.
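As a quick sanity check on those numbers (more trivial Python):

```python
score, ceiling = 12663, 14000
print(score / ceiling)             # ~0.90 - about nine-tenths
print((ceiling - score) / score)   # ~0.106 - the faster box wins by only
                                   # a whisker more than the 10% rule of thumb
```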
What causes the difference between apparently identical systems?
Apart from test variance and video card drivers, there's motherboard drivers, BIOS settings, RAM speed, background tasks, and the numerous tweaks which, as you've discovered, will not necessarily do you any good at all.
As far as the AGP slot IRQ thing goes, if your graphics card's sharing an interrupt with some other card then you may see problems, but shuffling PCI cards should only make a difference if you've got ACPI turned off, and you've also got a PCI card in either or both of the first and second PCI slots. Non-ACPI IRQ sharing varies depending on the motherboard; with ACPI turned on, no PCI slot should have a hard IRQ assignment.
In any case, if your problem was related to IRQ sharing, and especially if your SBLive failed to play well with others - as many of them do - you'd probably be getting crashes, not just poor performance.
Check out the next reply for some more information about this.
Ape for APIC
I recently set up a system with an Abit AT7 Max mobo. The "APIC" feature in the BIOS caught my attention. After doing a little research, I found it's a standard for enabling the two IRQs needed for dual processor setups.
Rojak's BIOS Optimization Guide recommends enabling APIC on single processor systems, if you're using Win2K, XP or NT, for "faster & better handling of IRQs". I use WinXP Pro, and with this feature enabled I have IRQs available beyond the 16 standard ones - I have IRQs 0-21 assigned.
A system builder informed me that devices must be APIC compliant, or problems will develop. I know ACPI requires device compliance; is this true for APIC also? If not, I can't see a downside to having more IRQs available.
Kevin
Answer:
Yes, your Advanced Programmable Interrupt Controller (APIC) is a neat-o thing. As you say, it gives you a bunch more IRQs to play with, under NT-series Windows flavours, and also under other operating systems that can use it (not Win95, 98 or ME).
The big deal about APIC isn't so much that you get more IRQs, though, but that these IRQs are being handled by better hardware, not the ancient cascaded Programmable Interrupt Controllers that IBM compatibles have been using for a long, long time.
There's unlikely to be a downside to turning on APIC on systems that allow it. In Windows 2000, it lets the system spread IRQs out rather than piling them all up on the Advanced Configuration and Power Interface (ACPI) IRQ; it won't make anything better than it was, though, if you didn't have ACPI turned on already. WinXP spreads devices out as much as it can anyway, with or without APIC. Old hardware that doesn't share IRQs well may work better with APIC turned on - it does, at least, allow the system to share fewer IRQs - but I wouldn't bet on it. With somewhat recent, standards-compliant hardware, it should all work fine.
If your computer works perfectly well with APIC turned off, though, and you've only got one CPU, there's no reason to turn it on. You'll just give the system a big hardware-redetection conniption as everything, apparently, moves to a new place.
There's some technical info on APIC and why it's desirable here.
If that's not geeky enough for you, there's some info on Win2000 and WinXP's different IRQ-stacking behaviour, and what each strategy is likely to cause problems with, here.