Atomic I/O letters column #28

Originally published in Atomic: Maximum Power Computing. Reprinted here December 2003. Last modified 16-Jan-2015.
I am interested in buying a video card mainly for recording television/video. There are 2 cards that I am choosing from, the 64Mb All-In-Wonder Radeon 9000 Pro and the 128Mb All-In-Wonder Radeon 9700 Pro. I have read that both use the same Rage Theatre 200 chip and the same Philips tuner.
Therefore, my question to you is, will the uber-powerful 256 bit memory controller of the 9700 Pro offer better quality recordings compared to the lower 128 bit memory controller of the 9000 Pro?
Yes, those two cards do have the same tuner and video decoding hardware. So, for that matter, does the cheaper Radeon 7500-based All-In-Wonder VE.
The 9700's a much more powerful 3D card than the 9000 Pro, but its memory controller doesn't do a thing for video encoding.
The AIW 9700 Pro is, however, a slightly more capable video capture card, because the R300 core of the 9700 has a "Cobra MPEG-2 engine" which takes a bit of load off the CPU when you're encoding video. ATI claim it can reduce CPU load by as much as 25%, but in reality about 10% is the most you're likely to see, and certainly less than 20%.
No current processor has any trouble with even ten megabit per second MPEG-2 encoding in real time. If your CPU is faster than an original model P4 (or 1GHz Athlon), you'll be fine without the extra help.
Real "capture cards" (generally meant for video editing systems) have on-board encoders that do MPEG-something or Motion JPEG (which is easily editable - MPEG isn't), and don't require any CPU time to do it.
CPUs are so fast these days, though, that you just don't need this extra hardware for basic "digital VCR" purposes.
The R300 core also provides a "VideoSoap" function to reduce noise in low quality incoming video, which gives a better picture and a smaller MPEG file. This uses considerable CPU power as well, though, and it's no use if the video isn't actually noisy.
In the last year or so, we have seen the speed of CD burners crawl up from 8X to 52X, but the reading speed hasn't improved much, if at all! Why is this? Have companies given up going faster?
Personally I would like it to be higher, as most games and software are still on CDs and multiple CDs take forever to install! And Norton Ghost could run faster with a faster CD read speed, thus decreasing the time spent on reinstalling after a fatal crash.
Have companies called it quits at 52X speed? Is it impossible to go faster?
Yes, 52X is a bit of a limit. 52X CD-ROM readers have been around for a long time; recently, burners that can do the same speed in write mode (in theory at least, with good enough media and when the phase of the moon is favourable) have turned up too.
52X isn't a barrier like the speed of light; you could make a drive that worked faster, and someone probably will. But it's getting very difficult, because CD-ROM discs are not manufactured to the same tolerances as hard disk platters, and they're not as strong, either.
At a glance, a CD looks very nicely round and the hole in the middle looks very nicely centred. But it's not perfectly round, and there's probably stuff printed on it too, which can further spoil its centre of gravity.
If you start spinning such an imperfect disc really quickly, you're going to get vibration. Quality CD- and DVD-ROM drives have clamps that centre the disc very well, and many of them also have vibration damping gadgets of one kind or another. But there's only so much you can do.
It's also possible for discs to actually fly apart if you spin them too fast. Yes, even though they're made of super-tough polycarbonate.
Full "52X" rotational speed is fifty-two times the minimum rotational speed of the original "1X" CD-ROM drives, which spun at the same speed as audio CD players. 1X is 210 revolutions per minute; that's how fast an audio CD player spins when it's playing the very end of a completely full disc. It spins faster at the beginning of the disc (because CDs are recorded from the middle out and the data rate per unit length of track is constant), but the "X" figures are all multiples of the minimum 1X speed, because that makes the numbers more impressive.
Modern CD-ROM drives don't use the old drives' Constant Linear Velocity (CLV) variable-speed system; they stick to Constant Angular Velocity (CAV), and spin at the same RPM no matter what part of the disc they're dealing with. CD writers may vary their speed if they're writing at a speed below their maximum mechanical capacity, so as to maintain much the same data rate over the whole disc, but they don't necessarily.
Anyway, 52X is 10920 RPM. Which is bloody fast. A CD has a circumference of 377mm; 10920 times 377 millimetres per minute equals 247 kilometres per hour, around twice the edge speed of a circular saw. At this speed, a weakness in a disc can, at a random moment, produce a loud noise and a drive full of CD fragments. Going faster only makes vibration and exploding-CD problems worse.
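The numbers above are easy to check for yourself. Here's the arithmetic as a quick Python sketch, assuming the standard 120mm disc diameter and the 210 RPM 1X figure:

```python
# Back-of-the-envelope check of the 52X figures: rotational speed
# and the linear speed of the disc's outer edge.
import math

one_x_rpm = 210              # minimum audio-CD rotational speed (1X)
rpm_52x = 52 * one_x_rpm
print(rpm_52x)               # 10920 RPM

diameter_m = 0.120                        # a CD is 120mm across
circumference_m = math.pi * diameter_m    # about 0.377 metres
edge_speed_kmh = rpm_52x * circumference_m * 60 / 1000
print(round(edge_speed_kmh))              # about 247 km/h
```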
We're already at the point where some drives default to a reduced speed "quiet" mode; you have to power them up with a button held down to get them to run at the full sticker speed.
I own an older server case with 300W redundant Power Supply Units, and I wonder if it is possible to use it with a new Pentium 4 mobo? There is a spare P2 AUX plug. Can I use it with a proper adapter?
Yes, provided the PSU has an ATX plug for its main output. The four pin ATX12V "P4 connector" on most motherboards today is just for extra 12 volt power; you can buy inexpensive adapters that let you use a standard "Molex" drive power plug for that.
In Atomic issue 23 there was an article about P4 hyper-threading. In that article it stated that all P4's had hyper-threading built into them, so does that mean my Willamette core P4 1.5GHz with 256KB of cache can be hyper-threaded too?
The only P4s with HT are the 800MHz FSB chips (the "C" models), and the 3.066GHz 533MHz FSB P4, which was the flagship chip before the 800MHz bus P4s arrived. No other P4s have HT.
HT isn't worth getting tremendously excited about, anyway. It generally provides a small speed boost, provided your operating system and motherboard are able to turn it on, which is not a problem for Win2000/XP and pretty much any current Socket 478 board. But a P4 with HT is not the same as a real dual-processor system. Both HT pseudo-processors are competing for the same resources from the cache level on down, which severely bottlenecks their performance.
I am a Telstra dial-up customer. I was looking at my dial-up connection and noticed the compression setting. This got me wondering: I may be doing them a favour by compressing my data, but do they compress the data they send to me? According to my Dial-Up status dialog, no, which means I chew through my available megabytes quicker than I would if they did. Also, wouldn't this apply to broadband customers too? Why don't Telstra, and for that matter other ISPs, use compression on their end?
There are two kinds of compression that can be used on the data being sent over a dial-up Internet connection. That data may already be compressed, if you're downloading Zip files or JPEG images or some other compressed file format; in that case, the other compression won't do any good.
Anyway, the first kind of compression is v.42bis, done between your modem and the modem at the ISP's end of the line. In theory, v.42bis can manage 4:1 compression; in reality, it'll deliver about 3:1 on highly compressible data like text and HTML files. That gives a data stream that fits quite nicely into 115,200 bits per second, so that's the speed a serial-interface modem's serial port should be set to.
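That 115,200 figure isn't arbitrary: the serial port has to carry the decompressed data stream, so its rate needs to comfortably exceed the line rate times the realistic compression ratio. A quick check, assuming a 33.6kbps line and the roughly 3:1 real-world v.42bis ratio mentioned above:

```python
# Why 115,200 bps is the right DTE (serial port) speed for a dial-up
# modem: the modem-to-PC link carries the *decompressed* data.
line_rate_bps = 33600          # assumed line speed for this example
compression_ratio = 3          # realistic v.42bis on compressible data
decompressed_bps = line_rate_bps * compression_ratio
print(decompressed_bps)                  # 100800
print(decompressed_bps <= 115200)        # True - fits with room to spare
```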
And then there's IP header compression, also done between you and your ISP. It reduces the size of IP packet headers, and can help considerably when you're trying to do interactive Internet activities, including games, over a modem connection; it practically eliminates latency caused by packet headers. IP header compression has no significant impact on downloads, only on the snappiness of your interactions with servers.
Both of these forms of compression operate between you and the ISP, though, not from you through the ISP to another server. The ISP has to get the same amount of data from the Internet, and pass it on to you, whether you use compression or not.
Compression certainly can help conserve download allowance, if you're downloading files from somewhere and you get, say, the 16-megabyte zipped version of a Web server's logs instead of the 200-megabyte unzipped version. But the kinds of compression you're talking about just make your connection a bit faster. They make no difference to the amount of data that passes through your ISP.
I am aware that hard drive transfer rates are greater towards the outer edge of the platter. I would like to know if I can improve hard drive performance by placing my data towards the outer edge. The best way may be to create 2 partitions, and store my data on the 2nd partition. What do you think?
That depends on what you mean by "performance".
Hard drives spin at a constant speed and record data at a constant density (to a first approximation), so their outer tracks are faster, at least as far as sustained transfer rate goes, because more data per second passes under the heads. Sustained transfer rate isn't very important for most PC tasks, though, because the drive doesn't often need to read single contiguous enormous files. It doesn't need to do that when it's accessing the swap file, either; the swap file should, ideally, be contiguous, but the computer never reads or writes its way through the whole thing. It just moves chunks of data in and out of swap as necessary.
Cheap commodity hard drives these days all have very fast sustained transfer speed, thanks to their huge capacity and the resulting enormous data density on the platters. Seek speed now matters more. The faster the drive can move its head assembly from one spot to another, and the sooner the spot it's looking for spins around under the head (rotational latency), the sooner it can start using that very fast transfer rate.
The further the heads have to move, the longer the seek time will be. For this reason, putting the swap file, or anything else you access a lot, on the edge of the platters can actually be counter-productive, since you're guaranteeing that every time you access swap, you'll be seeking all the way to the edge of the disk. A swap file about two thirds of the way between the centre of the platter and the edge (which means half of the drive's capacity will be on one side of it and half will be on the other) is, on average, closer to everything.
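You can sketch where that half-capacity point sits. This is only an illustration, assuming zoned recording (each track's capacity proportional to its radius) and a hypothetical inner data radius of 40% of the outer radius; real drives vary:

```python
# Where is the capacity midpoint of a platter?  Capacity from the inner
# data radius ri out to radius r is proportional to r**2 - ri**2, so the
# half-capacity radius solves r**2 - ri**2 = (ro**2 - ri**2) / 2.
import math

ri = 0.40   # inner data radius, as a fraction of the outer - an assumption
ro = 1.00
r_half = math.sqrt((ri**2 + ro**2) / 2)
print(round(r_half, 2))   # ~0.76 - roughly two thirds to three quarters
                          # of the way from the centre to the edge
```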
Of all the computers I have built of late, this must be the worst. AMD Barton 2800+ on an Epox 8K9A2+, with Sapphire Radeon 9700 Pro video. After BIOS upgrades, new drivers all round, tweaking BIOS, playing with advanced settings, and communicating with ATI, Sapphire, Epox and AMD, I still get this "the driver for the display device was unable to complete a drawing operation" error on boot, still have corrupt graphics, and still have system hangs (or auto reboots) when I try to play 3D games.
New ATI and Sapphire drivers actually make it worse, even though they report a "9700 Pro" instead of a "9700 series" video card. ATI gave a long list of ideas, all of which I had already tried. The problems under Win98 aren't as bad as under WinXP, but I would rather run the newer OS I have paid for. Could you please, please (on bent knee) think of something?
Welcome to the "loop error"! Isn't it fun?
There are lots of possible causes of this sort of error, which can happen on any flavour of Windows but which usually, if not always, manifests in the way you describe on WinXP machines (current ATI drivers may fail more elegantly, but they still fail). It can be caused by faulty software, faulty hardware, over-enthusiastic BIOS settings, or a lousy power supply unit (PSU).
It's a generally good idea to have a spare PSU on your shelf all the time, so you might as well get a decent one, swap it in, and see if the problem goes away. It's a pretty painless piece of shotgun debugging. If it doesn't help, then you still get a spare PSU out of the deal, which is a handy thing to have. PSUs in important computers always explode at five in the afternoon on Saturday.
There are plenty of other things that can cause the problem, though. This piece lists them all.
[Peter got back to me, though. As it turned out, in this case, it was indeed the PSU at fault!]
I have recently built my new PC, the one I'm writing this with.
It has an Athlon XP 2200+, 256Mb DDR333 RAM and a GA-7VAXP mobo. To my understanding, all XP 2200+s have the Thoroughbred core, but also to my understanding all Thoroughbreds have a FSB of 166MHz. But WCPUID says my FSB is only 133.95MHz. Is there something wrong here?
Also, my RAM is running at 267.91MHz; isn't DDR333 meant to run at 333MHz?
Yes, all Athlon XP 2200+s are Thoroughbred-core CPUs, but they come in both 133 and 166MHz versions. You've just got a Thoroughbred-core 133MHz FSB XP 2200+.
This explains why your RAM's running slower than its rated speed. Many motherboards will let you run it faster than your FSB speed, and it should be perfectly happy at 333MHz (after DDR doubling), but you'll see no real speed improvement if you do that and leave your FSB at 133MHz.
Fortunately, the difference between 133 and 166MHz FSB, with the same core clock speed, is slight. Your computer is also very likely to be able to manage a 10% overclock (to about 147MHz FSB), and since the CPU's multiplier is locked, that lifts the core clock by 10% as well. Do that, and you'll have achieved more of a performance boost than you'd get by swapping to a 166MHz-FSB XP 2200+ and running it at stock speed.
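The numbers, for the record: the XP 2200+ runs at 1.8GHz, which is a locked 13.5X multiplier on the nominal 133.3MHz bus. A quick sketch of what a 10% FSB overclock does:

```python
# Athlon XP 2200+ clock arithmetic.  The multiplier is locked, so
# raising the FSB raises the core clock in proportion.
multiplier = 13.5
fsb_mhz = 133.3
print(round(multiplier * fsb_mhz))          # ~1800 MHz at stock

fsb_oc = fsb_mhz * 1.10                     # about 147 MHz FSB
print(round(multiplier * fsb_oc))           # ~1980 MHz overclocked
```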