Atomic I/O letters column #16
Originally published in Atomic: Maximum Power Computing
Reprinted here 26-Dec-2002.
Last modified 16-Jan-2015.
After saving for many months I finally get enough together to buy a new beast. I get to the local PC shop dreaming of my Athlon XP CPU. So I'm placing my order and we get to the RAM - "512Mb DDR333 please". The guy behind the counter looks at me blankly and says he doesn't have any, and mumbles that it would be pointless because of the "von Neumann bottleneck", before swiftly trying to talk me into DDR266, of which he has plenty. I cancelled my order.
What is the von Neumann bottleneck? And is DDR333 pointless?
All personal computers and most other computing devices use the "von Neumann architecture", with separate storage and processing components connected by a data bus, and with programs and data sharing the same memory.
The "von Neumann bottleneck" is what happens when processor speed outstrips the ability of the storage subsystem to supply the processor with data, and/or accept its output. This is a common problem for von Neumann machines, and it's why PCs contain so many caches, to smooth out throughput spikes and hold often-used data. The main reason why supercomputers cost so much is that they have very fast memory and very fat pipes between that memory and their processors; the processors themselves are not necessarily all that speedy.
Running 333MHz ("PC2700") DDR memory in an Athlon box is pretty much pointless, if the CPU bus is only 266MHz. Both of these speeds, by the way, are after the clock-doubling that's done by the Double Data Rate memory design, and by the Athlon's EV6 CPU bus. The FSB you'll see in the BIOS display of a 266MHz-bus Athlon is the pre-doubling 133MHz.
But this isn't a von Neumann bottleneck; it's just a simple bus speed mismatch. 333MHz memory doesn't improve anything if it has to talk to the CPU through a 266MHz interface. The computer's subject to the von Neumann bottleneck anyway; the CPU's Level 1 and Level 2 caches wouldn't be necessary if it wasn't. But raising the RAM bus speed doesn't make that bottleneck any worse.
Now, on a system that lets you independently set processor and memory bus speeds (which is the sort of system you must have, if you can run a 266/333MHz split at all), you can goose up the processor FSB until it's closer to the memory speed, provided your CPU can take it. On a motherboard that locks the processor and memory bus speeds together, 333MHz-rated DDR RAM just means that the RAM shouldn't be the limit for your overclocking.
Going from 133 to 166MHz CPU FSB (pre-doubling) is a 25% overclock, which is large, but far from unheard-of. All of the really nutty Socket A overclocking stars these days unlock and reduce their CPU multiplier, so they can manage a stupendous FSB; 50% higher FSB speeds are a weedy overclock by their standards. They either run their RAM well above PC2700 spec, or their memory bus is slower than their CPU one, which means their super-fast CPU genuinely is more starved for memory data than usual.
What all this means is that 333MHz-rated DDR memory is a perfectly sensible thing to buy, but only if you intend to run a similarly high CPU FSB. It's wasted if your FSB is lower than the RAM bus speed.
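If you want to sanity-check those figures, the arithmetic is simple. Here's a rough sketch (the true base clocks are actually 133.3 and 166.7MHz, which is where the tidy round numbers in the PC2100/PC2700 names come from):

```python
# Back-of-envelope sketch of the numbers above. DDR transfers data on
# both clock edges, so the effective rate is double the base clock;
# the Athlon's EV6 bus does the same trick.

BUS_WIDTH_BYTES = 8  # 64-bit memory bus

def effective_mhz(base_mhz):
    """Double Data Rate: two transfers per clock cycle."""
    return base_mhz * 2

def bandwidth_mb_s(base_mhz):
    """Peak theoretical bandwidth in megabytes per second."""
    return effective_mhz(base_mhz) * BUS_WIDTH_BYTES

# PC2100 (DDR266): 133MHz base clock
print(effective_mhz(133))    # 266
print(bandwidth_mb_s(133))   # 2128 - rounded to "2100" in the name

# PC2700 (DDR333): 166MHz base clock
print(effective_mhz(166))    # 332
print(bandwidth_mb_s(166))   # 2656 - marketed as "2700"

# The 133 to 166MHz FSB jump is the 25% overclock mentioned above
print(round((166 - 133) / 133 * 100))  # 25
```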
It would be nice and helpful for users and amateur tweakers if someone published a detailed Computer Acronyms Dictionary.
Well, I could try what you suggest, but there are a lot of computer acronyms. Like, thousands. Fortunately, there are some excellent Web sites that help clear things up. I suggest Acronym Finder, for all sorts of acronyms; The PC Webopedia, for general computing definitions; and FOLDOC, for the good old Jargon File, plus more.
Why is it that, after saving a few images in IE (right click -> "Save Picture As"), JPGs eventually start being restricted to being saved as "untitled" BMPs? It seems to happen across different Windows OSes and IE versions.
This is a bug that happens when the IE temp files directory is full. The same bug also makes it impossible to view the source of a Web page from IE.
Try deleting your Temporary Internet Files. In IE, go to the Tools menu -> Internet Options -> General tab -> Delete Files button -> OK button.
Apparently, corrupt files in the Downloaded Program Files directory can also cause this problem; see Microsoft's Knowledge Base page on the subject.
Please find attached a screen dump showing conflicting page file sizes. I thought I understood virtual memory, but after seeing this on screen I realise that I don't have a clue.
Why does System Info show a different page file size to that which I've set?
The "Total paging file size for all drives" number is the minimum size of the page file (or files, in total). If the maximum size for one or more files is not the same as the minimum, then the page file can grow as needed, and won't necessarily be sized back down for some time after that. But the total size display will always show the minimum.
You should allow the page file to grow, if you've got the disk space; the alternative is out-of-memory errors.
I have a Microsoft Sidewinder force feedback steering wheel with no Windows XP drivers, so this once great steering wheel will not perform properly under XP.
I have checked Guillemot and Logitech's Web sites and they have written drivers for XP for their old steering wheels; why won't Microsoft support its own products? I know Microsoft sold a lot of these wheels.
By the way, this is the second time that a Microsoft product has let me down. I had a Microsoft joystick that wouldn't work after I upgraded my computer to a 133MHz bus. Another throwaway Microsoft product.
Drivers for the old Sidewinder wheel are built into Windows XP, though apparently you can't run the button-assigning software, which might be what you're talking about. XP has similar basic drivers for all of the other old Microsoft gameport sticks and wheels.
It may be impossible to make any of the old gameport devices work on your XP box, though. The same goes for gameport controllers from other companies.
Gameports are a nasty old interface that often doesn't work right on modern high speed PCs. Late-model gameports and controllers have a digital interface grafted on top of the old analogue one, but they still fall victim to timing problems when you change hardware, or even just change your operating system. USB game controllers don't have this problem - but, in case you're wondering, it seems that Microsoft's software support for their older USB game controllers is little better than their support for their gameport devices.
I'm told, however, that the v4.0 driver software that shipped with the USB force feedback wheels works fine in XP, and fixes the shortcomings of the standard XP driver.
Can you please tell me how I can software RAID-0 my hard drive using XP Pro? Is there a program built into the operating system itself that allows you to do this?
WinXP Pro doesn't have what you'd call a robust software RAID implementation (Microsoft only give you that with the expensive Server versions of their NT-series OSes...), but stripe-sets, it can do.
Go to Control Panel -> Administrative Tools -> Computer Management, press F1 for help in Microsoft Management Console, and do a search for "striped". It's pretty straightforward.
You will, of course, need more than one hard drive, if you want a striped volume to achieve anything that an unstriped one wouldn't.
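If you'd rather use the command line than the Computer Management snap-in, XP Pro's diskpart utility can build the same thing. This is a sketch only - the disk numbers and drive letter are examples, and note that striping requires converting the disks to dynamic, which is effectively a one-way trip:

```
diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> convert dynamic
DISKPART> select disk 2
DISKPART> convert dynamic
DISKPART> create volume stripe disk=1,2
DISKPART> assign letter=S
DISKPART> exit
```

Format the new volume and you're away. Remember that RAID-0 has no redundancy; one dead drive kills the whole volume.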
I currently have a Logitech MouseMan Dual Optical, which can operate in either USB or PS/2 mode. I was wondering which port would be better for playing games.
I have been told that with PS/2 corded devices, the reports per second (RPS) figure is 200; USB is static at 128. Could you help me decide whether USB or PS/2 is better?
You can wind up a PS/2 mouse's sample rate to a huge number with software or registry tweaks; it won't work that fast by default. USB sample rate is, indeed, fixed.
Frankly, though, I doubt you'll be able to perceive any difference between sample rates above 100Hz. I can just believe that the very finest ninja twitch gamers - the ones that make a living playing games - can derive a real benefit from mouse sample rates far above their frame rate, and very far above their monitor refresh rate. But anything above 100Hz should be more than enough for pretty much everyone.
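For perspective, here's the gap between mouse reports at various rates, next to a typical monitor refresh interval. The 125Hz figure is the usual USB HID polling default (the "128" in your letter is presumably a version of that number), but treat the exact defaults as assumptions; they vary with OS and driver:

```python
# Time between successive reports/frames, in milliseconds.
# The rates below are illustrative; real defaults depend on OS and driver.
def interval_ms(hz):
    return 1000 / hz

for label, hz in [("PS/2 at 100Hz", 100),
                  ("USB HID polling (125Hz)", 125),
                  ("Tweaked PS/2 (200Hz)", 200),
                  ("85Hz monitor refresh", 85)]:
    print(f"{label}: {interval_ms(hz):.1f}ms")
```

At 200Hz the mouse is reporting more than twice per displayed frame on an 85Hz monitor, which is why the difference is so hard to perceive.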
I've been wondering for a long time now - does burning at higher speeds actually reduce the quality and life expectancy of the data on the CD? When I first bought my burner, I burnt at the full 6X (w00t what a super speed). Those few CDs are now either dead or take ages to read. They spin round and round making fan noises until they're either rejected, or accepted very very slowly.
Does it matter what speed we burn at? And also what speed we read the CD at if we're copying from CD to CD?
The speed you burn at can matter. But there are other variables to consider, mainly having to do with the particular CD writer and media you're using.
In the olden days when CD writers were very expensive oddities, there were all sorts of eldritch problems with particular burners and particular media. Various combinations screwed up in nasty ways when writing data, audio, or both.
Today, an ordinary cheapo burner will probably work fine at a decent speed on ordinary cheapo media. But it's still easy to find exceptions to that rule.
Generally, you shouldn't expect a cheap high speed CD writer to be able to write good discs at top speed if you feed it cheap media. Even pricey gold-standard Plextor burners aren't likely to produce consistent results at top speed on the cheapest and nastiest spindle-discs. But a top class writer should be able to manage top speed on pretty cheap discs; you shouldn't need to only buy big-brand CD-Rs, as used to be the rule.
Deterioration of discs over time is partly related to the original burn quality - if the data wasn't written very well in the first place, then a small loss of integrity can be enough to cause errors. But deterioration is mainly to do with how the disc's been treated, and, to a lesser degree, with disc quality.
Reading speed makes no difference for copying data discs, but it can make a big difference for audio ripping. Many cheaper CD-ROM drives and CD writers can't rip audio well at their top speed.
As hard disk drives get bigger in capacity, I can't help but wonder what the best ratio for partitioning is. Does partitioning make any difference for various users like gamers, visual artists, or someone with a simple workstation? And what's an adequate partition size for an OS like Windows XP or ME?
There's no reason for most users to put more than one partition on a hard drive, these days. You need at least one partition, because that one partition is the thing you format to make the drive accessible. But you probably don't need any more.
When Microsoft operating system users hadn't yet been blessed with the FAT32 and NTFS filesystems, the biggest partition you could possibly create for DOS or Windows use was 2Gb in size, and even that would have 32 kilobyte clusters. Clusters are the indivisible unit of data storage on the drive; every file occupies a whole number of them, so when they're 32Kb, every file wastes an average of 16 kilobytes of space, left over in its last cluster. To get the cluster size down to something more reasonable using the old FAT16 filesystem, you had to make partitions smaller than 1Gb, or even smaller than 512Mb.
FAT32 and NTFS solve that problem. FAT32 still uses 32Kb clusters if your partition's bigger than 32Gb, but with that much space, who cares. NTFS cluster size is variable, but it'll only be 4Kb on most drives. Both of these filesystems support partitions up to two terabytes (2048 gigabytes) in size.
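The wasted-space arithmetic behind all this is easy to sketch: a file always occupies a whole number of clusters, so whatever doesn't fit exactly wastes part of the last one - on average, half a cluster per file:

```python
def slack(file_size, cluster_size):
    """Bytes wasted in a file's final, partially-filled cluster."""
    return (-file_size) % cluster_size

# A 1-byte file on a 32Kb-cluster partition wastes almost a whole cluster:
print(slack(1, 32 * 1024))  # 32767

# Averaged over every possible final-cluster fill, the waste is half a
# cluster: ~16Kb at FAT16's 32Kb clusters, ~2Kb at NTFS's usual 4Kb.
cluster = 32 * 1024
avg = sum(slack(n, cluster) for n in range(1, cluster + 1)) / cluster
print(avg)  # 16383.5
```

Multiply that half-cluster figure by your file count and you can see why small clusters mattered so much on drives full of little files.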
If you want to boot multiple operating systems, you're likely to want to put them on different partitions. There's no longer anything else that people commonly do, though, that requires more than one partition per physical drive.
Since I put this page up, a few correspondents have suggested that a separate partition's a good place to put your Windows install (and nothing else), so that you can easily nuke and reinstall Windows without touching any of your other data. I'm not a huge fan of this idea, because fouled up Windows installs will, by definition, be confined to the Windows install directory; rename that and reinstall and you're in business again, assuming you've got enough spare disk space. You can't do this if you've had a major filesystem failure, but that frequently means that the hard drive itself has a major problem, which will almost certainly scrag any other partitions on the drive.
There's something to be said for making a FAT32 partition on a drive that's almost all NTFS, just so that your good old DOS/Win98/whatever boot disk can read it; you put your commonly accessed data files on that partition, and hope FAT32's shortcomings don't turn out to be a problem.
I am ready to upgrade from my old Celeron system to an Athlon XP. I have been reading about how great AMD systems are, with the best price for performance. But when I talk to my local computer dealers, most of them say they have had too many problems with them ("high failure rates", and so on) and don't stock them anymore.
Should I instead go for a Pentium 4 system, even though it's more expensive?
Yes, there's dodgy Socket A gear out there. Buy a dirt cheap motherboard and you're likely to have problems, especially if you also get cheap RAM, a cheap CPU cooler, a cheap power supply, and so on. None of that's AMD's fault, though.
Get a good Socket A motherboard from one of the big names (Asus, Abit, AOpen, MSI; maybe Epox or Tyan if you're feeling adventurous), make sure the rest of your components are decent quality too (don't buy your RAM down at the markets, and don't assume that a $50 case is going to come with a good PSU in it...), and you're not likely to have any more trouble with an AMD-based system than you will with an Intel-based one. Yes, it is perfectly possible to buy cheap and nasty P4 gear, too.
Brand new Socket A motherboards, especially super-tweaky overclockers'-special models, commonly have a quirk or three when they're new and still on their first BIOS revision. But more than a few Socket 478 boards have had the same sorts of problems; you hear less whinging about P4 problems on the newsgroups, but that's because more tweakers have bought the cheaper Athlon gear.
Buy something that was the hot new motherboard six to twelve months ago, and you'll probably get a good BIOS version out of the box.
As far as "high failure rates" go - yes, a larger proportion of Socket A chips come back broken than do P4s. That's because Socket A CPUs are easier for a clumsy person to destroy when installing a CPU cooler than are P4s. The actual dead-on-arrival rate for Athlons and Durons is perfectly acceptably low, as far as I know. If you're not too handy, get the shop to assemble your PC and you should have no problems.
When I first replied to this letter in Atomic magazine, the overclocking market was still overwhelmingly Socket A based; now, there are various decently priced P4 solutions, and the current P4-core Celerons (especially the 2GHz-and-faster versions, which are highly overclockable) mean you don't even have to pay a whole lot of money for the CPU itself. But Athlon systems are still deservedly popular; they're just not nearly as dramatically superior, in bang-per-buck terms, as they were.