Atomic I/O letters column #38
Originally published in Atomic: Maximum Power Computing. Reprinted here October 2004. Last modified 16-Jan-2015.
CPU giblets
How do differently clocked CPUs actually differ in terms of manufacturing? I mean, what do Intel and AMD do to make the higher clocked chips, besides just raise the core multiplier? It's not like they re-engineer the entire chip, as it's the same architecture and all. I thought that maybe all they do is test the threshold of the chips and those with a higher threshold get a higher stock operating speed.
Also, what is the core multiplier? I mean, I know what it does, and that the FSB multiplied by the core multiplier gives the resultant operating frequency. But what exactly is it? When the multiplier is changed what physical... thing... happens to make the operating speed that factor more?
Liam
Answer:
You've pretty much figured out the deal with CPUs of different speeds. Every CPU of a particular type (a particular core, amount of L2 cache, et cetera) can end up packaged as the very top-spec version, if all goes well, and if the CPU manufacturer needs it to be.
All does not go well all the time, though; chip fabrication is a tricky business, and not every chip on a given silicon wafer is going to be the same. The chips are tested before the end of the packaging process, and classified according to the speed they manage to run at. Some don't work right even at the minimum speed for their product line; they're thrown away. Most chips pass the test for one grade or another, and are usually then packaged up accordingly.
Some chips, however, pass the test for high speed, but are packaged as a lower speed chip, simply because there are orders for X many of those lower speed chips and the fabrication process happened to turn out more high speed chips than were needed. Sometimes practically every chip at a given speed grade actually passed the test to run much faster - see the classic old Celeron 300A, for instance (was it only five years ago...?), or the Pentium 4 2.4C. Chips like this come along quite regularly; as I write this, overclockers are pretty happy about the Socket 754 Sempron 3100+ and the Socket 939 Athlon 64 3200+.
The mechanics behind the multiplier are more complex. Inside the CPU there's a phase locked loop (PLL) circuit that uses two dividers.
Put (relatively) simply, the FSB clock is applied to one input of a frequency comparator, the output from which drives a voltage controlled oscillator (VCO). The VCO is the actual configurable gadget which, left to its own devices, spits out approximately the right core clock frequency for the CPU.
The VCO can't be trusted to get it quite right, though, so its output is fed through the two dividers (two dividers let you have fractional multipliers; older processors that only handled integer multipliers have only one divider) that're set to the right ratio to recreate the bus clock from the VCO's output, assuming the VCO output is exactly right.
The output from the dividers goes to the second input of the frequency comparator I mentioned above. This closes the loop in the PLL: the comparator compares the VCO's divided-down output with the FSB clock it's supposed to match exactly, and tweaks the VCO's control input whenever the two differ. The circuit can thus very rapidly settle onto pretty much the exact frequency the processor's meant to run at.
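If you'd like to see the arithmetic, here's a toy Python model of that feedback loop. The function name, the proportional "nudge" step and the 10.5x example are all just illustration - a real PLL is a continuously-settling analogue circuit, not a loop of code - but the divider arithmetic is the same.

```python
# Toy model of the PLL that multiplies the FSB clock up to the core clock.
# Illustrative only: a real PLL is an analogue circuit, and the gain and
# step count here are made up for the sake of a quick simulation.

def pll_settle(fsb_mhz, div_n, div_m, steps=200, gain=0.1):
    """Settle a VCO so its divided-down output matches the FSB clock.

    The dividers are set so vco * div_m / div_n should equal the FSB;
    the settled core clock is therefore fsb * (div_n / div_m). Two
    dividers allow fractional multipliers, e.g. 10.5 = 21/2.
    """
    vco = fsb_mhz  # the VCO starts somewhere arbitrary
    for _ in range(steps):
        feedback = vco * div_m / div_n       # divided-down VCO output
        error = fsb_mhz - feedback           # what the comparator sees
        vco += gain * error * div_n / div_m  # comparator nudges the VCO
    return vco

# A 200MHz FSB with a 10.5x multiplier (dividers set 21:2):
print(f"Core clock: {pll_settle(200.0, 21, 2):.1f}MHz")  # ~2100.0MHz
```

Changing the multiplier just means changing that divider ratio; raising the FSB feeds a faster reference clock into the same loop. Either way, the PLL settles on the product of the two.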
Screen translation
Is it possible to connect an LCD from an unused notebook to a desktop PC, to use as the main display?
Rivai
Answer:
Possible, yes. Practical, probably no.
A few companies (example) sell interface adapters that let you connect various kinds of LCD panel to normal video signals, including RGBHV ("VGA"). The adapters aren't very cheap, though; you can expect to pay at least $US200 for one, maybe $US250.
The adapter isn't all you need to make the panel into a monitor, either; you need some kind of casing, of course, and you also need a power supply. Even if you don't put a value on your time, all of this pushes the price of the resultant contraption up pretty close to that of a brand new regular 15 inch 1024 by 768 LCD monitor - which is probably the best you can expect from your ex-laptop setup, since adapter boxes for higher resolution panels are hard to find.
If the laptop's an old one, you're unlikely to want to stare at its screen all day anyway. If it's a new-ish one, consider networking it to the main PC (wirelessly, if you like!) and using VNC or Windows' Remote Desktop to make the portable into a remote monitor, and keyboard, and mouse!
Dash, plus, backslash, tilde...
I just bought a DVD burner that does both DVD-R and DVD+R, and no one can tell me the difference between the two types of recordable DVDs. Can you shed some light on this?
Chris
Answer:
The difference between the "dash" and "plus" recordable DVD formats was important when the technology was younger, if only because you had to make sure that cheap spindle of discs you bought on eBay was compatible with your drive.
Today, though, cheap drives read and write both flavours, and the only remaining difference is that DVD-R is slightly more likely to be legible in any given DVD-reading device. Good compatibility statistics are hard to come by, because of the range of different drives and media out there, but there's a quite comprehensive and reasonably up-to-date survey at CDR-Info. It puts DVD-R compatibility at about 97%, while DVD+R manages about 87%.
Disco mouse
I have a Microsoft Optical Mouse Blue, and I got the idea of modding the light in the mouse. I am just wondering, is that an LED, and if it is, is it possible to change it without affecting the optical mechanism?
Andrew
Answer:
Yes, it should be possible. People have even found that despite the operating voltage difference between red and blue LEDs (less than 2.5 volts for red, versus around 3.5 for blue), you can just swap a blue one straight in and have it work at decent brightness.
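A bit of Ohm's law shows why. This little calculation assumes a 5V supply and a 100 ohm series resistor - plausible values, not measurements from any actual mouse:

```python
# Why a blue LED can drop straight in where a red one was.
# The 5V supply and 100 ohm resistor are assumed example values,
# not figures from any particular mouse.

def led_current_ma(v_supply, v_forward, r_ohms):
    """Current through an LED behind a series resistor: I = (Vs - Vf) / R."""
    return max(0.0, (v_supply - v_forward) / r_ohms) * 1000

V_SUPPLY = 5.0    # typical USB or PS/2 supply voltage
R_SERIES = 100.0  # hypothetical resistor sized for the original red LED

print(f"Red LED (Vf ~2.0V): {led_current_ma(V_SUPPLY, 2.0, R_SERIES):.0f}mA")   # ~30mA
print(f"Blue LED (Vf ~3.5V): {led_current_ma(V_SUPPLY, 3.5, R_SERIES):.0f}mA")  # ~15mA
# Roughly half the current - dimmer, but nowhere near dark.
```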
The only risk you run here (besides accidentally destroying your mouse with a soldering iron...) is that the little camera in the bottom of the mouse expects to be looking at a surface lit by a red LED, and may not be as sensitive to blue light. This may make the mouse more prone to skip on surfaces that it worked fine on before. Again, though, people who've done this trick with various mouses generally don't report problems when using the mouse on regular mousemats or other high-contrast surfaces.
Note that if you've got a mouse with a second, decorative LED shining out the back (the Blue mouse has one, as do many other IntelliMouses), you can change that LED to a different colour with no chance of causing tracking problems.
There are even triple-die red-green-blue LEDs in standard twin-lead 5mm packages now - I checked out some little flashlights that use them here and here. They include a tiny flasher/chaser controller right there in the package, and there's a reasonable chance that they'll work as drop-in replacements for a red LED, too. They'd be no good as an under-mouse light, but would be particularly trippy as a tail light!
Wonderful wire
I was wondering if there was a real difference in quality between the standard HD15 VGA monitor cables and BNC cabling. I was just given a 19 inch Sony GDM-400PS, which has both inputs, and I was wondering whether it's worth paying for BNC cables.
Nathan
Answer:
A good BNC lead will give you a sharper picture at very high resolutions. A bad one won't do anything worthwhile. It's perfectly possible for a bad BNC lead to give you a fuzzier, ghostier picture than a good HD15 lead would.
If you want to see the kinds of problems I'm talking about, buy the cheapest HD15 extension cable you can find (which can be a useful thing to have for impromptu connections of monitors to distant misbehaving computers, when you don't care about image quality), and plug it in.
There's not much point to using BNC for monitors smaller than 19 inches. You can just start to see some difference at 1280 by 960 on a decent 19 inch screen, especially if you're running a high refresh rate (like, 100Hz).
1280-by-whatever at 85Hz (the sweet spot for the GDM-400PS) might be high enough for the difference to be apparent. Don't expect anything dramatic, though, unless the cable you're using at the moment is unusually lousy.
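For a feel for why resolution and refresh rate matter, here's a back-of-envelope pixel clock estimate - roughly the analogue bandwidth the cable has to carry cleanly. The 1.35 blanking overhead is a typical round figure for CRT timing, assumed here rather than taken from any datasheet:

```python
# Rough pixel clock estimate. The 1.35 blanking factor is a typical
# CRT timing overhead, an assumption rather than a spec for the GDM-400PS.

def pixel_clock_mhz(width, height, refresh_hz, blanking=1.35):
    return width * height * refresh_hz * blanking / 1e6

print(f"1280x960 @ 85Hz:  ~{pixel_clock_mhz(1280, 960, 85):.0f}MHz")
print(f"1280x960 @ 100Hz: ~{pixel_clock_mhz(1280, 960, 100):.0f}MHz")
# The higher the clock, the more a cheap cable's losses and reflections show.
```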
Modem Of Mystery
I am one of the unfortunate people who, thanks to Telstra, are still light years behind in the broadband revolution, even though people living 100 metres away can get ADSL. So I am stuck with a shitty 28.8 connection.
My download speed is usually around 3KB per second; however, when I downloaded this Excel file, the speed was a whopping 9KB/s. How is that possible?
Tim
Answer:
It's very highly compressible data.
Your modem, like every other modem made for a lot of years now, does V.42bis data compression by default; that reduces the fatness of data passing between your modem and the one at the ISP's end of the line.
Usually, V.42bis doesn't achieve much. It'll squish HTML data quite successfully, but things like JPEG and GIF images, and most separate files people download, are already compressed and have no more "air space" in them. V.42bis automatically turns itself off when you're transferring incompressible data, since it'll do more harm than good then.
That Excel file, however, isn't compressed at all as it stands - it can be zipped down to less than 10% of its original size - so V.42bis can squeeze it considerably too. The most compression the V.42bis algorithm will ever manage is 4:1, but you're seldom likely to see more than the 3:1 you just got.
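If you want to see the effect for yourself, here's a rough Python demonstration. zlib's DEFLATE isn't the V.42bis algorithm (which is LZW-based) and squashes this toy data much harder, so the modelled ratio is clamped to V.42bis's behaviour; the ten-bits-per-byte link speed is the usual serial rule of thumb:

```python
# Rough demonstration of why compressible data downloads faster through
# a modem that compresses on the fly. DEFLATE stands in for V.42bis here,
# with the ratio clamped to V.42bis's real-world limits.
import os
import zlib

LINK_BYTES_PER_SEC = 28800 / 10  # rule of thumb: ~10 transmitted bits per byte

samples = {
    "spreadsheet-ish": b"Q3 sales,1234.56,approved\r\n" * 4000,  # repetitive, like an uncompressed Excel file
    "JPEG-ish": os.urandom(100000),  # already-compressed files look like random bytes
}

for name, data in samples.items():
    deflate_ratio = len(data) / len(zlib.compress(data))
    # V.42bis turns itself off on incompressible data (so never worse
    # than 1:1), and manages at most about 4:1 on anything:
    modem_ratio = min(max(deflate_ratio, 1.0), 4.0)
    kb_per_sec = LINK_BYTES_PER_SEC * modem_ratio / 1000
    print(f"{name}: modelled {modem_ratio:.1f}:1, ~{kb_per_sec:.1f}KB/s effective")
# The incompressible case lands at about 2.9KB/s - right where your
# everyday 3KB/s downloads sit.
```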
Note that if Telstra had zipped the Excel file themselves, you would have got it even faster, ignoring the time it would have taken to unzip it at your end. V.42bis wouldn't have made the zip file any smaller, but there would have been a tenth as much data to download in the first place.
I've written more about this sort of thing before, here and here.