Graphics card technology decoded

Copyright © Daniel Rutter 1996. All rights reserved.

 

Computer video adapter technology advances day by day, along with hardware technology in general. Each new generation of graphics cards draws graphics faster and supports higher resolutions and more colours. But the underlying principles are changing much more slowly, and a good grounding in them will let you make an informed purchasing decision even as the benchmarks climb.

Now that Power Macs can use the same video cards as PCs, everybody needs to know the same stuff. So here it is.

 

The basics

To get a picture onto a screen, a computer – any computer – has to somehow decide what colour every pixel should be and create an analogue signal from this screen map, because all remotely modern monitors require an analogue signal. The old fashioned (pre-1992) dumb way of doing this is by making the computer’s CPU construct a map of exactly what colour every pixel is, then send the image data to the video card RAM – separate from the computer’s main RAM. The dumb video card chipset takes the data from the video RAM and lightly massages it into a digital image suitable for the digital to analogue converter (DAC) which makes the analogue signal to drive the monitor. This method is unacceptably slow for high resolution, high colour modes, but video cards that use it are cheap, because they are simple. They are also now extinct.
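
If you feel like making this concrete, here’s a rough Python sketch of the dumb method. None of it is any real chipset’s interface – it’s just the principle, with the CPU doing every last pixel itself:

    # The "dumb" way: the CPU computes every pixel and writes the lot
    # to video RAM. At 800 by 600 in 24 bit that's 1,440,000 bytes the
    # CPU must generate and ship across the bus for every full frame.
    WIDTH, HEIGHT = 800, 600
    framebuffer = bytearray(WIDTH * HEIGHT * 3)  # 3 bytes per pixel

    def set_pixel(x, y, r, g, b):
        # The CPU does this, laboriously, for every pixel it changes.
        offset = (y * WIDTH + x) * 3
        framebuffer[offset:offset + 3] = bytes((r, g, b))

    # Filling the screen with blue means 480,000 individual writes.
    for y in range(HEIGHT):
        for x in range(WIDTH):
            set_pixel(x, y, 0, 0, 255)

    print(f"One frame is {len(framebuffer):,} bytes")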

Accelerated video cards have more intelligent chipsets which take some load off the host computer’s CPU. Video boards designed to accelerate windowing operating systems like Windows and the Mac OS have the ability to take simple instructions – draw a box here, take this rectangular block of the image and move it to here – and do the gruntwork of figuring out pixel colours themselves. The CPU talks not to the video card’s RAM, but to its processor, and has to send a lot less information; the CPU gets to be the foreman instead of the bricklayer.
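
The accelerated conversation, by contrast, looks something like this sketch. The commands are invented for illustration – every real chipset speaks its own dialect:

    # With an accelerator, the CPU sends short commands and the card's
    # own processor figures out the pixel colours.
    def send_to_card(command):
        # Stand-in for writing to the card's command registers.
        print("->", command)

    def fill_rect(x, y, w, h, colour):
        # One short command replaces w * h individual pixel writes.
        send_to_card(("FILL_RECT", x, y, w, h, colour))

    def blit(src_x, src_y, w, h, dst_x, dst_y):
        # "Take this block of the image and move it over there" -
        # exactly what dragging a window requires.
        send_to_card(("BLIT", src_x, src_y, w, h, dst_x, dst_y))

    fill_rect(0, 0, 800, 600, (0, 0, 255))  # whole screen, one command
    blit(100, 100, 300, 200, 150, 120)      # move a window, one command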

Different accelerated video chipsets have widely differing capabilities. Some just do basic drawing of graphic primitives, some have 3D acceleration functions that can be addressed by appropriate software (usually games), some have built-in video playback and/or scaling. But they all look the same to the operating system because it just talks to the video driver, which translates standard function calls into whatever lingo the graphic card speaks. Well, that’s the way it works on Macintoshes, anyway; because there is no standard for SVGA graphics on IBM compatible machines, incompatibility problems can and do occur.

 

Video card Q&A

How does high colour differ from true colour?

High colour ("thousands of colours", to Mac users) is 15 or 16 bit graphics. 15 bit colour gives 32768 simultaneously displayable colours, with five bits each for red, green and blue; 16 bit assigns five bits to red, five to blue and six to green for 65536 total colours. True colour ("millions of colours") is 24 bit mode, giving 16,777,216 colours for true photographic image quality – according to most people, anyway. High colour mode is acceptable for most applications and uses less video bandwidth and memory, allowing higher resolutions and refresh rates.

 

How come cards with the same RAM can’t do the same resolutions?

You’ve got a Brand X video card with 4Mb of RAM, which can happily do 24 bit colour in 1280 by 1024. Your friend has a Brand Y with the same RAM, but his tops out at high colour for that resolution. What gives?

Possibly, his card is doing clever extra processing that uses enough of its RAM that it can’t handle that graphics mode. But more probably, his card doesn’t actually have a 24 bit mode, as such. It’s really 32 bit.

Since the card’s processor is optimised to work with 32 bit words, with 32 bit wide memory to each RAM bank, it’s running 32 bit mode and just wasting the extra eight bits per word. 1280 by 1024 in 24 bit takes up 3.75Mb of RAM per frame; to do it in 32 bit you need 5Mb.

This isn’t the end of the story. 1280 by 1024 in 24 bit may be possible on a 4Mb card, but the speed of even snappy WRAM cards enforces a bandwidth limit of about 200Mb/S. In 24 bit, 200Mb/S will be soaked up by 53 frames per second, an unacceptably flickery refresh rate. 32 bit is even worse – 40Hz. To allow reasonable refresh rates and reduce RAM demands, modern RAMDACs use what’s called "packed-pixel" mode, which stores 24 bit pixels with no padding bits rather than compressing the data as such. This lets them do 1280 by 1024 in 24 bit at 72Hz while using only 3.75Mb of RAM.
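
For the sceptical, here’s the arithmetic as a quick Python sketch:

    # Frame buffer sizes, and the refresh rates a given memory
    # bandwidth can theoretically sustain, per the figures above.
    MB = 2 ** 20
    width, height = 1280, 1024
    bandwidth = 200 * MB  # roughly what a snappy WRAM card manages

    for name, bytes_per_pixel in (("24 bit packed", 3), ("32 bit padded", 4)):
        frame = width * height * bytes_per_pixel
        print(f"{name}: {frame / MB:.2f}Mb per frame, "
              f"{bandwidth / frame:.0f} frames per second maximum")
    # 24 bit packed: 3.75Mb per frame, 53 frames per second maximum
    # 32 bit padded: 5.00Mb per frame, 40 frames per second maximum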

 

What’s pixel addressability?

Pixel addressability is the technically correct term to use when you’re talking about the number of pixels you’re putting onto a monitor. Resolution, in this case, technically refers to the smallest object that can be displayed on a monitor, and is related to the quality of the tube and electron projection system. Popular usage, however, has resulted in resolution being the universally, if grudgingly, accepted term.

 

What’s a 128 bit video card?

Video cards are coming out with more and more impressive "bit counts" – up to 192 bit, in more recent models. A card advertised as "128 bit" has a video processor that can handle 128 bits of data in every clock cycle. This has nothing to do with the colour depth of the graphics modes or the bit width of the peripheral bus the card’s plugged into, but it does mean that when the card’s been given its "shorthand" instructions by the PC, it can construct the required screen image more quickly. Widening the on-card data path also allows the card designers to use cheaper, slower RAM; if you’re moving more data at a time, you can get away with moving it less often.

There is a plethora of components in a modern computer, each with its own bit-width specification. A Pentium processor can internally process 32 bits at a time each clock cycle. The system bus which connects this Pentium to its memory can handle 64 bits per clock cycle. If this computer has the usual complement of old ISA and new PCI slots, cards in the ISA slots get 16 bits at a time and cards in the PCI slots get 32 bits at a time. But PCI, clocked at up to 33MHz, can send data every clock tick (30nS); standard 70nS RAM takes three 33MHz clock ticks to deliver data to the CPU. This design means PCI is less of a bottleneck than it looks; its 132Mb/S throughput ceiling matches what a less inspired 64 bit, 33MHz bus design that managed a transfer only every second tick could achieve.
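
Bus arithmetic is simple enough to do on a napkin, or in Python. These are peak figures only – real buses lose some of this to overheads:

    # Peak bus bandwidth is just width times clock speed.
    def bus_bandwidth(bits_wide, clock_mhz):
        return bits_wide // 8 * clock_mhz  # megabytes per second

    print(bus_bandwidth(16, 8))   # 16 bit, 8MHz ISA: 16Mb/S ceiling
                                  # (real ISA manages far less)
    print(bus_bandwidth(32, 33))  # PCI: 132Mb/S
    print(bus_bandwidth(64, 66))  # PCI with both doublings: 528Mb/S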

The PCI spec allows for doubling of both the clock speed and the bus width, allowing future quadrupling of bandwidth. Don’t hold your breath for either enhancement in mass market computers, though, because the faster clocked version reduces the number of connectable devices and doubling the bus width can’t be done without increasing the number of pins in the slots.

For applications where lots of video data must be sent from some other source to the video card – video editing, recent games and so on – local bus or PCI video improves performance. But if all you’re doing is running a windowing operating system, the accelerator functions on the card mean not nearly as much data has to be sent and a cheap card running from the slow, old-fashioned ISA bus will not be nearly as much slower as the lousy bus specs might suggest.

 

What’s a RAMDAC?

The Random Access Memory Digital-to-Analogue Converter (RAMDAC) is the part of a video board that translates the digital information generated by the computer and any video accelerator hardware to the analogue signal that drives the monitor. Every RAMDAC is composed of three digital to analogue converters (DACs) and a tiny amount of very fast static random access memory (SRAM). Not all RAMDACs are alike, and they’re generally not upgradeable; these days, the RAMDAC is likely to be part of the graphics accelerator chip on video boards with single ported RAM (see "RAM flavours").

The maximum number of colours a RAMDAC can generate is determined by the number of bits each DAC can handle – there’s one DAC for red, one for green and one for blue. Old fashioned VGA RAMDACs had three six bit DACs, giving a total of 262,144 (2^18) colours to choose from. But they only had 256 18 bit "words" of SRAM, meaning only 256 colours could be displayed from that palette at any one time.

Modern graphics cards capable of 24 bit display have three 8 bit DACs which are either fed directly with display data, bypassing the SRAM, or each have their own SRAM cache.
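
A sketch of the old indexed-colour lookup, if you want the mechanism laid bare (the palette entry is invented, of course):

    # Old-style VGA RAMDAC: the frame buffer holds 8 bit palette
    # indices; a 256 entry table of 18 bit colours (6 bits per DAC)
    # supplies the actual colour. 2^18 = 262,144 colours to choose
    # from, 256 on screen at once.
    palette = [(0, 0, 0)] * 256   # 256 entries of 6 bit (r, g, b)
    palette[1] = (63, 32, 0)      # define entry 1 as orange-ish

    def to_analogue(index):
        r, g, b = palette[index]  # the "RAM" in RAMDAC
        # Each 6 bit value (0-63) drives one of the three DACs.
        return tuple(v / 63 for v in (r, g, b))

    print(to_analogue(1))         # normalised output levels
    print(2 ** 18, "possible colours,", len(palette), "at a time")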

 

Does PCI really let you use the same cards in PCs and Macs?

Yes – provided you’ve got a device driver, and the card’s ROM supports both systems. A PCI video card with the right ROM will work on an IBM compatible computer and in a Mac, if there’s a driver for it, and the same goes for any other PCI 2.0 expansion card. Relatively few cards have the right ROM.

 

What’s AGP?

The only application that PCI’s 132Mb/S throughput isn’t adequate for is video. High end 3D video systems that want to exchange prodigious amounts of data between cards on the bus and still accommodate disk and CPU data transfers are pushing the limits of 32 bit, 33MHz PCI.

Enter Intel with the Accelerated Graphics Port (AGP) specification. AGP is built on PCI, and is 32 bit like basic PCI, but runs at 66MHz for twice the bandwidth. It can use the computer’s main memory to store textures, reducing the amount of expensive video card RAM needed, and the graphics chipset can actually be built into the computer’s motherboard. This is not necessarily a good idea, because an AGP machine with graphics chips on the motherboard can’t have an AGP slot and hence can’t be upgraded.

Existing PCI machines cannot be upgraded to AGP. It has to be built into a computer to start with. Expect AGP-equipped Pentium Pro computers early in 1997, with the technology probably trickling down to entry level Pentium machines later.

 

Whaddaya mean there’s no SVGA standard?

The last widely accepted PC graphics standard was Video Graphics Array (VGA), defined by IBM. VGA’s best graphics mode is 640 by 480 in 16 colours; this is somewhat behind the times. Super VGA (SVGA) may sound as if it carries IBM’s blessing too, but it doesn’t; it’s just a catchall term for anything that beats VGA specs. The original SVGA was proclaimed by VESA as 16 colours and 800 by 600 pixels; this may well win the Least Respected Standard In History award. If your graphics card does 256 colours in 640 by 480, it’s SVGA. If it does 24 bit in 1600 by 1200, it’s also SVGA. IBM’s best attempt at a VGA successor was the Extended Graphics Array (XGA), which enjoyed support from, rounded to the nearest ten companies, nobody.

The upshot of the more-than-ten SVGA chipsets, none of which share a programming interface, is that drivers are all-important. This now applies to Macintosh users going for PCI graphics cards as well as PC owners; if the Your Operating System driver for Card X is flaky, all of those impressive numbers may be useless to you. Generally speaking, name brand cards will work fine with established versions of popular operating systems; the Windows 95 drivers for modern video cards are, by and large, reliable. But incompatibility problems can and do arise. If you’ve got the choice between a brand new whiz-bang card with a v1.0 driver and a less impressive, cheaper model whose v2.3 driver is known to work perfectly, consider whether you really need the speed right now. If the new card’s driver hangs the computer whenever you run Red Alert, you will not be a happy puppy.

There are other, less annoying side effects of the absence of an SVGA standard. Windows computers with different video cards often also have different utilities for changing colour depth, resolution and refresh rate. Elegant ones slot into the Display Properties window. Less elegant ones are standalone Windows programs. The worst have to be run from autoexec.bat. Microsoft’s QuickRes utility works with most cards and by and large saves you from the embarrassing restart-to-change-video-mode problem, but incomplete compliance with the Infallible Word Of Bill Gates on the part of driver authors can create annoying problems. If there were an SVGA standard, Windows and card makers could all conform to it and, provided everyone read the rules, none of these problems would arise. Then again, the absence of a standard clears the way for video card makers to create ever faster and more capable cards, without having to squeeze their bright ideas into a standardised framework. Swings and roundabouts.

 

Why does my video card get faster when I add more RAM?

Many video cards today use 64 bit architectures – on the card, data is processed 64 bits at a time. Well, it is if there’s enough memory.

A 64 bit card with one 1Mb memory module installed cannot operate in 64 bit mode, because that single memory module is only 32 bits "wide" and forces the card to operate in 32 bit mode, halving the available bandwidth. Add another 1Mb module and the card can operate to its full potential. The same phenomenon will be seen if you have a 32 bit video chipset that can use memory interleaving, like the Tseng Labs ET4000W32.
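
In other words (the 33MHz memory clock here is just an illustrative figure):

    # One 32 bit wide module halves the card's memory bandwidth; a
    # second module lets a 64 bit chipset use its full width.
    def card_bandwidth(modules, clock_mhz=33, module_bits=32):
        width = min(modules * module_bits, 64)  # chipset maximum
        return width // 8 * clock_mhz           # megabytes per second

    print(card_bandwidth(1))  # 132 - crippled, 32 bit operation
    print(card_bandwidth(2))  # 264 - full 64 bit operation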

 

What’s the deal with refresh rates and flicker?

A computer’s graphics system has a certain, fixed bandwidth, or amount of data that can be pumped through it per second. Let’s say a hypothetical system can handle 100 megabytes per second (Mb/S) of throughput.

Let’s also say that this system is running at 800 by 600 resolution, in 24 bit colour. This means each full frame of video requires 800 times 600 times 24 bits of data, or 1.37 megabytes. With 100 Mb/S bandwidth, this system is theoretically capable of transferring almost 73 frames per second. In practice, fewer frames can be sent because of the time required to blank the monitor and other overheads, and modern video cards use their RAM in more advanced ways than the frame buffer system implied by straight throughput measurement, but the example illustrates the basic principle.
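
That calculation, in Python:

    # How many raw frames per second a given bandwidth can carry.
    MB = 2 ** 20

    def max_fps(width, height, bits, bandwidth_mb):
        frame_bytes = width * height * bits // 8
        return bandwidth_mb * MB / frame_bytes

    print(f"{max_fps(800, 600, 24, 100):.1f}")  # 72.8 frames per second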

If your video card can send 72 or more frames per second to your monitor (and your monitor can display video with this high a refresh rate at the selected resolution), the screen will appear pretty much flicker-free and will be less fatiguing to use for long periods. The default refresh rate for cheap systems is 60Hz, which has a much more perceptible flicker. 60Hz is frequently set as the default for video cards because it’s guaranteed to work with even crusty old monitors; it’s the refresh rate used by the original VGA.

Only bargain basement computer dealers still try to unload computers whose monitors can only display reasonable resolutions in interlaced mode. In interlaced video, the screen image is drawn in two passes of the raster beam, each pass "painting" alternate lines. It is and was the way in which all but the classiest TVs put a picture on the screen, and it works OK for television, with fuzzy, low-resolution screens and a signal typically free of thin, contrasty horizontal features. It is a much less acceptable way of putting a picture on a computer monitor. Old Amiga hands will recall with a shudder the Amiga’s 50Hz interlaced graphics mode, which is wonderfully video-compatible but a pig to look at all day.

 

Are 3D accelerators worthwhile?

If you’re an avid player of 3D games, maybe. If you’re not, probably not – but you won’t be able to avoid them, because 3D functions are popping up in all sorts of video cards.

3D rendering is a very, very computationally intensive task. 3D games don’t have the photorealism that makes pro rendering packages like 3D Studio and Lightwave take hundreds of trillions of instructions to generate a complex scene, but the games make up for it with their high frame rate. Rapidly rendering reasonably detailed views of complex, shaded, texture-mapped objects will redline the fastest processors currently available. Enter the 3D accelerators.

3D accelerators are regular graphics accelerators that also have 3D instructions, and they take the load off the CPU for 3D in much the same way that a conventional accelerator helps with 2D graphics; the CPU sends the recipe for the screen image, and doesn’t have to bake the cake. Many mainstream graphics boards now have some degree of 3D acceleration included; the specifically badged 3D accelerators, theoretically, do more and do it better.

The "recipe" sent by the CPU to a 3D accelerator is more complex than that for a 2D scene. While it doesn’t have to figure out what colour every pixel is (the actual rendering process), the CPU still has to determine what is currently where in the game world and generate a 2D perspective view – a wireframe, if you will. A 3D accelerator may be doing more computation than a 2D accelerator, and the CPU may be doing a smaller proportion of the total work, but there’s still a lot more for the CPU to think about.

As yet, we’re just pushing into the second generation of 3D accelerators, with the big names like Creative starting to get in on the act and cards emerging that are supported by more than a handful of games. One large problem with the early 3D accelerators was their lack of features; if an accelerator couldn’t handle the 3D feature that the programmer had to have in his game (Phong shading, for example), its accelerated functions had to be largely or completely ignored and the processing load went back to the CPU. The lack of standardised ways of writing games to work with different 3D accelerators has also damaged their early popularity; if Direct 3D for Windows 95 and QuickDraw 3D on the Mac become the standards they hope to be, this too can change.

If you’re doing more than play games, consumer level 3D accelerators probably won’t make your job any simpler. If you drool over serious 3D workstations that can do real-time updating of images that take a minute per frame on your Pentium, you need one of the emerging breed of professional 3D boards, like Intergraph’s Intense 3D. The Intense 3D is a PCI card for Windows NT systems that, essentially, plugs the graphics section of a $30,000 Intergraph workstation into an IBM compatible for about a tenth of the price. Anything that uses the popular OpenGL standard – which includes many games and rendering packages like Lightwave and 3D Studio MAX – is suddenly much, much, much, much faster.

 

Video playback Q&A

Historically, things like hardware MPEG playback, TV tuning and similar unusual video applications were impossible to elegantly implement on an IBM compatible computer. Because the standard video card didn’t have these functions, they had to be supplied by separate hardware – and integrating the video streams from the video card and from the separate playback board, tuner or whatever was and is a difficult task.

More recent PC video systems have the advantage that the PCI bus allows direct high speed data transfer between cards.

 

What’s a feature connector?

This is a question which can, more and more, be answered with "A very bad thing that’s dead now."

The Video Electronics Standards Association (VESA) Feature Connector (VFC) was the leading half-baked way to get video inlays, hardware MPEG playback and similar tricks done on an IBM compatible without PCI slots or video cards to go in them. Not all VESA video cards have a VFC, but if yours does you can connect it to a second card and image data from that second card can be integrated into the main picture. Unfortunately, the VFC has many drawbacks.

The VFC is not very fast, so it can give you high resolution, high number of colours OR high refresh rate – but not all at once. This means your video card will be forced into low resolution, 256 colour mode, or a flickery slow refresh rate. There’s also no standard way to program overlaid video, so VFC graphics systems tend to have problems lining up video with windows and drawing things over the top of video. Always-on-top video can be very annoying – correctly adjust the video into its window, and you can’t use the menus because there’s video on top of them. Great.

The VESA Feature Connector was the child of the old VGA Feature Connector, which was limited to VGA graphics capabilities. That meant it couldn’t do any better than 256 colour graphics in 320 by 200 pixels, and it’s now of only historical interest. VESA tried to get manufacturers excited about a new standard called the VESA Media Channel (VMC), a full bus system that allows up to 15 devices to insert their own info into the video card frame buffer. VMC does not support overlay-mode video – it can’t overlay high-colour video on a lower colour screen. And PCI is just as good and obviously the way of the future. So VMC has sunk without trace.

 

What’s a loopback connector?

Another dodgy IBM compatible video integration attempt. The VGA loopback connector connects your graphics card’s monitor port to a matching port on an add-on board, and sends the original graphics plus the add-on board’s contributions to the monitor. It’s analogue instead of the VFC’s digital approach, so it can work with high colour, high res, high refresh rate graphics – but the same problems with positioning the video and seeing through it arise.

 

What is MPEG anyway?

MPEG stands for Moving Picture Experts Group, a joint venture of the ISO (International Organisation for Standardisation) and the IEC (International Electrotechnical Commission). MPEG exists to make standards for video compression, and thus far two of its products have been widely adopted.

The first, MPEG-1, was created to deliver VHS-grade video accompanied by high grade audio, but can be used for different resolutions, data rates and audio grades. The original idea was to get decent video onto a CD that could be played back at 150 kilobytes per second – single speed – with 16 bit stereo, 44kHz (DAT quality) audio. The video has to be compressed by a factor of about 52 to meet these requirements, and you end up with 352 by 240 resolution for 30 frames per second. Describing this video as "VHS quality" is pretty much true – although any decent PAL VHS system with a newish tape can produce a sharper picture.
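
Here’s a rough version of that arithmetic, assuming about 1.15 megabits per second is left for the video once the audio has taken its slice of the disc’s data rate:

    # Raw 352 by 240, 24 bit, 30 frame per second video versus the
    # video bit budget of a single speed CD.
    raw_bits = 352 * 240 * 24 * 30  # 60,825,600 bits per second
    video_budget = 1150000          # bits per second, approximately
    print(raw_bits / video_budget)  # a factor of about 52.9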

MPEG-2, on the other hand, was originally specified as 720 by 480 resolution for 30 frame per second playback, with data rates from 500k/S to more than 2Mb/S depending on image content and quality.

Neither flavour of MPEG is a compression algorithm, as such; each is more a description of the file format from which MPEG players will reconstruct their data, and there is considerable room for innovation at the encoding and decoding stages. This explains the plethora of MPEG encoder products, varying widely in computer power required, quality of output and price.

 

Frame names

MPEG video is composed of Intra (I), Predicted (P) and Bi-directional interpolated (B) frames, generally in 12 to 15 frame strings called groups of pictures, or GOPs. The MPEG encoder takes the original uncompressed video, processes it down to a lower frame rate if that’s what the user has requested, then starts the laborious task of compressing frames according to their position and content.

I frames are generally lightly compressed, because an I frame starts every GOP and is the reference for the first two B frames and the first P frame. P frames are described in terms of their differences from the last P or I frame, whichever is closer – they refer backwards in time only. B frames refer both forwards and backwards, to the immediately preceding and succeeding P or I frame, whichever yields the smallest size for the B frame. B frames are the smallest, because of this double-dip strategy.
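
A typical group of pictures, sketched in Python (the 12 frame arrangement is one common choice, not the only legal one):

    # A common 12 frame GOP and what each frame type refers to.
    gop = "IBBPBBPBBPBB"
    for i, frame in enumerate(gop, 1):
        if frame == "I":
            print(i, "I: self-contained reference, lightly compressed")
        elif frame == "P":
            print(i, "P: differences from the previous I or P frame")
        else:
            print(i, "B: interpolated from the nearest I/P frames on",
                  "both sides - smallest of the lot")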

 

What’s the difference between MPEG encoders?

You can encode MPEG video on your desktop, with achingly slow software systems or accelerated purpose-built cards. So what’s the deal with giant studios that charge orders of magnitude more to do the same job?

There’s a lot of room for ingenuity in MPEG encoding. Preprocessing of the video to be encoded is a whole field in itself, and the encoder’s ability to identify redundant data and adjust the locations of the different frame types within the GOPs, and the placement of the GOPs themselves, can make a huge difference. Low cost, low powered encoders will produce a perfectly legal MPEG stream, but the video quality can vary greatly.

 

What’s the difference between MPEG playback and MPEG acceleration?

Any remotely recent PC or Mac can play back MPEG video. But not necessarily very well.

MPEG decoding is much less processor-intensive than MPEG encoding, but there are still a lot of numbers being moved around. Anything with less computing grunt than a 100MHz Pentium will be incapable of maintaining full frame rate, full resolution playback; MPEG allows underpowered decoders to drop frames or only partially decode them, giving lower resolution video with pixelisation or striping. To get a slower machine to play MPEG well you need an MPEG accelerator built into or in addition to your graphics card. The accelerator does the playback.

A modern high speed Pentium or Power Mac machine with a fast video card, on the other hand, is quite capable of decoding and displaying MPEG video just as well as any dedicated card or standalone player. The software decoder might only decode the 16 bit, DAT quality MPEG-1 audio track in 8 bit at a lower sampling rate, but this is seldom noticeable.

This still doesn’t mean software decoding’s a great idea, as the processor and video card will be heavily loaded and system performance will suffer. If you need to maintain full system speed while playing MPEG video, a playback accelerator is still a good idea, and many graphics cards include hardware support for MPEG playback.

 

What about other hardware video playback functions?

Video cards with hardware motion video acceleration functions can do several things. They can have built-in codecs for playing particular video formats – MPEG, QuickTime or AVI, for example. They can do colour space conversion, from the YUV (luminance and the scaled and filtered versions of the blue minus luminance and red minus luminance colour difference signals that comprise composite video) colour space to the more straightforward RGB colour space used by computers. And they can also improve the appearance of video with filtering algorithms that change the distinctive blocky look of scaled-up low res video into a more pleasing "well-played VHS" fuzziness.
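
Colour space conversion is just a little per-pixel arithmetic – exactly the sort of grind a card can do in hardware. This sketch uses the common ITU-R 601 coefficients:

    # Convert a YUV pixel (u and v centred on zero) to 8 bit RGB.
    def yuv_to_rgb(y, u, v):
        r = y + 1.402 * v
        g = y - 0.344 * u - 0.714 * v
        b = y + 1.772 * u
        # Clamp to the 0-255 range an 8 bit DAC expects.
        return tuple(max(0, min(255, round(c))) for c in (r, g, b))

    print(yuv_to_rgb(128, 0, 0))    # no colour difference: grey
    print(yuv_to_rgb(128, 0, 100))  # positive v pushes towards red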

 

What’s the difference between interframe and intraframe compression?

Most video compression systems use two forms of compression to reduce the size of the video stream. Intraframe compression looks only at each individual frame, without reference to the others. The core function of intraframe compression is to eliminate redundant information in each frame – parts of the picture which qualify as similar, according to the degree of compression requested by the user. In MPEG, intraframe compression is used only on those parts of a frame which are different from the frame to which it refers – see "frame names".

The idea of comparing not just parts of a frame but parts of different frames, interframe compression, is what really allows video compression to work. It looks at frames as a sequence, and finds redundant data over time as well as across the standard X-Y co-ordinates of each frame. This is what earns it its alternate name – temporal compression. The down side of interframe compression is that because frames are dependent on other frames, you can’t edit the video without recompressing it, which may cause quality loss depending on the encoding system used and will in any case be too slow for real-time work. So editable digital video formats – intraframe-only "editable MPEG", motion JPEG and the Digital Video (DV) camcorder format – use intraframe compression only and are much faster to encode, but also considerably larger in file size, than interframe compressed video.
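
Here’s interframe compression in miniature – a toy Python sketch, not a real codec:

    # Store the first frame whole, then only what changed in each
    # later frame. Five-"pixel" frames keep the output readable.
    frames = [
        [1, 1, 1, 1, 1],
        [1, 1, 9, 1, 1],  # one pixel changed
        [1, 1, 9, 9, 1],  # one more
    ]

    stream = [("key", frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        changed = [(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v]
        stream.append(("delta", changed))

    print(stream)
    # The final frame only makes sense on top of the ones before it -
    # which is exactly why it can't be edited in isolation.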

 

Know your artifact

Digital video encoding systems may be able to near-magically reduce the size of a video stream – but not for free. When compression techniques are taken too far or hit something they weren’t designed to deal with, visible compression artifacts can emerge. Here’s the quick rundown.

 

Aliasing

Aliasing occurs when a digitising system samples at less than twice the frequency of the highest frequency component of the incoming data stream. In most digital video encoding systems, aliasing manifests itself as prominent vertical lines. It can be eliminated by filtering the original video signal to remove the higher frequency components, but this blurs the image.

 

Blockiness

MPEG video is made of blocks eight by eight pixels in size, and other systems use similar designs. The content of a single block can vary from one flat colour to highly detailed depending on the original video and the amount of compression the encoder decided to use. MPEG video in which the blocks are clearly visible is called blocky. Blockiness frequently appears in action footage.

 

Gibbs Effect

As it applies to MPEG and other video compression systems, Gibbs Effect creates a fuzzy area around the borders between detailed areas and backgrounds. It’s caused by the nature of convergence of the Fourier series. Remember that, in case you ever need to sound like an expert.

 

Mosquitoes

When Gibbs Effect is noticeable enough that moving objects on screen appear to be surrounded by a haze of twinkling insects, the said insects are referred to as mosquitoes.

 

Quantisation noise

When an image is reduced to fewer colours than it originally contained – for example, when a frame of analogue video with a functionally infinite colour palette is converted to a 256 colour digitised image – the reduction in colour palette can manifest itself as noise in previously smooth areas or as banding – roughly parallel stripes of solid colour – in gradients. If the digitisation colour depth is high enough – 24 bit for colour, 8 bit for monochrome – this noise will not be perceptible.
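
You can watch banding happen with a few lines of Python:

    # Quantisation in one line: throwing away low bits turns a smooth
    # gradient into chunky bands.
    def quantise(value, bits):
        step = 256 // (2 ** bits)
        return (value // step) * step

    gradient = list(range(0, 256, 16))
    print([quantise(v, 3) for v in gradient])  # 3 bits: 32-wide bands
    print([quantise(v, 8) for v in gradient])  # 8 bits: untouched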

 

Clipping/overload

If a digitisation system is set up with its input sensitivity set too high for the incoming analogue signal, those parts of the incoming waveform that exceed the maximum digitisable value will all be "clipped" to that value. In video, this manifests itself as an overexposed appearance, where light areas become glaring pools of pure white. Some digitising systems "wrap around" out of range values to the black end of the intensity scale, producing obviously unwatchable video in extreme cases and more subtle noise from slightly out of range input.

 

Digital signal degradation

Digital data is, theoretically, incorruptible and infinitely reproducible. All of those zeroes and ones are, however, stored on media which can and do become corrupted. Those video compression systems which can deal with errors in the data stream without aborting playback can generate various bizarre effects when confronted with "flipped bits". Rectangular blocks of image that suddenly decide to rotate all of their colours to completely inappropriate values, for example, are a dead giveaway.

 

Overactive compression

Video compression algorithms are powerful, but have very limited comprehension of the content of the video they’re working on. They can tell whether things are changing quickly or slowly and can see trends, but have no way of telling whether an image feature is really important and should be preserved, or insignificant and can be discarded. It is therefore up to the operator to set the compression parameters so that significant image features which appear unimportant to the algorithm are preserved. The classic example is the tennis game, in which the algorithm removes the ball because it doesn’t seem large enough to matter.

Lower priced compression systems may offer insufficient customisation features to allow certain video to be acceptably compressed without producing unworkably large files.

 

RAM flavours

Video card memory varies in quality and quantity. Quantity is simple enough – the more RAM a card has, the more colours it can display at a given resolution and the more extra space it has for ingenious ancillary functions. 1Mb of RAM is adequate for 24 bit operation at only 640 by 480 pixels, but for 1280 by 1024 in 24 bit you need 4Mb.

Things get more complex when the type of RAM becomes an issue. The kinds you’re likely to see on a video card at the moment are DRAM, EDO DRAM, VRAM and WRAM. What’s the difference?

DRAM, also referred to as Fast Page Mode or FPM DRAM, is the same sort of memory that most computers use for ordinary temporary data storage. "Dynamic" means the memory has to be refreshed many times per second, and when this refresh is occurring it can’t be accessed. DRAM can also do only one thing at a time – the computer can write to it, or read from it, not both. Since video cards simultaneously accept video data input from the host computer and send data to the RAMDAC for conversion to monitor-driving analogue data, DRAM has to serve one request at a time and reduces performance – DRAM based boards can’t handle the highest resolution, colour and refresh rate graphics modes. But DRAM is cheap.

For somewhat more money you can get VRAM, the most common kind of memory on high-performance graphics boards today. VRAM is dual-ported DRAM, which means it can be read from and written to at the same time. This allows read and write operations to each use the entire memory bandwidth, effectively doubling it. All other things being equal, a DRAM board that can do a given graphics mode at a 60Hz vertical refresh rate could do it at 120Hz if it were built on a VRAM architecture. There are several variants of the VRAM idea, including triple-ported designs.

VRAM still requires periodic refreshing, and is not significantly faster overall than DRAM. DRAM and VRAM cards with the same chipset will have much the same performance if they’re both doing the same number of colours, resolution and refresh rate. The VRAM-equipped card is capable of greater total throughput than its DRAM cousin, but if that greater bandwidth is not being used no great differences are apparent.

Window RAM or WRAM is the only kind of memory specifically designed for video card applications. It’s technically described as dual-ported, block-addressable RAM, and it takes advantage of the fact that windowing operating systems tend to work with a lot of rectangular areas of the same colour. Instead of requiring every memory location that needs to be changed to be individually, and laboriously, addressed, WRAM allows large blocks of RAM to be simultaneously set to the same value. It also uses fewer silicon components than VRAM, so it’s around 20% cheaper. And it can be clocked at up to 50MHz, versus the roughly 33MHz limit of DRAM and VRAM, which gives up to 50% more bandwidth.

Extended Data Out (EDO) DRAM is used for video and system RAM; it’s slightly more expensive than standard DRAM, but can be clocked at 40-50MHz and is faster at accessing memory locations close to each other, providing a net bandwidth of around 105Mb/S, versus 80Mb/S for DRAM.

Multibank Dynamic Random Access Memory (MDRAM) uses standard DRAM technology, configured in up to 32 banks, each of which has its own row and column structure. This means that different processes trying to access the same memory can be accommodated more quickly, accelerating screen refreshes, off-screen data rearrangement and standard windowing OS screen drawing functions.

 
