Computers in space
Originally published in Atomic: Maximum Power Computing. Published here 28 October 2004. Last modified 03-Dec-2011.
NASA was created in 1958, when people were just getting used to the idea of a "computer" being a roomful of valves, not a person with an adding machine.
At the time, the idea of computers in space was pretty ridiculous, and the idea of computers in unmanned spacecraft was very ridiculous. Valve mainframes needed a team of service personnel to keep them running.
Transistors certainly existed in 1956 - that was the year in which the people who invented them, back in 1947, won the Nobel Prize in Physics for doing so. The first all-transistor computers were still two years away, though.
But by 1976, the Viking probes made it to Mars (a ten-month journey), and landed, and sent back pictures for years on end - none of which would have been possible without their meagre on-board computing power.
As you'd expect, the capabilities and complexity of the computers we shoot into space have increased dramatically over the years. Computers have become part of the fabric of space agencies; designing a spacecraft in the first place can eat up as much computing power as designing any other complex system, and aerospace contractors have a great appetite for computerised machine tools, and-
Yes, yes, I hear you say, but what about the computers in the spacecraft?
Hm. Yes. About them.
The computers that get manned and unmanned spacecraft into space and keep them there tend, I'm afraid, to be boringly simple.
Boringness is a very desirable quality in a spacecraft control system. Rockets are quite exciting enough already without a crashed nozzle gimbal controller causing them to arc over and plough into Kansas at Mach 15. Re-entry is a more than sufficiently stimulating experience even when bad attitude-control code doesn't cause the craft to fall like a billion-dollar leaf.
There's a lot of quite advanced computing technology being flung out of the atmosphere today, but the hardware doing the flinging may, in fact, have little more intelligence than your car's engine-control computer.
Little brains, big engines
You don't need a great deal of computing power to perform the basic tasks that define a spacecraft - getting out of the atmosphere, and staying there.
Actually, you don't need any computers at all. If Germany had done a bit better in World War II, something like the projected Sänger Amerika Bomber could, in theory at least, have made it to orbit with 1940s technology. Manned travel any further out would still have had to wait until at least the 1960s, but orbit ain't that hard. Even aiming for the flyspeck-in-a-football-stadium that is another planet can be done with slide-rule maths, if you're sending an unmanned probe.
Exact answers to the multi-body problems of orbital dynamics are, famously, exceedingly hard to reach. But it's quite adequate to get pretty much the right answer, light the blue touch-paper, see where your spacecraft ends up going, and then do some corrective thruster burns to tidy up its course.
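If you want a feel for the scale of those tidy-up burns, the vis-viva equation will do. Here's a rough Python sketch - the constants are standard textbook values, and the 20-kilometre miss is invented purely for illustration:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_speed(alt_m):
    """Speed of a circular orbit at the given altitude (vis-viva with e=0)."""
    return math.sqrt(MU_EARTH / (R_EARTH + alt_m))

# Suppose the launcher leaves us 20 km low of a planned 400 km circular orbit.
planned = circular_speed(400e3)
actual = circular_speed(380e3)
print(f"planned {planned:.0f} m/s, actual {actual:.0f} m/s, "
      f"correction of order {abs(actual - planned):.0f} m/s")
```

Being 20 kilometres low costs you a velocity correction of the order of ten metres per second, out of nearly eight thousand - which is why "light the blue touch-paper and see" is a perfectly workable strategy. (Real correction planning also worries about burn timing and transfer orbits, but the scale of the answer doesn't change.)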
36-storey-tall launch vehicles tend not to have pinpoint accuracy, anyway. Even if your original flight plan's perfect, you still have to tweak the actual result if you're aiming for anything more than a basic orbit.
The computational simplicity of basic spacecraft tasks was good news back in the 1960s and 1970s. Back then, very modest computing power by current standards meant a monstrous mainframe system, which you couldn't possibly loft up the gravity well.
NASA's Mercury capsules of the early 1960s had no computers at all. For re-entry, retro-rocket timing and attitude information was radioed to the spacecraft from a tapes-and-teletypes computer centre on the ground.
The later two-man Gemini capsules had their own rudimentary computer (capable of seven thousand instructions per second!), which helped with tricky tasks like rendezvous operations. Without that, the Apollo moon missions wouldn't have been possible.
The Apollo missions themselves were a computing landmark, too. The Apollo system computer prototypes consumed, at the time, roughly two-thirds of the world's total supply of integrated circuits.
Enough antiques
OK, fast-forward to the present day, and the International Space Station.
The ISS is packed with processors to keep its crew happy, or at least alive, but at the core of its operational hardware are the Command and Control Computers.
They're 80386SX-20s.
But they've got 80387 co-processors! A couple even have hard drives!
Ahem.
Well, the ISS doesn't need a whole bunch of brains. It doesn't go anywhere but round and round.
What about the Space Shuttle?
Well, the Shuttles were originally equipped with five parallel redundant IBM AP-101 general-purpose computers. Each of those was equipped with a mighty 1,310,720 bits of ferrite core memory (more had to be tacked on to accommodate the elephantine 700-kilobyte size of the Shuttle's control software...), and was good for a neck-snapping 0.48 MIPS and 0.325 MFLOPS.
Magnetic core memory is still used in mission-critical space computing applications, because it's radiation-proof and non-volatile.
As you'd expect, Shuttles these days have greatly upgraded computing power. Apart from a slow increase in the population of special-purpose chips all over the orbiters, they got a major upgrade in 1991.
Since then, they've been running the mighty AP-101S.
It's a whole three times as fast as the AP-101. And has some more memory. And draws less power.
That's about it for its improvements.
One reason why the Shuttles haven't been refitted nose to tail with cutting-edge hardware is that they just don't need to be. The (slightly updated) old gear runs the vital systems just fine.
The other reason has to do with just how vital those systems are, and just how hazardous space is to computers.
The challenge
Making computers work in space is not too hard. They have to be able to survive the vibration and G-forces of launching, but a laptop can do that with a bit of padding. It's making computers keep working in space that's the tricky bit.
The basic challenge of space is remoteness. A submarine only 200 metres beneath the surface of the ocean is in a considerably more dangerous environment than a spacecraft 400 kilometres up (20 atmospheres of water pressure trying to get in, instead of only one atmosphere of air pressure trying to get out...), but it's usually not very hard for a sub to return to the normal sea-level world. And if it can't, and it's just stuck on the continental shelf, it's not terribly hard to send a rescue vessel down. The shelf really isn't very deep - if the Kursk had been balanced on its nose on the bottom, its tail would have stuck out about fifty metres above the waves.
Until we get space elevators and/or antigravity happening, though, space is a lot more difficult to get to. It's that simple fact that's the biggest obstacle to reliable space computing. You've got to take what you need with you the first time.
Some of the obvious challenges of space, in contrast, turn out not to be such a big deal.
Weightlessness, for instance, is not a problem for most perfectly ordinary desktop computer components. OK, old ball mouses won't work, and printers may get paper jams, but basic PC components will work just fine, provided they don't rely on convection for cooling. Hot air doesn't rise when no direction is up. If everything's fan-cooled, it'll be fine.
Unless, of course, there isn't any air.
An ordinary PC or laptop won't work at all in vacuum, initially because the hard drive heads need an air cushion to float on. Many people think hard drives are hermetically sealed, but they're not - they just have no through-flow ventilation. In vacuum, a hard drive will instantly eat itself when turned on.
Hard drives typically have a maximum operating altitude rating of about 2500 metres, meaning they don't like pressure much below 11 pounds per square inch (psi). Normal sea level atmospheric pressure is 14.7psi, but many manned spacecraft use lower air pressure. That makes it easier to keep them sealed, but can rule out conventional hard drives for storage. Even if you run at full atmospheric pressure, you don't want all your computers to die if you lose some air.
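Those altitude and pressure figures line up with the standard-atmosphere model, if you want to check; a quick sketch in Python:

```python
def pressure_psi(alt_m):
    """International Standard Atmosphere pressure (troposphere only), in psi."""
    p_pa = 101325.0 * (1.0 - 2.25577e-5 * alt_m) ** 5.25588
    return p_pa / 6894.757  # pascals per psi

print(f"sea level: {pressure_psi(0):.1f} psi")    # 14.7 psi
print(f"2500 m:    {pressure_psi(2500):.1f} psi")  # about 10.8 psi
```

So a drive rated to 2500 metres is indeed a drive that gets unhappy somewhere below 11 psi.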
For this reason, good old tape's used for secondary storage in many space computer systems, including the Shuttle's.
Even with solid-state disks, though, conventional computers won't work for long in vacuum. Without air, there's also no air cooling. Many normal computer components will overheat without air flow to cool them; purely radiative cooling is a lot less effective than even passive convection.
And then, there's radiation.
One of the oldest excuses for mysterious system crashes here on earth is "cosmic ray strike". Some high-energy particle from a supernova millions of years ago whizzes through the vacuum, whacks into a RAM chip, and either it or the spray of secondary radiation from its impact flips one or more bits in the memory. If error detection and/or correction doesn't repair the damage, hilarity ensues.
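The "error detection and/or correction" in question is normally done in memory hardware with a somewhat beefier code, but the classic Hamming(7,4) code shows the principle: three parity bits protect four data bits, and any single flipped bit can be located and flipped back. A minimal sketch:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # bit positions 1..7

def hamming74_correct(c):
    """Locate and repair a single flipped bit; returns the corrected codeword."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity check over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3      # syndrome is the 1-based error position
    if pos:
        c[pos - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
hit = list(word)
hit[4] ^= 1                          # simulated cosmic-ray bit flip
assert hamming74_correct(hit) == word
```

Real ECC memory typically uses a single-error-correct, double-error-detect (SECDED) code over whole 64-bit words, but the idea is the same: as long as only one bit per protected word gets whacked, no hilarity ensues.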
Cosmic rays have the energy to make it all the way to the planet's surface, but it's very unlikely that they'll actually cause problems for terrestrial computer systems. They're a bigger radiation source outside the atmosphere. Space computers also have to deal with the sun's "solar wind", which is composed of electrons, protons (hydrogen nuclei), and small doses of heavier nuclei. The solar wind sleets out from the star all the time, though it's stronger when there's lots of sunspot or solar flare activity.
The earth's magnetic field traps solar-wind particles in the Van Allen radiation belts, which stop most of them making it to the atmosphere. But the belts aren't all that high up; the inner belt dips to within a couple of hundred kilometres of the surface over the South Atlantic, so even spacecraft in low earth orbit plough through it, and it's a significant source of spacecraft radiation exposure.
Shielding ordinary commercial computing gear (and astronauts) against even the mild radiation of low earth orbit has been officially described by NASA as "futile". You just run the thing. If it crashes, you reboot it. Deal with it. Don't use it to control your oxygen supply.
For more critical systems, spacecraft use special radiation-hardened versions of current CMOS chips (hardening can be achieved by something as simple as reducing the transistor density), or they stick with good old bipolar transistor technology, which is inherently much less radiation-sensitive.
A radiation-hardened 80C85 (the CMOS version of the Intel 8085) did the thinking for the Sojourner rover on Mars. The Pathfinder lander and the later Spirit and Opportunity rovers, though, got a lot more grunt; they all use the popular RAD6000, which is a hardened version of the 25-MIPS IBM POWER chip that came before the PowerPC.
From your desk to LEO
It's not all hardware older than Britney Spears up there. Actually, people use regular laptop PCs on spacecraft, too.
We earthbound misfits like laptops for the same reasons astronauts do - compactness, convenience, somewhere to store your digital photos. Laptops are good for space use for one more reason, though: Even when they're not running from battery power, they don't draw a lot of juice.
Electrical power is a problem for spacecraft. They often don't have a lot of it, and they don't want to use too much, even if they've got plenty.
Lack of power has to do with where spacecraft get it. Solar panels don't work when something - like a planet - is between them and the sun. Fuel cells don't come with limitless fuel. Radioisotope thermoelectric generators can run for years without maintenance, but they don't make a whole lot of juice.
The reason why it's a bad idea to use a lot of power in space even if you've got it coming out of your ears is that every watt of power consumed aboard a spacecraft gives a watt of heat that then must be disposed of, along with heat from the sun. (Reflective foil on spacecraft is there to stop the sun warming them up.)
A spacecraft hanging in vacuum is essentially a huge Thermos bottle, and getting rid of waste heat is a major problem. Many spacecraft have big black radiator panels for this purpose; they often resemble solar panels, but are aligned edge-on to the sun.
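The Stefan-Boltzmann law tells you how big those panels have to be. A toy example - the 100-watt load and 0.9 emissivity here are invented for illustration, and real radiator sizing also has to account for absorbed sunlight, view factors and both faces of the panel:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(watts, temp_k, emissivity=0.9):
    """One-sided panel area needed to radiate `watts` of heat at `temp_k`."""
    return watts / (emissivity * SIGMA * temp_k ** 4)

# Dumping 100 W of electronics heat from a panel held at 300 K:
print(f"{radiator_area(100, 300):.2f} m^2")  # about 0.24 m^2
```

A quarter of a square metre per hundred watts doesn't sound like much, until you remember that every payload, transmitter and crew member on board is bidding for radiator space too.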
Most of the heat sources on spacecraft aren't computer-related - solar panels and microwave transmitters are major culprits. But every little bit helps.
Hence, laptops with modest CPUs, not Prescott P4s.
The future
So far, space computing has leaned strongly towards Keeping It Simple, Stupid.
Sure, there are super-sophisticated science and communications and spy satellites up there, but they do a lot more data acquisition, delivery and relaying than they do processing. Their technology's in the sensors and antennas, not in the computers. If you've got enough microwave-link bandwidth, there are good reasons to do computational gruntwork on the ground, where it won't cost you a hundred million dollars to get someone to replace a toasted expansion card.
We live, however, at the beginning of a new age of private space exploration. The big space-agency bureaucracies have, historically, been about as enthusiastic about private enterprise in space as they've been about Catastrophes At Take-Off. But their stranglehold is slowly, slowly loosening.
There's nothing magical about private enterprise that'll exempt its orbital (and beyond) IT ventures from existing problems with power, heat, radiation-hardening and software reliability. But just having a whole lot more people working on the problems can only help, and cheaper launch systems will let more people loft more satellites for less money, making the higher reliability of old, slow systems less critical.
Moving more MIPS into space will, for instance, allow us to create actual space-based server networks, giving efficient extraterrestrial data routing for all, with the signal touching dirt only at the endpoints. Iridium's there already, but it's expensive and slow. A mere 128 kilobits per second would be a vast improvement.
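Part of the appeal of low-orbit routing is simple light-speed geometry: even ignoring every processing delay, bouncing a signal off a geostationary satellite eats a noticeable fraction of a second, while a low-orbit hop costs almost nothing. A back-of-the-envelope sketch, assuming the satellite's straight overhead:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def bounce_ms(altitude_km):
    """Milliseconds for one hop up to a satellite and straight back down."""
    return 2 * altitude_km * 1000 / C * 1000

print(f"LEO (400 km):     {bounce_ms(400):.1f} ms")    # about 2.7 ms
print(f"GEO (35,786 km):  {bounce_ms(35786):.1f} ms")  # about 239 ms
```

Which is why Iridium-style low-orbit constellations, not big geostationary birds, are the plausible skeleton of any extraterrestrial data network.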
It's early days yet. To win the much-publicised $US10 million X Prize, all Scaled Composites had to do was loft three people (actually, one person plus ballast equivalent to two passengers, who could have been sitting in the other two seats but, in a massive vote of confidence in SpaceShipOne's engineering, weren't) to 100 kilometre altitude and bring 'em back alive, twice in a fortnight. Straight up and straight down was fine; no orbit was needed. Making orbit is rather harder than just going up and down, and getting back from orbit involves real be-your-own-meteor re-entry, which SpaceShipOne also avoided.
There are private companies making real orbital launch vehicles, though, even as the chipmakers we know and love reduce watts-per-MIPS and push into other new, less space-sensitive technologies. Optical computers should care a lot less about ionising radiation.
Real-time webcam footage from space tourists? LAN parties with computers stuck to all six walls with Velcro?
Watch this space.