Hacking and heritage

Originally published 2008 in Atomic: Maximum Power Computing
as "Hacking in the real world".
This edition last modified 29-Aug-2015.


"Hack" is an overloaded word.

A hack can be a quick and clever piece of programming, or it can be a dumb and clumsy one. It can be a cunning way of making a device, technology or whole area of human endeavour better, or it can be the heart of a scam that steals from millions of people.

"White hat" hackers have been trying for decades to get people to call the bad kind of hacking "cracking", or at least something, anything, other than just "hacking".

But that battle's been lost. "Hackers" in the popular media today will usually be stealing credit card numbers, or (allegedly) scheming to destroy the Western world.

Authoritarian fantasies, lousy reporting and terrible movies aside, all real hacks, good and bad, have a long and fascinating history. Hacks in the context of computation go back as far as computation itself, which is much, much longer than the history of the electronic, or even the mechanical, computer.

When Diebold's Web site offered qualified buyers the chance to buy spare keys for their AccuVote-TS voting machines, for instance, the listing included a detailed picture of the actual keys.

Ross Kinard of SploitCast.com was, I imagine, rather pleased with himself when he thought of grinding blank keys to match that picture. And yes, it turned out that all of the keys were the same.

The details of this story are very modern, but devious key duplication is, of course, about as old as locks. Which is to say, several thousand years.

A large part of the history of hacking is repetitive, because the same old tricks can keep working over and over when people trying to secure a system - a computer, an office, an election - don't try hard enough.

In the computer-hacking world, there's not much interest to be found in the millionth Web site that doesn't stop people from typing in SQL commands as their username, or the millionth program with a buffer overflow vulnerability, or the millionth company that puts a file full of plaintext passwords on an open server which Google then cheerfully indexes for them.
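For what it's worth, the SQL trick only works because the site glues user input straight into a query string. A minimal Python sketch of the mistake (function and table names here are hypothetical):

```python
# The classic mistake: building SQL by string concatenation
# instead of using parameterised queries.
def naive_login_query(username: str, password: str) -> str:
    return ("SELECT * FROM users WHERE name = '" + username +
            "' AND password = '" + password + "'")

# An attacker types this as the "username":
evil = "admin'; --"
query = naive_login_query(evil, "whatever")
# Result: SELECT * FROM users WHERE name = 'admin'; --' AND password = 'whatever'
# The "--" starts an SQL comment, so the password check never executes.
```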

If you discover that some dumb Web site will treat you as having an account if you just append "?loggedin" to a URL, congratulations, now you're a hacker. But only about as much of a one as someone who can jog around the block is an athlete.

So never mind the boring stuff. Herein, I have chosen a few recent hacks that made big news - in certain circles, at least - all of which are new, but all of which also have their own, often lengthy, history.

/* This code copyright Euclid, 300 BC */

3D computer games use Pythagorean geometry to figure out distances. This makes it important that they have a very fast way to calculate inverse square roots - or, at least, approximate inverse square roots, close enough to keep the dungeon walls flat and connected to the floor.

So there was some excitement when John Carmack came up with just such a super-fast inverse square root function, in Quake III Arena.

(A reader's now pointed out to me that the actual parentage of the function is a lot less simple than that.)

You can read more about it here, but the take-home message is that this new approximation technique had just a teeny bit of history behind it. It was an ingenious addition to "Newton's method", discovered not by Helmut Newton or on an Apple Newton but by Isaac Newton, in the late seventeenth century.
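The widely circulated C version of the function translates into Python roughly as follows (Python used purely for illustration; the magic constant and the single Newton-refinement step are as in the circulated code):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) via the famous bit-trick plus one Newton step."""
    # Reinterpret the 32-bit float's bits as an integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The magic constant turns a shifted exponent into a good first guess.
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One iteration of Newton's method refines the guess:
    # y <- y * (1.5 - 0.5 * x * y^2)
    return y * (1.5 - 0.5 * x * y * y)
```

One refinement step already gets the worst-case error down to a fraction of a per cent, which is plenty for keeping dungeon walls where they belong.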

This isn't exactly the sort of breaking-into-the-Pentagon hack you might have expected me to start with, but I think it underlines the depth of history we're talking about, here. We're used to regarding eight-inch floppies as if they roughly coincided with the Mycenaean Empire, but the history of hacking is really the history of human cleverness, and that goes back further.

Bulletproof anonymity. According to some dude.

In 2007, Swedish security consultant Dan Egerstad kindly set up some "exit nodes" for the Tor anonymity network. Tor is a distributed "onion routing" system, through which the path of data is very difficult to track. For all intents and purposes, nobody can tell where data coming out of a Tor exit node entered the network, or what a person connected to an entry node is doing.

So Tor's a great way for people in repressive countries to do stuff on the Web that their government wouldn't like. It's also a great way for office workers to look at porn.

Dan Egerstad "sniffed" the traffic coming out of the exit nodes he operated. In that traffic was all sorts of confidential information, most notably including usernames and passwords for hundreds of e-mail accounts belonging to the staff of embassies all over the world.

The Tor network prevented Egerstad from seeing where this data had come from, but there was nothing stopping him harvesting all the information he liked from the flow of anonymised, but not encrypted, data coming out of the network through the boxes he was operating. And looking at that data often made the identity, if not the exact location, of its origin very obvious.

The Sydney Morning Herald called the Egerstad affair "the hack of the year"; sniffing unencrypted data passing through a computer you own is not, if you ask me, actually much of a hack, even if it does get you questioned by heavy-handed policemen. But the boringness of the hack is more than made up for by the lesson it teaches: Don't use an interceptable, unscrambled communication system to transfer confidential data.

Data inside the Tor network is highly secure, but it's coming from and going to computers outside the network, and then it's as wide open as any other plain Internet connection.

People put confidential data in accessible plaintext all the time. You've probably done it several times. Do you, for instance, shred your ten-year-old bank statements when you throw them away? If you don't, and if that account is still open, anybody who gets hold of the statement can use the information on it to help them steal your identity, or just make fake cheques.

And shops just won't stop throwing away unshredded credit-card carbons. That's why credit cards have all sprouted those little Card Security Code numbers, usually on the back of the card; there was no other way to stop dumpster divers from collecting valid card details. Online stores that don't ask for a Card Security Code will still accept such stolen details.

Oh, and have you ever put confidential information in a plain e-mail? Admin staff at your ISP can, but probably won't, read any mail you send through their servers. Every Internet relay point for traffic from your ISP's server to the recipient's server - that'll probably be at least two or three companies on top of the ones that own the sending and receiving servers - can also trivially sniff unencrypted SMTP e-mail. And then there's whoever runs the receiving mail server.

The single greatest weakness

"Good afternoon, Mr Bloggs. My name's Steve, and I'm calling from Visa International. Have you made any large credit card purchases in Estonia lately? No? Well, no problem, we'll clear that right up for you. I've got your card number here - it's 3141 5926 54- it isn't? Oh, sorry - could you just read it off for me...?".

OK, maybe the above script wouldn't work on you. But you'd better believe it'll work on someone. Steve'll be buying a dozen iPods using someone else's Visa details before the day is out.

Calling someone on the phone and persuading them to tell you their financial information, e-mail password, alarm code or what-have-you is one of those boring hacks that work over and over again. Phone scams, like sending people fake traffic tickets or bills for things they never bought, are really just confidence tricks, not hacks as such. But confidence tricks are all forms of "social engineering", a category into which an amazing number of hacks fall.

Technological social engineering hacks tend to involve laying some sort of con-job trap. Phishing, for instance, is immensely popular - and tricking people into entering login details on a fake Web page that looks as if it belongs to their bank, or whoever, is classic social engineering.

Phishing traces its lineage back to door-to-door impostors and fake institutions of all kinds. Some of those scams don't work any more - the fake betting shop depicted in The Sting wouldn't work with today's ubiquitous instant communications, and "bucket shops", brokerage firms that deal with customers but don't bother to actually buy or sell any real securities on their behalf, also died out long ago.

But other fake institutions are still around today. All of the best Nigerian advance-fee scammers have a fake bank at their disposal, for instance.

"Calvary greetings! The Amalgamated Blue Chip Commercial Bank of My Dad's Garage will be very pleased to offer you a secure escrow service, to make perfectly sure that the advance fee you're sending does not vanish before your seventy million dollars are delivered!"

Social engineering can also be very simple.

Want to bring down a company's server? Well, you could look for open modem lines and unpatched vulnerabilities and try to hack the Gibson and so on. Or you could just pay someone in the building to unplug a cable.

The most elegant social engineering hacks involve almost no lying at all.

Suppose, for instance, that you'd like to get people inside a given business to run some sort of Trojan or other, so you can access their network or otherwise steal their secrets.

Why go to all the trouble of "real" hacking, trying to find externally accessible vulnerabilities in what may be a very well-secured company network, when random employees will be perfectly happy to run your software for free?

Just gather some cheap USB flash drives (or CD-ROMs, but USB drives are more attractive), write something tempting on them, and set them up to auto-run your software using U3 or something. Or just put the software's installer on the drive and call it "hotstockpicks.exe" or "amazinglesbian.avi.exe" or something.

Now, leave those drives lying around in the target company's parking lot. Or in a public parking lot, if you don't care whose Internet-banking login you steal.

Professional attackers customise the software to send specific information to them via one or another seldom-firewalled avenue, but there are plenty of off-the-shelf options for less advanced attackers.

This method of attack seldom makes the headlines, but it's about as old as floppy disks - though it really only came into its own when offices started being networked and Internet-connected as a matter of course.

There's no way to foil social-engineering attacks in general. You can stop them in particular situations by educating your users (which is conceptually simple but often extremely difficult to actually do), or by creating systems that use authentication methods - like fingerprint readers - which users cannot give away to an attacker, no matter how much they want to.

Even if you make huge investments in cryptographic key-fobs and iris readers and DNA analysers, though, you still have to make sure that every link in your security chain is just as strong. The continuing prosperity of TV evangelists suggests that now, as for the last hundred thousand years or so, you may have a lot of trouble fixing those last few weaknesses.

Cryopreservation - it's not just for human heads!

Your work PC's whole hard drive is encrypted. Every time you start your computer you type a separate decryption password to access the drive. Without it, everything but some early-boot system files might as well be (pseudo-)random noise.

One day someone walks into your office while you're at lunch and your computer is in standby. He removes the side panel from your PC, and turns the computer off while spraying the RAM modules with an upside-down can of "air duster", which has a boiling point of about -25°C. Then he removes the hard drive, and the frosty memory modules, from the PC. He wraps the RAM with newspaper and puts it in his pocket. He replaces both the RAM and the drive with identical units, replaces the side of the computer, turns it back on, and departs. Elapsed time: Maybe two minutes.

When you come back, you'll have a mysteriously blank hard drive. But that's all the evidence there'll be, if nobody checks serial numbers on the drive and RAM. Nobody's very likely to do that, because everyone knows the drive was encrypted. Even if some James Bond stuff did happen while you were at lunch, it's not as if the stolen drive could possibly be of any use to anyone.

Down in the parking lot, the attacker slots the still-chilly memory modules, which at low temperatures can preserve their contents for several minutes, into a computer powered from his car battery via an inverter.

That computer boots a bare-bones OS that dumps the RAM contents to a file. That only takes about 30 seconds per gigabyte.

At his leisure, the attacker can now scan the dump file for the distinctive numeric fingerprints of a variety of different encryption keys.
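One simple-minded way to flag candidate key material in a dump is to look for unusually high-entropy blocks, since good keys look like random noise against mostly structured memory. A toy Python sketch (block size and threshold are arbitrary choices of mine; the real tools look for key-schedule structure, which is far more reliable):

```python
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Bits of entropy per byte over the block's byte distribution."""
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in Counter(block).values())

def candidate_keys(dump: bytes, key_len: int = 16, threshold: float = 3.5):
    """Flag aligned blocks whose byte entropy looks key-like."""
    hits = []
    for offset in range(0, len(dump) - key_len + 1, key_len):
        block = dump[offset:offset + key_len]
        if shannon_entropy(block) > threshold:
            hits.append((offset, block))
    return hits
```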

(He could have done the same thing quite quickly by just booting a thumb drive set up to do the same thing on your computer. It's also possible to access main memory of a running Mac, or possibly even PC, from another computer via FireWire. But let's say he didn't have the time for that. Or he just loves feeling like a secret agent.)

If and when the attacker finds your encryption key, he can now decrypt your drive. Even if RAM data decay has caused a small percentage of the key to be lost, rebuilding the rest can be the work of minutes, versus uncountable zillions of years to brute-force the whole key.

(If you're concerned about this, by the way, you can just turn your computer off or put it in "hibernate" mode when you're away. That'll power down the RAM, which will lose all of its contents within a minute or two at most. Probably only a few seconds, at normal PC interior temperatures.)

Usually, attacks that require physical access to the target computer aren't very interesting, but this one's a classic hack. That's partly because it defeats technology specifically created to protect against attacks by people with physical access, and also because it's a real-world scenario and thus includes multiple elements.

The core of it is a kind of hardware hack, based around the surprisingly long "data remanence" of dynamic RAM. Memory may need to be refreshed millions of times a second while the computer's running, but it turns out that it does actually retain its contents for quite a long time when it's powered down, especially at low temperatures. And powering it back up again doesn't erase that data.

But there's more to this attack than that. The attacker has to get access to your computer, which probably involves some social engineering to get him into your office. And finding the encryption key (or other desirable data) in the RAM dump requires a bit of software ingenuity. It's not exactly cheating at roulette by using a hand-made 8-bit computer hidden in your shoe, but it's still a pretty neat trick.

(For more information on this sort of attack, including the finding and repair of encryption keys and the source code with which you can do it yourself, check out the Web site of the nine researchers at Princeton's Center For Information Technology Policy who discovered it.)

The eavesdropper at the end of the rainbow

Some algorithms - for instance, the Luhn formula that verifies the basic integrity of credit card numbers - are computationally so trivial that you can do them with a pencil and a small piece of paper.
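The Luhn formula really is that trivial. Here it is in a few lines of Python (illustrative only, not any particular bank's code):

```python
def luhn_valid(number: str) -> bool:
    """Check a card number against the Luhn formula."""
    total = 0
    # Working from the right, double every second digit,
    # subtracting 9 whenever doubling carries past 9.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```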

Others - like brute-force cracking of the better forms of cryptography - are so computationally intensive that we can say with confidence that no computer based on anything resembling current computer science will ever be able to do the job.

Between these extremes are hacks that someone years, or decades, or centuries ago, figured out were possible - but only with far more computational power than existed at the time.

And now, that computational power does exist.

"Rainbow tables" are a great example of this. A rainbow table is a lookup table for reversing hash algorithms.

Hashing is a standard technique used to, for instance, secure passwords. No sane software stores user passwords to disk in plaintext; instead, it uses a hashing algorithm to turn each password into a string of pseudo-random data, which is computationally extremely difficult to turn back into a password. When you type your password in, the system hashes it, and checks it against the stored hash. Storing or transmitting the hashes themselves is quite secure, so many systems do.
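The hash-then-compare scheme looks like this in miniature (SHA-256 chosen arbitrarily for illustration; a real system should also salt each password and use a deliberately slow hash):

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way: easy to compute, computationally very hard to reverse.
    return hashlib.sha256(password.encode()).hexdigest()

# Store only the hash...
stored_hash = hash_password("hunter2")
# ...and at login, hash what the user typed and compare.
login_ok = hash_password("hunter2") == stored_hash
```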

One of those systems is the GSM digital mobile phone standard, used all over the world, including here in Australia. One of the big selling points for GSM was that it was encrypted and secure. You could listen to conversations on the old analogue phones with a radio scanner, but GSM eavesdropping was impossible.

Until now.

A rainbow table is a big list of possible inputs - passwords, encryption keys, whatever - and their corresponding hashes. If a given hash is present in the rainbow table, you can look up the corresponding original string.

To make a rainbow table you need to grind through the hashing algorithm for every string you want to be able to reverse. This requires lots of processing power, and the final table is likely to be very large.
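A genuinely naive version - a full precomputed table, without the hash/reduce chains that give real rainbow tables their enormous storage savings - fits in a few lines of Python (charset and key length here are toy-sized on purpose):

```python
import hashlib
from itertools import product

def build_table(charset: str = "abc", length: int = 3) -> dict:
    """Precompute hash -> plaintext for every string over the charset."""
    table = {}
    for combo in product(charset, repeat=length):
        plaintext = "".join(combo)
        table[hashlib.md5(plaintext.encode()).hexdigest()] = plaintext
    return table

table = build_table()
stolen_hash = hashlib.md5(b"cab").hexdigest()
recovered = table.get(stolen_hash)  # "cab"
```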

Rainbow tables for many serious cryptographic algorithms would require more bits of storage than there are particles in the universe, but some algorithms, including the A5 algorithms used by GSM phones, are much more attackable. With custom Field-Programmable Gate Array (FPGA) computers and a few terabytes of storage, a rainbow table for even the more secure A5/1 algorithm (here in Australia, we only use the far less secure A5/2...) can be yours.

That hardware is slightly out of the price range of home users at the moment, but not by much. If the NSA doesn't already have a universal GSM cracker that fits in a briefcase, you've got to wonder where all of those billions are going.

Rainbow tables are one of those things that people have known about approximately forever, but for most of that time have regarded as obviously impossible. Like flying machines.

Dick-pill supercomputers

In early 2008, in a development which strangely did not attract the attention of the TOP500 supercomputer list, a new distributed computing network became the biggest in the world.

I feel fairly confident in saying that the reason that network didn't make it into the list alongside BlueGene/L and Sandia's Cray Red Storm is that it exists primarily to send penis-pill spam.

The new network was colloquially referred to as Mega D, after one of the pills that its principal users used to sell. Mega D surpassed the Storm botnet - also used for pretty much nothing but spamming - and in the next couple of months, it in turn was beaten, by the Rustock and Srizbi botnets. Then along came Kraken, and by now some other 'net is probably the king. The leaderboard changes monthly.

What won't change much is the proportion of the world's spam - now more than a hundred billion messages a day - sent by just the top few botnets. That's well over half of all spam, and maybe as much as 85%.

All of these botnets are composed of hordes of Trojan-infected PCs, some of which may be members of numerous botnets, large and small. And the supercomputer comparison is actually facetious, because botnets generally don't actually use much of the vast processing power available to them.

Instead, the major botnets have thus far used their millions of nodes to send spam, and to host spammy Web sites - fake pharmaceuticals, fake "discount" software, fake watches, even fake marijuana - and the nameservers for those sites. There've also been occasional ventures into Distributed Denial of Service attacks, and of course the sending of e-mail with more copies of the relevant Trojans attached.

Botnet operators also use their zombie slaves to "click" on Web ads, or install spyware and/or adware. Watch this space, though; I'm pretty sure bot-herders have only scratched the surface of what you can do with 25 million computers about whose owners you do not care.

The history of the botnet has two branches. Viruses on one side, worms on the other.

There's little conceptual difference between a botnet client that installs itself when some gimboid runs that amazing new 53-kilobyte version of BioShock he just downloaded (and yes, that's social engineering again), and the elegant old floppy boot-sector viruses. This has been a real case of newer species displacing older ones; plain old-fashioned viruses are very thin on the ground these days, but botnets, spyware, adware and other malware are an epidemic, as anybody who's been asked why Great-Uncle Bert's computer now takes 37 minutes to boot will know.

On the other branch of the botnet family, you've got Internet worms back to the original "Morris worm" of 1988. That one was actually only intended to figure out the approximate size of the Internet at the time, but Robert Tappan Morris, then a Cornell student, made the mistake of allowing the program to infect one computer multiple times. Infected systems rapidly bogged down into uselessness.

Curiously, the very first practical network worm was, like the botnet Trojans, intended to create a distributed network. It was written in 1978 by two Xerox PARC researchers, and it propagated across the local PARC network, looking for idle PCs that could be given something useful to do.

1978 was also, coincidentally, the year of the very first e-mail spam, an invitation to a DEC product presentation.

There really is nothing new under the sun.
