
In late 2016, reports flooded in from Moscow about a strange disturbance near the Kremlin. When passing by the fortified complex at the cold heart of the Russian government, drivers found the GPS systems in their cars had been suddenly spoofed.

In December, CNN confirmed that instead of showing the cars where they really were, cruising along the Moskva River, the GPS suddenly insisted they were 20 miles away, at Vnukovo International Airport.

When this news reached Todd Humphreys, halfway across the world in Austin, Texas, he couldn’t help but feel vindicated. The foremost mind in technological trickery (the foremost law-abiding mind, anyway), Humphreys was certainly unsettled by Russia’s actions. But after almost a decade testing the potential for devious transit takeovers in the lab, he tells Inverse he was excited to have finally found a “spoofing case documented in the wild.”

As head of the Radionavigation Laboratory at the University of Texas, Humphreys is the world’s leading expert in spoofing and jamming. Though his job sounds downright musical, it’s actually defined by pencils scratching paper and wheels screeching on pavement. That’s because Humphreys spends most of his time thinking up worst-case scenarios of planes, ships, and automobiles hacked by nefarious forces. He says that in an era where partially automated vehicles are already on the road and annual sales of fully autonomous cars are expected to hit 12 million by 2035, what happened at the Kremlin might soon seem quaint.

Footage of Todd Humphreys’s homemade spoofer.

“The world seems upside down when this little blue dot you’ve come to trust goes traipsing off without you,” he tells *Inverse*. “I saw opening up all of the problems that could happen if someone with my know-how didn’t have my restraint.”


Driverless Vehicles Are Uniquely Vulnerable

“The self-driving car doesn’t have ESP,” Humphreys says. “It gets information from its sensors. It determines its location from its sensors, if there’s a crash coming up ahead, if the light is green or red — from its sensors.” Right now, a hacker could send a confusing signal to a car, overriding the real data coming from satellites and showing the car somewhere else. But drivers are still in control, and they typically know where they are, regardless of what the GPS says.

In an autonomous car, however, the operating system acts on whatever data it receives. If that data is bad, a hacker can remotely send the vehicle off the road or steer it down a different course.

Though Humphreys is quick to assure people he thinks autonomous cars are inevitable and exciting, he says skepticism about driverless vehicles is far from misguided. In fact, these doomsday scenarios are totally plausible. That’s why his lab is working on technology that can spoof-proof a vehicle. But, Humphreys says, many autonomous car manufacturers have been reluctant to pay up, and some, like Tesla, have actually made safety modifications to their cars that could compromise security.

Last summer, a Tesla operating in Autopilot mode got confused, causing a deadly crash. It wasn’t caused by hacking, but the investigation into the tragedy did shape Humphreys’s work on intentional attacks. The accident report showed that, to the car’s front-facing camera, a white truck crossing its path was indistinguishable from the bright Florida sky, so the car careened into it. The car’s radar had actually recognized the threat, Humphreys says, but it was overruled by the blinded camera.

To rectify this, Tesla has reportedly changed its system: where the front-facing camera and the radar used to have to agree in order to trigger a change in direction, reports indicate that now either system alone can identify a problem and change the car’s course. Humphreys understands why such a change makes sense in context, but he says any move to reduce redundancy makes a car even more susceptible to hacking.

“Now I can just stop the car by just spoofing the radar, I don’t have to spoof the radar and the camera,” Humphreys says. “It’s an example of where, when you fix one problem… you perhaps make it less resistant to intentional attack. I’m fairly certain with a radar-spoofing device… we could stand by the side of the road and watch for Teslas and stop them in their tracks.”
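The trade-off Humphreys describes can be boiled down to two decision rules. This is a deliberately toy sketch, not Tesla’s actual logic: the function names and booleans are invented for illustration. Requiring the sensors to agree (AND) means a spoofer must fool both; letting either sensor trigger a reaction (OR) responds faster to real hazards but can be tripped by spoofing just one sensor.

```python
def should_brake_consensus(camera_sees_obstacle: bool,
                           radar_sees_obstacle: bool) -> bool:
    """Redundant scheme: both sensors must agree before the car reacts."""
    return camera_sees_obstacle and radar_sees_obstacle


def should_brake_any(camera_sees_obstacle: bool,
                     radar_sees_obstacle: bool) -> bool:
    """Revised scheme: either sensor alone can trigger a reaction."""
    return camera_sees_obstacle or radar_sees_obstacle


# A radar-only spoof (the camera sees nothing) is ignored under the
# consensus rule but stops the car under the either-sensor rule.
spoofed_radar = (False, True)
print(should_brake_consensus(*spoofed_radar))  # False
print(should_brake_any(*spoofed_radar))        # True
```

Under these toy rules, the roadside attack Humphreys imagines only needs to flip the radar input when the trigger is an OR, which is exactly the redundancy loss he warns about.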

*Originally published by Inverse.*