Autonomous vehicles are a pretty slick concept. So, too, are lightsabers and X-wings. Unfortunately, none of them is likely to wind up in your arsenal anytime soon, despite what you may have read in recent media. The reason? There's simply more going on in your average commute than today's systems can adequately (read: safely) navigate.
So where then do we stand? Will you be driving that autonomous Tesla one year from now? Five? As it turns out, built-in cybersecurity solutions may make up a far bigger piece of this puzzle than navigation algorithms do. To understand why, let’s take a look at the current state of the self-driving car.
Plowing the field
Sure, you've seen headlines about Google's autonomous army training on closed-off courses or massive trucks hauling pallets of golden beer down the highway without human intervention, but where's your autonomous Uber? As John Deere has discovered, things are more complicated than they seem.
The tractor company's gargantuan machines have a simple enough job, at first glance: drive a straight line, turn around, repeat. The perfect use case for a young technology like autonomous vehicles, right? But for some 20 years, John Deere has been testing this hypothesis, and its systems still can't fully replace human drivers.
Even after throwing GPS, tracking sensors, image sensors, telematics, and the kitchen sink at field navigation, problems persist. What the company has discovered is that dynamic variables such as dust, weather, and other harvesting equipment require more input handling than current systems can manage.
Danny Shapiro, senior director for automotive at Nvidia, explained it succinctly to Business Insider.
“As a human you have senses, you have your eyes, you have your ears, and sometimes you have the sense of touch. You are feeling the road,” he told the source. “So those are your inputs, and then those senses feed into your brain, and your brain makes a decision on how to control your feet and your hands in terms of braking and pressing the gas and steering. So on an autonomous car, you have to replace those senses.”
We’ll need a little more oomph in our computing capabilities before these senses can be accurately replicated.
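The pipeline Shapiro describes, senses feeding a brain that commands hands and feet, can be sketched as a toy control loop. Everything below is illustrative: the sensor fields, thresholds, and commands are invented stand-ins, not any real autonomous-driving stack.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """Simplified stand-ins for camera, radar, and speedometer inputs."""
    obstacle_distance_m: float
    speed_limit_kph: float
    current_speed_kph: float

def decide(frame: SensorFrame) -> dict:
    """Toy 'brain': map fused sensor readings to actuator commands."""
    if frame.obstacle_distance_m < 10.0:
        return {"throttle": 0.0, "brake": 1.0}   # obstacle close: full stop
    if frame.current_speed_kph > frame.speed_limit_kph:
        return {"throttle": 0.0, "brake": 0.3}   # over the limit: ease off
    return {"throttle": 0.5, "brake": 0.0}       # clear road: cruise

# Obstacle 8 m ahead: the loop commands an emergency stop.
print(decide(SensorFrame(8.0, 50.0, 40.0)))
```

The hard part, as the article notes, isn't the loop itself but producing sensor readings as rich and reliable as human perception to feed into it.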
Built-in cybersecurity solutions: the missing link
Replacing the natural sensing and reacting ability of human drivers is obviously a large hurdle to clear before we put our kids on a school bus with no driver, but cybersecurity may be an even bigger one. How can we be sure that the computer brain behind our vehicle's steering isn't controlled by some malicious hacker?
The answer: We really can't. At least not yet. In fact, The Hacker News recently shared research that shows just how insecure this technology can be. Researchers were able to exploit autonomous vehicles' reliance on image recognition to perform some clever trickery.
In one case, researchers were able to fool cars into thinking stop signs were actually speed limit signs at a 100 percent success rate. That's right: If you were driving (ahem, riding in) an autonomous vehicle today, a few stickers on a stop sign could cause your car to accelerate right through it. Not exactly safe.
If altering street signage is enough to fool today's advanced autonomous technology, what could a concerted hacking attempt produce? A scary thought indeed. All of this, of course, is to say that we're likely a ways off from perusing the news while our autonomous vehicles navigate us safely to the office.
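The sticker trick works because image classifiers can be pushed across a decision boundary by small, targeted changes to their input. Here's a minimal sketch of that idea using a toy linear classifier; the weights, features, and step size are all made up for illustration and bear no relation to the actual research:

```python
import numpy as np

# Toy linear classifier: score > 0 means "stop sign", else "speed limit".
# Weights and bias are invented for illustration only.
w = np.array([0.9, -0.4, 0.7])
b = -0.2

def classify(x: np.ndarray) -> str:
    return "stop sign" if x @ w + b > 0 else "speed limit"

x = np.array([0.8, 0.1, 0.3])   # a "stop sign" image, reduced to 3 features
print(classify(x))              # correctly reads: stop sign

# Adversarial nudge: step each feature against the sign of its weight,
# the direction that most lowers the "stop sign" score -- the digital
# analogue of a few well-placed stickers.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print(classify(x_adv))          # now reads: speed limit
```

The perturbation is small and structured rather than random, which is why a physical sign can look nearly unchanged to a human while flipping the machine's answer.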
That being said, we can take solace in the fact that great headway is being made in other areas that could transition nicely to autonomous technology. Things like self-healing printers that actively monitor their own cybersecurity health show us that the future of digital security isn’t as hopeless as it sometimes seems. And maybe, just maybe, autonomous vehicles aren’t either.