Alexa goes rogue—how digital personal assistants are changing privacy

March 26, 2017 · 3 minute read

Voice assistants are all the rage. Chances are, you have multiple digital personal assistants in your home, whether you intended to or not. Amazon Echo’s Alexa, Google Home, Microsoft’s Cortana, and Apple’s Siri all offer similar experiences, though they look a little different—and apply different rules to your data.

These voice-controlled technologies have landed on many office tech wish lists, and an estimated 8.2 million customers have purchased the Echo (and its personal assistant, Alexa) in the United States alone. The biggest question with newer home appliance systems, like Alexa and Google Home, is: How much can you trust them? When a speaker sits idly waiting for your command, what else is it listening to?

Murder, she recorded

Both Google and Amazon go to great lengths to explain that their devices listen only for their “hot words”—the phrase that activates them, like “Okay, Google”—before any audio is recorded or sent anywhere. But is that really the truth?

That’s new ground to break, legally, but law enforcement is already testing it out. In late 2016, police sought data from an Amazon Echo as evidence in a murder investigation, hoping it had captured background conversations. Police didn’t specify what was recovered from the Echo, but the device was reportedly used to play music on the night of the murder and could have captured data outside of when it was in use, as well.

Amazon’s Echo device doesn’t store data locally; instead, recordings are kept in Amazon’s cloud for an indeterminate amount of time, which means law enforcement could use a warrant to seize that information. The biggest problem with hot words is that the microphones in these devices are on at all times to ensure they’re ready to respond to your command. Supposedly, your device won’t stream any data to the cloud for processing until you say the hot word, but when you do, it grabs a few seconds before and a few seconds after, too.
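To make that behavior concrete, here is a minimal sketch of the pattern described above—audio sits in a small rolling buffer on the device, and nothing “leaves” until the hot word is detected, at which point the few seconds already in the buffer go along with it. All names, buffer sizes, and the chunk format here are hypothetical illustrations, not Amazon’s or Google’s actual implementation.

```python
from collections import deque

PRE_BUFFER_CHUNKS = 3  # stand-in for "a few seconds" of locally held audio


def stream_after_hotword(chunks, hotword="okay google"):
    """Yield only the audio that would be uploaded: the contents of the
    rolling pre-buffer, plus everything from the hot word onward.
    Each chunk is a stand-in for a short slice of microphone audio."""
    pre_buffer = deque(maxlen=PRE_BUFFER_CHUNKS)
    triggered = False
    for chunk in chunks:
        if triggered:
            yield chunk
        elif hotword in chunk.lower():
            # Flush what was captured just before the wake phrase,
            # then the triggering chunk itself.
            yield from pre_buffer
            yield chunk
            triggered = True
        else:
            pre_buffer.append(chunk)  # held locally, never uploaded... yet


# Everything before the buffer window stays on the device;
# everything inside it is swept up once the hot word fires.
mic = ["old chatter", "chatter", "more chatter", "private talk",
       "Okay Google, play music", "next song"]
uploaded = list(stream_after_hotword(mic))
print(uploaded)
```

Note that “old chatter” never appears in the output—it aged out of the buffer before the wake phrase—while the three chunks immediately preceding “Okay Google” are uploaded even though they were spoken before the device was addressed. That is exactly the privacy trade-off the article is pointing at.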

Securing digital personal assistants

As WIRED reported, both Amazon and Google use security technology to ensure hackers can’t access the data, and it’s sent to data centers in encrypted form. But there’s one big risk, even if you’re not the suspect in a criminal case: someone hacking into your cloud account.

Amazon and Google log all the queries you make and let you access the audio files; the recordings remain stored until you manually delete them, which means they’re fair game if your account is compromised. That data is also used for marketing purposes and to learn what people use their devices for the most.

Amazon, for example, knows that more than 250,000 people have asked Alexa to marry them. Are you part of that stat? Siri, however, takes a different approach to privacy: it doesn’t tie your identity to your requests; instead, it uses a random string of numbers to track who said what. This way, Apple doesn’t know who you are, and the data is automatically deleted after six months.

The rise of smart devices, like printers, voice assistants, light bulbs, and other IoT products, is exciting, and all these devices are fun to use. It does, however, prompt bigger questions about privacy that haven’t been answered yet: How does the average consumer know what’s tracked, and is it possible to protect yourself against accidental data leakage?

With all new things, you’re making a trade-off, and you need to consider: How useful is this to me, and how much data do I give up to actually use it? Much of the time—with smart devices, like connected printers, digital personal assistants, and IoT appliances—it’s worth it, especially when these devices make their way into the business world or an office setting. If these personal assistants are this smart now, imagine how our data will be captured in the coming years.
