It’s fun to write about developments in artificial intelligence like they’re harbingers of an impending AIpocalypse. Jokes about our new robot overlords notwithstanding, computers are getting scary smart these days, and it’s not always flattering to compare humans with AI. Machines can already outperform humans in plenty of important ways: we routinely trust robot surgeons, diagnostic databases, and autopilot chauffeurs with our lives, to name just a few.
Google is among those pushing the horizon of AI superiority further and further. The company’s neural net/machine learning project, Google Brain, has been working on problems in medical imaging, robotics, and natural language processing, among others. “Google is not really a search company. It’s a machine-learning company,” Matthew Zeiler, a Google Brain alumnus, told Wired. Now a team from Google Brain has demonstrated that neural networks can learn to protect the confidentiality of their data from other neural networks.
The team started with three neural nets: Alice, Bob, and Eve. Alice was supposed to send Bob a secret message, while Eve attempted to eavesdrop. To keep the message from Eve, Alice had to convert her plaintext into ciphertext in a way that Bob could decrypt but Eve couldn’t. Both Alice and Bob started their conversation in possession of a predetermined string of numbers, the key. Beyond that, though, the Google Brain team didn’t teach the neural nets any cryptographic algorithms. They just set up the system, plugged it in, and sat back to watch.
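The core of the setup is the adversarial objective: Bob is rewarded for reconstructing the plaintext, while Alice and Bob are jointly penalized whenever Eve does better than random guessing. Here is a minimal numpy sketch of that objective for messages encoded as ±1 bits. The names `N`, `reconstruction_error`, and `alice_bob_loss` are illustrative, and the quadratic Eve term mirrors the shape of the loss described in the team’s paper; treat this as a sketch of the idea, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # plaintext length in bits, encoded as -1.0 / +1.0 floats

def reconstruction_error(plaintext, guess):
    """Error in bits: L1 distance between two ±1 vectors, halved,
    so 0 means a perfect reconstruction and N means every bit wrong."""
    return np.abs(plaintext - guess).sum(axis=-1) / 2.0

def alice_bob_loss(bob_err, eve_err):
    """Joint objective for Alice and Bob: Bob should reconstruct the
    plaintext exactly (bob_err -> 0), while Eve should do no better
    than random guessing (eve_err -> N/2)."""
    return bob_err + ((N / 2 - eve_err) ** 2) / (N / 2) ** 2

# Toy check: a perfect Bob and a coin-flipping Eve minimize the loss.
plaintext = rng.choice([-1.0, 1.0], size=N)
bob_guess = plaintext.copy()                  # perfect reconstruction
eve_guess = rng.choice([-1.0, 1.0], size=N)   # uninformed guess

print(reconstruction_error(plaintext, bob_guess))  # 0.0
print(alice_bob_loss(0.0, N / 2))                  # 0.0 (the optimum)
```

In the actual experiment each of these errors comes out of a trained network, and gradient descent on this kind of loss is what pushes Alice toward an encryption scheme Bob can invert but Eve cannot.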
The neural nets didn’t disappoint. Alice gradually worked out her own method of encryption, and Bob learned how it worked. After some practice, Bob could translate Alice’s ciphertext back into plaintext, but Eve, the passive adversary, was still on her own; even after many attempts, her results were no better than random chance.
Alice’s cipher was simple, and as the authors note in their report, “Neural networks are generally not meant to be great at cryptography.” The result still evokes images of Jarvis battling Ultron, or Neuromancer and Wintermute. But the larger point is that the encryption scheme, however simple, was something the neural networks arrived at on their own: something Bob was able to follow, while Eve could not.
For now, these cryptographic schemes are relatively easy for humans to analyze and break. But the day may come when two computers working together can self-generate a custom encryption scheme more robust than anything humans can imagine, then discard it within minutes and create a new one, as a form of information security.