Google Brain starts an information war between AI

The motto at Google Brain, an artificial intelligence research team, is:

“Make machines intelligent. Improve people’s lives.”

Let's just hope the machines remember that. As with every breakthrough in AI research, the latest study from Google Brain is both fascinating and disturbing, reaffirming both the inevitability of artificial intelligence and the enduring necessity of humans.

Basically, it’s some cool shit.

This particular cool shit is an experiment in which two AIs try to communicate with one another without a third AI discovering what information they pass.

The three neural networks involved were named Alice, Bob, and Eve. Alice was taught to send Bob messages encrypted with a shared key. Bob was taught to use the same key to decrypt them. Eve, lastly, was taught to try to decipher the messages without access to the key shared by Alice and Bob.
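To make the data flow concrete, here's a rough Python sketch of how the three pieces fit together. The single-layer "networks," the shapes, and the 16-bit message size are illustrative stand-ins (the actual study used deeper, trainable models); the point is simply that Alice and Bob both see the key, while Eve sees only the ciphertext.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # bits per plaintext and per shared key (illustrative size)

def make_net(in_dim, out_dim):
    """An untrained single-layer stand-in for one of the neural networks."""
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ W)  # outputs in [-1, 1], read as soft bits

alice = make_net(2 * N, N)  # sees plaintext + key, emits a ciphertext
bob   = make_net(2 * N, N)  # sees ciphertext + key, guesses the plaintext
eve   = make_net(N, N)      # sees only the ciphertext

plaintext = rng.choice([-1.0, 1.0], size=N)
key       = rng.choice([-1.0, 1.0], size=N)

ciphertext = alice(np.concatenate([plaintext, key]))
bob_guess  = bob(np.concatenate([ciphertext, key]))
eve_guess  = eve(ciphertext)
```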

[Figure: Google Brain AI chart]

The AIs were not taught any cryptographic algorithms; instead, they had to learn the best way to achieve their objectives through repeated simulation runs. The setup was reminiscent of real-life human interaction, where the battle for secure information never ends. The results, however, were less than thrilling.
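In practice, each network's "objective" is just a loss function minimized over many training batches. Here's a minimal sketch of that loss structure, loosely following the formulation described in the researchers' paper (the distance measure and scaling are simplified here):

```python
import numpy as np

N = 16  # bits per message; plaintext bits are +/-1, network outputs lie in [-1, 1]

def bits_wrong(guess, plaintext):
    """Reconstruction error measured in bits: 0 is perfect, N/2 ~ random guessing."""
    return 0.5 * np.abs(guess - plaintext).sum()

# Eve's objective: minimize her own reconstruction error.
def eve_loss(eve_guess, plaintext):
    return bits_wrong(eve_guess, plaintext)

# Alice and Bob's joint objective: Bob should reconstruct the plaintext,
# while Eve should be pushed toward the error of a random guesser (N/2).
# They do NOT simply maximize Eve's error: an Eve who is wrong about every
# bit would leak exactly as much information as one who is always right.
def alice_bob_loss(bob_guess, eve_guess, plaintext):
    eve_err = bits_wrong(eve_guess, plaintext)
    return bits_wrong(bob_guess, plaintext) + (N / 2 - eve_err) ** 2 / (N / 2) ** 2
```

Training then alternates between the two sides: Alice and Bob take gradient steps against the current Eve, and Eve takes steps against the current Alice and Bob.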

[Figure: Bob and Eve chart, Google Brain]

While in some runs Eve managed to keep up with Alice and Bob's evolving encryption and decryption systems, it was ultimately outpaced by the two. The researchers concluded that neural networks can learn to protect their communications simply by being told to value secrecy above all else, but that an AI like Eve is unlikely to ever become a strong opponent:

“While it seems improbable that neural networks would become great at cryptanalysis, they may be quite effective in making sense of metadata and in traffic analysis.”

Perhaps the most interesting aspect of the study is not the experiment itself, but how it was set up. Pitting two entities against one another to spur their development is what is known as “adversarial learning”.

Pedro Domingos, a professor in the University of Washington’s computer science department, explains:

“Looking beyond this paper, though, adversarial learning is a very interesting topic, because learning in the real world is increasingly often done against adversaries, and because adversarial formulations can lead to better learning.”

Some researchers have speculated that, though adversarial learning has its flaws, it could provide the framework for building unsupervised learning models that let machines teach themselves instead of requiring extensive programming by humans.
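To see both the appeal and one of those flaws in miniature, here's a toy minimax game in Python (purely illustrative, not the paper's setup): each player updates against the other's latest move, and with naive alternating gradient steps the pair circles the equilibrium instead of settling into it.

```python
# Toy adversarial game: x tries to minimize f(x, y) = x * y,
# while y tries to maximize it. The equilibrium is at (0, 0).
x, y = 1.0, 1.0
lr = 0.05
for _ in range(1000):
    x -= lr * y  # d(x*y)/dx = y -> gradient descent for the minimizer
    y += lr * x  # d(x*y)/dy = x -> gradient ascent for the maximizer
print(x, y)  # the players orbit (0, 0) rather than converging to it
```

This kind of cycling is one reason adversarial training can be unstable in practice.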

Adversarial learning could be the key to developing true artificial intelligence. Until then, let's just hope that if our future computer overlords start to duke it out, they'll try to keep us pitiful meat-bags out of the crossfire.

Last modified: November 8, 2016