If you visit the This Person Does Not Exist website, you’ll be greeted by a simple portrait. Reload the page, and another face will pop onto your screen. But what is so mysterious about these seemingly regular pictures of everyday people?
At first glance, the images featured on the website might seem like regular pictures of people, something you’d see on LinkedIn or maybe even Facebook. But, creepily, they aren’t. Every single photo on this site has been generated using artificial intelligence, and it depicts a person who almost certainly doesn’t exist! Every time the site is refreshed, a shockingly realistic (but totally fake) picture of a person’s face appears.
This page was set up by Uber software engineer Philip Wang to demonstrate what artificial intelligence is capable of. Each of the faces on the site is created using a kind of artificial intelligence algorithm called a generative adversarial network (GAN). The code that made this creepy website possible was written by Nvidia and described in a paper posted, ahead of peer review, on arXiv. Called StyleGAN, the neural network has a wide range of applications, from gaming to creating false documents. As with almost any technology, it could also be used for more sinister purposes. Deepfakes, computer-generated images superimposed on existing pictures or videos, can be used to push fake news narratives or other hoaxes. That’s precisely why Wang chose to create this mesmerizing yet chilling website. Many deepfakes have been produced that look so natural they could fool almost any viewer. Some well-known deepfake videos include:
- A video wherein the face of the Argentine President Mauricio Macri was replaced by the face of Adolf Hitler
- A deepfake of Barack Obama which served as a public service announcement to increase awareness of deepfakes
Coming back to Wang’s motive for creating the site: he says that in a society where pictures and images are the standard surrogates for “proof,” GANs, by automating work that once required painstaking labor from imaging experts, will soon allow anyone to furnish “proof” that any imaginable person did any imaginable thing. “I have decided to dig into my own pockets and raise some public awareness for this technology,” he wrote in his post. “Faces are most salient to our cognition, so I’ve decided to put that specific pre-trained model up. Each time you refresh the site, the network will generate a new facial image from scratch from a 512 dimensional vector.”
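To make that quoted detail a little more concrete, here is a rough sketch of what a single page refresh conceptually does: draw a fresh 512-dimensional latent vector and hand it to a pre-trained generator. The generator call below is a hypothetical placeholder, not the actual StyleGAN interface.

```python
# Conceptual sketch of one page refresh. Only the 512-dimensional latent
# vector comes from Wang's description; the generator call is a hypothetical
# placeholder, not the real StyleGAN API.
import numpy as np

rng = np.random.default_rng()
z = rng.standard_normal(512)  # a new 512-dimensional latent vector per refresh

# face = pretrained_stylegan_generator(z)  # hypothetical call returning one face image
print(z.shape)  # (512,)
```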
Now, coming to our fundamental question of how all of this works: GANs. Every GAN has two networks, a generator and a discriminator. The generator synthesizes new samples from scratch, while the discriminator takes samples from both the training data and the generator’s output and predicts whether they are “real” or “fake”. The generator receives a random vector (noise), so its initial output is also noise. As it receives feedback from the discriminator, it learns to synthesize increasingly realistic images. Simultaneously, the discriminator is also learning by comparing generated samples with real ones, making it harder and harder for the generator to deceive it.
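To see the two-network dynamic in code, below is a minimal GAN training loop sketched in PyTorch. It is illustrative only: the tiny fully connected networks, the 784-dimensional flattened “images”, and the random stand-in data are assumptions made for the sketch, not anything from StyleGAN; only the 512-dimensional latent size echoes the detail Wang mentions.

```python
# Minimal GAN training sketch (illustrative; StyleGAN itself is far more elaborate).
import torch
import torch.nn as nn

LATENT_DIM = 512   # latent vector size, matching the 512-dim vector mentioned above
DATA_DIM = 784     # e.g. a flattened 28x28 image; an assumption for this sketch

# Generator: maps a random latent vector to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores a sample as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator: real samples should score 1, generated ones 0.
    z = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(z).detach()  # detach so only the discriminator updates here
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: try to make the discriminator score its fakes as real.
    z = torch.randn(batch_size, LATENT_DIM)
    g_loss = bce(discriminator(generator(z)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Random stand-in "real" data so the sketch runs end to end;
# a real setup would load batches of actual face photos instead.
for _ in range(3):
    train_step(torch.randn(16, DATA_DIM))
```

The alternating steps mirror the adversarial setup described above: the discriminator improves at telling real from fake, which in turn forces the generator to produce more convincing samples.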
Wang notes that simply being informed about GANs makes people less susceptible to being fooled by them, and that heightened awareness will also let us better appreciate the improvements StyleGAN will likely bring to 3D graphics. So as you while away your time refreshing https://thispersondoesnotexist.com/, hoping to stumble upon a familiar face, this is definitely something to ponder!
References:
- https://www.inverse.com/article/53280-this-person-does-not-exist-gans-website
- https://edition.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/
- https://interestingengineering.com/this-person-does-not-exist-website-is-a-creepy-look-into-the-future
- https://www.symantec.com/blogs/election-security/ai-generated-deep-fakes-why-its-next-front-election-security