These people are not real. Imagine a website that can create a new fake person’s face every 2 seconds. Or an AI model so powerful that it’s already been banned from the public. Or a trustworthy female news anchor who is nothing but AI.
Welcome to another month of Artificial Intelligence in 2019.
Episode #011: These People Are NOT Real (AI in The Simulation Part-Two ft. GAN & GPT-2)
In a previous episode of Are We Living In A Simulation, we explored just how far Video Game AI has come over the last 30 years, from the most basic competitor script playing Pong to the real-time weather dynamics of games like Red Dead Redemption 2.
In this Episode we now focus on more recent mind-boggling developments in Artificial Intelligence.
First up, a website that can create a new person’s face every 2 seconds.
These faces are not CG-style photos, but life-like photos that you or I could easily post on social media. The website – thispersondoesnotexist.com – was created by Uber’s Phillip Wang as a demonstration of just how powerful AI has already become in 2019.
Wang uses a clever machine learning method that pits two AI systems against each other. This process is known as a generative adversarial network, or GAN (you all know I love my abbreviations).
Okay, before we get more technical, let me just say that the most impressively freaky aspect of this website is not that AI can create a new image of a face every 2 seconds. Nope, the most fascinating aspect is that the images feel familiarly real.
The faces here all tell their own stories. Their fake eyes betray our brains by peering back at us and telling us a history that doesn’t exist. All without saying a word.
Even when we look at these faces knowing they aren’t real, there is still a doubt, a familiarity that is highly confusing for us to process. It would seem that we are now hitting a point in technology where our animal-like brains are having trouble catching up.
*Editor Note: To clarify, the eyes in these images are actually from the real sample photographs used to train the GAN. However, they do not belong to the rest of the face, and the same applies to the other facial features. So the faces you see here don’t exist, but the familiarity remains.
It’s this aspect of AI that is perhaps now verging on a possible upcoming crisis, at least if left unchecked. If our poor brains are already struggling to separate AI from reality in these still early days, then where do we go from here?
It may help first if we can see how these faces are created.
Let’s quickly break down this computer witchcraft and wizardry by starting with GAN – Generative Adversarial Network.
GAN is basically a machine learning system in which two neural networks compete against each other in a zero-sum game. This super popular machine learning method can generate images that look very realistic to humans, with many authentic characteristics.
But don’t be confused: a GAN is not two networks each trying to create the best-looking photo. Instead, a GAN uses two components – a Generator and a Discriminator.
The job of the Generator is to create the images – the fraud if you will.
While the job of the Discriminator is to determine if the image is real or fake.
Both continue to get better and better at beating the other. At first, the Generator may create images that look very unreal, or get confused about where certain facial features should be, making it easy for the Discriminator.
But then, just like all good machine learning, it improves on its results, learning instantly from each failure. Soon the images become so real that some of the fakes are accepted by the Discriminator.
However, remember that the Discriminator is also learning from its mistakes and improving along the way. Soon the Generator’s fake images can no longer get past the Discriminator, so it must improve again.
This cat-and-mouse game continues to evolve, but given the speed at which machine learning moves and the huge data sets involved, things become very real very, very fast.
Unlike a human eye, which could take years of training to tell a fake photo from a real one, machine learning can compress years, or even the equivalent of a century, of learning into a fraction of the time.
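To make the cat-and-mouse loop concrete, here is a deliberately tiny toy sketch of the idea in Python. This is not the actual network behind thispersondoesnotexist.com (that is a large image-generating GAN); here the “real data” is just numbers near 4.0, the Generator is a single number it keeps adjusting, and the Discriminator is a one-line classifier. The two take turns improving, exactly as described above, until the fakes blend in with the real samples.

```python
# Toy 1-D GAN: illustrative only, not the image GAN from the article.
# "Real" data: random numbers near 4.0.
# Generator: a single parameter theta; its fakes are theta + noise.
# Discriminator: a tiny logistic classifier D(x) = sigmoid(a*x + c).
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

REAL_MEAN = 4.0
theta = 0.0          # Generator starts out producing obvious fakes near 0
a, c = 0.1, 0.0      # Discriminator parameters
lr = 0.02            # learning rate for both players

for step in range(5000):
    real = random.gauss(REAL_MEAN, 0.5)
    fake = theta + random.gauss(0.0, 0.5)

    # Discriminator turn: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    a -= lr * ((d_real - 1.0) * real + d_fake * fake)
    c -= lr * ((d_real - 1.0) + d_fake)

    # Generator turn: adjust theta so D(fake) moves toward 1 (fool the critic).
    d_fake = sigmoid(a * fake + c)
    theta -= lr * (d_fake - 1.0) * a

# After training, the Generator's fakes cluster near the real data (around 4.0),
# and the Discriminator can no longer reliably tell them apart.
```

The same two-step rhythm – critic update, then forger update – is what the real GAN runs, just with deep convolutional networks and millions of face photos instead of one number.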
This speed and power is what causes concern for people like Elon Musk. The realistic-looking photos we see on this website are simply the results of an AI war, or battle if you like. And while the battle is still taking place, with the GAN methods still being improved, already the results look eerily real.
But let’s not panic. These images are just that. Images.
They don’t have a story to tell. Right?
Let me read you something. The following text was not written by a human. It is a fake quote written by AI.
Actual Quote by OpenAI’S GPT-2:
Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources.
Can you believe that wasn’t written by an angry, misguided human?
Once again, the interesting thing here is not so much that a machine can string some words together, but that the phrases feel like they have personality or a history to them. As we read those words out loud, we can imagine who or what that person may look like. Why they are saying what they are saying.
Yet this person is no person. There is no personality. It’s simply AI, a machine learning system. And this is why this new system, called GPT-2, has already been banned from public release, at least for now.
OpenAI, the organisation behind this latest model, said they do not want to risk enabling malicious or abusive uses of the technology, adding that it was a very tough balancing act.
GPT-2 needs only a few words of input text to begin. From this it can identify the subject matter before creating its own sentences of uniquely written content, all based on its own predictions from the initial input.
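The core idea – take the text so far, predict a likely next word, repeat – can be sketched in a few lines. To be clear, GPT-2 itself is a huge Transformer neural network; the toy below stands in with a simple word-pair (bigram) model trained on a tiny made-up corpus, purely to show the “continue from a prompt” loop.

```python
# A vastly simplified sketch of language-model text generation.
# GPT-2 is a large Transformer; here a toy bigram model shows the
# core loop: given the text so far, repeatedly predict a next word.
import random
from collections import defaultdict

random.seed(1)

# Tiny illustrative corpus (echoing the article's recycling quote).
corpus = ("recycling is not good for the world it is bad for the "
          "environment it is bad for our health and our economy").split()

# "Training": count which word follows which.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def generate(prompt, length=8):
    """Extend the prompt by sampling one next word at a time."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:          # no known continuation: stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

result = generate("recycling is")
print(result)  # begins "recycling is" and continues in the corpus's style
```

GPT-2 does the same thing at an enormously larger scale: instead of counting word pairs from one sentence, it has learned next-word probabilities from millions of documents, which is why its continuations carry topic, tone, and that eerie sense of personality.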
And this AI’s scope is already vast.
GPT-2 has already been trained on a dataset of around 10 million articles, meaning the AI’s topic knowledge and intelligence is already massive. 15 times bigger than previous models.
While the tool could be used for good, it could also be used for bad.
Imagine a troll or hacker using a system like this to create 10,000 fake reviews of a product or movie. A company or the public may believe that there is a genuine issue here. Remember, this new AI model can write new content faster than any human could.
But what if a troll were to go one step further? Rather than simple bad reviews, what if they were to spread false news stories?
A few weeks ago, China welcomed onscreen their first female AI news anchor. Her name is Xin Xiaomeng. Developed in partnership between the New China News Agency and Sogou, a popular Chinese search engine company, this new AI news anchor is realistically modelled after Qu Meng, a real news reporter at the agency.
Editor disclaimer: We are in no way suggesting that this Chinese AI News Anchor is anything other than genuine and sincere. There is also no evidence that Xin Xiaomeng will be hacked. This article is simply exploring the risks of relying on Artificial Intelligence to deliver news.
So perhaps in the future if Meng feels like a day off, she can now use the AI version of herself to fill in.
But what if the AI version of Meng is better than the human version? The AI Meng won’t get sick or need a vacation, and can even work 24 hours a day without sleep.
She also doesn’t need a salary.
Don’t expect AI reporters to appear solely on Asian TV either. With more and more media groups cutting staff numbers, the person reading your local news could soon be a machine too.
Now consider these three recent developments: the fake faces that our eyes believe are real; the fake words and sentences that appear human-written and authentic; and the AI news anchor, a face that millions of people trust for their news.
What would happen if someone were to combine all three and use them for malice?
An AI news reporter reading words typed by an AI system that is generating them on the fly, making stories up as it goes along. One can see how dangerous this could be to society.
This could be fake news 2.0.
And then consider the landscape 5-10 years from now. Imagine real-time interviews with fake people saying fake words to back up a fake major story, backed by fake witness statements.
Global crises could, in theory, be created out of nothing by a skilled troll using machine learning for bad, not good. That is why a huge part of working with AI going forward has to be moderation and, to an extent, control.
But how do these recent developments play into AWLIAS?
Remember, my channel and website are not a conspiracy theory community.
I am in no way claiming that AI is being used right now to trick us. But consider for a moment that our brains are already being confused by machine learning.
If we can witness photos of life-like faces created in seconds by a machine, well-written stories produced by basic machine learning, and AI news reporters so realistic they can replace their human counterparts, then surely that leaves some big questions for humankind and our origins?
Could our ancestors have already been here before?
Is this a simulation of a simulation, with the simulation advancing with its own machine learning creations?
Or are we on the cusp of a first or second simulated Singularity event?
But anyway, that’s enough existential crisis for this week. What are your own thoughts on the advancement of AI? As always please let me know over on the forum.
Now for some Good News: as you may have noticed, our little website has had a facelift and is now powered by WordPress. One of the advantages of this change is that I will be able to create more written content, including Text Versions of each Episode. So from now on, you can read the text, listen to the podcast or watch the Episode on YouTube.
As always, AWLIAS.com does not have any adverts, so you’re free to enjoy the content. For those who would like to support the community, please consider making a Patreon donation from just $5. Our little forum is called The Alliance, and all patrons get full write access to post their own Simulation Theories and Hypotheses.
I hope to be able to run this Simulation Theory community full-time one day, but the website, YouTube and Podcast are currently making zero revenue so I would really appreciate your support. The next Episode #012 will be out later this month (April 2019).
Enjoy my content? Please help me by becoming a Patreon Member from just $5. I build, write, record and support Awlias.com all by myself, with zero annoying ads or creepy 3rd-party tracking cookies, for you and the community to enjoy peacefully. Patreon Awlias Members also get Full Write Access to The Alliance Forum so that you can create your own Simulation Theories here. Thank You.