Have you ever wondered what it would be like if the 1985 classic Back to the Future were remade with Iron Man and Spider-Man?
If so, then take a look at this deepfake video which has gone viral.
Doc Brown and Marty McFly are two of the most iconic characters in movie history.
In this latest deepfake video, the faces of Robert Downey Jr. and Tom Holland have been superimposed onto those of the original actors, Christopher Lloyd and Michael J. Fox.
The clip was made and uploaded by YouTuber EZRyderX47 and has had almost 4 million views after just three days online.
The clip is so seamless and expertly done that it is hard to believe it is computer generated, albeit a little unnerving to watch.
Using video manipulation technology, the deepfaker replaced the original cast members with their modern counterparts.
The name "deepfake" combines "deep learning" and "fake". Deepfakes are a product of deep learning, a subset of machine learning (itself a branch of artificial intelligence, or "AI"), and typically rely on a technique called "generative adversarial networks" ("GANs"), in which two neural networks are trained against each other: a generator produces fakes, while a discriminator learns to tell them from real examples.
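To make the adversarial idea concrete, here is a minimal toy sketch of a GAN in plain Python/NumPy. It is not a face-swapping model: the "generator" is just an affine map trying to imitate a one-dimensional Gaussian, and the "discriminator" is a logistic classifier, with hand-derived gradients. All names and hyperparameters here are illustrative assumptions, not part of any real deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5) -- the distribution the generator must imitate.
def sample_real(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: an affine map G(z) = a*z + b applied to noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: a logistic classifier D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    n = 64
    x_real = sample_real(n)
    z = rng.normal(size=n)
    x_fake = a * z + b

    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # Generator update: ascend log D(fake) (the "non-saturating" GAN loss).
    d_fake = sigmoid(w * x_fake + c)
    dx = (1 - d_fake) * w          # gradient of the loss w.r.t. each fake sample
    a += lr * np.mean(dx * z)      # chain rule: d(x_fake)/da = z
    b += lr * np.mean(dx)          # chain rule: d(x_fake)/db = 1

# After training, generated samples should cluster near the real mean of 4.
fake_mean = float(np.mean(a * rng.normal(size=1000) + b))
```

Real deepfake systems apply this same generator-versus-discriminator loop at vastly larger scale, with deep convolutional networks operating on video frames instead of a two-parameter map on numbers.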
Twitter is already fighting back against deepfakes and recently announced changes to its policy on posts containing deceptively manipulated or AI-altered media that distorts reality.
In a blog post, Twitter announced changes to the company’s synthetic and manipulated media policy, which it defines as any photo, audio, or video that’s been “significantly altered or fabricated” to mislead people or change the original meaning of the content. Under the new rules, Twitter will remove this kind of media if the company finds it likely to cause serious harm — such as content that threatens people’s physical safety or could cause “widespread civil unrest.”
Deepfakes represent a major challenge in tackling fake news and also present unique compliance risks for organisations.
The same technology used for viral videos such as the Back to the Future clip could also be used to synthesise corroborating evidence for false allegations against private individuals, to sabotage professional reputations, or to generate fake authorisations or instructions. There could be massive implications for a business if people are duped into believing they have been given an authorisation or instruction from senior management.
The original material might be protected by copyright, the use of personal information might mean that GDPR rights are in play, the synthetic content might be defamatory, or the requirements on online platforms to take down illegal content might be engaged.
Perhaps now is the time for the government to consider widening existing legislation to bring "fake news" within its scope.
An outright ban on the tools used to create deepfakes would be unworkable as it would stop legitimate uses of these techniques.
Perhaps the most logical solution would simply be to make sure organisations have robust compliance and authorisation procedures.