
Public opinion and social media trends are profoundly shaped by posts found on popular web sites like Facebook, LinkedIn, Twitter, and Instagram.  As is commonly reported, many of these posts are completely false, or are so powerfully opinionated that they present a lopsided and extremely biased perspective; they are therefore not worthy of serious consideration by open-minded, intelligent, thinking people seeking knowledge on popular topics or current events.

One of the most insidious new developments in biased media is the advent of ‘deep-fakes’.


What is a deep-fake?  A deep-fake (a portmanteau of “deep learning” and “fake”) is a technique for human image synthesis based on artificial intelligence.  Because of these capabilities, deep-fakes have been used to create fake celebrity pornographic videos and revenge porn.  Deep-fakes can also be used to create fake news and malicious hoaxes.  These are video and audio files generated or manipulated by artificial intelligence systems to create or alter rich media content at the whim of the programmer.

Interest in the phenomenon of “deep-fakes” has died down a little in recent months, presumably as the public comes to terms with what seemed like an inevitability when they first emerged in 2018: that people can and will use AI to create super-realistic fake videos and images.  But a recent news story by BuzzFeed surfaced the term again in an unexpected setting, inviting the question: what is a deep-fake anyway?

Comedian and director Jordan Peele used his spot-on impression of former President Barack Obama to create a convincing fake video of Obama saying things like “President Trump is a total and complete dipshit.”

The article in question was titled “A Belgian Political Party Is Circulating A Trump Deep-fake Video.”  From the headline you might expect that this was a high-tech political propaganda campaign; someone using AI to put words in Trump’s mouth and mislead voters.  In other words, exactly the sort of scenario experts are deeply worried about with deep-fakes.  But if you watch the actual video, it’s clear this isn’t the case.  The clip is an obvious parody, with an exaggerated vocal impersonation and unrealistic computer effects.  (The creators said it was made using Adobe After Effects — so, not AI.)  At one point “Trump” even says: “We all know climate change is fake, just like this video.”

Watch this interview as actor/comedian Bill Hader is morphed into actor Tom Cruise.

What is coming is the next generation of war: information war.  Propaganda is not a new thing, nor was it born of social media; its roots can be traced back for centuries.  What is new is the application of artificial intelligence, in the form of machine learning and neural networks.

Neural networks loosely mirror how the human brain works.  The more the human brain is exposed to examples of something, such as how to shoot a basketball or the lyrics of some new song, the more quickly and accurately the brain can reproduce it.  Neural networks use this same concept: the more examples that are fed into the network, the more accurately it can create a new example from scratch.
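To make the “more examples, better output” idea concrete, here is a minimal sketch (purely illustrative, not from any deep-fake system) of a tiny neural network learning the XOR function with plain NumPy.  Watching the loss fall over thousands of passes through the same four examples is the same repetition-driven learning the paragraph above describes, just in miniature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic task that a single neuron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros((1, 1))

lr, losses = 1.0, []
for step in range(5000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))

    # Backward pass: nudge every weight to reduce the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print("first loss:", round(losses[0], 3), "final loss:", round(losses[-1], 3))
print("predictions:", preds.ravel().tolist())
```

Real deep-fake models work on millions of pixels rather than four binary examples, but the learning loop, guess, measure error, adjust weights, repeat, is the same.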

Neural Networks

But neural networks are only half of the equation.  Without GANs, deep-fakes would not be as realistic as they are.  Generative adversarial networks (GANs) are the brainchild of Ian Goodfellow, a Google researcher, who combined two neural networks in adversarial roles to improve the end product.  The first neural network, known as the generator, is as described above.  Its job is to create the new, false video or audio by attempting to replicate the dataset it is being fed.  Then both the original dataset and the newly created deep-fake are fed into a second neural network, known as the discriminator.  The discriminator’s job is straightforward: decide which videos in the dataset (which now contains the deep-fake) are real.  If the discriminator can identify the deep-fake, the generator can then “learn” how the discriminator determined the fake and correct whatever error was made.  With each round of this game, the deep-fakes become more and more difficult to detect.

Mark Zuckerberg said Facebook might start treating deep-fakes — photos and videos doctored with artificial intelligence — differently than misinformation or fake news, making it easier for the company to take them down.  The remarks came during an interview between Zuckerberg and the Harvard law professor Cass Sunstein at the Aspen Ideas Festival.

“I definitely think there’s a good case that deep-fakes are different than traditional misinformation,” Zuckerberg said.  “But I do think that you want to approach this with caution, and by consulting with a lot of experts, and not just acting hastily and unilaterally.”

When asked why Facebook doesn’t automatically take down deep-fake videos, Zuckerberg said it was difficult to establish a precise definition of deep-fakes.  While some videos are purposefully altered to twist the truth and misinform, other videos are edited for journalistic purposes and risk being deemed deep-fakes.

During this interview, comedian Bill Hader is morphed into Al Pacino and Arnold Schwarzenegger.

During the 2015 Canadian federal election campaign, a number of candidates withdrew after compromising videos and social media posts they had made in the past were made public online.  While the content varied, the videos and posts had at least one thing in common: the candidates did not deny that the content was real.

Now, however, because of the rapid development of what is known as “deep fake technology,” Canadians might not necessarily be able to trust the videos they see or the audio clips they hear.

Using deep-fakes for political gain is about to become a common practice.  So, will elections in Canada and the United States be undetectably altered as a result of this technology?

What happens when it’s easy for anyone with a laptop and access to the internet to fake a video of a state leader declaring war on the United States, or vice versa?

With the 2020 U.S. election only months away, the threat of election interference is perhaps the most menacing and urgent danger posed by deep-fakes.

In this video clip, actress Jennifer Lawrence has actor Steve Buscemi’s face morphed onto her body during a press interview.

Experts have been looking for ways to counter deep-fake algorithms, including creating algorithms and training machine learning programs to process and log the common mannerisms of public figures, match habitual facial expressions and mannerisms with certain actions, and flag content as suspect.

Studying deep-fake patterns has also revealed some useful tells, such as the fact that in fake videos, subjects tend to blink less often than real humans normally do, making it possible to detect when a video is a fake.
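The blink tell lends itself to a simple heuristic.  The sketch below is purely illustrative: the per-frame “eye openness” scores, the threshold values, and the blink-rate cutoff are hypothetical stand-ins for what a real face-tracking pipeline (for example, eye aspect ratios computed from facial landmarks) would supply:

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count open-to-closed transitions in a series of eye-openness scores."""
    blinks, eyes_closed = 0, False
    for value in eye_openness:
        if value < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif value >= closed_threshold:
            eyes_closed = False
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=6):
    """Humans blink roughly 15-20 times a minute; far fewer is a red flag."""
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# A 10-second clip at 30 fps with two dips below the threshold (two blinks).
clip = [1.0] * 300
clip[40] = clip[41] = 0.1   # one blink spanning two frames
clip[200] = 0.1             # a second blink
print(count_blinks(clip))     # 2
print(looks_suspicious(clip)) # 12 blinks/minute is plausible -> False
```

A production detector would of course combine many such signals with trained classifiers, but the principle is the same: measure a behaviour the generator never learned to imitate.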

If deep-fake technology continues to evolve without a check, video evidence could lose its credibility during trials.  Innocent people could be charged for crimes they did not commit.  And people who are guilty of crimes could go free, potentially posing a threat to civilians.

Deep-fakes are a threat to the truth on which we base our democracy.  If we get ahead of this threat now, we can still prevent permanent damage to the fabric of our society in the future.

There are great benefits to advancements in technology, but we need to look at our creations with a critical eye, and call them out when they inflict harm on our justice system, economy, and society as a whole.

We need to take action now.  If we don’t, we’re on borrowed time before the threat of deep-fakes becomes dire for our elections, our government, and our society.


Clarke, Y. (2019). Deepfakes will influence the 2020 election—and our economy, and our prison system. Quartz Creative. Retrieved August 14, 2019.

Dack, S. (2019). Deep Fakes, Fake News, and What Comes Next. Henry M. Jackson School of International Studies, University of Washington. Retrieved August 14, 2019.

Schiffer, Z. (2019). Facebook might start treating deep fakes differently than fake news, Zuckerberg says. Business Insider, Insider Inc. Retrieved August 14, 2019.

Siekierski, B.J. (2019). Deep Fakes: What Can Be Done About Synthetic Audio and Video? Parliament of Canada, Economics, Resources and International Affairs Division. Retrieved August 14, 2019.

Vincent, J. (2018). Why we need a better definition of ‘deepfake’. The Verge, Vox Media. Retrieved August 14, 2019.

About the Author:

Michael Martin has more than 35 years of experience in systems design for broadband networks, optical fibre, wireless and digital communications technologies.

He is a business and technology consultant. Over the past 14 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He is a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).

Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V). 

He serves as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology.

He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) [now Ontario Tech University] and on the Board of Advisers of five different Colleges in Ontario.  For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section. 

He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has diplomas and certifications in business, computer programming, internetworking, project management, media, photography, and communication technology.