Student Fellows Analysis: Deepfakes and Hot Takes

Thu, 05/13/2021

Brandi Bue and Alex Paun: NGTC Student Fellows

Deepfakes and Hot Takes: An In-Depth Look at AI-Manipulated Media

What is a deepfake?

When you think of deepfakes, there’s a good chance funny manipulated videos come to mind, like the 2018 internet craze in which users superimposed the face of actor and living meme Nicolas Cage into scenes from dozens of iconic films. However, the fascinating and often twisted history of artificial intelligence-manipulated media is a bit more complicated. For those unfamiliar with deepfakes, let’s start with a basic definition. The Merriam-Webster dictionary defines a deepfake as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”

Deepfakes can appear as photographs, videos, audio, and even forged text messages. One of the first known uses of machine learning to create synthetic media was Video Rewrite, a 1997 program created by researchers Christoph Bregler, Michele Covell, and Malcolm Slaney. Video Rewrite used old speeches and videos to generate new video clips, such as this clip of former President John F. Kennedy.

The term “deepfake” is thought to be a portmanteau of “deep learning,” a type of artificial intelligence that imitates the human brain’s knack for recognizing patterns and processing data, and the word “fake.” The term was coined by a Reddit user of the same name, and deepfake technology gained popularity among average consumers in 2017 as a means of superimposing the faces of women onto pornographic videos. According to a 2020 Vox video, 96 percent of deepfakes are pornographic.

What are some positive ways we can use deepfakes?

Luckily, the future of manipulated media doesn’t have to be so sinister. One positive use is in film, where the technology can push past the limits of stage makeup to de-age well-known actors. An early precursor was the use of digital compositing to insert Tom Hanks’ character into historical footage in the 1994 film Forrest Gump. A more recent example is the use of CGI to depict Robert De Niro at various ages in the 2019 film, The Irishman.

Deepfakes can also be used to help those in developed countries better understand the devastation of war-torn neighborhoods across the globe. Deep Empathy, a collaboration between MIT researchers and UNICEF, uses the technology to show what cities like London and Chicago might look like under conditions of the Syrian Civil War.

While still a relatively new process, audio deepfakes have the potential to increase accessibility for Amyotrophic Lateral Sclerosis (ALS) patients. ALS impairs motor skills and eventually inhibits one’s ability to move, speak, and eat. Synthetic speech can help ALS patients speak in their own voice using text-to-speech services, as seen in the 2016 documentary, Gleason. The technology can also be used in the classroom to create more interactive lesson plans. In 2018, speech synthesis research company CereProc used 831 speeches to recreate the address in which JFK resolved to end the Cold War, a speech the late president never got to deliver.

Deepfakes – Negatives & How to Spot Them

Negatives

The most widespread issue created by deepfakes is not politics, as is commonly asserted, but pornography. And it’s not just celebrities that deepfakes can impact: while deepfakes of celebrities are common and widespread, people leading ordinary, everyday lives are being targeted as well. Vox writer Cleo Abram covered the issue in June 2020, focusing on Kristen Bell, who is a celebrity, and Noelle Martin, who is not. Pornographic deepfakes take ordinary photos that individuals post on social media and graft them into nude images or porn videos. With the growing role of social media and the internet in day-to-day life and in the workplace, these fabricated photos and videos have real potential to damage a person’s reputation. Actress Kristen Bell had deepfakes made of her and her husband, Dax Shepard, by the means outlined above.

Deepfakes exist in many forms: audio, image, video, and text. The idea isn’t new, but technology has allowed the scope of the problem to grow. Modern deepfake technology goes far beyond simply transplanting one person’s head onto another body; it allows facial features, expressions, and even voices to be manipulated. Deepfakes can also take the form of fake texts or direct messages that present the victim as having said something they never did, which can carry serious reputational costs for an individual should the content be believed.

Sensity AI, a research company that has tracked online deepfake videos since December 2018, has found that between 90% and 95% of them are nonconsensual porn: pornography created by manipulating an individual’s image to make it appear that the person participated in a pornographic video. The creation of nonconsensual pornography has played a central role in the rise of deepfakes; it would be accurate to say that deepfakes started with pornography and have expanded outward since.

The problem also spans age groups. In March 2021, the Philadelphia Inquirer reported that a woman had anonymously sent the coaches of her teenage daughter’s cheerleading squad fake photos and videos depicting the girl’s rivals naked, drinking, or smoking, apparently to raise her daughter’s standing in the squad.

Spotting deepfakes

Spotting deepfakes can be difficult, but there are some things to look for that can help you determine whether what you’re seeing is real. In a video, watch for unnatural eye movement or facial expressions that seem off in some way. A lack of eye movement, and especially an absence of blinking, is a red flag: it is hard for the software that generates deepfakes to replicate blinking in a way that looks natural.

Facial expressions and the positioning of facial features can also help in spotting a deepfake. If the subject’s expression doesn’t look quite right, or their nose seems to point in a different direction from the rest of their face, that is another obvious red flag. So is a lack of emotion: individuals featured in deepfakes often don’t exhibit the level of emotion that their tone of voice would suggest. Another indicator is an awkward-looking body position or unnatural body movement, whether a body that seems distorted or one that moves in a jerky manner. Notably, deepfakes generally focus on facial features rather than the whole body, so this indicator might, unfortunately, be less useful. Finally, look for unnatural coloring in the form of abnormal skin tone or discoloration, as well as hair or teeth that don’t look real.
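To make the blinking cue concrete, here is a minimal sketch of the eye-aspect-ratio (EAR) heuristic that many blink-detection tools rely on. It assumes you already have six landmark points per eye for each video frame (for example, from a facial-landmark library); the simulated values, the 0.2 threshold, and the frame count below are illustrative assumptions, not figures from this article.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    The EAR drops sharply when the eye closes, so a short run of low
    values followed by recovery is treated as a blink.
    """
    # Vertical distances between upper and lower eyelid landmarks.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks as runs of consecutive frames with EAR below a threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    return blinks

if __name__ == "__main__":
    # Hypothetical EAR values for a 10-second clip at 30 fps.
    rng = np.random.default_rng(0)
    ears = 0.3 + 0.02 * rng.standard_normal(300)  # eyes open, with noise
    ears[100:104] = 0.1                            # one simulated blink
    print("Blinks detected:", count_blinks(ears))  # -> 1, on the low side
```

A ten-second clip of someone talking with no detected blinks would not prove anything on its own, but it is exactly the kind of signal that warrants a closer look.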

When evaluating a potentially deepfaked photo, look for the same red flags as in video, but also be mindful of other tell-tale signs that the image has been manipulated, such as blurring or misalignment of elements of the image; a common place to find these is where the neck meets the body. Another useful tool is a reverse image search, which can surface similar videos or images suggesting that what you’re seeing is not real.

Turning to audio deepfakes, first listen to see if the voice sounds natural. How well does it match up to existing recordings of the person that you know to be legitimate? Also listen for small glitches in the audio. Unfortunately, audio deepfakes can be hard to spot if there is no reliable audio recording of the person to compare against.
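For the audio case, one rough way to compare a clip against recordings you know to be legitimate is to extract a compact “voice fingerprint” from each file and measure how similar they are. The sketch below uses MFCC features and cosine similarity; the file names are placeholders, the 0.9 cutoff is an arbitrary illustrative threshold, and this kind of crude comparison is nowhere near a reliable detector. It only illustrates the idea.

```python
import numpy as np
import librosa  # pip install librosa

def voice_fingerprint(path: str) -> np.ndarray:
    """Return an average MFCC vector as a crude fingerprint of a voice."""
    audio, sample_rate = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=20)
    return mfcc.mean(axis=1)  # average the features over time

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    # Placeholder file names: a clip you trust and the clip you are checking.
    known = voice_fingerprint("known_genuine_speech.wav")
    suspect = voice_fingerprint("clip_to_verify.wav")

    similarity = cosine_similarity(known, suspect)
    print(f"Similarity: {similarity:.2f}")
    if similarity < 0.9:  # arbitrary illustrative threshold
        print("The voices differ noticeably; treat the clip with suspicion.")
```

As the paragraph above notes, this only works when a trustworthy reference recording exists; without one, even careful listening or comparison has little to anchor against.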

As you might imagine, fake direct messages or texts are easier to fabricate and harder to spot. A quick Google search for “fake Instagram dm” turns up numerous generators for creating a fake direct message. One indicator to look for is the absence of a full screenshot, such as a screenshot that omits the app’s top banner. Another is a profile picture that looks slightly “off” compared to the one the person usually uses; it might, for example, be darker or not centered correctly compared to the user’s true profile picture. Looking at the word choice and cadence of the messages can also help if you know how the person usually types. Finally, ask yourself whether the comment is realistic: is this something the user would plausibly say?

So what are we to make of the future of deepfakes? One thing seems clear: they won’t be going away any time soon. In the near future, the responsibility will likely fall on media consumers to remain skeptical and to remember that, in this new age, your eyes and ears can indeed deceive you.

Sources:

 "Bucks County woman created 'deepfake' videos to harass rivals on ...." 12 Mar. 2021, https://www.inquirer.com/news/bucks-county-raffaela-spone-cyberbullying-deepfake-20210312.html. Accessed 13 May. 2021.

 "The most urgent threat of deepfakes isn't politics. It's porn. - Vox." 8 Jun. 2020, https://www.vox.com/2020/6/8/21284005/urgent-threat-deepfakes-politics-porn-kristen-bell. Accessed 13 May. 2021.

"Deepfake Best Practices Amid Developing Legal Landscape." 16 Apr. 2021, https://www.kelleydrye.com/getattachment/News-Events/Publications/Articles/Deepfake-Best-Practices-Amid-Developing-Legal-Land/Deepfake-Best-Practices-Amid-Developing-Legal-Landscape_Law360_Villafranco_April-2021.pdf.aspx?lang=en-US. Accessed 13 May. 2021.

"How to spot deepfake videos — 15 signs to watch for | NortonLifeLock." 13 Aug. 2020, https://us.norton.com/internetsecurity-emerging-threats-how-to-spot-deepfakes.html. Accessed 13 May. 2021.

"How To Spot Deepfake Audio Fraud, Because This Technology May ...." 21 Aug. 2019, https://www.bustle.com/p/how-to-spot-deepfake-audio-fraud-because-this-technology-may-be-more-widespread-than-you-think-18684814. Accessed 13 May. 2021.

 "A Short History of Deepfakes. Back in early 2018, a video ... - Medium." https://medium.com/@songda/a-short-history-of-deepfakes-604ac7be6016. Accessed 13 May.

2021.

 "Positive Use Cases of Deepfakes. Technology is an excellent ...." https://towardsdatascience.com/positive-use-cases-of-deepfakes-49f510056387. Accessed 13 May. 2021.

"Cheerleader's mom accused of making "deepfake ... - CBS News." 15 Mar. 2021, https://www.cbsnews.com/news/raffaela-spone-cheerleader-mom-deepfakes/. Accessed 13 May. 2021.

"The world's top deepfake artist is wrestling with the monster he created." 16 Aug. 2019, https://www.technologyreview.com/2019/08/16/133686/the-worlds-top-deepfake-artist-is-wrestling-with-the-monster-he-created/. Accessed 13 May. 2021.


Tags: Fellows

NGTC Student Fellows logo