Deep Fakes: A Looming Challenge for
Privacy, Democracy, and National
Security
Bobby Chesney* and Danielle Citron**
Harmful lies are nothing new. But the ability to distort reality has
taken an exponential leap forward with “deep fake” technology. This
capability makes it possible to create audio and video of real people
saying and doing things they never said or did. Machine learning
techniques are escalating the technology’s sophistication, making
deep fakes ever more realistic and increasingly resistant to detection.
Deep-fake technology has characteristics that enable rapid and
widespread diffusion, putting it into the hands of both sophisticated
and unsophisticated actors.
DOI: https://doi.org/10.15779/Z38RV0D15J
Copyright © 2019 California Law Review, Inc. California Law Review, Inc. (CLR) is a
California nonprofit corporation. CLR and the authors are solely responsible for the content of their
publications.
* James Baker Chair, University of Texas School of Law; co-founder of Lawfare.
** Professor of Law, Boston University School of Law; Vice President, Cyber Civil Rights
Initiative; Affiliate Fellow, Yale Information Society Project; Affiliate Scholar, Stanford Center on
Internet and Society. We thank Benjamin Wittes, Quinta Jurecic, Marc Blitz, Jennifer Finney Boylan,
Chris Bregler, Rebecca Crootof, Jeanmarie Fenrich, Mary Anne Franks, Nathaniel Gleicher, Patrick
Gray, Yasmin Green, Klon Kitchen, Woodrow Hartzog, Herb Lin, Helen Norton, Suzanne Nossel,
Andreas Schou, and Jessica Silbey for helpful suggestions. We are grateful to Susan McCarty, Samuel
Morse, Jessica Burgard, and Alex Holland for research assistance. We had the great fortune of getting
feedback from audiences at the PEN Board of Trustees meeting; Heritage Foundation; Yale Information
Society Project; University of California, Hastings College of the Law; Northeastern School of
Journalism 2019 symposium on AI, Media, and the Threat to Democracy; and the University of
Maryland School of Law’s Trust and Truth Decay symposium. We appreciate the Deans who
generously supported this research: Dean Ward Farnsworth of the University of Texas School of Law,
and Dean Donald Tobin and Associate Dean Mike Pappas of the University of Maryland Carey School
of Law. We are grateful to the editors of the California Law Review, especially Erik Kundu, Alex
Copper, Yesenia Flores, Faye Hipsman, Gus Tupper, and Brady Williams, for their superb editing and
advice.
While deep-fake technology will bring certain benefits, it also will
introduce many harms. The marketplace of ideas already suffers from
truth decay as our networked information environment interacts in
toxic ways with our cognitive biases. Deep fakes will exacerbate this
problem significantly. Individuals and businesses will face novel forms
of exploitation, intimidation, and personal sabotage. The risks to our
democracy and to national security are profound as well.
Our aim is to provide the first in-depth assessment of the causes
and consequences of this disruptive technological change, and to
explore the existing and potential tools for responding to it. We survey
a broad array of responses, including: the role of technological
solutions; criminal penalties, civil liability, and regulatory action;
military and covert-action responses; economic sanctions; and market
developments. We cover the waterfront from immunities to immutable
authentication trails, offering recommendations to improve law and
policy and anticipating the pitfalls embedded in various solutions.
Introduction ……………………………………………………………………… 1755
I. Technological Foundations of the Deep-Fakes Problem ………………… 1758
   A. Emergent Technology for Robust Deep Fakes ……………………… 1759
   B. Diffusion of Deep-Fake Technology ………………………………… 1762
   C. Fueling the Fire ………………………………………………………… 1763
II. Costs and Benefits ………………………………………………………… 1768
   A. Beneficial Uses of Deep-Fake Technology ………………………… 1769
      1. Education ……………………………………………………………… 1769
      2. Art ……………………………………………………………………… 1770
      3. Autonomy ……………………………………………………………… 1770
   B. Harmful Uses of Deep-Fake Technology …………………………… 1771
      1. Harm to Individuals or Organizations ……………………………… 1771
         a. Exploitation ………………………………………………………… 1772
         b. Sabotage …………………………………………………………… 1774
      2. Harm to Society ……………………………………………………… 1776
         a. Distortion of Democratic Discourse ……………………………… 1777
         b. Manipulation of Elections ………………………………………… 1778
         c. Eroding Trust in Institutions ……………………………………… 1779
         d. Exacerbating Social Divisions …………………………………… 1780
         e. Undermining Public Safety ……………………………………… 1781
         f. Undermining Diplomacy ………………………………………… 1782
         g. Jeopardizing National Security …………………………………… 1783
         h. Undermining Journalism ………………………………………… 1784
         i. The Liar’s Dividend: Beware the Cry of Deep-Fake News ……… 1785
III. What Can Be Done? Evaluating Technical, Legal, and Market Responses … 1786
   A. Technological Solutions ……………………………………………… 1787
   B. Legal Solutions ………………………………………………………… 1788
      1. Problems with an Outright Ban ……………………………………… 1788
      2. Specific Categories of Civil Liability ……………………………… 1792
         a. Threshold Obstacles ……………………………………………… 1792
         b. Suing the Creators of Deep Fakes ………………………………… 1793
         c. Suing the Platforms ……………………………………………… 1795
      3. Specific Categories of Criminal Liability ………………………… 1801
   C. Administrative Agency Solutions …………………………………… 1804
      1. The FTC ……………………………………………………………… 1804
      2. The FCC ……………………………………………………………… 1806
      3. The FEC ……………………………………………………………… 1807
   D. Coercive Responses …………………………………………………… 1808
      1. Military Responses …………………………………………………… 1808
      2. Covert Action ………………………………………………………… 1810
      3. Sanctions ……………………………………………………………… 1811
   E. Market Solutions ……………………………………………………… 1813
      1. Immutable Life Logs as an Alibi Service …………………………… 1814
      2. Speech Policies of Platforms ………………………………………… 1817
Conclusion ……………………………………………………………………… 1819
INTRODUCTION
Through the magic of social media, it all went viral: a vivid photograph, an
inflammatory fake version, an animation expanding on the fake, posts debunking
the fakes, and stories trying to make sense of the situation. 1 It was both a sign of
the times and a cautionary tale about the challenges ahead.
The episode centered on Emma Gonzalez, a student who survived the
horrific shooting at Marjory Stoneman Douglas High School in Parkland,
Florida, in February 2018. In the aftermath of the shooting, a number of the
students emerged as potent voices in the national debate over gun control. Emma,
in particular, gained prominence thanks to the closing speech she delivered
during the “March for Our Lives” protest in Washington, D.C., as well as a
contemporaneous article she wrote for Teen Vogue.2 Fatefully, the Teen Vogue
1. Alex Horton, A Fake Photo of Emma Gonzalez Went Viral on the Far Right, Where Parkland Teens Are Villains, WASH. POST (Mar. 26, 2018), https://www.washingtonpost.com/news/the-intersect/wp/2018/03/25/a-fake-photo-of-emma-gonzalez-went-viral-on-the-far-right-where-parkland-teens-are-villains/?utm_term=.0b0f8655530d [https://perma.cc/6NDJ-WADV].
2. Florida Student Emma Gonzalez [sic] to Lawmakers and Gun Advocates: ‘We call BS’, CNN (Feb. 17, 2018), https://www.cnn.com/2018/02/17/us/florida-student-emma-gonzalez-speech/index.html [https://perma.cc/ZE3B-MVPD]; Emma Gonzalez, Emma Gonzalez on Why This Generation Needs Gun Control, TEEN VOGUE (Mar. 23, 2018), https://www.teenvogue.com/story/emma-gonzalez-parkland-gun-control-cover?mbid=social_twitter [https://perma.cc/P8TQ-P2ZR].
piece incorporated a video entitled “This Is Why We March,” including a visually arresting sequence in which Emma rips up a large sheet displaying a bullseye target.
A powerful still image of Emma ripping up the bullseye target began to circulate on the Internet. But soon someone generated a fake version, in which the torn sheet is not a bullseye, but rather a copy of the Constitution of the United States. While on some level the fake image might be construed as artistic fiction highlighting the inconsistency of gun control with the Second Amendment, the fake was not framed that way. Instead, it was depicted as a true image of Emma Gonzalez ripping up the Constitution.
The image soon went viral. A fake of the video also appeared, though it
was more obvious that it had been manipulated. Still, the video circulated widely,
thanks in part to actor Adam Baldwin circulating it to a quarter-million followers
on Twitter (along with the disturbing hashtag #Vorwarts, the German word for
“forward,” a reference to neo-Nazis’ nod to the word’s role in a Hitler Youth
anthem).3
Several factors combined to limit the harm from this fakery. First, the
genuine image already was in wide circulation and available at its original
source. This made it fast and easy to fact-check the fakes. Second, the intense
national attention associated with the post-Parkland gun control debate and,
especially, the role of students like Emma in that debate, ensured that journalists
paid attention to the issue, spending time and effort to debunk the fakes. Third,
the fakes were of poor quality (though audiences inclined to believe their
message might disregard the red flags).
Even with those constraints, though, many believed the fakes, and harm
ensued. Our national dialogue on gun control has suffered some degree of
3. See Horton, supra note 1.
distortion; Emma has likely suffered some degree of anguish over the episode;
and other Parkland victims likely felt maligned and discredited. Falsified
imagery, in short, has already exacted significant costs for individuals and
society. But the situation is about to get much worse, as this Article shows.
Technologies for altering images, video, or audio (or even creating them
from scratch) in ways that are highly realistic and difficult to detect are maturing
rapidly. As they ripen and diffuse, the problems illustrated by the Emma
Gonzalez episode will expand and generate significant policy and legal
challenges. Imagine a deep fake video, released the day before an election,
making it appear that a candidate for office has made an inflammatory statement.
Or what if, in the wake of the Trump-Putin tete-a-tete at Helsinki in 2018,
someone circulated a deep fake audio recording that seemed to portray President
Trump as promising not to take any action should Russia interfere with certain
NATO allies. Screenwriters are already building such prospects into their
plotlines. 4 The real world will not lag far behind.
Pornographers have been early adopters of the technology, interposing the
faces of celebrities into sex videos. This has given rise to the label “deep fake”
for such digitized impersonations. We use that label here more broadly, as
shorthand for the full range of hyper-realistic digital falsification of images,
video, and audio.
This full range will entail, sooner rather than later, a disturbing array of
malicious uses. We are by no means the first to observe that deep fakes will
migrate far beyond the pornography context, with great potential for harm. 5 We
4. See, e.g., Vindu Goel & Sheera Frenkel, In India Election, False Posts and Hate Speech Flummox Facebook, N.Y. TIMES (Apr. 1, 2019), https://www.nytimes.com/2019/04/01/technology/india-elections-facebook.html [https://perma.cc/B9CP-MPPK] (describing the deluge of fake and manipulated videos and images circulated in the lead up to elections in India); Homeland: Like Bad at Things (Showtime television broadcast Mar. 4, 2018), https://www.sho.com/homeland/season/7/episode/4/like-bad-at-things [https://perma.cc/25XK-NN3Y]; Taken: Verum Nocet (NBC television broadcast Mar. 30, 2018), https://www.nbc.com/taken/video/verum-nocet/3688929 [https://perma.cc/CVP2-PNXZ] (depicting a deep-fake video in which a character appears to recite song lyrics); The Good Fight: Day 408 (CBS television broadcast Mar. 4, 2018) (depicting fake audio purporting to be President Trump); The Good Fight: Day 464 (CBS television broadcast Apr. 29, 2018) (featuring a deep-fake video of the alleged “golden shower” incident involving President Trump).
5. See, e.g., Samantha Cole, We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now, VICE: MOTHERBOARD (Jan. 24, 2018), https://motherboard.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley [https://perma.cc/V9NT-CBW8] (“[T]echnology[] allows anyone with sufficient raw footage to work with to convincingly place any face in any video.”); see also @BuzzFeed, You Won’t Believe What Obama Says in This Video, TWITTER (Apr. 17, 2018, 8:00 AM), https://twitter.com/BuzzFeed/status/986257991799222272 [https://perma.cc/C38K-B377] (“We’re entering an era in which our enemies can make anyone say anything at any point in time.”); Tim Mak, All Things Considered: Technologies to Create Fake Audio and Video Are Quickly Evolving, NAT’L PUB. RADIO (Apr. 2, 2018), https://www.npr.org/2018/04/02/598916380/technologies-to-create-fake-audio-and-video-are-quickly-evolving [https://perma.cc/NY23-YVQD] (discussing deep-fake videos created for political reasons and misinformation campaigns); Julian Sanchez (@normative), TWITTER (Jan. 24, 2018, 12:26 PM) (“The prospect of any Internet rando being able to swap anyone’s face into porn is incredibly creepy. But my first thought is that we have not even scratched the surface of how bad ‘fake news’ is going to get.”).
do, however, provide the first comprehensive survey of these harms and potential
responses to them. We break new ground by giving early warning regarding the
powerful incentives that deep fakes produce for privacy-destructive solutions.
This Article unfolds as follows. Part I begins with a description of the
technological innovations pushing deep fakes into the realm of hyper-realism
and making them increasingly difficult to debunk. It then discusses the
amplifying power of social media and the confounding influence of cognitive
biases.
Part II surveys the benefits and the costs of deep fakes. The upsides of deep
fakes include artistic exploration and educative contributions. The downsides of
deep fakes, however, are as varied as they are costly. Some harms are suffered
by individuals or groups, such as when deep fakes are deployed to exploit or
sabotage individual identities and corporate opportunities. Others impact society
more broadly, such as distortion of policy debates, manipulation of elections,
erosion of trust in institutions, exacerbation of social divisions, damage to
national security, and disruption of international relations. And, in what we call
the “liar’s dividend,” deep fakes make it easier for liars to avoid accountability
for things that are in fact true.
Part III turns to the question of remedies. We survey an array of existing or
potential solutions involving civil and criminal liability, agency regulation, and
“active measures” in special contexts like armed conflict and covert action. We
also discuss technology-driven market responses, including not just the
promotion of debunking technologies, but also the prospect of an alibi service,
such as privacy-destructive life logging. We find, in the end, that there are no
silver-bullet solutions. Thus, we couple our recommendations with warnings to
the public, policymakers, and educators.
I.
TECHNOLOGICAL FOUNDATIONS OF THE DEEP-FAKES PROBLEM
Digital impersonation is increasingly realistic and convincing. Deep-fake
technology is the cutting-edge of that trend. It leverages machine-learning
algorithms to insert faces and voices into video and audio recordings of actual
people and enables the creation of realistic impersonations out of digital whole
cloth. 6 The end result is realistic-looking video or audio making it appear that
someone said or did something. Although deep fakes can be created with the
consent of people being featured, more often they will be created without it. This
Part describes the technology and the forces ensuring its diffusion, virality, and
entrenchment.
6. See Cole, supra note 5.
A. Emergent Technology for Robust Deep Fakes
Doctored imagery is neither new nor rare. Innocuous doctoring of images, such as tweaks to lighting or the application of a filter to improve image quality, is ubiquitous. Tools like Photoshop enable images to be tweaked in both superficial and substantive ways.7 The field of digital forensics has been grappling with the challenge of detecting digital alterations for some time. 8 Generally, forensic techniques are automated and thus less dependent on the human eye to spot discrepancies. 9 While the detection of doctored audio and video was once fairly straightforward, 10 the emergence of generative technology capitalizing on machine learning promises to shift this balance. It will enable the production of altered (or even wholly invented) images, videos, and audio that are more realistic and more difficult to debunk than they have been in the past.
This technology often involves the use of a “neural network” for machine
learning. The neural network begins as a kind of tabula rasa featuring a nodal
network controlled by a set of numerical standards set at random. 11 Much as
experience refines the brain’s neural nodes, examples train the neural network
system. 12 If the network processes a broad array of training examples, it should
be able to create increasingly accurate models. 13 It is through this process that
neural networks categorize audio, video, or images and generate realistic
impersonations or alterations. 14
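To make that training dynamic concrete, the following is a minimal illustrative sketch in Python (using only the NumPy library); the toy task, the network size, and every numerical choice are hypothetical and are not drawn from the systems or sources discussed in this Part:

```python
# Illustrative only: a toy neural network refined by labeled training examples,
# showing how randomly initialized weights are gradually adjusted. This is a
# pedagogical sketch, not the pipeline used by any actual deep-fake system.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs X and the labels y the network should learn (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The network begins as a "tabula rasa": weights set at random.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current guess for each example.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error between the guesses and the training labels.
    err = out - y

    # Backward pass: nudge every weight to reduce the error (gradient descent).
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(out.round(3))  # outputs move toward the labels [0, 1, 1, 0] as training proceeds
```

The point of the sketch is simply that nothing in the trained network is hand-coded; its behavior emerges from repeated exposure to examples, which is why the realism of the resulting impersonations tracks the breadth of the training data.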
7. See, e.g., Stan Horaczek, Spot Faked Photos Using Digital Forensic Techniques, POPULAR SCIENCE (July 21, 2017), https://www.popsci.com/use-photo-forensics-to-spot-faked-images [https://perma.cc/G72B-VLF2] (depicting and discussing a series of manipulated photographs).
8. Doctored images have been prevalent since the advent of photography. See PHOTO TAMPERING THROUGHOUT HISTORY, http://pth.izitru.com [https://perma.cc/5QSZ-NULR]. The gallery was curated by FourandSix Technologies, Inc.
9. See Tiffanie Wen, The Hidden Signs That Can Reveal a Fake Photo, BBC FUTURE (June 30, 2017), http://www.bbc.com/future/story/20170629-the-hidden-signs-that-can-reveal-if-a-photo-is-fake [https://perma.cc/W9NX-XGKJ]. IZITRU.COM was a project spearheaded by Dartmouth’s Dr. Hany Farid. It allowed users to upload photos to determine if they were fakes. The service was aimed at “legions of citizen journalists who want[ed] to dispel doubts that what they [were] posting [wa]s real.” Rick Gladstone, Photos Trusted but Verified, N.Y. TIMES (May 7, 2014), https://lens.blogs.nytimes.com/2014/05/07/photos-trusted-but-verified [https://perma.cc/7A73-URKP].
10. See Steven Melendez, How DARPA’s Fighting Deepfakes, FAST COMPANY (Apr. 4, 2018), https://www.fastcompany.com/40551971/can-new-forensic-tech-win-war-on-ai-generated-fake-images [https://perma.cc/9A8L-LFTQ].
11. Larry Hardesty, Explained: Neural Networks, MIT NEWS (Apr. 14, 2017), http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414 [https://perma.cc/NTA6-4Z2D].
12. Natalie Wolchover, New Theory Cracks Open the Black Box of Deep Neural Networks, WIRED (Oct. 8, 2017), https://www.wired.com/story/new-theory-deep-learning [https://perma.cc/UEL5-69ND].
13. Will Knight, Meet the Fake Celebrities Dreamed Up By AI, MIT TECH. REV. (Oct. 31, 2017), https://www.technologyreview.com/the-download/609290/meet-the-fake-celebrities-dreamed-up-by-ai [https://perma.cc/D3A3-JFY4].
14. Will Knight, Real or Fake? AI Is Making It Very Hard to Know, MIT TECH. REV. (May 1, 2017), https://www.technologyreview.com/s/604270/real-or-fake-ai-is-making-it-very-hard-to-know [https://perma.cc/3MQN-A4VH].
To take a prominent example, researchers at the University of Washington
have created a neural network tool that alters videos so speakers say something
different from what they originally said. 15 They demonstrated the technology
with a video of former President Barack Obama (for whom plentiful video
footage was available to train the network) that made it appear that he said things
that he had not. 16
By itself, the emergence of machine learning through neural network
methods would portend a significant increase in the capacity to create false
images, videos, and audio. But the story does not end there. Enter “generative
adversarial networks,” otherwise known as GANs. The GAN approach, invented
by Google researcher Ian Goodfellow, brings two neural networks to bear
simultaneously. 17 One network, known as the generator, draws on a dataset to
produce a sample that mimics the dataset. 18 The other network, the discriminator,
assesses the degree to which the generator succeeded. 19 In an iterative fashion,
the assessments from the discriminator inform the assessments of the generator.
The result far exceeds the speed, scale, and nuance of what human reviewers
could achieve. 20 Growing sophistication of the GAN approach is sure to lead to
the production of increasingly convincing deep fakes.21
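The interplay between the two networks can be illustrated with a compressed sketch, offered here only as a toy example: it assumes the open-source PyTorch library, and the one-dimensional “data” and all parameter choices are hypothetical stand-ins for the image, video, and audio corpora involved in real deep fakes:

```python
# Illustrative only: a minimal generative adversarial network (GAN) loop,
# assuming the PyTorch library. The "real" data here are merely samples from a
# 1-D Gaussian; actual deep-fake systems train on images, video, or audio.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into candidate samples meant to mimic the data.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to be real rather than generated.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" dataset samples
    fake = G(torch.randn(64, 4))             # the generator's attempts

    # 1. Train the discriminator to tell real samples from generated ones.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # 2. Train the generator to fool the discriminator's updated judgment.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

# After training, generated samples should cluster near the real mean (about 3.0).
print(G(torch.randn(1000, 4)).mean().item())
```

Each pass through the loop mirrors the iterative dynamic described above: the discriminator is trained to separate real samples from the generator’s output, and the generator is then trained against the discriminator’s updated judgments, so each network improves by competing with the other.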
15. SUPASORN SUWAJANAKORN ET AL., SYNTHESIZING OBAMA: LEARNING LIP SYNC FROM AUDIO, 36 ACM TRANSACTIONS ON GRAPHICS, no. 4, art. 95 (July 2017), http://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf [https://perma.cc/7DCY-XK58]; James Vincent, New AI Research Makes It Easier to Create Fake Footage of Someone Speaking, VERGE (July 12, 2017), https://www.theverge.com/2017/7/12/15957844/ai-fake-video-audio-speech-obama [https://perma.cc/3SKP-EKGT].
16. Charles Q. Choi, AI Creates Fake Obama, IEEE SPECTRUM (July 12, 2017), https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-creates-fake-obama [https://perma.cc/M6GP-TNZ4]; see also Joon Son Chung et al., You Said That? (July 18, 2017) (British Machine Vision conference paper), https://arxiv.org/abs/1705.02966 [https://perma.cc/6NAH-MAYL].
17. See Ian J. Goodfellow et al., Generative Adversarial Nets (June 10, 2014) (Neural Information Processing Systems conference paper), https://arxiv.org/abs/1406.2661 [https://perma.cc/97SH-H7DD] (introducing the GAN approach); see also Tero Karras et al., Progressive Growing of GANs for Improved Quality, Stability, and Variation, ICLR 2018, at 1-2 (Apr. 2018) (conference paper), http://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf [https://perma.cc/RSK2-NBAE] (explaining neural networks in the GAN approach).
18. Karras, supra note 17, at 1.
19. Id.
20. Id. at 2.
21. Consider research conducted at Nvidia. Karras, supra note 17, at 2 (explaining a novel approach that begins training cycles with low-resolution images and gradually shifts to higher-resolution images, producing better and much quicker results). The New York Times recently profiled the Nvidia team’s work. See Cade Metz & Keith Collins, How an A.I. ‘Cat-and-Mouse Game’ Generates Believable Fake Photos, N.Y. TIMES (Jan. 2, 2018), https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-photos.html [https://perma.cc/6DLQ-RDWD]. For further illustrations of the GAN approach, see Martin Arjovsky et al., Wasserstein GAN (Dec. 6, 2017) (unpublished manuscript) (on file with California Law Review); Chris Donahue et al., Semantically Decomposing the Latent Spaces of Generative Adversarial Networks, ICLR 2018 (Feb. 22, 2018) (conference paper) (on file with California Law Review), https://github.com/chrisdonahue/sdgan; Phillip Isola et al., Image-to-Image Translation with Conditional Adversarial Nets (Nov. 26, 2018) (unpublished manuscript) (on file with California Law Review); Alec Radford et al., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (Jan. 7, 2016) (unpublished manuscript) (on file with California Law Review); Jun-Yan Zhu et al., Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks (Nov. 15, 2018) (unpublished manuscript) (on file with California Law Review).
The same is true with respect to generating convincing audio fakes. In the
past, the primary method of generating audio entailed the creation of a large
database of sound fragments from a source, which would then be combined and
reordered to generate simulated speech. New approaches promise greater
sophistication, including Google DeepMind’s “Wavenet” model, 22 Baidu’s
DeepVoice, 23 and GAN models. 24 Startup Lyrebird has posted short audio clips
simulating Barack Obama, Donald Trump, and Hillary Clinton discussing its
technology with admiration. 25
In comparison to private and academic efforts to develop deep-fake
technology, less is currently known about governmental research. 26 Given the
possible utility of deep-fake techniques for various government purposes, including the need to defend against hostile uses, it is a safe bet that state actors
22. Aaron van den Oord et al., WaveNet: A Generative Model for Raw Audio (Sept. 19, 2016)
(unpublished manuscript) (on file with California Law Review), https://arxiv.org/pdf/1609.03499.pdf
[https://perma.cc/QX4W-E6JT].
23. Ben Popper, Baidu’s New System Can Learn to Imitate Every Accent, VERGE (Oct. 24,
2017),
https://www.theverge.com/2017/10/24/16526370/baidu-deepvoice-3-ai-text-to-speech-voice
[https://perma.cc/NXV2-GDVJ].
24. See Chris Donahue et al., Adversarial Audio Synthesis (Feb. 9, 2019) (conference paper),
https://arxiv.org/pdf/1802.04208.pdf [https://perma.cc/F5UG-334U]; Yang Gao et al., Voice
Impersonation Using Generative Adversarial Networks (Feb. 19, 2018) (unpublished manuscript),
https://arxiv.org/abs/1802.06840 [https://perma.cc/5HZV-ZLD3].
25. See Bahar Gholipour, New AI Tech Can Mimic Any Voice, SCI. AM. (May 2, 2017),
https://www.scientificamerican.com/article/new-ai-tech-can-mimic-any-voice [https://perma.cc/2HSP-83C3]. The ability to cause havoc by using this technology to portray persons saying things they have
never said looms large. Lyrebird’s website includes an “ethics” statement, which defensively invokes
notions of technological determinism. The statement argues that impersonation technology is inevitable
and that society benefits from gradual introduction to it. Ethics, LYREBIRD, https://lyrebird.ai/ethics
[https://perma.cc/Q57E-G6MK] (“Imagine that we had decided not to release this technology at all.
Others would develop it and who knows if their intentions would be as sincere as ours: they could, for
example, only sell the technology to a specific company or an ill-intentioned organization. By contrast,
we are making the technology available to anyone and we are introducing it incrementally so that society
can adapt to it, leverage its positive aspects for good, while preventing potentially negative
applications.”).
26. DARPA’s MediFor program is working to “[develop] technologies for the automated assessment of the integrity of an image or video and [integrate] these in an end-to-end media forensics platform.” Matt Turek, Media Forensics (MediFor), DEF. ADVANCED RES. PROJECTS AGENCY, https://www.darpa.mil/program/media-forensics [https://perma.cc/VBY5-BQJA]. IARPA’s DIVA program is attempting to use artificial intelligence to identify threats by sifting through video imagery. Deep Intermodal Video Analytics (DIVA) Program, INTELLIGENCE ADVANCED RES. PROJECTS ACTIVITY, https://www.iarpa.gov/index.php/research-programs/diva [https://perma.cc/4VDX-B68W]. There are no grants from the National Science Foundation awarding federal dollars to researchers studying the detection of doctored audio and video content at this time. E-mail from Seth M. Goldstein, Project Manager, IARPA, Office of the Director of National Intelligence, to Samuel Morse (Apr. 6, 2018, 7:49 AM) (on file with authors).
are conducting classified research in this area. However, it is unclear whether
classified research lags behind or outpaces commercial and academic efforts. At
the least, we can say with confidence that industry, academia, and governments
have the motive, means, and opportunity to push this technology forward at a
rapid clip.
B. Diffusion of Deep-Fake Technology
The capacity to generate persuasive deep fakes will not stay in the hands of
either technologically sophisticated or responsible actors. 27 For better or worse,
deep-fake technology will diffuse and democratize rapidly.
As Benjamin Wittes and Gabriella Blum explained in The Future of
Violence: Robots and Germs, Hackers and Drones, technologies, even dangerous ones, tend to diffuse over time. 28 Firearms developed for state-controlled armed forces are now sold to the public for relatively modest prices. 29
The tendency for technologies to spread only lags if they require scarce inputs
that function (or are made to function) as chokepoints to curtail access. 30 Scarcity
as a constraint on diffusion works best where the input in question is tangible
and hard to obtain, such as plutonium or highly enriched uranium to create
nuclear weapons. 31
Often though, the only scarce input for a new technology is the knowledge
behind a novel process or unique data sets. Where the constraint involves an
intangible resource like information, preserving secrecy requires not only
security against theft, espionage, and mistaken disclosure, but also the capacity
and will to keep the information confidential. 32 Depending on the circumstances,
the relevant actors may not want to keep the information to themselves and,
indeed, may have affirmative commercial or intellectual motivation to disperse
it, as in the case of academics or business enterprises. 33
27. See Jaime Dunaway, Reddit (Finally) Bans Deepfake Communities, but Face-Swapping Porn Isn’t Going Anywhere, SLATE (Feb. 8, 2018), https://slate.com/technology/2018/02/reddit-finally-bans-deepfake-communities-but-face-swapping-porn-isnt-going-anywhere.html [https://perma.cc/A4Z7-2LDF].
28. See generally BENJAMIN WITTES & GABRIELLA BLUM, THE FUTURE OF VIOLENCE: ROBOTS AND GERMS, HACKERS AND DRONES. CONFRONTING A NEW AGE OF THREAT (2015).
29. Fresh Air: Assault Style Weapons in the Civilian Market, NPR (radio broadcast Dec. 20, 2012). Program host Terry Gross interviews Tom Diaz, a policy analyst for the Violence Policy Center. A transcript of the interview can be found at https://www.npr.org/templates/transcript/transcript.php?storyId=167694808 [https://perma.cc/CE3F-5AFX].
30. See generally GRAHAM T. ALLISON ET AL., AVOIDING NUCLEAR ANARCHY (1996).
31. Id.
32. The techniques that are used to combat cyber attacks and threats are often published in
scientific papers, so that a multitude of actors can implement these shields as a defense measure.
However, the sophisticated malfeasor can use this information to create cyber weapons that circumvent
the defenses that researchers create.
33. In April 2016, the hacker group “Shadow Brokers” released malware that had allegedly been created by the National Security Agency (NSA). One month later, the malware was used to propagate the WannaCry cyber attacks, which wreaked havoc on network systems around the globe, threatening to erase files if a ransom was not paid through Bitcoin. See Bruce Schneier, Who Are the Shadow Brokers?, ATLANTIC (May 23, 2017), https://www.theatlantic.com/technology/archive/2017/05/shadow-brokers/527778 [https://perma.cc/UW2F-V36G].
Consequently, the capacity to generate deep fakes is sure to diffuse rapidly
no matter what efforts are made to safeguard it. The capacity does not depend on
scarce tangible inputs, but rather on access to knowledge like GANs and other
approaches to machine learning. As the volume and sophistication of publicly
available deep-fake research and services increase, user-friendly tools will be
developed and propagated online, allowing diffusion to reach beyond experts.
Such diffusion has occurred in the past both through commercial and blackmarket means, as seen with graphic manipulation tools like Photoshop and
malware services on the dark web. 34 User-friendly capacity to generate deep
fakes likely will follow a similar course on both dimensions. 35
Indeed, diffusion has begun for deep-fake technology. The recent wave of
attention generated by deep fakes began after a Reddit user posted a tool inserting
the faces of celebrities into porn videos. 36 Once FakeApp, “a desktop app for
creating photorealistic faceswap videos made with deep learning,” appeared
online, the public adopted it in short order. 37 Following the straightforward steps
provided by FakeApp, a New York Times reporter created a semi-realistic deep-fake video of his face on actor Chris Pratt’s body with 1,861 images of himself
and 1,023 images of Chris Pratt. 38 After enlisting the help of someone with
experience blending facial features and source footage, the reporter created a
realistic video featuring him as Jimmy Kimmel. 39 This portends the diffusion of
ever more sophisticated versions of deep-fake technology.
C. Fueling the Fire
The capacity to create deep fakes comes at a perilous time. No longer is the
public’s attention exclusively in the hands of trusted media companies.
Individuals peddling deep fakes can quickly reach a massive, even global,
34. See ARMOR, THE BLACK MARKET REPORT: A LOOK INSIDE THE DARK WEB 2 (2018), https://www.armor.com/app/uploads/2018/03/2018-Q1-Reports-BlackMarket-DIGITAL.pdf [https://perma.cc/4UJA-QJ94] (explaining that the means to conduct a DDoS attack can be purchased for $10/hour, or $200/day).
35. See id.
36. Emma Grey Ellis, People Can Put Your Face on Porn–And the Law Can’t Help You, WIRED (Jan. 26, 2018), https://www.wired.com/story/face-swap-porn-legal-limbo [https://perma.cc/B7K7-Y79L].
37. FAKEAPP, https://www.fakeapp.org.
38. Kevin Roose, Here Come the Fake Videos, Too, N.Y. TIMES (Mar. 4, 2018), https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html [https://perma.cc/U5QE-EPHX].
39. Id.
audience. As this section explores, networked phenomena, rooted in cognitive
bias, will fuel that effort. 40
Twenty-five years ago, the practical ability of individuals and organizations
to distribute images, audio, and video (whether authentic or not) was limited. In
most countries, a handful of media organizations disseminated content on a
national or global basis. In the U.S., the major television and radio networks,
newspapers, magazines, and book publishers controlled the spread of
information. 41 While governments, advertisers, and prominent figures could
influence mass media, most were left to pursue local distribution of content. For
better or worse, relatively few individuals or entities could reach large audiences
in this few-to-many information distribution environment. 42
The information revolution has disrupted this content distribution model. 43
Today, innumerable platforms facilitate global connectivity. Generally speaking,
the networked environment blends the few-to-many and many-to-many models
of content distribution, democratizing access to communication to an
unprecedented degree. 44 This reduces the overall amount of gatekeeping, though
control still remains with the companies responsible for our digital
infrastructure. 45 For instance, content platforms have terms-of-service
agreements, which ban certain forms of content based on companies’ values. 46
40. See generally DANIELLE KEATS CITRON, HATE CRIMES IN CYBERSPACE (2014) [hereinafter CITRON, HATE CRIMES IN CYBERSPACE] (exploring pathologies attendant to online speech including deindividuation, virality, information cascades, group polarization, and filter bubbles). For important early work on filter bubbles, echo chambers, and group polarization in online interactions, see generally ELI PARISER, THE FILTER BUBBLE: WHAT THE INTERNET IS HIDING FROM YOU (2011); CASS R. SUNSTEIN, REPUBLIC.COM (2001).
41. See generally NICHOLAS CARR, THE BIG SWITCH: REWIRING THE WORLD, FROM EDISON TO GOOGLE (2008); HOWARD RHEINGOLD, SMART MOBS: THE NEXT SOCIAL REVOLUTION (2002).
42. See id.
43. See generally SIVA VAIDHYANATHAN, THE GOOGLIZATION OF EVERYTHING (AND WHY WE SHOULD WORRY) (2011).
44. This ably captures the online environment accessible for those living in the United States. As Jack Goldsmith and Tim Wu argued a decade ago, geographic borders and the will of governments can and do make themselves known online. See generally JACK GOLDSMITH & TIM WU, WHO CONTROLS THE INTERNET?: ILLUSIONS OF A BORDERLESS WORLD (2006). The Internet visible in China is vastly different from the Internet visible in the EU, which is different from the Internet visible in the United States (and likely to become more so soon). See, e.g., Elizabeth C. Economy, The Great Firewall of China: Xi Jinping’s Internet Shutdown, GUARDIAN (June 29, 2018), https://www.theguardian.com/news/2018/jun/29/the-great-firewall-of-china-xi-jinpings-internet-shutdown [https://perma.cc/8GUS-EC59]; Casey Newton, Europe Is Splitting the Internet into Three: How the Copyright Directive Reshapes the Open Web, VERGE (Mar. 27, 2019), https://www.theverge.com/2019/3/27/18283541/european-union-copyright-directive-Internet-article-13 [https://perma.cc/K235-RZ7Q].
45. Danielle Keats Citron & Neil M. Richards, Four Principles for Digital Expression (You Won’t Believe #3!), 95 WASH. U. L. REV. 1353, 1361-64 (2018).
46. See CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 232-35; Danielle Keats Citron, Extremist Speech, Compelled Conformity, and Censorship Creep, 93 NOTRE DAME L. REV. 1035, 1037 (2018) [hereinafter Citron, Extremist Speech] (noting that platforms’ terms of service and community guidelines have banned child pornography, spam, phishing, fraud, impersonation, copyright violations, threats, cyber stalking, nonconsensual pornography, and hate speech); see also DANIELLE KEATS CITRON & QUINTA JURECIC, PLATFORM JUSTICE: CONTENT MODERATION AT AN INFLECTION POINT 12 (Hoover Institution ed., 2018) [hereinafter CITRON & JURECIC, PLATFORM JUSTICE], https://www.hoover.org/sites/default/files/research/docs/citron-jurecic_webreadypdf.pdf [https://perma.cc/M5L6-GNCH] (noting Facebook’s Terms of Service agreement banning nonconsensual pornography). See generally Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009) [hereinafter Citron, Cyber Civil Rights]; Danielle Keats Citron & Helen Norton, Intermediaries and Hate Speech: Fostering Digital Citizenship for Our Information Age, 91 B.U. L. REV. 1435, 1458 (2011) (discussing hate speech restrictions contained in platforms’ terms of service agreements); Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 FORDHAM L. REV. 401 (2017) (arguing that law should incentivize online platforms to address known illegality in a reasonable manner).
They experience pressure from, or adhere to legal mandates of, governments to block or filter certain information like hate speech or “fake news.”47
Although private companies have enormous power to moderate content
(shadow banning it, lowering its prominence, and so on), they may decline to
filter or block content that does not amount to obvious illegality. Generally
speaking, there is far less screening of content for accuracy, quality, or
suppression of facts or opinions that some authority deems undesirable.
Content not only can find its way to online audiences, but can circulate far
and wide, sometimes going viral both online and, at times, amplifying further
once picked up by traditional media. A variety of cognitive heuristics help fuel
these dynamics. Three phenomena in particular-the “information cascade”
dynamic, human attraction to negative and novel information, and filter
bubbles-help explain why deep fakes may be especially prone to going viral.
First, consider the “information cascade” dynamic. 48 Information cascades
result when people stop paying sufficient attention to their own information,
relying instead on what they assume others have reliably determined and then
passing that information along. Because people cannot know everything, they often rely on what others say, even if it contradicts their own knowledge. 49 At a
certain point, people stop paying attention to their own information and look to
what others know. 50 And when people pass along what others think, the
47. See Citron, Extremist Speech, supra note 46, at 1040-49 (exploring pressure from EU Commission on major platforms to remove extremist speech and hate speech). For important work on global censorship efforts, see the scholarship of Anupam Chander, Daphne Keller, and Rebecca MacKinnon. See generally REBECCA MACKINNON, CONSENT OF THE NETWORKED: THE WORLDWIDE STRUGGLE FOR INTERNET FREEDOM 6 (2012) (arguing that ISPs and online platforms have “far too much power over citizens’ lives, in ways that are insufficiently transparent or accountable to the public interest.”); Anupam Chander, Facebookistan, 90 N.C. L. REV. 1807, 1819-35 (2012); Anupam Chander, Googling Freedom, 99 CALIF. L. REV. 1, 5-9 (2011); Daphne Keller, Toward a Clearer Conversation About Platform Liability, KNIGHT FIRST AMEND. INST. AT COLUM. U. (April 6, 2018), https://knightcolumbia.org/content/toward-clearer-conversation-about-platform-liability [https://perma.cc/GWM7-J8PW].
48. Carr, supra note 41. See generally DAVID EASLEY & JON KLEINBERG, NETWORKS, CROWDS, AND MARKETS: REASONING ABOUT A HIGHLY CONNECTED WORLD (2010) (exploring cognitive biases in the information marketplace); CASS SUNSTEIN, REPUBLIC.COM 2.0 (2007) (same).
49. See generally EASLEY & KLEINBERG, supra note 48.
50. Id.
credibility of the original claim snowballs. 51 As the cycle repeats, the cascade
strengthens. 52
Social media platforms are a ripe environment for the formation of
information cascades spreading content of all stripes. From there, cascades can
spill over to traditional mass-audience outlets that take note of the surge of social
media interest and as a result cover a story that otherwise they might not have. 53
Social movements have leveraged the power of information cascades, including
Black Lives Matter activists 54 and the Never Again movement of the Parkland
High School students. 55 Arab Spring protesters spread videos and photographs
of police torture. 56 Journalist Howard Rheingold refers to positive information
cascades as “smart mobs.” 57 But not every mob is smart or laudable, and the
information cascade dynamic does not account for such distinctions. The Russian
covert action program to sow discord in the United States during the 2016
election provides ample demonstration. 58
Second, our natural tendency to propagate negative and novel information
may enable viral circulation of deep fakes. Negative and novel information
“grab[s] our attention as human beings and [] cause[s] us to want to share that
information with others-we’re attentive to novel threats and especially attentive
to negative threats.” 59 Data scientists, for instance, studied 126,000 news stories
shared on Twitter from 2006 to 2010, using third-party fact-checking sites to
51. Id.
52. Id.
53. See generally YOCHAI BENKLER, THE WEALTH OF NETWORKS: HOW SOCIAL PRODUCTION TRANSFORMS MARKETS AND FREEDOM (2006) (elaborating the concept of social production in relation to rapid evolution of the information marketplace and resistance to that trend).
54. See Monica Anderson & Paul Hitlin, The Hashtag #BlackLivesMatter Emerges: Social Activism on Twitter, PEW RES. CTR. (Aug. 15, 2016), http://www.pewinternet.org/2016/08/15/the-hashtag-blacklivesmatter-emerges-social-activism-on-twitter [https://perma.cc/4BW9-L67G] (discussing Black Lives Matter activists’ use of the hashtag #BlackLivesMatter to identify their message and display solidarity around race and police use of force).
55. Jonah Engel Bromwich, How the Parkland Students Got So Good at Social Media, N.Y. TIMES (Mar. 7, 2018), https://www.nytimes.com/2018/03/07/us/parkland-students-social-media.html [https://perma.cc/7AW9-4HR2] (discussing students’ use of social media to keep sustained political attention on the Parkland tragedy).
56. CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 68.
57. RHEINGOLD, supra note 41.
58. The 2018 indictment of the Internet Research Agency in the U.S. District Court for the District of Columbia is available at https://www.justice.gov/file/1035477/download [https://perma.cc/B6WJ-4FLX]; see also David A. Graham, What the Mueller Indictment Reveals, ATLANTIC (Feb. 16, 2018), https://www.theatlantic.com/politics/archive/2018/02/mueller-roadmap/553604 [https://perma.cc/WU2U-XHWW]; Tim Mak & Audrey McNamara, Mueller Indictment of Russian Operatives Details Playbook of Information Warfare, NAT’L PUB. RADIO (Feb. 17, 2018), https://www.npr.org/2018/02/17/586690342/mueller-indictment-of-russian-operatives-details-playbook-of-information-warfare [https://perma.cc/RJ6F-999R].
59. Robinson Meyer, The Grim Conclusions of the Largest-Ever Study of Fake News, THE ATLANTIC (Mar. 8, 2018), https://www.theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104 [https://perma.cc/PJS2-RKMF].
classify them as true or false. 60 According to the study, hoaxes and false rumors
reached people ten times faster than accurate stories. 61 Even when researchers
controlled for differences between accounts originating rumors, falsehoods were
70 percent more likely to get retweeted than accurate news. 62 The uneven spread
of fake news was not due to bots, which in fact retweeted falsehoods at the same
frequency as accurate information. 63 Rather, false news spread faster due to
people retweeting inaccurate news items. 64 The study’s authors hypothesized
that falsehoods had greater traction because they seemed more “novel” and
evocative than real news. 65 False rumors tended to elicit responses expressing
surprise and disgust, while accurate stories evoked replies associated with
sadness and trust. 66
With human beings seemingly more inclined to spread negative and novel
falsehoods, the field is ripe for bots to spur and escalate the spreading of negative
misinformation. 67 Facebook estimates that as many as 60 million bots may be
infesting its platform. 68 Bots were responsible for a substantial portion of
political content posted during the 2016 election. 69 Bots also can manipulate
algorithms used to predict potential engagement with content.
Negative information not only is tempting to share, but is also relatively
“sticky.” As social science research shows, people tend to credit, and remember, negative information far more than positive information. 70 Coupled
with our natural predisposition towards certain stimuli like sex, gossip, and
violence, that tendency provides a welcome environment for harmful deep
fakes. 71 The Internet amplifies this effect, which helps explain the popularity of
60. Soroush Vosoughi et al., The Spread of True and False News Online, 359 SCIENCE 1146, 1146 (2018), http://science.sciencemag.org/content/359/6380/1146/tab-pdf [https://perma.cc/5U5D-UHPZ].
61. Id. at 1148.
62. Id. at 1149.
63. Id. at 1146.
64. Id.
65. Id. at 1149.
66. Id. at 1146, 1150.
67. Meyer, supra note 59 (quoting political scientist Dave Karpf).
68. Nicholas Confessore et al., The Follower Factory, N.Y. TIMES (Jan. 27, 2018), https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html [https://perma.cc/DX34-RENV] (“In November, Facebook disclosed to investors that it had at least twice as many fake users as it previously estimated, indicating that up to 60 million automated accounts may roam the world’s largest social media platform.”); see also Extremist Content and Russian Disinformation Online: Working with Tech to Find Solutions: Hearing Before the S. Judiciary Comm., 117th Cong. (2017), https://www.judiciary.senate.gov/meetings/extremist-content-and-russian-disinformation-online-working-with-tech-to-find-solutions [https://perma.cc/M5L9-R2MY].
69. David M. J. Lazer et al., The Science of Fake News: Addressing Fake News Requires a Multidisciplinary Effort, 359 SCIENCE 1094, 1095 (2018).
70. See, e.g., Elizabeth A. Kensinger, Negative Emotion Enhances Memory Accuracy: Behavioral and Neuroimaging Evidence, 16 CURRENT DIRECTIONS IN PSYCHOL. SCI. 213, 217 (2007) (finding that “negative emotion conveys focal benefits on memory for detail”).
71. PARISER,supra note 40, at 13-14.
gossip sites like TMZ.com. 72 Because search engines produce results based on
our interests, they tend to feature more of the same: more sex and more gossip.73
Third, filter bubbles further aggravate the spread of false information. Even
without the aid of technology, we naturally tend to surround ourselves with
information confirming our beliefs. Social media platforms supercharge this
tendency by empowering users to endorse and re-share content. 74 Platforms’
algorithms highlight popular information, especially if it has been shared by
friends, and surround us with content from relatively homogenous groups. 75 As
endorsements and shares accumulate, the chances for an algorithmic boost
increase. After seeing friends’ recommendations online, individuals tend to pass
on those recommendations to their own networks. 76 Because people tend to share
information with which they agree, social media users are surrounded by
information confirming their preexisting beliefs. 77 This is what we mean by
“filter bubble.” 78
Filter bubbles can be powerful insulators against the influence of contrary
information. In a study of Facebook users, researchers found that individuals
reading fact-checking articles had not originally consumed the fake news at
issue, and those who consumed fake news in the first place almost never read a
fact-check that might debunk it.79
Taken together, common cognitive biases and social media capabilities are
behind the viral spread of falsehoods and decay of truth. They have helped
entrench what amounts to information tribalism, and the results plague public
and private discourse. Information cascades, natural attraction to negative and
novel information, and filter bubbles provide an all-too-welcoming environment
as deep-fake capacities mature and proliferate.
II.
COSTS AND BENEFITS
Deep-fake technology can and will be used for a wide variety of purposes.
Not all will be antisocial; some, in fact, will be profoundly prosocial.
72. CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 68.
73. Id.
74. Id. at 67.
75. Id.
76. Id.
77. Id.
78. Political scientists Andrew Guess, Brendan Nyhan, and Jason Reifler studied the production and consumption of fake news on Facebook during the 2016 U.S. Presidential election. According to the study, filter bubbles were deep (with one in four individuals visiting from fake news websites), but narrow (the majority of fake news group consumption was concentrated among 10% of the public). See ANDREW GUESS ET AL., SELECTIVE EXPOSURE TO MISINFORMATION: EVIDENCE FROM THE CONSUMPTION OF FAKE NEWS DURING THE 2016 U.S. PRESIDENTIAL CAMPAIGN 1 (2018), https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf [https://perma.cc/F3VF-NCL].
79. See id. at 11.
Nevertheless, deep fakes can inflict a remarkable array of harms, many of which
are exacerbated by features of the information environment explored above.
A. Beneficial Uses of Deep-Fake Technology
Human ingenuity no doubt will conceive many beneficial uses for deep-fake technology. For now, the most obvious possibilities for beneficial uses fall
under the headings of education, art, and the promotion of individual autonomy.
1. Education
Deep-fake technology creates an array of opportunities for educators,
including the ability to provide students with information in compelling ways
relative to traditional means like readings and lectures. This is similar to an
earlier wave of educational innovation made possible by increasing access to
ordinary video. 80 With deep fakes, it will be possible to manufacture videos of
historical figures speaking directly to students, giving an otherwise unappealing
lecture a new lease on life. 81
Creating modified content will raise interesting questions about intellectual
property protections and the reach of the fair use exemption. Setting those
obstacles aside, the educational benefits of deep fakes are appealing from a
pedagogical perspective in much the same way that is true for the advent of
virtual and augmented reality production and viewing technologies. 82
The technology opens the door to relatively cheap and accessible
production of video content that alters existing films or shows, particularly on
the audio track, to illustrate a pedagogical point. For example, a scene from a
war film could be altered to make it seem that a commander and her legal advisor
are discussing application of the laws of war, when in the original the dialogue
had nothing to do with that, and the scene could be re-run again and again with
modifications to the dialogue tracking changes to the hypothetical scenario under
80. Emily Cruse, Using Educational Video in the Classroom: Theory, Research, and Practice, 1-2 (2013) (unpublished manuscript), https://www.safarimontage.com/pdfs/training/UsingEducationalVideoinTheClassroom.pdf [https://perma.cc/AJ8Q-WZP4].
81. Face2Face is a real-time face capture and reenactment software developed by researchers at the University of Erlangen-Nuremberg, the Max-Planck-Institute for Informatics, and Stanford University. The applications of this technology could reinvent the way students learn about historical events and figures. See Justus Thies et al., Face2Face: Real-time Face Capture and Reenactment of RGB Videos (June 2016) (29th IEEE-CVPR 2016 conference paper), http://www.graphics.stanford.edu/~niessner/papers/2016/1facetoface/thies2016face.pdf [https://perma.cc/S94K-DPU5].
82. Adam Evans, Pros and Cons of Virtual Reality in the Classroom, CHRON. HIGHER EDUC. (Apr. 8, 2018), https://www.chronicle.com/article/ProsCons-of-Virtual/243016 [https://perma.cc/TN84-89SQ].
consideration. If done well, it would surely beat just having the professor asking
students to imagine the shifting scenario out of whole cloth. 83
The educational value of deep fakes will extend beyond the classroom. In
the spring of 2018, Buzzfeed provided an apt example when it circulated a video
that appeared to feature Barack Obama warning of the dangers of deep-fake
technology itself. 84 One can imagine deep fakes deployed to support educational
campaigns by public-interest organizations such as Mothers Against Drunk
Driving.
2. Art
The potential artistic benefits of deep-fake technology relate to its
educational benefits, though they need not serve any formal educational purpose.
Thanks to the use of existing technologies that resurrect dead performers for
fresh roles, the benefits to creativity are already familiar to mass audiences. 85 For
example, the startling appearance of the long-dead Peter Cushing as the
venerable Grand Moff Tarkin in 2016’s Rogue One was made possible by a deft
combination of live acting and technical wizardry. That prominent illustration
delighted some and upset others. 86 The Star Wars contribution to this theme
continued in The Last Jedi when Carrie Fisher’s death led the filmmakers to fake
additional dialogue using snippets from real recordings. 87
Not all artistic uses of deep-fake technologies will have commercial
potential. Artists may find it appealing to express ideas through deep fakes,
including, but not limited to, productions showing incongruities between
apparent speakers and their apparent speech. Video artists might use deep-fake
technology to satirize, parody, and critique public figures and public officials.
Activists could use deep fakes to demonstrate their point in a way that words
alone could not.
3. Autonomy
Just as art overlaps with education, deep fakes implicate self-expression.
But not all uses of deep fakes for self-expression are best understood as art. Some
83. The facial animation software CrazyTalk, by Reallusion, animates faces from photographs
or cartoons and can be used by educators to further pedagogical goals. The software is available at
https://www.reallusion.com/crazytalk/default.html [https://perma.cc/TTX8-QMJP].
84. See Choi, supra note 16.
85. Indeed, film contracts now increasingly address future uses of a person’s image in
subsequent films via deep fake technology in the event of their death.
86. Dave Itzkoff, How ‘Rogue One’ Brought Back Familiar Faces, N.Y. TIMES (Dec. 27, 2016), https://www.nytimes.com/2016/12/27/movies/how-rogue-one-brought-back-grand-moff-tarkin.html [https://perma.cc/F53C-TDYV].
87. Evan Narcisse, It Took Some Movie Magic to Complete Carrie Fisher’s Leia Dialogue in The Last Jedi, GIZMODO (Dec. 8, 2017), https://io9.gizmodo.com/it-took-some-movie-magic-to-complete-carrie-fishers-lei-1821121635 [https://perma.cc/NF5H-GPJF].
may be used to facilitate “avatar” experiences for a variety of self-expressive
ends that might best be described in terms of autonomy.
Perhaps most notably, deep-fake audio technology holds promise to restore
the ability of persons suffering from certain forms of paralysis, such as ALS, to
speak with their own voice. 88 Separately, individuals suffering from certain
physical disabilities might interpose their faces and that of consenting partners
into pornographic videos, enabling virtual engagement with an aspect of life
unavailable to them in a conventional sense. 89
The utility of deep-fake technology for avatar experiences, which need not
be limited to sex, closely relates to more familiar examples of technology. Video
games, for example, enable a person to have or perceive experiences that might
otherwise be impossible, dangerous, or otherwise undesirable if pursued in
person. The customizable avatars from Nintendo Wii (known as “Mii”) provide
a familiar and non-threatening example. The video game example underscores
that the avatar scenario is not always a serious matter, and sometimes boils down
to no more and no less than the pursuit of happiness.
Deep-fake technology confers the ability to integrate more realistic
simulacrums of one’s own self into an array of media, thus producing a stronger
avatar effect. For some aspects of the pursuit of autonomy, this will be a very
good thing (as the book and film Ready Player One suggests, albeit with
reference to a vision of advanced virtual reality rather than deep-fake
technology). Not so for others, however. Indeed, as we describe below, the
prospects for the harmful use of deep-fake technology are legion.
B. Harmful Uses of Deep-Fake Technology
Human ingenuity, alas, is not limited to applying technology to beneficial
ends. Like any technology, deep fakes also will be used to cause a broad
spectrum of serious harms, many of them exacerbated by the combination of
networked information systems and cognitive biases described above.
1. Harm to Individuals or Organizations
Lies about what other people have said or done are as old as human society,
and come in many shapes and sizes. Some merely irritate or embarrass, while
others humiliate and destroy; some spur violence. All of this will be true with
deep fakes as well, only more so due to their inherent credibility and the manner
88. Sima Shakeri, Lyrebird Helps ALS Ice Bucket Challenge Co-Founder Pat Quinn Get His Voice Back: Project Revoice Can Change Lives, HUFFINGTON POST (Apr. 14, 2018), https://www.huffingtonpost.ca/2018/04/14/lyrebird-helps-als-ice-bucket-challenge-co-founder-pat-quinn-get-his-voice-back_a_23411403 [https://perma.cc/R5SD-Y37Y].
89. See Allie Volpe, Deepfake Porn has Terrifying Implications. But What if it Could Be Used for Good?, MEN'S HEALTH (Apr. 13, 2018), https://www.menshealth.com/sex-women/a19755663/deepfakes-porn-reddit-pornhub [https://perma.cc/EFX9-2BUE].
in which they hide the liar’s creative role. Deep fakes will emerge as powerful
mechanisms for some to exploit and sabotage others.
a. Exploitation
There will be no shortage of harmful exploitations. Some will be in the
nature of theft, such as stealing people’s identities to extract financial or some
other benefit. Others will be in the nature of abuse, commandeering a person’s
identity to harm them or individuals who care about them. And some will involve
both dimensions, whether the person creating the fake so intended or not.
As an example of extracting value, consider the possibilities for the realm
of extortion. Blackmailers might use deep fakes to extract something of value
from people, even those who might normally have little or nothing to fear in this
regard, who (quite reasonably) doubt their ability to debunk the fakes
persuasively, or who fear that any debunking would fail to reach far and fast
enough to prevent or undo the initial damage. 90 In that case, victims might be
forced to provide money, business secrets, or nude images or videos (a practice
known as sextortion) to prevent the release of the deep fakes. 91 Likewise,
fraudulent kidnapping claims might prove more effective in extracting ransom
when backed by video or audio appearing to depict a victim who is not in fact in
the fraudster’s control.
Not all value extraction takes a tangible form. Deep-fake technology can
also be used to exploit an individual's sexual identity for others' gratification. 92
Thanks to deep-fake technology, an individual's face, voice, and body can be
swapped into real pornography. 93 A subreddit (now closed) featured deep-fake
sex videos of female celebrities and amassed more than 100,000 users. 94 As one
Reddit user asked, “I want to make a porn video with my ex-girlfriend. But I
90. See generally ADAM DODGE & ERICA JOHNSTONE, USING FAKE VIDEO TECHNOLOGY TO PERPETUATE INTIMATE PARTNER ABUSE 6 (2018), http://withoutmyconsent.org/blog/new-advisory-helps-domestic-violence-survivors-prevent-and-stop-deepfake-abuse [https://perma.cc/K3Y2-XG2Q] (discussing how deep fakes used as blackmail of an intimate partner could violate the California Family Code). The advisory was published by the non-profit organization Without My Consent, which combats online invasions of privacy.
91. Sextortion thrives on the threat that the extortionist will disclose sex videos or nude images unless more nude images or videos are provided. BENJAMIN WITTES ET AL., SEXTORTION: CYBERSECURITY, TEENAGERS, AND REMOTE SEXUAL ASSAULT (Brookings Inst. ed., 2016), https://www.brookings.edu/wp-content/uploads/2016/05/sextortion1-1.pdf [https://perma.cc/7K9N-5W7C].
92. See DODGE & JOHNSTONE, supra note 90, at 6 (explaining the likelihood that domestic abusers and cyber stalkers will use deep-fake sex tapes to harm victims); Janko Roettgers, 'Deep Fakes' Will Create Hollywood's Next Sex Tape Scare, VARIETY (Feb. 2, 2018), http://variety.com/2018/digital/news/hollywood-sex-tapes-deepfakes-ai-1202685655 [https://perma.cc/98HQ-668G].
93. Danielle Keats Citron, Sexual Privacy, 128 YALE L.J. 1870, 1921-24 (2019) [hereinafter Citron, Sexual Privacy].
94. DODGE & JOHNSTONE, supra note 90, at 6.
don’t have any high-quality video with her, but I have lots of good photos.” 95 A
Discord user explained that he made a “pretty good” video of a girl he went to
high school with, using around 380 photos scraped from her Instagram and
Facebook accounts. 96
These examples highlight an important point: the gendered dimension of
the exploitation of deep fakes. In all likelihood, the majority of victims of fake
sex videos will be female. This has been the case for cyber stalking and nonconsensual pornography, and likely will be the case for deep-fake sex videos. 97
One can easily imagine deep-fake sex videos subjecting individuals to
violent, humiliating sex acts. This shows that not all such fakes will be designed
primarily, or at all, for the creator’s sexual or financial gratification. Some will
be nothing less than cruel weapons meant to terrorize and inflict pain. Of deep-fake sex videos, Mary Anne Franks has astutely said, "If you were the worst misogynist in the world, this technology would allow you to accomplish whatever you wanted."98
When victims discover that they have been used in deep-fake sex videos, the psychological damage may be profound, whether or not this was the video
creator’s aim. Victims may feel humiliated and scared. 99 Deep-fake sex videos
force individuals into virtual sex, reducing them to sex objects. As Robin West
has observed, the threat of sexual violence "literally, albeit not physically,
penetrates the body.” 100 Deep-fake sex videos can transform rape threats into a
terrifying virtual reality. They send the message that victims can be sexually
abused at whim. Given the stigma of nude images, especially for women and
girls, individuals depicted in fake sex videos also may suffer collateral
consequences in the job market, among other places, as we explain in more detail
below in our discussion of sabotage. 101
95. Id.
96. Id.
97. ASIA A. EATON ET AL., 2017 NATIONWIDE ONLINE STUDY OF NONCONSENSUAL PORN VICTIMIZATION AND PERPETRATION 12 (Cyber C.R. Initiative ed., 2017), https://www.cybercivilrights.org/wp-content/uploads/2017/06/CCRI-2017-Research-Report.pdf [https://perma.cc/2HYP-7ELV] ("Women were significantly more likely [1.7 times] to have been victims of [non-consensual porn] or to have been threatened with [non-consensual porn] . . . .").
98. Drew Harwell, Fake-Porn Videos Are Being Weaponized to Harass and Humiliate Women: 'Everybody is a Potential Target', WASH. POST (Dec. 30, 2018), https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/?utm_term=.936bfc339777 [https://perma.cc/D37Y-DPXB].
99. See generally Rana Ayyub, In India, Journalists Face Slut-Shaming and Rape Threats, N.Y. TIMES (May 22, 2018), https://www.nytimes.com/2018/05/22/opinion/india-journalists-slut-shaming-rape.html [https://perma.cc/A7WR-PF6L]; 'I Couldn't Talk or Sleep for Three Days': Journalist Rana Ayyub's Horrific Social Media Ordeal over Fake Tweet, DAILY O (Apr. 26, 2018), https://www.dailyo.in/variety/rana-ayyub-trolling-fake-tweet-social-media-harassment-hindutva/story/1/23733.html [https://perma.cc/J6G6-H6GZ].
100. ROBIN WEST, CARING FOR JUSTICE 102-03 (1997) (emphasis omitted).
101. Deep-fake sex videos should be considered in light of the broader cyber stalking phenomenon, which more often targets women and usually involves online assaults that are sexually threatening and sexually demeaning. See CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 13-19.
These examples are but the tip of a disturbing iceberg. Like sexualized deep
fakes, imagery depicting non-sexual abuse or violence might also be used to
threaten, intimidate, and inflict psychological harm on the depicted victim (or
those who care for that person). Deep fakes also might be used to portray
someone, falsely, as endorsing a product, service, idea, or politician. Other forms
of exploitation will abound.
b. Sabotage
In addition to inflicting direct psychological harm on victims, deep-fake technology can be used to harm victims along other dimensions due to its utility for reputational sabotage. Across every field of competition (workplace, romance, sports, marketplace, and politics), people will have the capacity to deal significant blows to the prospects of their rivals.
It could mean the loss of a romantic opportunity or the support of friends, the denial of a promotion, the cancellation of a business opportunity, and beyond. Deep-fake videos could depict a person destroying property in a drunken rage. They could show people stealing from a store; yelling vile, racist epithets; using drugs; or engaging in any other manner of antisocial or embarrassing behavior, such as sounding incoherent. Depending on the circumstances, timing, and circulation of the fake,
the effects could be devastating.
In some instances, debunking the fake may come too late to remedy the
initial harm. For example, consider how a rival might torpedo the draft position
of a top pro sports prospect by releasing a compromising deep-fake video just as
the draft begins. Even if the video is later exposed as a fake, it could be impossible to undo the consequences (which might involve the loss of millions of dollars): once cautious teams make other picks, the victim may fall into later rounds of the draft (or out of the draft altogether). 102
The nature of today’s communication environment enhances the capacity
of deep fakes to cause reputational harm. The combination of cognitive biases
and algorithmic boosting increases the chances for salacious fakes to circulate.
The ease of copying and storing data online, including storage in remote jurisdictions, makes it much harder to eliminate fakes once they are posted and shared. These considerations, combined with ever-improving search engines, increase the chances that employers, business partners, or romantic interests will
encounter the fake.
102. This hypothetical is modeled on an actual event, albeit one involving a genuine rather than a falsified compromising video. In 2016, a highly regarded NFL prospect named Laremy Tunsil may have lost as much as $16 million when, on the verge of the NFL draft, someone released a video showing him smoking marijuana with a bong and gas mask. See Jack Holmes, A Hacker's Tweet May Have Cost This NFL Prospect Almost $16 Million, ESQUIRE (Apr. 29, 2016), https://www.esquire.com/sports/news/a44457/laremy-tunsil-nfl-draft-weed-lost-millions [https://perma.cc/7PEL-PRBF].
Once discovered, deep fakes can be devastating to those searching for
employment. Search results matter to employers. 103 According to a 2009
Microsoft study, more than 90 percent of employers use search results to make decisions about candidates, and in more than 77 percent of cases, those results have a negative impact. As the study explained, employers often decline to interview or hire people because their search results featured "inappropriate photos."104 The reason for those decisions should be obvious. It is less risky and
expensive to hire people who do not have the baggage of damaged online
reputations. This is especially true in fields where the competition for jobs is
steep. 105 There is little reason to think the dynamics would be significantly
different with respect to romantic prospects. 106
Deep fakes can be used to sabotage business competitors. Deep-fake videos
could show a rival company’s chief executive engaged in any manner of
disreputable behavior, from purchasing illegal drugs to hiring underage
prostitutes to uttering racial epithets to bribing government officials. Deep fakes
could be released just in time to interfere with merger discussions or bids for
government contracts. As with the sports draft example, mundane business
opportunities could be thwarted even if the videos are ultimately exposed as
fakes.
103. Number of Employers Using Social Media to Screen Candidates at All-Time High, Finds Latest CareerBuilder Study, CAREERBUILDER: PRESS ROOM (June 15, 2017), http://press.careerbuilder.com/2017-06-15-Number-of-Employers-Using-Social-Media-to-Screen-Candidates-at-All-Time-High-Finds-Latest-CareerBuilder-Study [https://perma.cc/K6BD-DYSV] (noting that a national survey conducted in 2017 found that over half of employers will not hire a candidate without an online presence and may choose not to hire a candidate based on negative social media content).
104. This has been the case for nude photos posted without consent, often known as revenge porn. See generally CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 17-18, 48-49 (exploring the economic fallout of the nonconsensual posting of someone's nude image); Mary Anne Franks, "Revenge Porn" Reform: A View from the Front Lines, 69 FLA. L. REV. 1251, 1308-23 (2017). For recent examples, see Tasneem Nashrulla, A Middle School Teacher Was Fired After a Student Obtained Her Topless Selfie. Now She is Suing the School District for Gender Discrimination, BUZZFEED (Apr. 4, 2019), https://www.buzzfeednews.com/article/tasneemnashrulla/middle-school-teacher-fired-topless-selfie-lawsuit [https://perma.cc/3PGZ-CZ5R]; Annie Seifullah, Revenge Porn Took My Career. The Law Couldn't Get It Back, JEZEBEL (July 18, 2018), https://jezebel.com/revenge-porn-took-my-career-the-law-couldnt-get-it-bac-1827572768 [https://perma.cc/D9Y8-63WH].
105. See Danielle Keats Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 WAKE
FOREST L. REV. 345, 352-53 (2014) ("Most employers rely on candidates' online reputations as an
employment screen.”).
106. Journalist Rana Ayyub, who faced vicious online abuse including her image in deep-fake sex videos, explained that the deep fakes seemed designed to label her as "promiscuous," "immoral," and damaged goods. Ayyub, supra note 99. See generally Citron, Sexual Privacy, supra note 93, at 1925-26 (discussing how victims of deep-fake sex videos felt crippled and unable to talk or eat, let alone engage with others); Danielle Keats Citron, Why Sexual Privacy Matters for Trust, WASH. U. L. REV. (forthcoming) (recounting fear of dating and embarrassment experienced by individuals whose nude photos were disclosed online without consent).
2. Harm to Society
Deep fakes are not just a threat to specific individuals or entities. They have
the capacity to harm society in a variety of ways. Consider the following:
• Fake videos could feature public officials taking bribes, displaying racism, or engaging in adultery.
• Politicians and other government officials could appear in locations where they were not, saying or doing things that they did not. 107
• Fake audio or video could involve damaging campaign material that claims to emanate from a political candidate when it does not. 108
• Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.
• Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort. 109
• A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.
• A fake audio clip might "reveal" criminal behavior by a candidate on the eve of an election.
• Falsified video appearing to show a Muslim man at a local mosque celebrating the Islamic State could stoke distrust of, or even violence against, that community.
• A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or sparking a wave of violence.
• False audio might convincingly depict U.S. officials privately "admitting" a plan to commit an outrage overseas, timed to disrupt an important diplomatic initiative.
• A fake video might depict emergency officials "announcing" an impending missile strike on Los Angeles or an emergent pandemic in New York City, provoking panic and worse.
107. See, e.g., Linton Weeks, A Very Weird Photo of Ulysses S. Grant, NAT'L PUB. RADIO (Oct. 27, 2015, 11:03 AM), https://www.npr.org/sections/npr-history-dept/2015/10/27/452089384/a-very-weird-photo-of-ulysses-s-grant [https://perma.cc/F3U6-WRVF] (discussing a doctored photo of Ulysses S. Grant from the Library of Congress archives that was created over 100 years ago).
108. For powerful work on the potential damage of deep-fake campaign speech, see Rebecca Green, Counterfeit Campaign Speech, 70 HASTINGS L.J. (forthcoming 2019).
109. Cf. Vindu Goel & Sheera Frenkel, In India Election, False Posts and Hate Speech Flummox Facebook, N.Y. TIMES (Apr. 1, 2019), https://www.nytimes.com/2019/04/01/technology/india-elections-facebook.html [https://perma.cc/55AW-X6Q3].
As these scenarios suggest, the threats posed by deep fakes have systemic
dimensions. The damage may extend to, among other things, distortion of
democratic discourse on important policy questions; manipulation of elections;
erosion of trust in significant public and private institutions; enhancement and
exploitation of social divisions; harm to specific military or intelligence
operations or capabilities; threats to the economy; and damage to international
relations.
a. Distortion of Democratic Discourse
Public discourse on questions of policy currently suffers from the
circulation of false information. 110 Sometimes lies are intended to undermine the
credibility of participants in such debates, and sometimes lies erode the factual
foundation that ought to inform policy discourse. Even without prevalent deep
fakes, information pathologies abound. But deep fakes will exacerbate matters
by raising the stakes for the "fake news" phenomenon in dramatic fashion (quite literally). 111
Many actors will have sufficient interest to exploit the capacity of deep
fakes to skew information and thus manipulate beliefs. As recent actions by the
Russian government demonstrate, state actors sometimes have such interests. 112
Other actors will do it as a form of unfair competition in the battle of ideas. And
others will do it simply as a tactic of intellectual vandalism and fraud. The
combined effects may be significant, including but not limited to the disruption
of elections. But elections are vulnerable to deep fakes in a separate and
distinctive way as well, as we will explore in the next section.
Democratic discourse is most functional when debates build from a
foundation of shared facts and truths supported by empirical evidence. 113 In the
absence of an agreed upon reality, efforts to solve national and global problems
become enmeshed in needless first-order questions like whether climate change
is real. 114 The large-scale erosion of public faith in data and statistics has led us
110. See Steve Lohr, It's True: False News Spreads Faster and Wider. And Humans Are to Blame, N.Y. TIMES (Mar. 8, 2018), https://www.nytimes.com/2018/03/08/technology/twitter-fake-news-research.html [https://perma.cc/AB74-CUWV].
111. Franklin Foer, The Era of Fake Video Begins, ATLANTIC (May 2018), https://www.theatlantic.com/magazine/archive/2018/05/realitys-end/556877 [https://perma.cc/RX2A-X8EE] ("Fabricated videos will create new and understandable suspicions about everything we watch. Politicians and publicists will exploit those doubts. When captured in a moment of wrongdoing, a culprit will simply declare the visual evidence a malicious concoction.").
112. Charlie Warzel, 2017 Was the Year Our Internet Destroyed Our Shared Reality, BUZZFEED (Dec. 28, 2017), https://www.buzzfeed.com/charliewarzel/2017-year-the-Internet-destroyed-shared-reality?utm_term=.nebaDjYmj [https://perma.cc/8WWS-UC8K].
113. Mark Verstraete & Derek E. Bambauer, Ecosystem of Distrust, 16 FIRST AMEND. L. REV. 129, 152 (2017). For powerful scholarship on how lies undermine a culture of trust, see SEANA VALENTINE SHIFFRIN, SPEECH MATTERS: ON LYING, MORALITY, AND THE LAW (2014).
114. Verstraete & Bambauer, supra note 113, at 144 (“Trust in data and statistics is a precondition
to being able to resolve disputes about the world–they allow participants in policy debates to operate
at least from a shared reality.”).
to a point where the simple introduction of empirical evidence can alienate those
who have come to view statistics as elitist. 115 Deep fakes will allow individuals
to live in their own subjective realities, where beliefs can be supported by
manufactured “facts.” When basic empirical insights provoke heated
contestation, democratic discourse has difficulty proceeding. In a marketplace of
ideas flooded with deep-fake videos and audio, truthful facts will have difficulty
emerging from the scrum.
b. Manipulation of Elections
In addition to the ability of deep fakes to inject visual and audio falsehoods
into policy debates, a deeply convincing variation of a long-standing problem in
politics, deep fakes can enable a particularly disturbing form of sabotage:
distribution of a damaging, but false, video or audio about a political candidate.
The potential to sway the outcome of an election is real, particularly if the
attacker is able to time the distribution such that there is enough time for the fake to circulate but not enough time for the victim to debunk it effectively (assuming it can be debunked at all). In this respect, the election scenario is akin to the sports draft scenario described earlier. Both involve
decisional chokepoints: narrow windows of time during which irrevocable
decisions are made, and during which the circulation of false information
therefore may have irremediable effects.
The 2017 election in France illustrates the perils. In this variant of the
operation executed against the Clinton campaign in the United States in 2016,
the Russians mounted a covert-action program that blended cyber-espionage and
information manipulation in an effort to prevent the election of Emmanuel
Macron as President of France in 2017. 116 The campaign included theft of large
numbers of digital communications and documents, alteration of some of those
documents in hopes of making them seem problematic, and dumping a lot of
them on the public alongside aggressive spin. The effort ultimately fizzled for
many reasons, including: poor tradecraft that made it easy to trace the attack;
smart defensive work by the Macron team, which planted its own false documents throughout its own system to create a smokescreen of distrust; a
lack of sufficiently provocative material despite an effort by the Russians to
engineer scandal by altering some of the documents prior to release; and
mismanagement of the timing of the document dump, which left enough time for
the Macron team and the media to discover and point out all these flaws. 117
115. Id.
116. See Aurelien Breeden et al., Macron Campaign Says It Was Target of 'Massive' Hacking Attack, N.Y. TIMES (May 5, 2017), https://www.nytimes.com/2017/05/05/world/europe/france-macron-hacking.html [https://perma.cc/4RC8-PV5G].
117. See, e.g., Adam Nossiter et al., Hackers Came, But the French Were Prepared, N.Y. TIMES (May 9, 2017), https://www.nytimes.com/2017/05/09/world/europe/hackers-came-but-the-french-were-prepared.html [https://perma.cc/P3EW-H5ZY].
It was a bullet dodged, yes, but a bullet nonetheless. The Russians could
have acted with greater care, both in terms of timing and tradecraft. They could
have produced a more damning fake document, for example, dropping it just as polls opened. Worse, they could have distributed a deep fake consisting of seemingly real video or audio evidence persuasively depicting Macron saying or doing something shocking.
This version of the deep-fake threat is not limited to state-sponsored covert
action. States may have a strong incentive to develop and deploy such tools to
sway elections, but there will be no shortage of non-state actors and individuals
motivated to do the same. The limitation on such interventions has much more
to do with means than motive, as things currently stand. The diffusion of the
capacity to produce high-quality deep fakes will erode that limitation,
empowering an ever-widening circle of participants to inject false-but-compelling information into a ready and willing information-sharing environment. If executed and timed well enough, such interventions are bound to tip an outcome sooner or later, and in a larger set of cases they will at least cast a shadow of illegitimacy over the election process itself.
c. Eroding Trust in Institutions
Deep fakes will erode trust in a wide range of both public and private institutions, and such trust will become harder to maintain. The list of public
institutions for which this will matter runs the gamut, including elected officials,
appointed officials, judges, juries, legislators, staffers, and agencies. One can
readily imagine, in the current climate especially, a fake-but-viral video
purporting to show FBI special agents discussing ways to abuse their authority
to pursue a Trump family member. Conversely, we might see a fraudulent video
of ICE officers speaking with racist language about immigrants or acting cruelly
towards a detained child. Particularly where strong narratives of distrust already
exist, provocative deep fakes will find a primed audience.
Private sector institutions will be just as vulnerable. If an institution has a
significant voice or role in society, whether nationally or locally, it is a potential
target. More to the point, such institutions already are subject to reputational
attacks, but soon will have to face abuse in the form of deep fakes that are harder
to debunk and more likely to circulate widely. Religious institutions are an
obvious target, as are politically-engaged entities ranging from Planned
Parenthood to the NRA. 118
118. Recall that the Center for Medical Progress released videos of Planned Parenthood officials that Planned Parenthood argued had been deceptively edited to embarrass the organization. See, e.g., Jackie Calmes, Planned Parenthood Videos Were Altered, Analysis Finds, N.Y. TIMES (Aug. 27, 2015), https://www.nytimes.com/2015/08/28/us/abortion-planned-parenthood-videos.html [https://perma.cc/G52X-V8ND]. Imagine the potential for deep fakes designed for such a purpose.
d. Exacerbating Social Divisions
The institutional examples relate closely to significant cleavages in
American society involving identity and policy commitments. Indeed, this is
what makes institutions attractive targets for falsehoods. As divisions become
entrenched, the likelihood that opponents will believe negative things about the
other side, and that some will be willing to spread lies towards that end, grows. 119 However, institutions will not be the only ones targeted with deep
fakes. We anticipate that deep fakes will reinforce and exacerbate the underlying
social divisions that fueled them in the first place.
Some have argued that this was the actual, or at least the original, goal
of the Russian covert action program involving intervention in American politics
in 2016. The Russians may have intended to enhance American social divisions
as a general proposition, rendering us less capable of forming consensus on
important policy questions and thus more distracted by internal squabbles. 120
Texas is illustrative. 121 Russia promoted conspiracy theories about federal
military power during the innocuous "Jade Helm" training exercises. 122 Russian
operators organized an event in Houston to protest radical Islam and a counterprotest of that event; 123 they also promoted a Texas independence movement. 124
Deep fakes will strengthen the hand of those who seek to divide us in this way.
Deep fakes will not merely add fuel to the fire sustaining divisions. In some
instances, the emotional punch of a fake video or audio might accomplish a
degree of mobilization-to-action that written words alone could not. 125 Consider
119. See Brian E. Weeks, Emotions, Partisanship, and Misperceptions: How Anger and Anxiety
Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation, 65 J. COMM. 699,
711-15 (2015) (discussing how political actors can spread political misinformation by recognizing and
exploiting common human emotional states).
120. JON WHITE, DISMISS, DISTORT, DISTRACT, AND DISMAY: CONTINUITY AND CHANGE IN RUSSIAN DISINFORMATION (Inst. for European Studies ed., 2016), https://www.ies.be/node/3689 [https://perma.cc/P889-768J].
121. The CalExit campaign is another illustration of a Russian disinformation campaign. 'Russian Trolls' Promoted California Independence, BBC (Nov. 4, 2017), http://www.bbc.com/news/blogs-trending-41853131 [https://perma.cc/68Q8-KNDG].
122. Cassandra Pollock & Alex Samuels, Hysteria Over Jade Helm Exercise in Texas Was Fueled by Russians, Former CIA Director Says, TEX. TRIB. (May 3, 2018), https://www.texastribune.org/2018/05/03/hysteria-over-jade-helm-exercise-texas-was-fueled-russians-former-cia [https://perma.cc/BU2Y-E7EY].
123. Scott Shane, How Unwitting Americans Encountered Russian Operatives Online, N.Y. TIMES (Feb. 18, 2018), https://www.nytimes.com/2018/02/18/us/politics/russian-operatives-facebook-twitter.html [https://perma.cc/4C8Y-STP7].
124. Casey Michel, How the Russians Pretended to Be Texans-And Texans Believed Them, WASH. POST (Oct. 17, 2017), https://www.washingtonpost.com/news/democracy-post/wp/2017/10/17/how-the-russians-pretended-to-be-texans-and-texans-believed-them/?noredirect=on&utm_term=.4730a395a684 [https://perma.cc/3Q7V-8YZK].
125. The “Pizzagate” conspiracy theory is a perfect example. There, an individual stormed a D.C.
restaurant with a gun because online stories falsely claimed that Presidential candidate Hillary Clinton
ran a child sex exploitation ring out of its basement. See Marc Fisher et al., Pizzagate: From Rumor, to
Hashtag,
to
Gunfire
in
D.C.,
WASH.
POST
(Dec.
6,
2016),
a situation of fraught, race-related tensions involving a police force and a local
community. A sufficiently inflammatory deep fake depicting a police officer
using racial slurs, shooting an unarmed person, or both could set off substantial
civil unrest, riots, or worse. Of course, the same deep fake might be done in
reverse, falsely depicting a community leader calling for violence against the
police. Such events would impose intangible costs by sharpening societal
divisions, as well as tangible costs for those tricked into certain actions and those
suffering from those actions.
e. Undermining Public Safety
The foregoing example illustrates how a deep fake might be used to
enhance social divisions and to spark actions, even violence, that fray our
social fabric. But note, too, how deep fakes can undermine public safety.
A century ago, Justice Oliver Wendell Holmes warned of the danger of
falsely shouting fire in a crowded theater. 126 Now, false cries in the form of deep
fakes go viral, fueled by the persuasive power of hyper-realistic evidence in
conjunction with the distribution powers of social media. 127 The panic and
damage Holmes imagined may be modest in comparison to the potential unrest
and destruction created by a well-timed deep fake. 128
In the best-case scenario, real public panic might simply entail economic
harms and hassles. In the worst-case scenario, it might involve property
destruction, personal injuries, and/or death. Deep fakes increase the chances that
someone can induce a public panic.
They might not even need to capitalize on social divisions to do so. In early
2018, we saw a glimpse of how a panic might be caused through ordinary human
error when an employee of Hawaii’s Emergency Management Agency issued a
126. Schenck v. United States, 249 U.S. 47, 52 (1919) (Holmes, J.) (“The most stringent
protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a
panic.”).
127. Cass R. Sunstein, Constitutional Caution, 1996 U. CHI. LEGAL F. 361, 365 (1996) ("It may well be that the easy transmission of such material to millions of people will justify deference to reasonable legislative judgments.").
128. In our keynote at the University of Maryland Law Review symposium inspired by this article, we brought the issue close to home (for one of us) in Baltimore: the death of Freddie Gray while he was in police custody. We asked the audience: "Imagine if a deep-fake video appeared of the police officers responsible for Mr. Gray's death in which they said they were ordered to kill Mr. Gray. As most readers know, the day after Mr. Gray's death was characterized by protests and civil unrest. If such a deep-fake video had appeared and gone viral, we might have seen far more violence and disruption in Baltimore. If the timing was just right and the video sufficiently inflammatory, we might have seen greater destruction of property and possibly of lives." Robert Chesney & Danielle Keats Citron, 21st Century-Style Truth Decay: Deep Fakes and the Challenge for Privacy, Free Expression, and National Security, 78 MD. L. REV. 887 (2019); see also Maryland Carey Law, Truth Decay - Maryland Law Review Keynote Symposium Address, YOUTUBE (Feb. 6, 2019), https://www.youtube.com/watch?v=WrYlKHiWv2c [https://perma.cc/TT8M-ZBBN].
warning to the public about an incoming ballistic missile. 129 Less widely noted,
we saw purposeful attempts to induce panic when the Russian Internet Research
Agency mounted a sophisticated and well-resourced campaign to create the
appearance of a chemical disaster in Louisiana and an Ebola outbreak in
Atlanta. 130 There was real but limited harm in both of these cases; the stories did not spread far because they lacked evidence and the facts were easy to check.
We will not always be so lucky as malicious attempts to spread panic grow.
Deep fakes will prove especially useful for such disinformation campaigns,
enhancing their credibility. Imagine if the Atlanta Ebola story had been backed
by compelling fake audio appearing to capture a phone conversation with the
head of the Centers for Disease Control and Prevention describing terrifying
facts and calling for a cover-up to keep the public calm.
f. Undermining Diplomacy
Deep fakes will also disrupt diplomatic relations and roil international
affairs, especially where the fake is circulated publicly and galvanizes public
opinion. The recent Saudi-Qatari crisis might have been fueled by a hack that
injected fake stories with fake quotes by Qatar’s emir into a Qatari news site. 131
The manipulator behind the lie could then further support the fraud with
convincing video and audio clips purportedly gathered by and leaked from some
unnamed intelligence agency.
A deep fake put into the hands of a state’s intelligence apparatus may or
may not prompt a rash action. After all, the intelligence agencies of the most
capable governments are in a good position to make smart decisions about what
weight to give potential fakes. But not every state has such capable institutions,
and, in any event, the real utility of a deep fake for purposes of sparking an
international incident lies in inciting the public in one or more states to believe
that something shocking really did occur or was said. Deep fakes thus might best
be used to box in a government through inflammation of relevant public opinion,
constraining the government’s options, and perhaps forcing its hand in some
particular way. Recalling the concept of decisional chokepoints, for example, a
well-timed deep fake calculated to inflame public opinion might be circulated
during a summit meeting, making it politically untenable for one side to press its
129. Cecilia Kang, Hawaii Missile Alert Wasn't Accidental, Officials Say, Blaming Worker, N.Y. TIMES (Jan. 30, 2018), https://www.nytimes.com/2018/01/30/technology/fcc-hawaii-missile-alert.html [https://perma.cc/4M39-C492].
130. Adrian Chen, The Agency, N.Y. TIMES MAG. (June 2, 2015), https://www.nytimes.com/2015/06/07/magazine/the-agency.html [https://perma.cc/DML3-6MWT].
131. Krishnadev Calamur, Did Russian Hackers Target Qatar?, ATLANTIC (June 6, 2017), https://www.theatlantic.com/news/archive/2017/06/qatar-russian-hacker-fake-news/529359 [https://perma.cc/4QAW-TLY8] (discussing how Russian hackers may have planted a fake news story on a Qatari news site that falsely suggested that the Qatari Emir had praised Iran and expressed interest in peace with Israel).
agenda as it otherwise would have, or making it too costly to reach and announce
some particular agreement.
g. Jeopardizing National Security
The use of deep fakes to endanger public safety or disrupt international
relations can also be viewed as harming national security. But what else belongs
under that heading?
Military activity, especially combat operations, belongs under this heading as well, and there is considerable utility for deep fakes in that setting.
Most obviously, deep fakes have utility as a form of disinformation supporting
strategic, operational, or even tactical deception. This is a familiar aspect of
warfare, famously illustrated by the efforts of the Allies in Operation Bodyguard
to mislead the Axis regarding the location of what became the D-Day invasion
of June 1944.132 In that sense, deep fakes will be (or already are) merely another
instrument in the toolkit for wartime deception, one that combatants will both
use and have used against them.
Critically, deep fakes may prove to have special impact when it comes to
the battle for hearts and minds where a military force is occupying or at least
operating amidst a civilian population, as was the case for the U.S. military for
many years in Iraq and even now in Afghanistan. In that context, we have long
seen contending claims about civilian casualties, including, at times, the use of
falsified evidence to that effect. Deep fakes are certain to be used to make such
claims more credible. At times, this will merely have a general impact in the
larger battle of narratives. Nevertheless, such general impacts can matter a great
deal in the long term and can spur enemy recruitment or enhance civilian support
to the enemy. And, at times, it will spark specific violent reactions. One can
imagine circulation of a deep-fake video purporting to depict American soldiers
killing local civilians and seeming to say disparaging things about Islam in the
process, precipitating an attack by civilians or even a host-state soldier or police
officer against nearby U.S. persons.
Deep fakes pose similar problems for the activities of intelligence agencies.
The experience of the United States since the Snowden leaks in 2013
demonstrates that the public, both in the United States and abroad, can become
very alarmed about reports that the U.S. Intellig…
Introduction ……………………………………………………………………………… 1755
I. Technological Foundations of the Deep-Fakes Problem ………………. 1758
A. Emergent Technology for Robust Deep Fakes ……………… 1759
B. Diffusion of Deep-Fake Technology …………………………… 1762
C. Fueling the Fire ………………………………………………………… 1763
II. Costs and Benefits …………………………………………………………………. 1768
A. Beneficial Uses of Deep-Fake Technology ………………….. 1769
1. Education …………………………………………………………… 1769
2. Art …………………………………………………………………… 1770
3. Autonomy ………………………………………………………….. 1770
B. Harmful Uses of Deep-Fake Technology …………………….. 1771
1. Harm to Individuals or Organizations …………………….. 1771
a. Exploitation ………………………………………………….. 1772
b. Sabotage ………………………………………………………. 1774
2. Harm to Society ………………………………………………….. 1776
a. Distortion of Democratic Discourse ………………… 1777
b. Manipulation of Elections ………………………………. 1778
c. Eroding Trust in Institutions …………………………… 1779
d. Exacerbating Social Divisions ………………………… 1780
e. Undermining Public Safety …………………………….. 1781
f. Undermining Diplomacy ……………………………….. 1782
g. Jeopardizing National Security ……………………….. 1783
h. Undermining Journalism ………………………………… 1784
1.
The Liar’s Dividend: Beware the Cry of Deep-Fake
News …………………………………………………………… 1785
III. What Can Be Done? Evaluating Technical, Legal, and Market
Responses ………………………………………………………………………. 1786
2019]
1755
DEEP FAKES
A.
B.
Technological Solutions …………………………………………….
Legal Solutions …………………………………………………………
1. Problems with an Outright Ban ……………………………..
2. Specific Categories of Civil Liability ……………………..
a. Threshold Obstacles ……………………………………….
b. Suing the Creators of Deep Fakes …………………….
c. Suing the Platforms ………………………………………..
3. Specific Categories of Criminal Liability ………………..
C. Administrative Agency Solutions ………………………………..
1. The FTC ……………………………………………………………..
2. The FCC …………………………………………………………….
3. The FEC ……………………………………………………………..
D. Coercive Responses …………………………………………………..
1. Military Responses ………………………………………………
2. Covert Action ………………………………………………………
3. Sanctions …………………………………………………………….
E. Market Solutions ……………………………………………………….
1. Immutable Life Logs as an Alibi Service ………………..
2. Speech Policies of Platforms …………………………………
Conclusion ………………………………………………………………………………..
1787
1788
1788
1792
1792
1793
1795
1801
1804
1804
1806
1807
1808
1808
1810
1811
1813
1814
1817
1819
INTRODUCTION
Through the magic of social media, it all went viral: a vivid photograph, an
inflammatory fake version, an animation expanding on the fake, posts debunking
the fakes, and stories trying to make sense of the situation. 1 It was both a sign of
the times and a cautionary tale about the challenges ahead.
The episode centered on Emma Gonzalez, a student who survived the
horrific shooting at Marjory Stoneman Douglas High School in Parkland,
Florida, in February 2018. In the aftermath of the shooting, a number of the
students emerged as potent voices in the national debate over gun control. Emma,
in particular, gained prominence thanks to the closing speech she delivered
during the “March for Our Lives” protest in Washington, D.C., as well as a
contemporaneous article she wrote for Teen Vogue.2 Fatefully, the Teen Vogue
I. Alex Horton, A Fake Photo of Emma Gonzalez Went Viral on the Far Right, Where
Parkland Teens are Villains, WASH.POST(Mar. 26, 2018), https://www.washingtonpost.com/news/theintersect/wp/20 18/03/25/a-fake-photo-of-emma-gonz.alez-went-viral-on-the-far -right-where-parklandteens-are-vi llains/?utm_ terrn=.0b0f8655530d [https://perrna.cc/6NDJ-W ADV].
2. Florida Student Emma Gonzalez [sic] to Lawmakers and Gun Advocates: ‘We call BS’,
CNN (Feb.
17, 2018), https://www.cnn.com/2018/02/17 /us/florida-student-emma-gonzalezspeech/index.html [https://perrna.cc/ZE3B-MVPD]; Emma Gonzalez, Emma Gonzalez on Why This
Generation
Needs
Gun
Control,
TEEN
VOOUE
(Mar.
23,
2018),
https://www.teenvogue.com/story/emma-gonzalez-parkland-gun-control-cover?mbid=social_twitter
[https://perrna.cc/P8TQ-P2ZR].
CALIFORNIA LA IVREVJEJ,V
1756
[Vol. 107:1753
piece incorpornted a video entitled “This Is Why We March,” including a
visually a â–¡-esting sequence in which Emma rips up a large sheet displaying a
bullseye target.
A powerful still image of Emma ripping up the bullseye target began to
circulate on the Internet. But soon someone generated a fake version, in which
the torn sheet is not a bullseye, but rather a copy of the Constitution of the United
States. While on some level the fake image might be construed as artistic fiction
highlighting the inconsistency of gun control with the Second Amendment, the
fake was not framed that way, Instead, il was depicted as a true image of Emma
Gonz ..lez ripping up the Constitution.
The image soon went viral. A fake of the video also appeared, though it
was more obvious that it had been manipulated. Still, the video circulated widely,
thanks in part to actor Adam Baldwin circulating il to a quartermillion followers
on Twitter (along with the disturbing hashtag #Vorwarts-the German word for
“forward,” a reference to neo-Nazis’ nod to the word’s role in a Hitler Youlh
anthem).’
Several factors combined to limit the harm from this fakery. First, the
genuine image already was in wide circulation and available at its original
source. This made it fast and easy to fact-check the fakes. Second, the intense
national attention associated with the post-Parkland gun control debate and,
especially, the role of students like Emma in that debate, ensured that journalists
paid attention to the issue, spending time and effort to debunk the fakes. Third,
the fakes were of poor quality (though audiences inclined to believe their
message might disregardthe red flags).
Even with those constraints, though, many believed the fakes, and harm
ensued. Our national dialogue on gun control has suffered some degree of
3.
See Horton,supra note I.
2019]
DEEP FAKES
1757
distortion; Emma has likely suffered some degree of anguish over the episode;
and other Parkland victims likely felt maligned and discredited. Falsified
imagery, in short, has already exacted significant costs for individuals and
society. But the situation is about to get much worse, as this Article shows.
Technologies for altering images, video, or audio (or even creating them
from scratch) in ways that are highly -realistic and difficult to detect are maturing
rapidly. As they ripen and diffuse, the problems illustrated by the Emma
Gonzalez episode will expand and generate significant policy and legal
challenges. Imagine a deep fake video, released the day before an election,
making it appear that a candidate for office has made an inflammatory statement.
Or what if, in the wake of the Trump-Putin tete-a-tete at Helsinki in 2018,
someone circulated a deep fake audio recording that seemed to portray President
Trump as promising not to take any action should Russia interfere with certain
NATO allies. Screenwriters are already building such prospects into their
plotlines. 4 The real world will not lag far behind.
Pornographers have been early adopters of the technology, interposing the
faces of celebrities into sex videos. This has given rise to the label “deep fake”
for such digitized impersonations. We use that label here more broadly, as
shorthand for the full range of hyper-realistic digital falsification of images,
video, and audio.
This full range will entail, sooner rather than later, a disturbing array of
malicious uses. We are by no means the first to observe that deep fakes will
• migrate far beyond the pornography context, with great potential for harm. 5 We
4. See, e.g., Vindu Goel & Sheera Frenkel, In India Election, False Posts and Hate Speech
N.
Y.
TIMES
(Apr.
I,
2019),
Flummox
Facebook,
https://www .nytimes.com/2019/04/01/technology/india-elections-facebook.html
[https://perma.cc/B9CP-MPPK) (describing the deluge of fake and manipulated videos and images
circulated in the lead up to elections in India); Homeland: Like Bad at Things (Showtime television
broadcast Mar. 4, 2018), https://www.sho.com/homeland/season/7/episode/4/li.ke-bad-at-things
[https://penna.cc/25XK-NN3Y]; Taken: Verum Nocet (NBC television broadcast Mar. 30, 2018)
https://www.nbc.com/taken/video/verum-nocet/3688929 [https://penna.cc/CVP2-PNXZ] (depicting a
deep-fake video in which a character appears to recite song lyrics); The Good Fight: Day 408 (CBS
television broadcast Mar. 4, 2018) (depicting fake audio pmporting to be President Trump); The Good
Fight: Day 464 (CBS television broadcast Apr. 29, 2018) (featuring a deep-fake video of the alleged
“golden shower” incident involving President Trump).
5. See, e.g., Samantha Cole, We Are Truly Fucked: Everyone is Making Al-Generated Fake
Porn
Now,
VICE:
MOTHERBOARD
(Jan.
24,
2018),
https://motherboard vice.com/ en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley
[https://perma.cc/V9NT-CBW8) (“[T]echnologyO allows anyone with sufficient raw footage to work
with to convincingly place any face in any video.”); see also @BuzzFeed, You Won’t Believe What
Obama
Says
in
This
Video,
TwlTIER
(Apr.
17,
2018,
8:00
AM),
https://twitter.com/BuzzFeed/status/986257991799222272 [https://perma.cc/C38K-B377) (“We ‘re
entering an era in which our enemies can make anyone say anything at any point in time.”); Tim Mak,
All Things Considered: Technologies to Create Fake Audio and Video Are Quickly Evolving, NAT’L
PuB. RADIO(Apr. 2, 2018), https://www.npr.org/2018/04/02/598916380/technologies-to-create-fakeaudio-and-video-are-quickly-evolving [https://perma.cc/NY23-YVQD] (discussing deep-fake videos
created for political reasons and misinformation campaigns); Julian Sanchez (@normative), TWITTER
(Jan. 24, 2018, 12:26 PM) (”The prospect of any Internet rando being able to swap anyone’s face into
1758
CALIFORNIA LAW REVIEW
[Vol. 107:1753
do, however, provide the first comprehensive survey of these harms and potential
responses to them. We break new ground by giving early warning regarding the
powerful incentives that deep fakes produce for privacy-destructive solutions.
This Article unfolds as follows. Part I begins with a description of the
technological innovations pushing deep fakes into the realm of hyper-realism
and making them increasingly difficult to debunk. It then discusses the
amplifying power of social media and the confounding influence of cognitive
biases.
Part II surveys the benefits and the costs of deep fakes. The upsides of deep
fakes include artistic exploration and educative contributions. The downsides of
deep fakes, however, are as varied as they are costly. Some harms are suffered
by individuals or groups, such as when deep fakes are deployed to exploit or
sabotage individual identities and corporate opportunities. Others impact society
more broadly, such as distortion of policy debates, manipulation of elections,
erosion of trust in institutions, exacerbation of social divisions, damage to
national security, and disruption of international relations. And, in what we call
the “liar’s dividend,” deep fakes make it easier for liars to avoid accountability
for things that are in fact true.
Part III turns to the question of remedies. We survey an array of existing or
potential solutions involving civil and criminal liability, agency regulation, and
“active measures” in special contexts like armed conflict and covert action. We
also discuss technology-driven market responses, including not just the
promotion of debunking technologies, but also the prospect of an alibi service,
such as privacy-destructive life logging. We find, in the end, that there are no
silver-bullet solutions. Thus, we couple our recommendations with warnings to
the public, policymakers, and educators.
I.
TECHNOLOGICAL FOUNDATIONS OF THE DEEP-FAKES PROBLEM
Digital impersonation is increasingly realistic and convincing. Deep-fake
technology is the cutting-edge of that trend. It leverages machine-learning
algorithms to insert faces and voices into video and audio recordings of actual
people and enables the creation of realistic impersonations out of digital whole
cloth. 6 The end result is realistic-looking video or audio making it appear that someone said or did something they never in fact said or did. Although deep fakes can be created with the
consent of people being featured, more often they will be created without it. This
Part describes the technology and the forces ensuring its diffusion, virality, and
entrenchment.
6. See Cole, supra note 5.
A. Emergent Technology for Robust Deep Fakes
Doctored imagery is neither new nor rare. Innocuous doctoring of images, such as tweaks to lighting or the application of a filter to improve image quality, is ubiquitous. Tools like Photoshop enable images to be tweaked in
both superficial and substantive ways.7 The field of digital forensics has been
grappling with the challenge of detecting digital alterations for some time. 8
Generally, forensic techniques are automated and thus less dependent on the
human eye to spot discrepancies. 9 While the detection of doctored audio and
video was once fairly straightforward, 10 the emergence of generative technology
capitalizing on machine learning promises to shift this balance. It will enable the
production of altered (or even wholly invented) images, videos, and audios that
are more realistic and more difficult to debunk than they have been in the past.
This technology often involves the use of a “neural network” for machine
learning. The neural network begins as a kind of tabula rasa featuring a nodal
network controlled by a set of numerical values assigned at random. 11 Much as
experience refines the brain’s neural nodes, examples train the neural network
system. 12 If the network processes a broad array of training examples, it should
be able to create increasingly accurate models. 13 It is through this process that
neural networks categorize audio, video, or images and generate realistic
impersonations or alterations. 14
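To make that training process concrete, the following is a minimal sketch in Python, not drawn from the sources cited here, of the idea that a model's parameters start out as random numbers and are refined, pass by pass, against labeled examples. A single-layer classifier on toy two-dimensional data stands in for the far larger networks used on audio, video, and images; all names and values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy "training examples": 200 two-dimensional points, labeled 1 if their
    # coordinates sum to a positive number and 0 otherwise.
    X = rng.normal(size=(200, 2))
    y = (X.sum(axis=1) > 0).astype(float)

    w = rng.normal(size=2)   # parameters begin as random numbers (the "tabula rasa")
    b = 0.0

    def predict(X, w, b):
        # Squash raw scores into probabilities between 0 and 1.
        return 1.0 / (1.0 + np.exp(-(X @ w + b)))

    for _ in range(500):                        # each pass over the examples refines the model
        p = predict(X, w, b)
        w -= 0.5 * (X.T @ (p - y)) / len(y)     # nudge parameters to reduce the error
        b -= 0.5 * np.mean(p - y)

    print("accuracy on the training examples:", np.mean((predict(X, w, b) > 0.5) == y))

With enough examples of the right kind, the same refinement loop, scaled up to millions of parameters, is what lets a network categorize or generate audio, video, and images.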
7. See, e.g., Stan Horaczek, Spot Faked Photos Using Digital Forensic Techniques, POPULAR SCIENCE (July 21, 2017), https://www.popsci.com/use-photo-forensics-to-spot-faked-images [https://perma.cc/G72B-VLF2] (depicting and discussing a series of manipulated photographs).
8. Doctored images have been prevalent since the advent of photography. See PHOTO TAMPERING THROUGHOUT HISTORY, http://pth.izitru.com [https://perma.cc/5QSZ-NULR]. The gallery was curated by FourandSix Technologies, Inc.
9. See Tiffanie Wen, The Hidden Signs That Can Reveal a Fake Photo, BBC FUTURE (June 30, 2017), http://www.bbc.com/future/story/20170629-the-hidden-signs-that-can-reveal-if-a-photo-is-fake [https://perma.cc/W9NX-XGKJ]. IZITRU.COM was a project spearheaded by Dartmouth's Dr. Hany Farid. It allowed users to upload photos to determine if they were fakes. The service was aimed at "legions of citizen journalists who want[ed] to dispel doubts that what they [were] posting [wa]s real." Rick Gladstone, Photos Trusted but Verified, N.Y. TIMES (May 7, 2014), https://lens.blogs.nytimes.com/2014/05/07/photos-trusted-but-verified [https://perma.cc/7A73-URKP].
10. See Steven Melendez, How DARPA's Fighting Deepfakes, FAST COMPANY (Apr. 4, 2018), https://www.fastcompany.com/40551971/can-new-forensic-tech-win-war-on-ai-generated-fake-images [https://perma.cc/9A8L-LFTQ].
11. Larry Hardesty, Explained: Neural Networks, MIT NEWS (Apr. 14, 2017), http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414 [https://perma.cc/NTA6-4Z2D].
12. Natalie Wolchover, New Theory Cracks Open the Black Box of Deep Neural Networks, WIRED (Oct. 8, 2017), https://www.wired.com/story/new-theory-deep-learning [https://perma.cc/UEL5-69ND].
13. Will Knight, Meet the Fake Celebrities Dreamed Up By AI, MIT TECH. REV. (Oct. 31, 2017), https://www.technologyreview.com/the-download/609290/meet-the-fake-celebrities-dreamed-up-by-ai [https://perma.cc/D3A3-JFY4].
14. Will Knight, Real or Fake? AI is Making it Very Hard to Know, MIT TECH. REV. (May 1, 2017), https://www.technologyreview.com/s/604270/real-or-fake-ai-is-making-it-very-hard-to-know [https://perma.cc/3MQN-A4VH].
To take a prominent example, researchers at the University of Washington
have created a neural network tool that alters videos so speakers say something
different from what they originally said. 15 They demonstrated the technology
with a video of former President Barack Obama (for whom plentiful video
footage was available to train the network) that made it appear that he said things
that he had not. 16
By itself, the emergence of machine learning through neural network
methods would portend a significant increase in the capacity to create false
images, videos, and audio. But the story does not end there. Enter “generative
adversarial networks,” otherwise known as GANs. The GAN approach, invented
by Google researcher Ian Goodfellow, brings two neural networks to bear
simultaneously. 17 One network, known as the generator, draws on a dataset to
produce a sample that mimics the dataset. 18 The other network, the discriminator,
assesses the degree to which the generator succeeded. 19 In an iterative fashion, the discriminator's assessments inform and sharpen the generator's subsequent attempts.
The result far exceeds the speed, scale, and nuance of what human reviewers
could achieve. 20 Growing sophistication of the GAN approach is sure to lead to
the production of increasingly convincing deep fakes.21
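The generator-and-discriminator loop can be illustrated with a short, hedged sketch. The Python (PyTorch) toy below is an assumption for illustration, not any system discussed in the sources: a tiny generator learns to mimic a one-dimensional "dataset" while a tiny discriminator learns to distinguish real samples from generated ones. Scaled up enormously, the same adversarial structure underlies image- and video-generating GANs.

    import torch
    import torch.nn as nn

    def real_data(n):                      # the "dataset" the generator tries to mimic
        return torch.randn(n, 1) * 0.5 + 2.0

    def noise(n):                          # random input from which the generator works
        return torch.randn(n, 8)

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())    # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

    for step in range(2000):
        # 1. The discriminator learns to score real samples high and generated ones low.
        loss_d = bce(D(real_data(64)), ones) + bce(D(G(noise(64)).detach()), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # 2. The generator learns to produce samples the discriminator scores as real.
        loss_g = bce(D(G(noise(64))), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Because each network's improvement raises the bar for the other, the loop automates exactly the kind of scrutiny a human reviewer would otherwise provide, which is why the technique scales so well.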
15. SUPASORN SUWAJANAKORN ET AL., SYNTHESIZING OBAMA: LEARNING LIP SYNC FROM AUDIO, 36 ACM TRANSACTIONS ON GRAPHICS, no. 4, art. 95 (July 2017), http://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf [https://perma.cc/7DCY-XK58]; James Vincent, New AI Research Makes It Easier to Create Fake Footage of Someone Speaking, VERGE (July 12, 2017), https://www.theverge.com/2017/7/12/15957844/ai-fake-video-audio-speech-obama [https://perma.cc/3SKP-EKGT].
16. Charles Q. Choi, AI Creates Fake Obama, IEEE SPECTRUM (July 12, 2017), https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-creates-fake-obama [https://perma.cc/M6GP-TNZ4]; see also Joon Son Chung et al., You Said That? (July 18, 2017) (British Machine Vision conference paper), https://arxiv.org/abs/1705.02966 [https://perma.cc/6NAH-MAYL].
17. See Ian J. Goodfellow et al., Generative Adversarial Nets (June 10, 2014) (Neural Information Processing Systems conference paper), https://arxiv.org/abs/1406.2661 [https://perma.cc/97SH-H7DD] (introducing the GAN approach); see also Tero Karras et al., Progressive Growing of GANs for Improved Quality, Stability, and Variation, ICLR 2018, at 1-2 (Apr. 2018) (conference paper), http://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf [https://perma.cc/RSK2-NBAE] (explaining neural networks in the GAN approach).
18. Karras, supra note 17, at 1.
19. Id.
20. Id. at 2.
21. Consider research conducted at Nvidia. Karras, supra note 17, at 2 (explaining a novel approach that begins training cycles with low-resolution images and gradually shifts to higher-resolution images, producing better and much quicker results). The New York Times recently profiled the Nvidia team's work. See Cade Metz & Keith Collins, How an A.I. 'Cat-and-Mouse Game' Generates Believable Fake Photos, N.Y. TIMES (Jan. 2, 2018), https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-photos.html [https://perma.cc/6DLQ-RDWD]. For further illustrations of the GAN approach, see Martin Arjovsky et al., Wasserstein GAN (Dec. 6, 2017) (unpublished manuscript) (on file with California Law Review); Chris Donahue et al., Semantically Decomposing the Latent Spaces of Generative Adversarial Networks, ICLR 2018 (Feb. 22, 2018) (conference paper) (on file with California Law Review), https://github.com/chrisdonahue/sdgan; Phillip Isola et al., Image-to-Image Translation with Conditional Adversarial Nets (Nov. 26, 2018) (unpublished manuscript) (on file with California Law Review); Alec Radford et al., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (Jan. 7, 2016) (unpublished manuscript) (on file with California Law Review); Jun-Yan Zhu et al., Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks (Nov. 15, 2018) (unpublished manuscript) (on file with California Law Review).
The same is true with respect to generating convincing audio fakes. In the
past, the primary method of generating audio entailed the creation of a large
database of sound fragments from a source, which would then be combined and
reordered to generate simulated speech. New approaches promise greater
sophistication, including Google DeepMind's "WaveNet" model, 22 Baidu's DeepVoice, 23 and GAN models. 24 Startup Lyrebird has posted short audio clips
simulating Barack Obama, Donald Trump, and Hillary Clinton discussing its
technology with admiration. 25
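The contrast between the older and newer approaches can be sketched briefly. The Python toy below, an illustrative assumption rather than any product named in the notes, mimics the older database method: stored fragments of a speaker's recorded voice are simply reordered and spliced together. Generative models such as WaveNet instead learn to produce the waveform itself, which is part of what makes newer audio fakes harder to distinguish from genuine recordings.

    import numpy as np

    SAMPLE_RATE = 16_000
    rng = np.random.default_rng(0)

    # Hypothetical "database" of recorded fragments from a target speaker; random
    # arrays stand in here for short clips of actual captured speech.
    fragments = {
        "hel": rng.standard_normal(SAMPLE_RATE // 5),
        "lo": rng.standard_normal(SAMPLE_RATE // 5),
        "there": rng.standard_normal(SAMPLE_RATE // 3),
    }

    def synthesize(units):
        # The older approach: pick stored fragments and splice them into a new order.
        return np.concatenate([fragments[u] for u in units])

    utterance = synthesize(["hel", "lo", "there"])   # crude simulated speech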
In comparison to private and academic efforts to develop deep-fake technology, less is currently known about governmental research. 26 Given the possible utility of deep-fake techniques for various government purposes, including the need to defend against hostile uses, it is a safe bet that state actors
22. Aaron van den Oord et al., WaveNet: A Generative Model for Raw Audio (Sept. 19, 2016)
(unpublished manuscript) (on file with California Law Review), https://arxiv.org/pdf/1609.03499.pdf
[https://perma.cc/QX4W-E6JT].
23. Ben Popper, Baidu's New System Can Learn to Imitate Every Accent, VERGE (Oct. 24, 2017), https://www.theverge.com/2017/10/24/16526370/baidu-deepvoice-3-ai-text-to-speech-voice [https://perma.cc/NXV2-GDVJ].
24. See Chris Donahue et al., Adversarial Audio Synthesis (Feb. 9, 2019) (conference paper), https://arxiv.org/pdf/1802.04208.pdf [https://perma.cc/F5UG-334U]; Yang Gao et al., Voice Impersonation Using Generative Adversarial Networks (Feb. 19, 2018) (unpublished manuscript), https://arxiv.org/abs/1802.06840 [https://perma.cc/5HZV-ZLD3].
25. See Bahar Gholipour, New AI Tech Can Mimic Any Voice, SCI. AM. (May 2, 2017), https://www.scientificamerican.com/article/new-ai-tech-can-mimic-any-voice [https://perma.cc/2HSP-83C3]. The ability to cause havoc by using this technology to portray persons saying things they have never said looms large. Lyrebird's website includes an "ethics" statement, which defensively invokes notions of technological determinism. The statement argues that impersonation technology is inevitable and that society benefits from gradual introduction to it. Ethics, LYREBIRD, https://lyrebird.ai/ethics [https://perma.cc/Q57E-G6MK] ("Imagine that we had decided not to release this technology at all. Others would develop it and who knows if their intentions would be as sincere as ours: they could, for example, only sell the technology to a specific company or an ill-intentioned organization. By contrast, we are making the technology available to anyone and we are introducing it incrementally so that society can adapt to it, leverage its positive aspects for good, while preventing potentially negative applications.").
26. DARPA’s MediFor program is working to “[develop] technologies for the automated
assessment of the integrity of an image or video and [integrate] these in an end-to-end media forensics
platform.” Matt Turek, Media Forensics (MediFor), DEF. ADVANCEDRES. PROJECTSAGENCY,
https://www.darpa.mil/program/media-forensics [https://perma.cc/VBY5-BQJA]. !ARPA’s DNA
program is attempting to use artificial intelligence to identify threats by sifting through video imagery.
Deep Jntermodal Video Analytics (DIVA) Program, INTELLIGENCE
ADVANCEDRES. PROJECTS
ACTIVITY,https://www.iarpa.gov/index.php/research-programs/diva [https://perma.cc/4VDX-B68W].
There are no grants from the National Science Foundation awarding federal dollars to researchers
studying the detection of doctored audio and video content at this time. E-mail from Seth M. Goldstein,
Project Manager, IARP A, Office of the Director of National Intelligence, to Samuel Morse (Apr. 6,
2018, 7:49 AM) (on file with authors).
are conducting classified research in this area. However, it is unclear whether
classified research lags behind or outpaces commercial and academic efforts. At
the least, we can say with confidence that industry, academia, and governments
have the motive, means, and opportunity to push this technology forward at a
rapid clip.
B. Diffusion of Deep-Fake Technology
The capacity to generate persuasive deep fakes will not stay in the hands of
either technologically sophisticated or responsible actors. 27 For better or worse,
deep-fake technology will diffuse and democratize rapidly.
As Benjamin Wittes and Gabriella Blum explained in The Future of Violence: Robots and Germs, Hackers and Drones, technologies, even dangerous ones, tend to diffuse over time. 28 Firearms developed for state-controlled armed forces are now sold to the public for relatively modest prices. 29 The tendency for technologies to spread only lags if they require scarce inputs that function (or are made to function) as chokepoints to curtail access. 30 Scarcity as a constraint on diffusion works best where the input in question is tangible and hard to obtain, such as plutonium or highly enriched uranium to create nuclear weapons. 31
Often though, the only scarce input for a new technology is the knowledge
behind a novel process or unique data sets. Where the constraint involves an
intangible resource like information, preserving secrecy requires not only
security against theft, espionage, and mistaken disclosure, but also the capacity
and will to keep the information confidential. 32 Depending on the circumstances,
the relevant actors may not want to keep the information to themselves and,
indeed, may have affirmative commercial or intellectual motivation to disperse
it, as in the case of academics or business enterprises. 33
27. See Jaime Dunaway, Reddit (Finally) Bans Deepfake Communities, but Face-Swapping Porn Isn't Going Anywhere, SLATE (Feb. 8, 2018), https://slate.com/technology/2018/02/reddit-finally-bans-deepfake-communities-but-face-swapping-porn-isnt-going-anywhere.html [https://perma.cc/A4Z7-2LDF].
28. See generally BENJAMIN WITTES & GABRIELLA BLUM, THE FUTURE OF VIOLENCE: ROBOTS AND GERMS, HACKERS AND DRONES: CONFRONTING A NEW AGE OF THREAT (2015).
29. Fresh Air: Assault Style Weapons in the Civilian Market, NPR (radio broadcast Dec. 20, 2012). Program host Terry Gross interviews Tom Diaz, a policy analyst for the Violence Policy Center. A transcript of the interview can be found at https://www.npr.org/templates/transcript/transcript.php?storyId=167694808 [https://perma.cc/CE3F-5AFX].
30. See generally GRAHAM T. ALLISON ET AL., AVOIDING NUCLEAR ANARCHY (1996).
31. Id.
32. The techniques that are used to combat cyber attacks and threats are often published in
scientific papers, so that a multitude of actors can implement these shields as a defense measure.
However, the sophisticated malfeasor can use this information to create cyber weapons that circumvent
the defenses that researchers create.
33. In April 2016, the hacker group "Shadow Brokers" released malware that had allegedly been created by the National Security Agency (NSA). One month later, the malware was used to propagate the WannaCry cyber attacks, which wreaked havoc on network systems around the globe, threatening to erase files if a ransom was not paid through Bitcoin. See Bruce Schneier, Who Are the Shadow Brokers?, ATLANTIC (May 23, 2017), https://www.theatlantic.com/technology/archive/2017/05/shadow-brokers/527778 [https://perma.cc/UW2F-V36G].
Consequently, the capacity to generate deep fakes is sure to diffuse rapidly
no matter what efforts are made to safeguard it. The capacity does not depend on
scarce tangible inputs, but rather on access to knowledge like GANs and other
approaches to machine learning. As the volume and sophistication of publicly
available deep-fake research and services increase, user-friendly tools will be
developed and propagated online, allowing diffusion to reach beyond experts.
Such diffusion has occurred in the past both through commercial and black-market means, as seen with graphic manipulation tools like Photoshop and
malware services on the dark web. 34 User-friendly capacity to generate deep
fakes likely will follow a similar course on both dimensions. 35
Indeed, diffusion has begun for deep-fake technology. The recent wave of attention generated by deep fakes began after a Reddit user posted a tool inserting the faces of celebrities into porn videos. 36 Once FakeApp, "a desktop app for creating photorealistic faceswap videos made with deep learning," appeared online, the public adopted it in short order. 37 Following the straightforward steps provided by FakeApp, a New York Times reporter created a semi-realistic deep-fake video of his face on actor Chris Pratt's body with 1,861 images of himself and 1,023 images of Chris Pratt. 38 After enlisting the help of someone with experience blending facial features and source footage, the reporter created a realistic video featuring him as Jimmy Kimmel. 39 This portends the diffusion of ever more sophisticated versions of deep-fake technology.
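For readers curious how such desktop tools operate, the following is a minimal architectural sketch in Python (PyTorch), assuming the widely reported shared-encoder, two-decoder design associated with consumer face-swap applications; it is an illustration, not the reporter's actual pipeline, and the image size, layer sizes, and function names are arbitrary assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # A shared encoder learns features common to both faces; each person gets a
    # private decoder that reconstructs only that person.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256), nn.ReLU())
    decoder_a = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())  # rebuilds person A
    decoder_b = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())  # rebuilds person B

    params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    def train_step(faces_a, faces_b):
        # Each decoder is trained only to reconstruct its own person from the shared encoding.
        loss = F.mse_loss(decoder_a(encoder(faces_a)), faces_a.flatten(1)) + \
               F.mse_loss(decoder_b(encoder(faces_b)), faces_b.flatten(1))
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    def swap_a_to_b(face_a):
        # The "swap": encode a frame of person A, then decode it with B's decoder.
        return decoder_b(encoder(face_a)).reshape(-1, 3, 64, 64)

    # Toy usage: random tensors stand in for cropped, aligned face images.
    faces_a, faces_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
    train_step(faces_a, faces_b)
    swapped_frames = swap_a_to_b(faces_a)

Running frames of one person through the other person's decoder is what produces the face transfer, and quality scales with the number of training images, which is why the reporter's collections of 1,861 and 1,023 photographs mattered.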
C. Fueling the Fire
The capacity to create deep fakes comes at a perilous time. No longer is the
public’s attention exclusively in the hands of trusted media companies.
Individuals peddling deep fakes can quickly reach a massive, even global,
34. See ARMOR, THE BLACK MARKET REPORT: A LOOK INSIDE THE DARK WEB 2 (2018), https://www.armor.com/app/uploads/2018/03/2018-Q1-Reports-BlackMarket-DIGITAL.pdf [https://perma.cc/4UJA-QJ94] (explaining that the means to conduct a DDoS attack can be purchased for $10/hour, or $200/day).
35. See id.
36. Emma Grey Ellis, People Can Put Your Face on Porn-And the Law Can't Help You, WIRED (Jan. 26, 2018), https://www.wired.com/story/face-swap-porn-legal-limbo [https://perma.cc/B7K7-Y79L].
37. FAKEAPP, https://www.fakeapp.org.
38. Kevin Roose, Here Come the Fake Videos, Too, N.Y. TIMES (Mar. 4, 2018), https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html [https://perma.cc/U5QE-EPHX].
39. Id.
audience. As this section explores, networked phenomena, rooted in cognitive
bias, will fuel that effort. 40
Twenty-five years ago, the practical ability of individuals and organizations
to distribute images, audio, and video (whether authentic or not) was limited. In
most countries, a handful of media organizations disseminated content on a
national or global basis. In the U.S., the major television and radio networks,
newspapers, magazines, and book publishers controlled the spread of
information. 41 While governments, advertisers, and prominent figures could
influence mass media, most were left to pursue local distribution of content. For
better or worse, relatively few individuals or entities could reach large audiences
in this few-to-many information distribution environment. 42
The information revolution has disrupted this content distribution model. 43
Today, innumerable platforms facilitate global connectivity. Generally speaking,
the networked environment blends the few-to-many and many-to-many models
of content distribution, democratizing access to communication to an
unprecedented degree. 44 This reduces the overall amount of gatekeeping, though
control still remains with the companies responsible for our digital
infrastructure. 45 For instance, content platforms have terms-of-service
agreements, which ban certain forms of content based on companies’ values. 46
40. See generally DANIELLE KEATS CITRON, HATE CRIMES IN CYBERSPACE (2014) [hereinafter CITRON, HATE CRIMES IN CYBERSPACE] (exploring pathologies attendant to online speech including deindividuation, virality, information cascades, group polarization, and filter bubbles). For important early work on filter bubbles, echo chambers, and group polarization in online interactions, see generally ELI PARISER, THE FILTER BUBBLE: WHAT THE INTERNET IS HIDING FROM YOU (2011); CASS R. SUNSTEIN, REPUBLIC.COM (2001).
41. See generally NICHOLAS CARR, THE BIG SWITCH: REWIRING THE WORLD, FROM EDISON TO GOOGLE (2008); HOWARD RHEINGOLD, SMART MOBS: THE NEXT SOCIAL REVOLUTION (2002).
42. See id.
43. See generally SIVA VAIDHYANATHAN, THE GOOGLIZATION OF EVERYTHING (AND WHY WE SHOULD WORRY) (2011).
44. This ably captures the online environment accessible for those living in the United States. As Jack Goldsmith and Tim Wu argued a decade ago, geographic borders and the will of governments can and do make themselves known online. See generally JACK GOLDSMITH & TIM WU, WHO CONTROLS THE INTERNET?: ILLUSIONS OF A BORDERLESS WORLD (2006). The Internet visible in China is vastly different from the Internet visible in the EU, which is different from the Internet visible in the United States (and likely to become more so soon). See, e.g., Elizabeth C. Economy, The Great Firewall of China: Xi Jinping's Internet Shutdown, GUARDIAN (June 29, 2018), https://www.theguardian.com/news/2018/jun/29/the-great-firewall-of-china-xi-jinpings-internet-shutdown [https://perma.cc/8GUS-EC59]; Casey Newton, Europe Is Splitting the Internet into Three: How the Copyright Directive Reshapes the Open Web, VERGE (Mar. 27, 2019), https://www.theverge.com/2019/3/27/18283541/european-union-copyright-directive-internet-article-13 [https://perma.cc/K235-RZ7Q].
45. Danielle Keats Citron & Neil M. Richards, Four Principles for Digital Expression (You
Won’t Believe #3!), 95 WASH. U. L. REV. 1353, 1361-64 (2018).
46. See CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 232-35; Danielle Keats Citron, Extremist Speech, Compelled Conformity, and Censorship Creep, 93 NOTRE DAME L. REV. 1035, 1037 (2018) [hereinafter Citron, Extremist Speech] (noting that platforms' terms of service and community guidelines have banned child pornography, spam, phishing, fraud, impersonation, copyright violations, threats, cyber stalking, nonconsensual pornography, and hate speech); see also DANIELLE KEATS CITRON & QUINTA JURECIC, PLATFORM JUSTICE: CONTENT MODERATION AT AN INFLECTION POINT 12 (Hoover Institution ed., 2018) [hereinafter CITRON & JURECIC, PLATFORM JUSTICE], https://www.hoover.org/sites/default/files/research/docs/citron-jurecic_webreadypdf.pdf [https://perma.cc/M5L6-GNCH] (noting Facebook's Terms of Service agreement banning nonconsensual pornography). See generally Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009) [hereinafter Citron, Cyber Civil Rights]; Danielle Keats Citron & Helen Norton, Intermediaries and Hate Speech: Fostering Digital Citizenship for Our Information Age, 91 B.U. L. REV. 1435, 1458 (2011) (discussing hate speech restrictions contained in platforms' terms of service agreements); Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 FORDHAM L. REV. 401 (2017) (arguing that law should incentivize online platforms to address known illegality in a reasonable manner).
They experience pressure from, or adhere to legal mandates of, governments to
block or filter certain information like hate speech or "fake news." 47
Although private companies have enormous power to moderate content
(shadow banning it, lowering its prominence, and so on), they may decline to
filter or block content that does not amount to obvious illegality. Generally
speaking, there is far less screening of content for accuracy, quality, or
suppression of facts or opinions that some authority deems undesirable.
Content not only can find its way to online audiences, but can circulate far
and wide, sometimes going viral both online and, at times, amplifying further
once picked up by traditional media. A variety of cognitive heuristics help fuel
these dynamics. Three phenomena in particular (the "information cascade" dynamic, human attraction to negative and novel information, and filter bubbles) help explain why deep fakes may be especially prone to going viral.
First, consider the “information cascade” dynamic. 48 Information cascades
result when people stop paying sufficient attention to their own information,
relying instead on what they assume others have reliably determined and then
passing that information along. Because people cannot know everything, they often rely on what others say, even if it contradicts their own knowledge. 49 At a
certain point, people stop paying attention to their own information and look to
what others know. 50 And when people pass along what others think, the
47. See Citron, Extremist Speech, supra note 46, at 1040-49 (exploring pressure from EU Commission on major platforms to remove extremist speech and hate speech). For important work on global censorship efforts, see the scholarship of Anupam Chander, Daphne Keller, and Rebecca MacKinnon. See generally REBECCA MACKINNON, CONSENT OF THE NETWORKED: THE WORLDWIDE STRUGGLE FOR INTERNET FREEDOM 6 (2012) (arguing that ISPs and online platforms have "far too much power over citizens' lives, in ways that are insufficiently transparent or accountable to the public interest."); Anupam Chander, Facebookistan, 90 N.C. L. REV. 1807, 1819-35 (2012); Anupam Chander, Googling Freedom, 99 CALIF. L. REV. 1, 5-9 (2011); Daphne Keller, Toward a Clearer Conversation About Platform Liability, KNIGHT FIRST AMEND. INST. AT COLUM. U. (Apr. 6, 2018), https://knightcolumbia.org/content/toward-clearer-conversation-about-platform-liability [https://perma.cc/GWM7-J8PW].
48. Carr, supra note 41. See generally DAVID EASLEY & JON KLEINBERG, NETWORKS, CROWDS, AND MARKETS: REASONING ABOUT A HIGHLY CONNECTED WORLD (2010) (exploring cognitive biases in the information marketplace); CASS SUNSTEIN, REPUBLIC.COM 2.0 (2007) (same).
49. See generally EASLEY & KLEINBERG, supra note 48.
50. Id.
credibility of the original claim snowballs. 51 As the cycle repeats, the cascade
strengthens. 52
Social media platforms are a ripe environment for the formation of
information cascades spreading content of all stripes. From there, cascades can
spill over to traditional mass-audience outlets that take note of the surge of social
media interest and as a result cover a story that otherwise they might not have. 53
Social movements have leveraged the power of information cascades, including
Black Lives Matter activists 54 and the Never Again movement of the Parkland
High School students. 55 Arab Spring protesters spread videos and photographs
of police torture. 56 Journalist Howard Rheingold refers to positive information
cascades as “smart mobs.” 57 But not every mob is smart or laudable, and the
information cascade dynamic does not account for such distinctions. The Russian
covert action program to sow discord in the United States during the 2016
election provides ample demonstration. 58
Second, our natural tendency to propagate negative and novel information
may enable viral circulation of deep fakes. Negative and novel information
“grab[s] our attention as human beings and [] cause[s] us to want to share that
information with others-we’re attentive to novel threats and especially attentive
to negative threats.” 59 Data scientists, for instance, studied 126,000 news stories
shared on Twitter from 2006 to 2017, using third-party fact-checking sites to
51. Id.
52. Id.
53. See generally YOCHAI BENKLER, THE WEALTH OF NETWORKS: HOW SOCIAL PRODUCTION TRANSFORMS MARKETS AND FREEDOM (2006) (elaborating the concept of social production in relation to rapid evolution of the information marketplace and resistance to that trend).
54. See Monica Anderson & Paul Hitlin, The Hashtag #BlackLivesMatter Emerges: Social Activism on Twitter, PEW RES. CTR. (Aug. 15, 2016), http://www.pewinternet.org/2016/08/15/the-hashtag-blacklivesmatter-emerges-social-activism-on-twitter [https://perma.cc/4BW9-L67G] (discussing Black Lives Matter activists' use of the hashtag #BlackLivesMatter to identify their message and display solidarity around race and police use of force).
55. Jonah Engel Bromwich, How the Parkland Students Got So Good at Social Media, N.Y. TIMES (Mar. 7, 2018), https://www.nytimes.com/2018/03/07/us/parkland-students-social-media.html [https://perma.cc/7AW9-4HR2] (discussing students' use of social media to keep sustained political attention on the Parkland tragedy).
56. CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 68.
57. RHEINGOLD, supra note 41.
58. The 2018 indictment of the Internet Research Agency in the U.S. District Court for the District of Columbia is available at https://www.justice.gov/file/1035477/download [https://perma.cc/B6WJ-4FLX]; see also David A. Graham, What the Mueller Indictment Reveals, ATLANTIC (Feb. 16, 2018), https://www.theatlantic.com/politics/archive/2018/02/mueller-roadmap/553604 [https://perma.cc/WU2U-XHWW]; Tim Mak & Audrey McNamara, Mueller Indictment of Russian Operatives Details Playbook of Information Warfare, NAT'L PUB. RADIO (Feb. 17, 2018), https://www.npr.org/2018/02/17/586690342/mueller-indictment-of-russian-operatives-details-playbook-of-information-warfare [https://perma.cc/RJ6F-999R].
59. Robinson Meyer, The Grim Conclusions of the Largest-Ever Study of Fake News, THE ATLANTIC (Mar. 8, 2018), https://www.theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104 [https://perma.cc/PJS2-RKMF].
classify them as true or false. 60 According to the study, hoaxes and false rumors
reached people ten times faster than accurate stories. 61 Even when researchers
controlled for differences between accounts originating rumors, falsehoods were
70 percent more likely to get retweeted than accurate news. 62 The uneven spread
of fake news was not due to bots, which in fact retweeted falsehoods at the same
frequency as accurate information. 63 Rather, false news spread faster due to
people retweeting inaccurate news items. 64 The study’s authors hypothesized
that falsehoods had greater traction because they seemed more “novel” and
evocative than real news. 65 False rumors tended to elicit responses expressing
surprise and disgust, while accurate stories evoked replies associated with
sadness and trust. 66
With human beings seemingly more inclined to spread negative and novel
falsehoods, the field is ripe for bots to spur and escalate the spreading of negative
misinformation. 67 Facebook estimates that as many as 60 million bots may be
infesting its platform. 68 Bots were responsible for a substantial portion of
political content posted during the 2016 election. 69 Bots also can manipulate
algorithms used to predict potential engagement with content.
Negative information not only is tempting to share, but is also relatively
"sticky." As social science research shows, people tend to credit, and remember, negative information far more than positive information. 70 Coupled
with our natural predisposition towards certain stimuli like sex, gossip, and
violence, that tendency provides a welcome environment for harmful deep
fakes. 71 The Internet amplifies this effect, which helps explain the popularity of
60. Soroush Vosoughi et al., The Spread of True and False News Online, 359 SCIENCE 1146, 1146 (2018), http://science.sciencemag.org/content/359/6380/1146/tab-pdf [https://perma.cc/5U5D-UHPZ].
61. Id. at 1148.
62. Id. at 1149.
63. Id. at 1146.
64. Id.
65. Id. at 1149.
66. Id. at 1146, 1150.
67. Meyer, supra note 59 (quoting political scientist Dave Karpf).
68. Nicholas Confessore et al., The Follower Factory, N.Y. TIMES (Jan. 27, 2018), https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html [https://perma.cc/DX34-RENV] ("In November, Facebook disclosed to investors that it had at least twice as many fake users as it previously estimated, indicating that up to 60 million automated accounts may roam the world's largest social media platform."); see also Extremist Content and Russian Disinformation Online: Working with Tech to Find Solutions: Hearing Before the S. Judiciary Comm., 115th Cong. (2017), https://www.judiciary.senate.gov/meetings/extremist-content-and-russian-disinformation-online-working-with-tech-to-find-solutions [https://perma.cc/M5L9-R2MY].
69. David M. J. Lazer et al., The Science of Fake News: Addressing Fake News Requires a Multidisciplinary Effort, 359 SCIENCE 1094, 1095 (2018).
70. See, e.g., Elizabeth A. Kensinger, Negative Emotion Enhances Memory Accuracy: Behavioral and Neuroimaging Evidence, 16 CURRENT DIRECTIONS IN PSYCHOL. SCI. 213, 217 (2007) (finding that "negative emotion conveys focal benefits on memory for detail").
71. PARISER, supra note 40, at 13-14.
gossip sites like TMZ.com. 72 Because search engines produce results based on our interests, they tend to feature more of the same: more sex and more gossip. 73
Third, filter bubbles further aggravate the spread of false information. Even
without the aid of technology, we naturally tend to surround ourselves with
information confirming our beliefs. Social media platforms supercharge this
tendency by empowering users to endorse and re-share content. 74 Platforms’
algorithms highlight popular information, especially if it has been shared by
friends, and surround us with content from relatively homogenous groups. 75 As
endorsements and shares accumulate, the chances for an algorithmic boost
increase. After seeing friends’ recommendations online, individuals tend to pass
on those recommendations to their own networks. 76 Because people tend to share
information with which they agree, social media users are surrounded by
information confirming their preexisting beliefs. 77 This is what we mean by
"filter bubble." 78
Filter bubbles can be powerful insulators against the influence of contrary
information. In a study of Facebook users, researchers found that individuals
reading fact-checking articles had not originally consumed the fake news at
issue, and those who consumed fake news in the first place almost never read a
fact-check that might debunk it.79
Taken together, common cognitive biases and social media capabilities are
behind the viral spread of falsehoods and decay of truth. They have helped
entrench what amounts to information tribalism, and the results plague public
and private discourse. Information cascades, natural attraction to negative and
novel information, and filter bubbles provide an all-too-welcoming environment
as deep-fake capacities mature and proliferate.
II. COSTS AND BENEFITS
Deep-fake technology can and will be used for a wide variety of purposes.
Not all will be antisocial; some, in fact, will be profoundly prosocial.
72. CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 68.
73. Id.
74. Id. at 67.
75. Id.
76. Id.
77. Id.
78. Political scientists Andrew Guess, Brendan Nyhan, and Jason Reifler studied the production and consumption of fake news on Facebook during the 2016 U.S. Presidential election. According to the study, filter bubbles were deep (with one in four individuals visiting fake news websites), but narrow (the majority of fake news consumption was concentrated among 10% of the public). See ANDREW GUESS ET AL., SELECTIVE EXPOSURE TO MISINFORMATION: EVIDENCE FROM THE CONSUMPTION OF FAKE NEWS DURING THE 2016 U.S. PRESIDENTIAL CAMPAIGN 1 (2018), https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf [https://perma.cc/F3VF-NCL].
79. See id. at 11.
Nevertheless, deep fakes can inflict a remarkable array of harms, many of which
are exacerbated by features of the information environment explored above.
A. Beneficial Uses of Deep-Fake Technology
Human ingenuity no doubt will conceive many beneficial uses for deep-fake technology. For now, the most obvious possibilities for beneficial uses fall
under the headings of education, art, and the promotion of individual autonomy.
1. Education
Deep-fake technology creates an array of opportunities for educators,
including the ability to provide students with information in compelling ways
relative to traditional means like readings and lectures. This is similar to an
earlier wave of educational innovation made possible by increasing access to
ordinary video. 80 With deep fakes, it will be possible to manufacture videos of
historical figures speaking directly to students, giving an otherwise unappealing
lecture a new lease on life. 81
Creating modified content will raise interesting questions about intellectual
property protections and the reach of the fair use exemption. Setting those
obstacles aside, the educational benefits of deep fakes are appealing from a
pedagogical perspective in much the same way that is true for the advent of
virtual and augmented reality production and viewing technologies. 82
The technology opens the door to relatively cheap and accessible
production of video content that alters existing films or shows, particularly on
the audio track, to illustrate a pedagogical point. For example, a scene from a
war film could be altered to make it seem that a commander and her legal advisor
are discussing application of the laws of war, when in the original the dialogue
had nothing to do with that. The scene could be re-run again and again with modifications to the dialogue, tracking changes to the hypothetical scenario under
80. Emily Cruse, Using Educational Video in the Classroom: Theory, Research, and Practice, 1-2 (2013) (unpublished manuscript), https://www.safarimontage.com/pdfs/training/UsingEducationalVideoinTheClassroom.pdf [https://perma.cc/AJ8Q-WZP4].
81. Face2Face is a real-time face capture and reenactment software developed by researchers at the University of Erlangen-Nuremberg, the Max-Planck-Institute for Informatics, and Stanford University. The applications of this technology could reinvent the way students learn about historical events and figures. See Justus Thies et al., Face2Face: Real-time Face Capture and Reenactment of RGB Videos (June 2016) (29th IEEE-CVPR 2016 conference paper), http://www.graphics.stanford.edu/~niessner/papers/2016/1facetoface/thies2016face.pdf [https://perma.cc/S94K-DPU5].
82. Adam Evans, Pros and Cons of Virtual Reality in the Classroom, CHRON. HIGHER EDUC. (Apr. 8, 2018), https://www.chronicle.com/article/ProsCons-of-Virtual/243016 [https://perma.cc/TN84-89SQ].
consideration. If done well, it would surely beat simply having the professor ask students to imagine the shifting scenario out of whole cloth. 83
The educational value of deep fakes will extend beyond the classroom. In
the spring of 2018, Buzzfeed provided an apt example when it circulated a video
that appeared to feature Barack Obama warning of the dangers of deep-fake
technology itself. 84 One can imagine deep fakes deployed to support educational
campaigns by public-interest organizations such as Mothers Against Drunk
Driving.
2. Art
The potential artistic benefits of deep-fake technology relate to its
educational benefits, though they need not serve any formal educational purpose.
Thanks to the use of existing technologies that resurrect dead performers for
fresh roles, the benefits to creativity are already familiar to mass audiences. 85 For
example, the startling appearance of the long-dead Peter Cushing as the
venerable Grand Moff Tarkin in 2016's Rogue One was made possible by a deft
combination of live acting and technical wizardry. That prominent illustration
delighted some and upset others. 86 The Star Wars contribution to this theme
continued in The Last Jedi when Carrie Fisher’s death led the filmmakers to fake
additional dialogue using snippets from real recordings. 87
Not all artistic uses of deep-fake technologies will have commercial
potential. Artists may find it appealing to express ideas through deep fakes,
including, but not limited to, productions showing incongruities between
apparent speakers and their apparent speech. Video artists might use deep-fake
technology to satirize, parody, and critique public figures and public officials.
Activists could use deep fakes to demonstrate their point in a way that words
alone could not.
3. Autonomy
Just as art overlaps with education, deep fakes implicate self-expression.
But not all uses of deep fakes for self-expression are best understood as art. Some
83. The facial animation software CrazyTalk, by Reallusion, animates faces from photographs or cartoons and can be used by educators to further pedagogical goals. The software is available at https://www.reallusion.com/crazytalk/default.html [https://perma.cc/TTX8-QMJP].
84. See Choi, supra note 16.
85. Indeed, film contracts now increasingly address future uses of a person's image in subsequent films via deep-fake technology in the event of their death.
86. Dave Itzkoff, How 'Rogue One' Brought Back Familiar Faces, N.Y. TIMES (Dec. 27, 2016), https://www.nytimes.com/2016/12/27/movies/how-rogue-one-brought-back-grand-moff-tarkin.html [https://perma.cc/F53C-TDYV].
87. Evan Narcisse, It Took Some Movie Magic to Complete Carrie Fisher's Leia Dialogue in The Last Jedi, GIZMODO (Dec. 8, 2017), https://io9.gizmodo.com/it-took-some-movie-magic-to-complete-carrie-fishers-lei-1821121635 [https://perma.cc/NF5H-GPJF].
may be used to facilitate “avatar” experiences for a variety of self-expressive
ends that might best be described in terms of autonomy.
Perhaps most notably, deep-fake audio technology holds promise to restore
the ability of persons suffering from certain forms of paralysis, such as ALS, to
speak with their own voice. 88 Separately, individuals suffering from certain
physical disabilities might interpose their faces and those of consenting partners
into pornographic videos, enabling virtual engagement with an aspect of life
unavailable to them in a conventional sense. 89
The utility of deep-fake technology for avatar experiences, which need not
be limited to sex, closely relates to more familiar examples of technology. Video
games, for example, enable a person to have or perceive experiences that might
otherwise be impossible, dangerous, or otherwise undesirable if pursued in
person. The customizable avatars from Nintendo Wii (known as “Mii”) provide
a familiar and non-threatening example. The video game example underscores
that the avatar scenario is not always a serious matter, and sometimes boils down
to no more and no less than the pursuit of happiness.
Deep-fake technology confers the ability to integrate more realistic
simulacrums of one’s own self into an array of media, thus producing a stronger
avatar effect. For some aspects of the pursuit of autonomy, this will be a very
good thing (as the book and film Ready Player One suggests, albeit with
reference to a vision of advanced virtual reality rather than deep-fake
technology). Not so for others, however. Indeed, as we describe below, the
prospects for the harmful use of deep-fake technology are legion.
B. Harmful Uses of Deep-Fake Technology
Human ingenuity, alas, is not limited to applying technology to beneficial
ends. Like any technology, deep fakes also will be used to cause a broad
spectrum of serious harms, many of them exacerbated by the combination of
networked information systems and cognitive biases described above.
1. Harm to Individuals or Organizations
Lies about what other people have said or done are as old as human society,
and come in many shapes and sizes. Some merely irritate or embarrass, while
others humiliate and destroy; some spur violence. All of this will be true with
deep fakes as well, only more so due to their inherent credibility and the manner
88. Sima Shakeri, Lyrebird Helps ALS Ice Bucket Challenge Co-Founder Pat Quinn Get His Voice Back: Project Revoice Can Change Lives, HUFFINGTON POST (Apr. 14, 2018), https://www.huffingtonpost.ca/2018/04/14/lyrebird-helps-als-ice-bucket-challenge-co-founder-pat-quinn-get-his-voice-back_a_23411403 [https://perma.cc/R5SD-Y37Y].
89. See Allie Volpe, Deepfake Porn has Terrifying Implications. But What if it Could Be Used for Good?, MEN'S HEALTH (Apr. 13, 2018), https://www.menshealth.com/sex-women/a19755663/deepfakes-porn-reddit-pornhub [https://perma.cc/EFX9-2BUE].
in which they hide the liar’s creative role. Deep fakes will emerge as powerful
mechanisms for some to exploit and sabotage others.
a. Exploitation
There will be no shortage of harmful exploitations. Some will be in the
nature of theft, such as stealing people’s identities to extract financial or some
other benefit. Others will be in the nature of abuse, commandeering a person’s
identity to harm them or individuals who care about them. And some will involve
both dimensions, whether the person creating the fake so intended or not.
As an example of extracting value, consider the possibilities for the realm
of extortion. Blackmailers might use deep fakes to extract something of value
from people, even those who might normally have little or nothing to fear in this
regard, who (quite reasonably) doubt their ability to debunk the fakes
persuasively, or who fear that any debunking would fail to reach far and fast
enough to prevent or undo the initial damage. 90 In that case, victims might be
forced to provide money, business secrets, or nude images or videos (a practice
known as sextortion) to prevent the release of the deep fakes. 91 Likewise,
fraudulent kidnapping claims might prove more effective in extracting ransom
when backed by video or audio appearing to depict a victim who is not in fact in
the fraudster’s control.
Not all value extraction takes a tangible form. Deep-fake technology can
also be used to exploit an individual's sexual identity for others' gratification. 92 Thanks to deep-fake technology, an individual's face, voice, and body can be swapped into real pornography. 93 A subreddit (now closed) featured deep-fake
sex videos of female celebrities and amassed more than 100,000 users. 94 As one
Reddit user asked, “I want to make a porn video with my ex-girlfriend. But I
90. See generally ADAM DODGE & ERICA JOHNSTONE, USING FAKE VIDEO TECHNOLOGY TO PERPETUATE INTIMATE PARTNER ABUSE 6 (2018), http://withoutmyconsent.org/blog/new-advisory-helps-domestic-violence-survivors-prevent-and-stop-deepfake-abuse [https://perma.cc/K3Y2-XG2Q] (discussing how deep fakes used as blackmail of an intimate partner could violate the California Family Code). The advisory was published by the non-profit organization Without My Consent, which combats online invasions of privacy.
91. Sextortion thrives on the threat that the extortionist will disclose sex videos or nude images unless more nude images or videos are provided. BENJAMIN WITTES ET AL., SEXTORTION: CYBERSECURITY, TEENAGERS, AND REMOTE SEXUAL ASSAULT (Brookings Inst. ed., 2016), https://www.brookings.edu/wp-content/uploads/2016/05/sextortion1-1.pdf [https://perma.cc/7K9N-5W7C].
92. See DODGE & JOHNSTONE, supra note 90, at 6 (explaining the likelihood that domestic abusers and cyber stalkers will use deep-fake sex tapes to harm victims); Janko Roettgers, 'Deep Fakes' Will Create Hollywood's Next Sex Tape Scare, VARIETY (Feb. 2, 2018), http://variety.com/2018/digital/news/hollywood-sex-tapes-deepfakes-ai-1202685655 [https://perma.cc/98HQ-668G].
93. Danielle Keats Citron, Sexual Privacy, 128 YALE L.J. 1870, 1921-24 (2019) [hereinafter Citron, Sexual Privacy].
94. DODGE & JOHNSTONE, supra note 90, at 6.
don’t have any high-quality video with her, but I have lots of good photos.” 95 A
Discord user explained that he made a “pretty good” video of a girl he went to
high school with, using around 380 photos scraped from her Instagram and
Facebook accounts. 96
These examples highlight an important point: the gendered dimension of
the exploitation of deep fakes. In all likelihood, the majority of victims of fake
sex videos will be female. This has been the case for cyber stalking and nonconsensual pornography, and likely will be the case for deep-fake sex videos. 97
One can easily imagine deep-fake sex videos subjecting individuals to
violent, humiliating sex acts. This shows that not all such fakes will be designed
primarily, or at all, for the creator’s sexual or financial gratification. Some will
be nothing less than cruel weapons meant to terrorize and inflict pain. Of deep-fake sex videos, Mary Anne Franks has astutely said, "If you were the worst misogynist in the world, this technology would allow you to accomplish whatever you wanted." 98
When victims discover that they have been used in deep-fake sex videos, the psychological damage may be profound, whether or not this was the video creator's aim. Victims may feel humiliated and scared. 99
force individuals into virtual sex, reducing them to sex objects. As Robin West
has observed, threats of sexual violence “literally, albeit not physically,
penetrates the body.” 100 Deep-fake sex videos can transform rape threats into a
terrifying virtual reality. They send the message that victims can be sexually
abused at whim. Given the stigma of nude images, especially for women and
girls, individuals depicted in fake sex videos also may suffer collateral
consequences in the job market, among other places, as we explain in more detail
below in our discussion of sabotage. 101
95. Id.
96. Id.
97. ASIA A. EATON ET AL., 2017 NATIONWIDE ONLINE STUDY OF NONCONSENSUAL PORN VICTIMIZATION AND PERPETRATION 12 (Cyber C.R. Initiative ed., 2017), https://www.cybercivilrights.org/wp-content/uploads/2017/06/CCRI-2017-Research-Report.pdf [https://perma.cc/2HYP-7ELV] ("Women were significantly more likely [1.7 times] to have been victims of [non-consensual porn] or to have been threatened with [non-consensual porn] . . . .").
98. Drew Harwell, Fake-Porn Videos Are Being Weaponized to Harass and Humiliate Women: 'Everybody is a Potential Target', WASH. POST (Dec. 30, 2018), https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/?utm_term=.936bfc339777 [https://perma.cc/D37Y-DPXB].
99. See generally Rana Ayyub, In India, Journalists Face Slut-Shaming and Rape Threats, N.Y. TIMES (May 22, 2018), https://www.nytimes.com/2018/05/22/opinion/india-journalists-slut-shaming-rape.html [https://perma.cc/A7WR-PF6L]; 'I Couldn't Talk or Sleep for Three Days': Journalist Rana Ayyub's Horrific Social Media Ordeal over Fake Tweet, DAILYO (Apr. 26, 2018), https://www.dailyo.in/variety/rana-ayyub-trolling-fake-tweet-social-media-harassment-hindutva/story/1/23733.html [https://perma.cc/J6G6-H6GZ].
100. ROBIN WEST, CARING FOR JUSTICE 102-03 (1997) (emphasis omitted).
101. Deep-fake sex videos should be considered in light of the broader cyber stalking phenomenon, which more often targets women and usually involves online assaults that are sexually threatening and sexually demeaning. See CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 13-19.
These examples are but the tip of a disturbing iceberg. Like sexualized deep
fakes, imagery depicting non-sexual abuse or violence might also be used to
threaten, intimidate, and inflict psychological harm on the depicted victim (or
those who care for that person). Deep fakes also might be used to portray
someone, falsely, as endorsing a product, service, idea, or politician. Other forms
of exploitation will abound.
b. Sabotage
In addition to inflicting direct psychological harm on victims, deep-fake
technology can be used to harm victims along other dimensions due to their
utility for reputational sabotage. Across every field of competition (workplace, romance, sports, marketplace, and politics), people will have the capacity to deal
significant blows to the prospects of their rivals.
It could mean the loss of romantic opportunity, the support of friends, the
denial of a promotion, the cancellation of a business opportunity, and beyond.
Deep-fake videos could depict a person destroying property in a drunken rage.
They could show people stealing from a store; yelling vile, racist epithets; using
drugs; or any manner of antisocial or embarrassing behavior like sounding
incoherent. Depending on the circumstances, timing, and circulation of the fake,
the effects could be devastating.
In some instances, debunking the fake may come too late to remedy the
initial harm. For example, consider how a rival might torpedo the draft position
of a top pro sports prospect by releasing a compromising deep-fake video just as
the draft begins. Even if the video is later doubted as a fake, it could be
impossible to undo the consequences (which might involve the loss of millions
of dollars) because once cautious teams make other picks, the victim may fall
into later rounds of the draft (or out of the draft altogether). 102
The nature of today’s communication environment enhances the capacity
of deep fakes to cause reputational harm. The combination of cognitive biases
and algorithmic boosting increases the chances for salacious fakes to circulate.
The ease of copying and storing data online, including storage in remote jurisdictions, makes it much harder to eliminate fakes once they are posted and shared. These considerations, combined with ever-improving search engines, increase the chances that employers, business partners, or romantic interests will
encounter the fake.
102. This hypothetical is modeled on an actual event, albeit one involving a genuine rather than a falsified compromising video. In 2016, a highly regarded NFL prospect named Laremy Tunsil may have lost as much as $16 million when, on the verge of the NFL draft, someone released a video showing him smoking marijuana with a bong and gas mask. See Jack Holmes, A Hacker's Tweet May Have Cost This NFL Prospect Almost $16 Million, ESQUIRE (Apr. 29, 2016), https://www.esquire.com/sports/news/a44457/laremy-tunsil-nfl-draft-weed-lost-millions [https://perma.cc/7PEL-PRBF].
Once discovered, deep fakes can be devastating to those searching for
employment. Search results matter to employers. 103 According to a 2009
Microsoft study, more than 90 percent of employers use search results to make
decisions about candidates, and in more than 77 percent of cases, those results have a negative effect. As the study explained, employers often decline to interview or hire people because their search results featured "inappropriate photos." 104 The reason for those results should be obvious. It is less risky and
expensive to hire people who do not have the baggage of damaged online
reputations. This is especially true in fields where the competition for jobs is
steep. 105 There is little reason to think the dynamics would be significantly
different with respect to romantic prospects. 106
Deep fakes can be used to sabotage business competitors. Deep-fake videos
could show a rival company’s chief executive engaged in any manner of
disreputable behavior, from purchasing illegal drugs to hiring underage
prostitutes to uttering racial epithets to bribing government officials. Deep fakes
could be released just in time to interfere with merger discussions or bids for
government contracts. As with the sports draft example, mundane business
opportunities could be thwarted even if the videos are ultimately exposed as
fakes.
103. Number of Employers Using Social Media to Screen Candidates at All-Time High, Finds Latest CareerBuilder Study, CAREERBUILDER: PRESS ROOM (June 15, 2017), http://press.careerbuilder.com/2017-06-15-Number-of-Employers-Using-Social-Media-to-Screen-Candidates-at-All-Time-High-Finds-Latest-CareerBuilder-Study [https://perma.cc/K6BD-DYSV] (noting that a national survey conducted in 2017 found that over half of employers will not hire a candidate without an online presence and may choose not to hire a candidate based on negative social media content).
104. This has been the case for nude photos posted without consent, often known as revenge porn. See generally CITRON, HATE CRIMES IN CYBERSPACE, supra note 40, at 17-18, 48-49 (exploring the economic fallout of the nonconsensual posting of someone's nude image); Mary Anne Franks, "Revenge Porn" Reform: A View from the Front Lines, 69 FLA. L. REV. 1251, 1308-23 (2017). For recent examples, see Tasneem Nashrulla, A Middle School Teacher Was Fired After a Student Obtained Her Topless Selfie. Now She is Suing the School District for Gender Discrimination, BUZZFEED (Apr. 4, 2019), https://www.buzzfeednews.com/article/tasneemnashrulla/middle-school-teacher-fired-topless-selfie-lawsuit [https://perma.cc/3PGZ-CZ5R]; Annie Seifullah, Revenge Porn Took My Career. The Law Couldn't Get It Back, JEZEBEL (July 18, 2018), https://jezebel.com/revenge-porn-took-my-career-the-law-couldnt-get-it-bac-1827572768 [https://perma.cc/D9Y8-63WH].
105. See Danielle Keats Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 WAKE FOREST L. REV. 345, 352-53 (2014) ("Most employers rely on candidates' online reputations as an employment screen.").
106. Journalist Rana Ayyub, who faced vicious online abuse including her image in deep-fake sex videos, explained that the deep fakes seemed designed to label her as "promiscuous," "immoral," and damaged goods. Ayyub, supra note 99. See generally Citron, Sexual Privacy, supra note 93, at 1925-26 (discussing how victims of deep-fake sex videos felt crippled and unable to talk or eat, let alone engage with others); Danielle Keats Citron, Why Sexual Privacy Matters for Trust, WASH. U. L. REV. (forthcoming) (recounting fear of dating and embarrassment experienced by individuals whose nude photos were disclosed online without consent).
2. Harm to Society
Deep fakes are not just a threat to specific individuals or entities. They have
the capacity to harm society in a variety of ways. Consider the following:
• Fake videos could feature public officials taking bribes, displaying racism, or engaging in adultery.
• Politicians and other government officials could appear in locations where they were not, saying or doing things that they did not. 107
• Fake audio or video could involve damaging campaign material that claims to emanate from a political candidate when it does not. 108
• Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.
• Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort. 109
• A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.
• A fake audio clip might "reveal" criminal behavior by a candidate on the eve of an election.
• Falsified video appearing to show a Muslim man at a local mosque celebrating the Islamic State could stoke distrust of, or even violence against, that community.
• A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or sparking a wave of violence.
• False audio might convincingly depict U.S. officials privately "admitting" a plan to commit an outrage overseas, timed to disrupt an important diplomatic initiative.
• A fake video might depict emergency officials "announcing" an impending missile strike on Los Angeles or an emergent pandemic in New York City, provoking panic and worse.
107. See, e.g., Linton Weeks, A Very Weird Photo of Ulysses S. Grant, NAT'L PUB. RADIO (Oct. 27, 2015, 11:03 AM), https://www.npr.org/sections/npr-history-dept/2015/10/27/452089384/a-very-weird-photo-of-ulysses-s-grant [https://perma.cc/F3U6-WRVF] (discussing a doctored photo of Ulysses S. Grant from the Library of Congress archives that was created over 100 years ago).
108. For powerful work on the potential damage of deep-fake campaign speech, see Rebecca Green, Counterfeit Campaign Speech, 70 HASTINGS L.J. (forthcoming 2019).
109. Cf. Vindu Goel and Sheera Frenkel, In India Election, False Posts and Hate Speech Flummox Facebook, N.Y. TIMES (Apr. 1, 2019), https://www.nytimes.com/2019/04/01/technology/india-elections-facebook.html [https://perma.cc/55AW-X6Q3].
As these scenarios suggest, the threats posed by deep fakes have systemic
dimensions. The damage may extend to, among other things, distortion of
democratic discourse on important policy questions; manipulation of elections;
erosion of trust in significant public and private institutions; enhancement and
exploitation of social divisions; harm to specific military or intelligence
operations or capabilities; threats to the economy; and damage to international
relations.
a. Distortion of Democratic Discourse
Public discourse on questions of policy currently suffers from the
circulation of false information. 110 Sometimes lies are intended to undermine the
credibility of participants in such debates, and sometimes lies erode the factual
foundation that ought to inform policy discourse. Even without prevalent deep
fakes, information pathologies abound. But deep fakes will exacerbate matters
by raising the stakes for the "fake news" phenomenon in dramatic fashion (quite literally). 111
Many actors will have sufficient interest to exploit the capacity of deep
fakes to skew information and thus manipulate beliefs. As recent actions by the
Russian government demonstrate, state actors sometimes have such interests. 112
Other actors will do it as a form of unfair competition in the battle of ideas. And
others will do it simply as a tactic of intellectual vandalism and fraud. The
combined effects may be significant, including but not limited to the disruption
of elections. But elections are vulnerable to deep fakes in a separate and
distinctive way as well, as we will explore in the next section.
Democratic discourse is most functional when debates build from a
foundation of shared facts and truths supported by empirical evidence. 113 In the
absence of an agreed upon reality, efforts to solve national and global problems
become enmeshed in needless first-order questions like whether climate change
is real. 114 The large-scale erosion of public faith in data and statistics has led us
110. See Steve Lohr, It's True: False News Spreads Faster and Wider. And Humans Are to Blame, N.Y. TIMES (Mar. 8, 2018), https://www.nytimes.com/2018/03/08/technology/twitter-fake-news-research.html [https://perma.cc/AB74-CUWV].
111. Franklin Foer, The Era of Fake Video Begins, ATLANTIC (May 2018), https://www.theatlantic.com/magazine/archive/2018/05/realitys-end/556877 [https://perma.cc/RX2A-X8EE] ("Fabricated videos will create new and understandable suspicions about everything we watch. Politicians and publicists will exploit those doubts. When captured in a moment of wrongdoing, a culprit will simply declare the visual evidence a malicious concoction.").
112. Charlie Warzel, 2017 Was the Year Our Internet Destroyed Our Shared Reality, BUZZFEED (Dec. 28, 2017), https://www.buzzfeed.com/charliewarzel/2017-year-the-internet-destroyed-shared-reality?utm_term=.nebaDjYmj [https://perma.cc/8WWS-UC8K].
113. Mark Verstraete & Derek E. Bambauer, Ecosystem of Distrust, 16 FIRST AMEND. L. REV. 129, 152 (2017). For powerful scholarship on how lies undermine a culture of trust, see SEANA VALENTINE SHIFFRIN, SPEECH MATTERS: ON LYING, MORALITY, AND THE LAW (2014).
114. Verstraete & Bambauer, supra note 113, at 144 (“Trust in data and statistics is a precondition
to being able to resolve disputes about the world–they allow participants in policy debates to operate
at least from a shared reality.”).
to a point where the simple introduction of empirical evidence can alienate those
who have come to view statistics as elitist. 115 Deep fakes will allow individuals
to live in their own subjective realities, where beliefs can be supported by
manufactured “facts.” When basic empirical insights provoke heated
contestation, democratic discourse has difficulty proceeding. In a marketplace of
ideas flooded with deep-fake videos and audio, truthful facts will have difficulty
emerging from the scrum.
b. Manipulation of Elections
In addition to the ability of deep fakes to inject visual and audio falsehoods
into policy debates (a deeply convincing variation of a long-standing problem in politics), deep fakes can enable a particularly disturbing form of sabotage:
distribution of a damaging, but false, video or audio about a political candidate.
The potential to sway the outcome of an election is real, particularly if the
attacker is able to time the distribution such that there will be enough window
for the fake to circulate but not enough window for the victim to debunk it
effectively (assuming it can be debunked at all). In this respect, the election
scenario is akin to the NFL draft scenario described earlier. Both involve
decisional chokepoints: narrow windows of time during which irrevocable
decisions are made, and during which the circulation of false information
therefore may have irremediable effects.
The 2017 election in France illustrates the perils. In this variant of the
operation executed against the Clinton campaign in the United States in 2016,
the Russians mounted a covert-action program that blended cyber-espionage and
information manipulation in an effort to prevent the election of Emmanuel
Macron as President of France in 2017. 116 The campaign included theft of large
numbers of digital communications and documents, alteration of some of those
documents in hopes of making them seem problematic, and dumping a lot of
them on the public alongside aggressive spin. The effort ultimately fizzled for
many reasons, including: poor tradecraft that made it easy to trace the attack;
smart defensive work by the Macron team, which planted their own false
documents throughout their own system to create a smokescreen of distrust; a
lack of sufficiently provocative material despite an effort by the Russians to
engineer scandal by altering some of the documents prior to release; and
mismanagement of the timing of the document dump, which left enough time for
the Macron team and the media to discover and point out all these flaws. 117
115. Id.
116. See Aurelien Breeden et al., Macron Campaign Says It Was Target of 'Massive' Hacking Attack, N.Y. TIMES (May 5, 2017), https://www.nytimes.com/2017/05/05/world/europe/france-macron-hacking.html [https://perma.cc/4RC8-PV5G].
117. See, e.g., Adam Nossiter et al., Hackers Came, But the French Were Prepared, N.Y. TIMES (May 9, 2017), https://www.nytimes.com/2017/05/09/world/europe/hackers-came-but-the-french-were-prepared.html [https://perma.cc/P3EW-H5ZY].
It was a bullet dodged, yes, but a bullet nonetheless. The Russians could
have acted with greater care, both in terms of timing and tradecraft. They could
have produced a more-damning fake document, for example, dropping it just as
polls opened. Worse, they could have distributed a deep fake consisting of
seemingly-real video or audio evidence persuasively depicting Macron speaking
or doing something shocking.
This version of the deep-fake threat is not limited to state-sponsored covert
action. States may have a strong incentive to develop and deploy such tools to
sway elections, but there will be no shortage of non-state actors and individuals
motivated to do the same. The limitation on such interventions has much more
to do with means than motive, as things currently stand. The diffusion of the
capacity to produce high-quality deep fakes will erode that limitation, empowering an ever-widening circle of participants to inject false-but-compelling information into a ready and willing information-sharing environment. If executed and timed well enough, such interventions are bound to tip an outcome sooner or later, and in a larger set of cases they will at least cast a shadow of illegitimacy over the election process itself.
c. Eroding Trust in Institutions
Deep fakes will erode trust in a wide range of both public and private
institutions and such trust will become harder to maintain. The list of public
institutions for which this will matter runs the gamut, including elected officials,
appointed officials, judges, juries, legislators, staffers, and agencies. One can
readily imagine, in the current climate especially, a fake-but-viral video
purporting to show FBI special agents discussing ways to abuse their authority
to pursue a Trump family member. Conversely, we might see a fraudulent video of ICE officers speaking with racist language about immigrants or acting cruelly
towards a detained child. Particularly where strong narratives of distrust already
exist, provocative deep fakes will find a primed audience.
Private sector institutions will be just as vulnerable. If an institution has a
significant voice or role in society, whether nationally or locally, it is a potential
target. More to the point, such institutions already are subject to reputational
attacks, but soon will have to face abuse in the form of deep fakes that are harder
to debunk and more likely to circulate widely. Religious institutions are an
obvious target, as are politically-engaged entities ranging from Planned
Parenthood to the NRA. 118
118. Recall that the Center for Medical Progress released videos of Planned Parenthood officials that Planned Parenthood argued had been deceptively edited to embarrass the organization. See, e.g., Jackie Calmes, Planned Parenthood Videos Were Altered, Analysis Finds, N.Y. TIMES (Aug. 27, 2015), https://www.nytimes.com/2015/08/28/us/abortion-planned-parenthood-videos.html [https://perma.cc/G52X-V8ND]. Imagine the potential for deep fakes designed for such a purpose.
d. Exacerbating Social Divisions
The institutional examples relate closely to significant cleavages in
American society involving identity and policy commitments. Indeed, this is
what makes institutions attractive targets for falsehoods. As divisions become entrenched, the likelihood that opponents will believe negative things about the other side, and that some will be willing to spread lies towards that end, grows. 119 However, institutions will not be the only ones targeted with deep
fakes. We anticipate that deep fakes will reinforce and exacerbate the underlying
social divisions that fueled them in the first place.
Some have argued that this was the actual, or at least the original, goal
of the Russian covert action program involving intervention in American politics
in 2016. The Russians may have intended to enhance American social divisions
as a general proposition, rendering us less capable of forming consensus on
important policy questions and thus more distracted by internal squabbles. 120
Texas is illustrative. 121 Russia promoted conspiracy theories about federal military power during the innocuous "Jade Helm" training exercises. 122 Russian operators organized an event in Houston to protest radical Islam and a counter-protest of that event; 123 they also promoted a Texas independence movement. 124
Deep fakes will strengthen the hand of those who seek to divide us in this way.
Deep fakes will not merely add fuel to the fire sustaining divisions. In some
instances, the emotional punch of a fake video or audio might accomplish a
degree of mobilization-to-action that written words alone could not. 125 Consider a situation of fraught, race-related tensions involving a police force and a local community.
119. See Brian E. Weeks, Emotions, Partisanship, and Misperceptions: How Anger and Anxiety
Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation, 65 J. COMM. 699,
711-15 (2015) (discussing how political actors can spread political misinformation by recognizing and
exploiting common human emotional states).
120. JON WHITE, DISMISS, DISTORT, DISTRACT, AND DISMAY: CONTINUITY AND CHANGE IN RUSSIAN DISINFORMATION (Inst. for European Studies ed. 2016), https://www.ies.be/node/3689 [https://perma.cc/P889-768J].
121. The CalExit campaign is another illustration of a Russian disinformation campaign. 'Russian Trolls' Promoted California Independence, BBC (Nov. 4, 2017), http://www.bbc.com/news/blogs-trending-41853131 [https://perma.cc/68Q8-KNDG].
122. Cassandra Pollock & Alex Samuels, Hysteria Over Jade Helm Exercise in Texas Was Fueled by Russians, Former CIA Director Says, TEX. TRIB. (May 3, 2018), https://www.texastribune.org/2018/05/03/hysteria-over-jade-helm-exercise-texas-was-fueled-russians-former-cia [https://perma.cc/BU2Y-E7EY].
123. Scott Shane, How Unwitting Americans Encountered Russian Operatives Online, N.Y. TIMES (Feb. 18, 2018), https://www.nytimes.com/2018/02/18/us/politics/russian-operatives-facebook-twitter.html [https://perma.cc/4C8Y-STP7].
124. Casey Michel, How the Russians Pretended to Be Texans-And Texans Believed Them, WASH. POST (Oct. 17, 2017), https://www.washingtonpost.com/news/democracy-post/wp/2017/10/17/how-the-russians-pretended-to-be-texans-and-texans-believed-them/?noredirect=on&utm_term=.4730a395a684 [https://perma.cc/3Q7V-8YZK].
125. The "Pizzagate" conspiracy theory is a perfect example. There, an individual stormed a D.C. restaurant with a gun because online stories falsely claimed that Presidential candidate Hillary Clinton ran a child sex exploitation ring out of its basement. See Marc Fisher et al., Pizzagate: From Rumor, to Hashtag, to Gunfire in D.C., WASH. POST (Dec. 6, 2016), https://www.washingtonpost.com/local/pizzagate-from-rumor-to-hashtag-to-gunfire-in-dc/2016/12/06/4c7def50-bbd4-11e6-94ac-3d324840106c_story.html [https://perma.cc/FV7W-PC9W].
A sufficiently inflammatory deep fake depicting a police officer using racial slurs, shooting an unarmed person, or both could set off substantial
civil unrest, riots, or worse. Of course, the same deep fake might be done in
reverse, falsely depicting a community leader calling for violence against the
police. Such events would impose intangible costs by sharpening societal
divisions, as well as tangible costs for those tricked into certain actions and those
suffering from those actions.
e. Undermining Public Safety
The foregoing example illustrates how a deep fake might be used to
enhance social divisions and to spark actions, even violence, that fray our
social fabric. But note, too, how deep fakes can undermine public safety.
A century ago, Justice Oliver Wendell Holmes warned of the danger of
falsely shouting fire in a crowded theater. 126 Now, false cries in the form of deep
fakes go viral, fueled by the persuasive power of hyper-realistic evidence in
conjunction with the distribution powers of social media. 127 The panic and
damage Holmes imagined may be modest in comparison to the potential unrest
and destruction created by a well-timed deep fake. 128
In the best-case scenario, real public panic might simply entail economic
harms and hassles. In the worst-case scenario, it might involve property
destruction, personal injuries, and/or death. Deep fakes increase the chances that
someone can induce a public panic.
They might not even need to capitalize on social divisions to do so. In early
2018, we saw a glimpse of how a panic might be caused through ordinary human
error when an employee of Hawaii's Emergency Management Agency issued a warning to the public about an incoming ballistic missile. 129
126. Schenck v. United States, 249 U.S. 47, 52 (1919) (Holmes, J.) (“The most stringent
protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a
panic.”).
127. Cass R. Sunstein, Constitutional Caution, 1996 U. CHI. LEGAL F. 361, 365 (1996) ("It may
well be that the easy transmission of such material to millions of people will justify deference to
reasonable legislative judgments.”).
128. In our keynote at the University of Maryland Law Review symposium inspired by this article, we brought the issue close to home (for one of us) in Baltimore: the death of Freddie Gray while he was in police custody. We asked the audience: "Imagine if a deep-fake video appeared of the police officers responsible for Mr. Gray's death in which they said they were ordered to kill Mr. Gray. As most readers know, the day after Mr. Gray's death was characterized by protests and civil unrest. If such a deep-fake video had appeared and gone viral, we might have seen far more violence and disruption in Baltimore. If the timing was just right and the video sufficiently inflammatory, we might have seen greater destruction of property and possibly of lives." Robert Chesney & Danielle Keats Citron, 21st Century-Style Truth Decay: Deep Fakes and the Challenge for Privacy, Free Expression, and National Security, 78 MD. L. REV. 887 (2019); see also Maryland Carey Law, Truth Decay - Maryland Law Review Keynote Symposium Address, YOUTUBE (Feb. 6, 2019), https://www.youtube.com/watch?v=WrYlKHiWv2c [https://perma.cc/TT8M-ZBBN].
Less widely noted,
we saw purposeful attempts to induce panic when the Russian Internet Research
Agency mounted a sophisticated and well-resourced campaign to create the
appearance of a chemical disaster in Louisiana and an Ebola outbreak in
Atlanta. 130 There was real but limited harm in both of these cases, though the
stories did not spread far because they lacked evidence and the facts were easy
to check.
We will not always be so lucky as malicious attempts to spread panic grow.
Deep fakes will prove especially useful for such disinformation campaigns,
enhancing their credibility. Imagine if the Atlanta Ebola story had been backed
by compelling fake audio appearing to capture a phone conversation with the
head of the Centers for Disease Control and Prevention describing terrifying
facts and calling for a cover-up to keep the public calm.
f. Undermining Diplomacy
Deep fakes will also disrupt diplomatic relations and roil international
affairs, especially where the fake is circulated publicly and galvanizes public
opinion. The recent Saudi-Qatari crisis might have been fueled by a hack that
injected fake stories with fake quotes by Qatar’s emir into a Qatari news site. 131
The manipulator behind the lie could then further support the fraud with
convincing video and audio clips purportedly gathered by and leaked from some
unnamed intelligence agency.
A deep fake put into the hands of a state’s intelligence apparatus may or
may not prompt a rash action. After all, the intelligence agencies of the most
capable governments are in a good position to make smart decisions about what
weight to give potential fakes. But not every state has such capable institutions,
and, in any event, the real utility of a deep fake for purposes of sparking an
international incident lies in inciting the public in one or more states to believe
that something shocking really did occur or was said. Deep fakes thus might best
be used to box in a government through inflammation of relevant public opinion,
constraining the government’s options, and perhaps forcing its hand in some
particular way. Recalling the concept of decisional chokepoints, for example, a
well-timed deep fake calculated to inflame public opinion might be circulated
during a summit meeting, making it politically untenable for one side to press its agenda as it otherwise would have, or making it too costly to reach and announce some particular agreement.
129. Cecilia Kang, Hawaii Missile Alert Wasn't Accidental, Officials Say, Blaming Worker, N.Y. TIMES (Jan. 30, 2018), https://www.nytimes.com/2018/01/30/technology/fcc-hawaii-missile-alert.html [https://perma.cc/4M39-C492].
130. Adrian Chen, The Agency, N.Y. TIMES MAG. (June 2, 2015), https://www.nytimes.com/2015/06/07/magazine/the-agency.html [https://perma.cc/DML3-6MWT].
131. Krishnadev Calamur, Did Russian Hackers Target Qatar?, ATLANTIC (June 6, 2017), https://www.theatlantic.com/news/archive/2017/06/qatar-russian-hacker-fake-news/529359 [https://perma.cc/4QAW-TLY8] (discussing how Russian hackers may have planted a fake news story on a Qatari news site that falsely suggested that the Qatari Emir had praised Iran and expressed interest in peace with Israel).
g. Jeopardizing National Security
The use of deep fakes to endanger public safety or disrupt international
relations can also be viewed as harming national security. But what else belongs
under that heading?
Military activity, especially combat operations, belongs under this heading as well, and there is considerable utility for deep fakes in that setting.
Most obviously, deep fakes have utility as a form of disinformation supporting
strategic, operational, or even tactical deception. This is a familiar aspect of
warfare, famously illustrated by the efforts of the Allies in Operation Bodyguard
to mislead the Axis regarding the location of what became the D-Day invasion
of June 1944. 132 In that sense, deep fakes will be (or already are) merely another
instrument in the toolkit for wartime deception, one that combatants will both
use and have used against them.
Critically, deep fakes may prove to have special impact when it comes to
the battle for hearts and minds where a military force is occupying or at least
operating amidst a civilian population, as was the case for the U.S. military for
many years in Iraq and even now in Afghanistan. In that context, we have long
seen contending claims about civilian casualties, including, at times, the use of
falsified evidence to that effect. Deep fakes are certain to be used to make such
claims more credible. At times, this will merely have a general impact in the
larger battle of narratives. Nevertheless, such general impacts can matter a great
deal in the long term and can spur enemy recruitment or enhance civilian support
to the enemy. And, at times, it will spark specific violent reactions. One can
imagine circulation of a deep-fake video purporting to depict American soldiers
killing local civilians and seeming to say disparaging things about Islam in the
process, precipitating an attack by civilians or even a host-state soldier or police
officer against nearby U.S. persons.
Deep fakes pose similar problems for the activities of intelligence agencies.
The experience of the United States since the Snowden leaks in 2013
demonstrates that the public, both in the United States and abroad, can become
very alarmed about reports that the U.S. Intellig…