Deep fakes, Synthetic Media, and Media Manipulation

This talk was originally given as part of the 2nd National Conference on Media & Journalism on the “Role of Media to Promote the Culture of Peace in the World”, hosted by MIT-WPU and sponsored by the UNESCO Chair for Human Rights, Peace and Democracy on 23 September 2020.

This transcript has been edited for clarity and readability.

Professor Parasar Now, let's move forward. It's high time to invite the moderator and speakers of this interesting and very relevant session on deep fakes, synthetic media and media manipulation. Please join hands to welcome the moderator of the session, Professor Dougan. My submission to you, Professor Dougan: the total time with you is 50 minutes, and we will give a signal 10 minutes before the end of the session, so please keep to the timings accordingly. We also request you to take questions and answers as they come in your session, so that we can then proceed with the resolution and the next points. As part of the session, Professor Dougan, you can link the questions to the discussions and the deliberations. Participants, there are many dignitaries with us for this session; please welcome them to the dais with a big round of applause. We may not hear that applause, but you can be sure it is happening.

And with this, I will now hand over the proceedings for the next 50 minutes to the moderator of the session. Professor Dougan, please take the proceedings forward. Over to you, Professor Dougan.

Neil Dougan Thank you very much indeed. It was wonderful to see the footage from last year's conference. Hopefully this time next year we'll all be together, and we can ring that bell together. Thank you to Professor Parasar and Professor Chitnis, and of course Professor Karad, for the invitation to take part in this. On behalf of the panelists as well, I'm delighted to say that we've managed to gather real experts in the field, and they will introduce themselves in a moment. I took part in the panel yesterday on the fake news session, which was very interesting, covering misinformation, disinformation and the history of propaganda, through to the common use of the term fake news. So, we won't be covering that ground, although I will say, as I mentioned at the end of that session yesterday, that deep fakes, particularly at the more malicious end, are like fake news on steroids. People have been surprised by fake news, but there's an avalanche of pain to come in some senses; it can also be a very creative thing as well. So, the purpose of the seminar today is to explore the positive and negative aspects of this. And thank you to your students for creating that video. I actually had a video lined up with the technical team, so, technical team, you can stand down that video, because the students made a wonderful film which explained what deep fakes are. And thank you to Professor Chitnis for putting things in context. You've stolen the first paragraph of my speech and made my job a lot easier. Thank you very much.


We will be exploring the positive and negative aspects, explaining the technology, and looking at the wider impact on society and on us. Then we'll look at some of the solutions that may be coming forward for detecting deep fakes. There's a kind of battle of technology between the people creating them and the people detecting them, and perhaps towards the end we will talk about that, because one of the interesting things about the conversation on fake news yesterday was: what can we do about it? So, as we saw in the video, deep fakes are manipulated videos produced by sophisticated artificial intelligence. The artificial intelligence uses algorithms called neural networks, which infer rules that replicate patterns and fabricate images, and audio as well, that appear to be real. The number of deep fake videos online is doubling every six months; over the last two years it has been increasing exponentially. They're also disrupting everything we know about our agreed notions of truth and reality, and also ethics in media and journalism, which is what's relevant to this conference. There was a 400-year gap between the Gutenberg press and the invention of photography, and people gradually adjusted to that. But because of the accelerated technologies we have at the moment, society is playing catch-up on how to absorb this, how to use it in a positive way, and the ethical considerations around that. So, we will be talking about that. I'm not playing the video, as I said. Before introducing the speakers, I'll just ask one question: why does this matter to us? What are the most important aspects?
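(A brief aside on the "neural networks" mentioned above: the technique behind most deep fakes, a generative adversarial network, pits a generator that fabricates samples against a discriminator that tries to spot them. The toy sketch below only illustrates that adversarial idea; every dimension, weight and function in it is invented for the example and does not reflect any real deep fake system.)

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    """Map random noise z to a fabricated sample (a tiny stand-in for an image)."""
    return np.tanh(z @ w)

def discriminator(x, v):
    """Score how 'real' a sample looks: close to 1 = real, close to 0 = fake."""
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# Toy sizes: 4-dimensional noise vectors, 8-dimensional "images".
w = 0.1 * rng.normal(size=(4, 8))   # generator weights
v = 0.1 * rng.normal(size=8)        # discriminator weights

z = rng.normal(size=(16, 4))        # a batch of random noise
fakes = generator(z, w)             # fabricated samples
scores = discriminator(fakes, v)    # discriminator's belief that each is real

# Training (not shown) alternates two objectives: push the discriminator's
# scores for fakes toward 0 and for real data toward 1, while pushing the
# generator to raise the scores of its fakes. Each side improves in
# response to the other, which is how the fabricated output becomes realistic.
assert fakes.shape == (16, 8)
```

In a real system both sides are deep convolutional networks trained over many iterations; the same adversarial dynamic also mirrors the creation-versus-detection battle Neil describes.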


Well, Professor Chitnis touched on it. The misuse of this technology can have an effect on international politics. There are some examples, though not major ones at the moment. It can affect domestic politics as well, as you can imagine, and even businesses. Businesses could be sabotaged; fake news has sabotaged various businesses and affected their credibility, brand image and so on. But it matters personally as well. At the moment, about 96 percent of the use of deep fakes is nonconsensual pornography, where people are using images of famous film stars and celebrities, almost all of them women, but people are going to be using it on ordinary people as well. So, personally, it could affect us all. Imagine audio of our voices being manipulated into imaginary conversations with friends or family, or in your professional life. Myself as a lecturer, a conversation with a student could be manipulated to destroy my career. So there are personal aspects that could affect people professionally, personally and so on. We already have revenge porn, which is happening now; deep fakes could have a similarly profound effect on professional and personal life, on businesses, and on politics as well. I've come to this very much as an amateur. I came across deep fakes by viewing a BBC drama last year called The Capture, and sometimes drama is what brings a subject to a mass audience. So, as part of the discussion, we have Laurence Davey, who has also written a drama about deep fakes, and Laurence at the end will be discussing that aspect as well. But we also have two experts in the field, Henry and Tom, whom I will ask to introduce themselves. As has been mentioned, deep fakes can be used positively in educational, artistic and entertainment contexts. There's a lot of deep fake content on YouTube.
Lots of entertainment aspects come up as well. But today, because this is a conference on media and journalism, we're going to be looking at the more pernicious as well as the more positive aspects of deep fakes. So, without further ado, I'll ask the speakers to introduce themselves, and then I'll start the session by coming back to Henry with a couple of questions to get things going. Is Tom there? Tom, if you want to introduce yourself and your credentials in relation to deep fakes.

Tom Ascott Hi, everyone. It's really nice to be here today.

I guess I'll start by saying that I recently wrote a paper for the Journal of Digital Media and Policy on deep fakes called Microfakes. And I think what we've been alluding to here, what Neil was talking about, is the idea that deep fakes are becoming smaller and more personal. When we started thinking about deep fakes, their history was very much rooted in heads of state: Obama, or now Donald Trump, being manipulated to say things that are obviously false or farcical in some way.

The problem is that that worked because deep fakes were, at the time, quite hard to make. They relied on a lot of input footage and could only put out short videos.

The technology, however, has become much more advanced. Deep fakes are able to be more realistic, and can be crunched and made in a much shorter period of time with a much smaller amount of input data. This means that deep fakes can now imitate voices and can be made from training data that is incomplete or in poor lighting.

And that is something that affects everyone.

We all know that, working in this field or being academics, we want our faces and voices out there. This conference is a great example: we all have our webcams on, and there's enough data here for someone to make a convincing mask of any of the participants. And from that, you know, there are consequences.

Obviously, there are legitimate uses for this, and there are a lot of commercial applications for deep fakes, everything from serious Hollywood movies to applications whose users create deep fakes for their own personal enjoyment. We see this in the popularity of applications like Instagram and Snapchat, which provide filters, as well as in big Hollywood blockbusters. But a lot of the technology they use is open source and trickles down, which means that other people can use it for less legitimate purposes. And this is something that we're starting to see. The security problems posed by deep fakes are becoming more of a reality. Deep fakes can offer quite a convincing alternative series of events, and this goes hand in hand with what Neil was talking about and what this conference has been discussing with fake news and disinformation.

We saw from the 2016 US presidential election that Russian trolls were involved in trying to create different narratives by using coordinated disinformation campaigns and inauthentic online behavior. At the time, convincing posts were hard to make. They were often poor photoshops, and the disinformation campaigns could often be tracked by poor spelling, poor grammar and a generally low level of attention to the English language. This indicates that the people on the ground level of disinformation campaigns are often not highly trained information operatives: they make a very reasonable amount of money and clock in and clock out like any other job. As deep fakes become easier to make, they allow those people to create much more believable media, which then has a bigger impact on these campaigns.

When we think about what can manipulate us, the richer the ecosystem that can be built, the more dangerous it is. If you're only able to post text posts or images, the depth of the ecosystem can only go so far. But once they are able to create very advanced, or at least believable, deep fakes, that ecosystem can go further. And this is something that we don't want to see.

The context right now is obviously that we're living in highly volatile political times, and media literacy isn't taken as seriously as it should be. This means that although there are companies out there, such as Amazon, Facebook, Microsoft and Google, who run deep fake detection challenges, this is only going to work for deep fakes which people are willing to flag. So, to go back to my original point, it is smaller people, people who might be participating in this conference, not heads of state, maybe local politicians, regional actors, academics or even non-state actors, who are at risk. Deep fakes at that level are less likely to fall under the scrutiny of organisations that are trying to out deep fakes. So, while there is an ongoing arms race, as it were, between the quality of deep fakes and the ability to detect them, that only really covers the highest level of deep fakes. At a much lower level, we're seeing all of these small-scale abuses, and they're popping up everywhere. An example: we've seen individuals who effectively do not exist submitting articles to regional newspapers, and their credentials aren't being checked. They might be on LinkedIn, and they might be targeting junior policymakers with the hope of creating networks of entirely synthetic people, creating content that entraps others who are real and working in policy into this ecosystem, in an attempt to manipulate them over a prolonged period of time. And we're seeing this everywhere. The more that deep fakes are able to do this, the more they're able to create an illusory truth effect, which is where the more often someone perceives something, the more likely they are to believe it is true.
And so this is the biggest change in deep fakes. Before, you might have thought of them as a heist, a one-off big attempt, almost like robbing a casino, Ocean's Eleven style, where you were just trying to put out one deep fake which was going to convince a couple of people. Now, it's low level and it's consistent. It's a regular part of your media diet. It's not flashy. And it's dangerous. Some people will say this is not new, that we've been tackling this problem of synthetic media since Photoshop, or before that with traditional propaganda techniques. And that's absolutely true. The problem is that Photoshop has been dangerous, and it doesn't feel like it now, because if anyone saw a photo of a unicorn or Bigfoot, they wouldn't believe it.

But image manipulation, not Photoshop specifically, is consistently used for everything from people's Instagram pictures to their Facebook photos to removing blemishes in selfies.

It is creating an online media landscape where almost all photos are touched up. All of the photos that we see, from advertising to social media, do not represent the reality that we live in. And that is where deep fakes will take us. At the moment, I think it is hard to think of solutions for this. We have to recognise that this is a grey area between censorship and freedom of expression. Part of my work with Technoetic Arts, which is a peer-reviewed academic journal, is looking at artistic applications of technology and the nexus between art, technology and consciousness. The way that we perceive the world is something we would like to see art and technology engage with, and it means that these ongoing artistic expressions that use deep fakes are something we want to protect. There are valid artistic, commercial and personal reasons why we need to make sure that we don't stifle innovation, and that deep fakes are allowed to continue growing and developing. What we need to make sure of is that they are not used as part of coordinated inauthentic online activity or disinformation campaigns. And that is something that continues to be a problem to this day.

Neil Dougan Thank you, Tom. That's terrific.

We'll come back to some of your points. I'm just going to introduce Henry and Laurence as well. So, Henry, if you want to pick up the baton now, maybe just explain a little bit about your expertise and your key points regarding these things.

Henry Ajder Sure. Thank you, Neil, and thank you everybody for having me at this great conference. I have not prepared such in-depth remarks as Tom, and I'm glad I didn't, because Tom has covered a lot of the key points that I think will be raised throughout this session. My career and my relation to deep fakes: I've been researching the topic since it first emerged in November 2017 on Reddit. I researched it at the London-based innovation think-tank Nesta before moving on to the role of head of communications and research at the world's first deep fake detection company, Deeptrace. There I conducted extensive landscape-mapping research on the topic and the growth of the phenomenon, including a report entitled 'The State of Deepfakes', from which Neil quoted some statistics earlier regarding the prevalence of deep fake pornography and the rate of growth we were seeing. In that role at Deeptrace, I was responsible for communicating the growth and development of deep fakes as we were finding it in our threat intelligence research. Since then, I've taken on a role as an expert advisor, where I provide consultation and expertise to international organisations and governments, and to the news and technology space, helping them to better understand the problem, the unique set of solutions they may need to apply to combat the issue, and potentially also the opportunities that may present themselves in their relevant field. I will pass on to Laurence now, as I think the best way to further build on the topics Tom brought up is through some of the questions Neil has prepared. I very much look forward to continuing the session.

Neil Dougan Yeah, I'll come back to you, Henry, and Tom, with some questions, and you can discuss those. So, Laurence, do you want to introduce who you are and what you particularly bring to the session today?

Laurence Davey My name is Laurence Davey. I am a dramatist, and I've made a living for as long as I can remember writing screenplays. I've worked in Hollywood, where I sold scripts to Hollywood studios, and I've worked in England on big episodic TV for the BBC, ITV and Sky. Some of you might well have seen some of the stuff that I've done.

Very interestingly, I'm here today as the least expert out of everybody. I am a dramatist, and what that means is that I write intuitively about the experience of a lot of ideas that are beyond my pay grade. I don't understand everything that Tom, Henry or Neil might be saying, but I certainly can write authentically about how those things are experienced at an individual level, at a community level, and perhaps, by extrapolation, at a national level. That's what I do as a dramatist. Now, I've written a deep fake drama, which is at the so-called 'bible' stage: it's all laid out, and it's under consideration by the BBC and Netflix at the moment. I'll come to that in a minute. But very interestingly, that drama is about a fifteen-year-old girl who is the target of a troll who uses deep fakes to attack her, and it's her job to work out who is behind this and why they're doing it. So, you know, a direct involvement in the world of deep fakes. What I am interested in is how deep fakes affect somebody's life, their psychology, the meaning of that. And I've thought a lot about that while preparing for this conference; I've got something to say about it later.

But how about this? When the first video came on and I was introduced, a picture of me came up on your website. That's not me. That is not me. It says Laurence Davey under it, so that's my name, but that is not me. And very interestingly, I went through a very powerful psychological experience. I asked myself: is that actually me? Do I actually look like that? Am I wrong? Am I going to make a fool of myself by saying this isn't me? So, interestingly, I've just dramatically experienced an effect of what my drama is about.

So thank you for making it easy for me to talk about the destabilisation and some of the psychological processes that I go through. And by the way, I don't mind about that at all. There's no issue in that for me. I just find it very interesting.

Neil Dougan Perhaps it was deliberate, as an experiment; I've no idea. Perhaps the photograph is actually more handsome. Perhaps we can do a deep fake that you can use. Just to come back to Henry.

I just wondered: we've seen quite a lot of panic surrounding deep fakes and their potential malicious applications, including elections being sabotaged or conflicts starting from a deep fake provocation. Should we be worried? And if so, should we be worrying about these kinds of scenarios specifically?

Henry Ajder Sure, it's a good question, and one I'm frequently confronted by, especially from people who have perhaps heard about it in the news briefly, or have seen a select few videos which have been purposely made to be provocative. I think there is quite a hysteria surrounding deep fakes which is well-meant but ultimately misguided. That is, these worst-case scenarios: a deep fake of Trump being released goading Kim Jong-un into pressing the nuclear button. These are scenarios that we should definitely keep in mind and think about. But we also need to understand what would actually be required for those scenarios to take place in the way that people fear. When we look at things such as elections, and especially when it comes to military applications, there are chains of command and verification steps in the way these pieces of media will be approached, to the extent that, in a lot of those situations, these pieces of media would be caught out by key institutions and key safety mechanisms. It's also worth considering the level of sophistication that would be required for those deep fakes or pieces of synthetic media to be created with the precision needed to be effective. Typically, this would have to involve the synthesis of voice audio indistinguishable from the real voice, perfect lip synchronisation, or indeed entire facial re-enactment, which again are quite complicated processes, out of reach of the majority of people to execute perfectly. That's not to say it couldn't be done, but it is to say that this is not something the average Joe can do with the tools commonly available today. So, one of the mottos that we use frequently is that we need to not panic but prepare.
We need to try to understand the threat, understand what is possible now and what can actually, tangibly be attained, and from there plan for the future, informed by what's happening now. So, yes, it's worth being aware of these threats, but also being aware of where we stand right now in relation to them becoming real.

Neil Dougan Just to pick up on what you're saying, if I could ask one question, and then I'll come to Tom. I saw a video recently in which they recorded a woman saying just one sentence, maybe 10 seconds long. And from that one audio sentence, they had enough inflection and enough tone to replicate her voice and make her say anything. Wouldn't you say it's possible the audio could be altered, in the sense that what we hear could be faked? It could look like an undercover video, maybe grainy, but the audio could be Prime Minister Shri Narendra Modi, and it sounds like he's planning an attack on Pakistan, or vice versa. Maybe it's just audio manipulation; it doesn't have to be a full deep fake. This brings in the idea of plausible deniability as well. Take the footage of President Trump when he was on the bus making derogatory remarks about the power of celebrity and what it could do to women: he could plausibly claim that it was fake. How does this plausible deniability come into it, and can audio be pernicious as well?

Henry Ajder So, yeah, there are a couple of points to unpack there. I'll start with the first one you raised, which is about synthetic voice audio and the kind of progress that's been made.

Neil Dougan [You've just frozen, Henry, for a second. We've lost you. We'll give him one second to come back.] If not, Tom, if I could come to you. You can either pick up on those points, or I've got another question: do you have recent examples of the effects of deep fakes on political matters, internationally or domestically? Are there well-known examples of where they are beginning to make an impact?

Tom Ascott Yeah, I think I'll leave those questions for Henry; he's more than capable of answering them. But I think what we are starting to see is that it is not just state actors that are doing this; it is non-state actors as well, and that's something to keep an eye on. It makes international news when there are cases of accusations against specific states, but I think the bread and butter of this is individuals, non-state actors, lone actors, who are often mischievous as much as anything else. It comes back to what we saw of the troll farms in Robert Mueller's report: Russian trolls are often just individuals who are effectively mischievous actors. In terms of where we have seen this, we've seen it everywhere. On what you were saying about whether Trump could plausibly deny something: in Gabon, the President was accused of putting out a video that was deep faked. In that case, whether it was deep faked or not was almost irrelevant, because the belief that something was seriously wrong was so strong that the accusation that the video had been deep faked was enough to help trigger an attempted military coup. You can make arguments as to why that coup attempt happened, whether it really was because of the video or whether there was a wider context, but we are seeing that this has permeated the public consciousness enough for people to start using it as an excuse for their behavior, or to make accusations about whether a certain piece of content is or isn't authentic or believable. And it goes both ways.


There are also implications for this in terms of audio. There have been notable examples where speeches have been effectively dubbed into other languages, perhaps languages which the speaker does not natively speak, in order to target a different audience. That really speaks to the ability of deep fakes to have an audience selected ahead of time, one which doesn't overlap with other audiences. When we saw the work that Cambridge Analytica did towards Brexit, one of the key strategies was to identify smaller and smaller special interest groups. For example, groups that cared about the environment would be targeted with a message about how leaving the European Union was good for the environment, whereas groups that were perhaps more right-wing and less concerned about the environment would be targeted with a different message that downplayed the issue or even argued that the EU was overly protective of the environment. They were given two totally different messages. And that's something we can also expect to see more of with deep fakes: effectively the same video, the same person giving a speech, targeting different and increasingly small niche groups and their interests.

Neil Dougan OK, thank you. Just picking up on what you said, Tom: the six major Indian parties were all reprimanded by Facebook for fake news, and there's an example online of someone quite high up in the BJP actually being deep faked and speaking in three different languages. You could say that's actually helpful in some ways, but it wasn't made clear to the people receiving the information. Again, there's that question of power: if governments are doing it, and they have the technology to detect deep fakes and they're in charge, it gets a little bit difficult. Coming back to you, Henry, do you want to pick up where we left off?

Henry Ajder For sure. Yes, if I remember correctly, your question was composed of a few points. The initial one was about synthetic voice audio and its relation to video in terms of quality. Simply put, voice audio is a less computationally complex form of media to generate: it has to account for fewer parameters, in terms of data points, than video. So, technically speaking, voice audio should be, and is indeed proving to be, easier to generate realistically. Having said that, it is nowhere near as widely accessible. Although some papers and demos have been released that show highly realistic replication of an individual's intonation, tone of voice, even speaking style, these tools are not widely accessible in the same way as a lot of the tools used for, for example, facial re-enactment or face swapping. So, although there is perhaps a distinct difference in the ease of generating increasingly realistic synthetic voice audio, there is a mismatch when it comes to accessibility of that technology.
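(To put rough numbers on Henry's point about audio being computationally lighter than video, here is a back-of-envelope comparison; the sample rate, frame rate and resolution below are common choices picked for illustration, not figures from the talk.)

```python
# One second of speech vs one second of video, counted as raw data values.
audio_samples_per_sec = 16_000                # 16 kHz mono speech audio
video_values_per_sec = 30 * 256 * 256 * 3     # 30 fps, 256x256 pixels, RGB

ratio = video_values_per_sec / audio_samples_per_sec
print(f"Video carries roughly {ratio:.0f}x more raw values per second than speech.")
```

Even at this modest resolution, a video model must generate hundreds of times more output values per second than a speech model, before accounting for the extra burden of keeping frames consistent over time.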


The second point you raised, about the ability of deep fakes to allow people not just to claim that fake things are real, but also to dismiss real things as fake, is arguably one of the most important points, I think, especially for the journalists at this conference. As Tom also identified in the case of Ali Bongo in Gabon, it has already been used to great effect to destabilise democratic processes and governments around the world. This is a concept referred to as the 'Liar's Dividend'. The very idea of deep fakes being introduced into the public consciousness is enough that people then have the ability to plausibly deny any form of audio, video or image that they do not like, that is inconvenient for them to believe, or that goes against their cognitive biases, simply because of the fact that there is the possibility it is fake or synthetic.


And this is an issue that we've arguably seen more prominently in the political sphere than deep fakes themselves. So, in the case of Gabon, the opposition used the idea of the deep fake to dismiss the authenticity of the Ali Bongo video and stoke unrest in the general population. We've also seen a case in Malaysia, where an alleged sex tape of a male politician with another male politician was leaked, with the politician himself dismissing the video as not real but a deep fake; given the laws in Malaysia, that is a serious offence according to that legal system. And we've heard that Trump has allegedly, behind closed doors, been saying that the infamous tape you referenced is also fake. So you can see how this idea alone is incredibly powerful, incredibly destabilising, and that's before we've even seen the widespread application of deep fakes in politics.


I think this also touches on a point that Tom raised with his idea of microfakes and the increasing dissemination of deep fakes through different layers of society: this is no longer just something which targets A-list celebrities. We are increasingly seeing private individuals targeted by deep fakes, specifically pornographic ones. As the idea becomes prominent in different areas of society, it infects or poisons the well for all people. That kind of subversive effect of deep fakes, before they are even incredibly prevalent, is a very worrying one indeed, and one that we've got to do some work on, especially when it comes to things like critical thinking and media literacy.

Neil Dougan What Tom and Henry have been saying over the last few minutes raises the overarching question: is it possible to agree on a shared sense of reality and truth if deep fakes continue developing at this rate? Will it attack the fundamentals of democracy if we cannot agree on what's real and what is not? Will the loss of a shared sense of reality undermine democracy? Who wants to take that?

Henry Ajder Yeah, I mean, I can certainly share my views on the question, and I'm sure Tom has some interesting points to make on it, too. I think, again, we've got to think about this reasonably, and we've got to avoid being too sensational. We've been living with synthetic media in various forms for as long as media has really existed, whether that's been fake articles spread in newspapers surrounding Churchill in World War II or manual editing of images of Abraham Lincoln. Media manipulation has been around for a long time. What deep fakes represent is the automation of highly complex media manipulation. Whereas Photoshop is very much a click-by-click job, which takes time and expertise to execute well, deep fakes automate that process. And moving forward, we're seeing increasingly user-friendly tools and apps that allow people to operate highly sophisticated media manipulation with very little skill. On that basis, we should definitely be concerned about how deep fakes could undermine not just breaking news stories, but also what you see on your WhatsApp feed from your friends, or what you receive on your Facebook news feed. I think we have to rely on the institutions that are our epistemic providers of knowledge, such as journalists and governments, which have played such an integral role in providing us with key sources of knowledge, and also invest heavily in solution approaches and other mechanisms that tackle the issue technologically as well as sociologically. So, in short, I would say a shared sense of reality is not going to crumble in the next few years because of deep fakes. But I think we certainly need to take that threat seriously and prepare appropriately with a variety of different solution approaches.

Neil Dougan Thank you, Henry. It's interesting that in the Fake News session they were talking about proof and plausible deniability. You've actually got the President of the United States being recorded sharing his thoughts on coronavirus, the fact that he thought it was serious while publicly playing it down, and life carries on. That is an actual, non-manipulated audio recording, and yet it makes no difference. And so, we're moving into round two, perhaps: the post-deep-fake era.

Henry Ajder Life imitating art, perhaps.

Neil Dougan I just wondered, does drama sometimes bring vital issues to the attention of a mass audience better than factual sources can? Through personal storytelling, is there something about drama that makes it more digestible because of its personal aspects?

Laurence Davey I think you've almost just defined what drama is. And I totally agree with what Henry says. Let's not be sensationalist about this, but my job is to be sensationalist. That's all I get paid for. So that is what I do. And thank you very much for lots of new ideas, Henry and Tom. The drama of a crisis, basically. Yeah. A very quick outline. Neil, have I got time? Because I did put down some thoughts on this from a dramatic viewpoint.

Neil Dougan You've got three or four minutes.

Laurence Davey Well, let me quickly say what the story is about, the story that is under consideration at Netflix and the BBC. Like I say, it's about a girl, Amy Farrow, age 15, who is targeted in her community with increasingly horrible deep fake evidence, in the terms of the document I've got, proving she's a liar and a slut. So, I peered into the fears of this girl in this part of her life. It's set in a small town, and each episode is about her and her friends investigating who is behind this deep fake trolling of her life. What is their intention, and how do they get them to stop? In doing that, they have to investigate everybody in the community who comes under suspicion. And in doing that, they find out a lot of those people are hiding things anyway. Their parents are hiding things. The values they've been brought up in aren't necessarily true to the values their parents are pretending to hold. So, they uncover a whole layer of fictionalisation in order to get to the truth of who is actually creating the deep fakes. Now, one of the most disturbing and distressing things they have to contend with isn't just that Amy is confronted with videos showing her doing sexual acts with a teacher, for example, which are all untrue. It's that the community believe those videos, which is the dramatisation, in a 15-year-old's life, of some of the ideas that Henry and Tom have been talking about. Now, I've been thinking a lot, since Neil asked me to do this, about the meaning of all this. And I've come across a current idea in psychology called narrative identity: that our identities are created by the stories we tell. So, don't think of ourselves so much as Homo sapiens; what if we knew humans as Homo narrans, the humans who tell a story? Now, the stories that we tell obviously have a beginning and a middle and an end. As Aristotle said, we are to put plot above character. So, action.
The action that we remember in our past creates our present, and we project our identity and our values, based on that, into our future. This is the construct of a story. Now, obviously, if I'm confronted with a deep faked video, I know it's not true. But my insight into this, and one that I actually want to look at further, is that, as a teacher of story, I tell the kids a story is co-constructed. It exists because there is an audience. When I get down to London to pitch ideas, I'm looking at executives and seeing what excites them. Now, my drama enacts this idea that a child co-constructs the meaning of her identity with her parents. They retell stories; the community retells stories. So, what happens to an identity when that loop is invaded by things that look like facts? There is a dissonance between the story of our own identity and the co-constructed part of that with the community. And that's radically destabilising for development, for a community, and at a national level as well. Hence my drama enacts these ideas. Have I got one minute, Neil?

Neil Dougan We need to end there, I'm afraid. What we could do, Henry, Tom and Laurence, if there is anything else you want to add, is perhaps incorporate that into the Q&A, if that's OK, just to stay on schedule. I'll now hand back to Professor Parasar.

Professor Parasar So, that was a wonderful session. Thank you. Thanks to all the dignitaries for such a great session, and I think a big round of applause for all the members on the dais. I know that all the speakers could speak for hours and hours, and in such a small span of time you have shared so many insights, so many new insights, I should say, through the deliberations you have had and the questions that were raised along these lines. Quickly, I'll just read out the questions, because some of them you have already answered. Students have other questions, and I see faculty members also sharing. 'Is the use of machine learning and AI resources rising in order to manipulate news to preserve personal or organisational reputations?' That's already been answered by the distinguished panel. 'Should strict norms be imposed by government on media companies regarding the use of AI in order to prevent the use of deep fakes?' And then the last question: 'Deep fakes are increasing more and more, in different ways, in evidence concerning different cases, be it investigations or crime. How will this affect journalism? Do you think that this will contribute to fraud and fake journalism?' Quick remarks by any speaker on any of these?

Neil Dougan I just want to say one quick thing about that. It came up with fake news yesterday as well: why doesn't the government do something about it? Why don't they put restrictions on it, control it, when they may actually be the agents themselves? That's the problem. You're asking government to take powers when they might be the perpetrators of deep fakes themselves, so there's a danger in increasing the powers of government in that respect.

Henry Ajder I'd just like to add something on the legal responses that are being engaged with surrounding deep fakes. I think the government doing the most on this level right now is the U.S., at both the state and federal level. The U.S. is engaging with laws prohibiting or criminalising pornographic deep fakes, or deep fakes which are intentionally deceptive in the build-up to elections. They have also passed legislation requiring intelligence agencies to produce intelligence reports on recent developments with deep fakes and on which actors are using them in a malicious way. There are some key problems with these approaches. Even if you can find a criminal deep fake and find out who is perpetrating it, if they're not within the jurisdiction where that law applies, it can be very hard to actually prosecute. By the same token, it can be very difficult to identify these people in the first place. And it can also be very difficult to determine who was actually the creator; someone who was unknowingly distributing it can again claim plausible deniability about whether they knew it was fake or not. So legal approaches are very difficult. I know there are certain existing laws in India surrounding data privacy and defamation which would protect individuals and allow them to prosecute, but deep fake-specific laws have a unique set of problems surrounding them, which means that for a lot of the legislation passed so far, it is yet to be seen how effective it will be.

Professor Parasar Thank you so much. And I should say that the full session on deep fakes and media manipulation was very intriguing. It was a wonderful, wonderful session, full of so much insight in a concise, comprehensive format. I appreciate that in such a short span of time so many points were made in a very crystal-clear way, and I appreciate all the members on the dais for coming up with such great ideas, insights and perspectives, which give all of the participants new light and perspective on the way forward. Deep fakes are a very relevant topic, a new thing which is coming up, and at such a relevant time all these insights are reaching the students, I feel. Thank you to all the members on the dais. I hope the participants will agree with me that this session was very, very encouraging, very informative and valuable, and that it has disseminated sound advice and thoughtful insights. Special thanks to the moderator of the session, Professor Neil Dougan, for conducting the session so smoothly and, more importantly, completing the session on time! Thank you so much for such a thought-provoking and insightful session, with so many valuable inputs that have surely benefited the participants and participating students. Once again, a big round of applause to all the distinguished speakers on the dais. And thanks to the participants for sharing these questions.
