
Little Da Vincis Interview with Tom Ascott

This podcast episode originally aired on the Little Da Vincis show on 16 November 2020.



This transcript has been edited for clarity and readability.

Christian Amyx Hi guys, today I am joined by Tom Ascott, who is a Digital Communications Manager at the Royal United Services Institute and is also an editor of Technoetic Arts. Welcome.


Tom Ascott Hi. It's good to be with you today.


Christian Amyx So, Tom, how and why is information used to attack society and affect it negatively?


Tom Ascott That's a great question. A good way to think about it is that as our society has become more interconnected, especially online through things like social media and digital media, that has opened up new ways for all of us as citizens to become vulnerable to malicious information. That's information which might seek to cause us harm in our personal lives, or information which might cause us to doubt things that we had previously assumed were true. This is something that we're seeing happen more and more as people live their lives online, and it's an ongoing problem that everyone who spends a lot of time online will be affected by. It may not feel like it, but every time you're browsing social media, or using user-generated content sites like Reddit or YouTube, you are opening yourself up to becoming the victim of a misinformation or disinformation campaign.


Christian Amyx And how effective is this information warfare?


Tom Ascott It depends how you want to think about it. Any single piece of misinformation or disinformation tends not to be very effective. It's about creating an online ecosystem where people get all of their information, and where they can get really sucked into believing things which are fundamentally not true; that's when it becomes highly effective. So you might think about right now, as we record this, especially in America, which is still going through the coronavirus pandemic. If you read online that there's a cure for coronavirus, which is not true, or that coronavirus isn't real, or that it's safe not to wear a mask, or that it is dangerous to wear a mask, each individual article might not convince you that that's the case. But taken together, if that's what you are repeatedly hearing, maybe on podcasts, or seeing in videos online, or reading in articles, or being told in Facebook groups, then it starts to become something that is true for you. And on a personal level, for individuals, that can be really dangerous; in the case of coronavirus, it means you might expose yourself to serious risk.


Christian Amyx Yes. And especially in the past, Russia's Cold War strategy of misinformation remains to this day one of its most effective weapons, as we've seen with its manipulation of elections around the globe. Right?


Tom Ascott Right. So, I think something we've got to remember when we talk about this kind of information warfare is that it's not new. This is something that's been happening for a long time. It's just traditional propaganda.


What's different now is that, one, you can micro-target people. The way that social media has traditionally worked, through advertising models, is that if you go on Instagram now, you will see personalised adverts. Anyone who uses social media will know that the adverts they get are tailored to them. Sometimes that's great; you might find stuff on Facebook Marketplace or something like that. But the flip side is that these social networks know you really well, and they will help advertisers set up campaigns to target specific individuals, or groups of individuals with specific interests. And that means that what before you might have thought of as generalised propaganda, like leaflet dropping, now becomes specifically targeted at you.


So, think about that classic idiom: 'advertising doesn't affect me, it might affect other people, but it doesn't affect me'. That's because you shouldn't be able to tell when you're being effectively advertised to. It's the same thing here. People think that disinformation doesn't affect them, but it does, and you're being targeted with the type of disinformation that specifically speaks to you. If you are very medically savvy, you might not be targeted with coronavirus disinformation, but there are other campaigns you might be targeted with. For example, as much as there is a valid and important dialogue happening right now in America around civil rights issues, there's also a lot of bad-faith disinformation going around. So you might not be a victim of coronavirus disinformation, but maybe you are a victim of another type, for example, disinformation being spread around American civil rights. Or here in the UK, we had a big problem with 5G masts being destroyed, and again, this might be a different group of people, who are afraid that 5G is being used to either spread coronavirus or give people cancer or some other thing. Each one is a different conspiracy theory targeting a different type of person.


Christian Amyx And just to go off of what you were saying, are there certain factors that leave people too oblivious to know that they're even being misinformed? And are there other big factors in information warfare that are mostly just disregarded by the public?


Tom Ascott Yeah, I think when you say 'are people too oblivious?', that's quite a critical view of how people need to approach media and social media. I think the truth is that while people can be oblivious, we have to question whose responsibility it is to crack down on this. One answer might be that people need to be much more aware online. This is called media literacy, or critical thinking, and we should absolutely encourage that. But the flip side is that people are vulnerable psychologically. The more that someone is given the same message, even if they know it's not true, the more likely they are to believe it. People are also more likely to believe an idea is true the first time they encounter it, and if messages come from what people perceive to be legitimate sources, they are more likely to believe those too. These are all vulnerabilities that disinformation campaigns can seek to utilise. So it's not just a question of people believing anything they read online; we're seeing very sophisticated attacks which take advantage of people's latent psychological traits.


So I think the first thing to consider is: where does this stop? Is it with the consumer, like you or me? Is it with the platform, like a social media network? Or is this something that states need to address, something we should be looking to policymakers to solve? I'm sure you know, like everyone else, that when you use your phone there are a lot of psychological tricks, built into the design of everything from apps to adverts, to take advantage of you. When you get an alert on your phone, you'll see a red notification, because red is a colour which attracts attention. You might get notifications at sporadic times, which are designed to keep you inside an app ecosystem. The way that you swipe down to refresh can be designed to mimic the handle on a slot machine. The ads and the content that you're served are delivered in a way that is algorithmically designed to take advantage of you when you're most likely to buy something, most likely to think about leaving an app, or most likely to engage with a particular piece of content. So it's your willpower being tested against teams of psychologists who are doing their best to keep you addicted to using your smartphone. And when they're trying to keep you in these apps, the content they're serving to keep you engaged can be this disinformation.


So it's really easy to say, 'well, people need to be more savvy online', but if you're trying to spread conspiracy theories on YouTube, YouTube is invested in helping you do that. They want you to spend as much time on the platform as possible. Over the last couple of years we saw a huge boom in conspiracy theories, because YouTube would automatically recommend other videos which were using catchy titles, thumbnails or clickbait to keep people in an ecosystem. So once you start looking for information on coronavirus conspiracy theories, that might snowball into other conspiracy theories that the algorithm understands you might also be interested in, and then it's really hard to get out. I think everyone has experienced spending more time online than they would like, or having to set themselves cut-off times, because they end up spending so much time on the Internet.


Christian Amyx I agree when you say that it has become very complicated, because over time people have learnt how to put little things here and there that the mind is just automatically attracted to, which I do get. And more specifically, we have been seeing more people fall for misinformation on Facebook and other media, misinformation that is often told to our faces by the elected officials that are running our country.


Tom Ascott Yeah, everyone is using social media to campaign right now, and as much as it might sometimes feel like this is a specifically right-wing issue, everyone has played this game. A lot of what Trump's campaign did, and a lot of what Cambridge Analytica is accused of doing, comes right off the back of Obama's first campaign and the way it utilised social media to find and target voters. So this has been going on for over a decade now, and it's happening on both sides of the aisle. The troubling thing is that it's happening more. It's starting to feel like any candidate can win an election at any level if they know how to play this game; if they're really savvy with their digital communications and at taking advantage of these platforms, it feels like it's anyone's game. The messaging almost doesn't matter; it has become somewhat irrelevant, which means that we're also seeing a lot weaker policy proposals than we might have in the past.


Christian Amyx Yeah, that's a great point.


Christian Amyx Moving on to a slightly different category, what is a deep fake and how is it being used in information warfare?


Tom Ascott OK, so if we were just talking about the channels through which this information can get to you, thinking about YouTube and social media sites like Facebook, or even peer-to-peer networks like Facebook Messenger or WhatsApp, then deep fakes look like the future of the kind of content that might be used for disinformation campaigns. At the moment, you might get dodgy articles or homemade videos, but what deep fakes allow you to do is put someone's face, and increasingly someone else's voice, onto another person's body. That means you can make videos that show people doing things that they've never done, or saying things that they haven't said. And in much the same way that information warfare is very similar to traditional propaganda, this is what we've seen with things like Photoshop in the past, and we know how that's gone. It is very difficult now, looking at content online, especially picture content, to know what's real and what's not.


Obviously, no one's going to look at a picture of a unicorn or Bigfoot and be convinced. But think about the filters that you use on Snapchat or Instagram or TikTok. Often they can warp reality. It doesn't have to be in a significant way; just things like apps that make your skin look better, or your lips look fuller, or even brighten the lighting. That's a form of photo manipulation that can be very convincing, especially when it's quite subliminal and minimal in effect. Deep fakes might seem very obvious to spot if it's a video of Trump on a motorbike running over protesters. But when it becomes very minimal, it might be a deep fake of a local politician, or even a teacher at a local school. That will be harder to spot, and it won't get the kind of coverage from mainstream news organisations that we rely on to highlight deep fakes.


And the more that every one of us, like you or me or anyone listening to this, has their face online, the easier it will be to make a deep fake of them. In the past, before social media, it was very hard to find a lot of footage of someone in good lighting, from a lot of different angles, to make a mask for a deep fake. But now we're seeing quite advanced algorithms which rely on only one photo to animate it. That isn't very convincing now, but a lot of people online will have hundreds of photos of themselves tagged, and if they're using Instagram Reels or TikTok, there might be hundreds of videos of them. With that data, you can start to make a mask, and that means that these people are vulnerable to really quite sophisticated deep fake attacks.


Christian Amyx OK. And so with these new technologies and algorithms that have been developed for deep fakes, it's now harder to tell what's real and easier to make one. What is your conclusion on it, and what should the public do about it?


Tom Ascott I think, unfortunately, that there almost isn't a conclusion. It's just something that we know is coming, and we're going to have to think very carefully about how we approach content online. It might be that, in the future, the Wild West Internet model we have now is just unfathomable for a lot of people. It ties into a lot of complicated issues about freedom of speech. Deciding when these deep fakes should be seen as a really valuable artistic and creative endeavour, and when they become malign and malicious, is going to be a really hard line for a lot of platforms to draw.


I think we know there are some obvious places where it can be abused, and there it's very easy to separate the wheat from the chaff, as it were.


But there is going to be a grey zone where important ideas of comedy, or satire, or fair use are going to be really tested. Unfortunately, I don't know that we have an answer right now to stop this. Think of your podcast: there are apps that can mimic voices.


If someone consistently made a fake version of your voice, or fake episodes of your podcast, to discredit you and make fun of you, how would you go about fighting that? How could you go about maintaining your reputation if you felt that someone was using deep fakes to discredit you, or to make it look like you were not producing the content that you wanted to be producing?


Christian Amyx Hmm, well, that's really big. Thank you for sharing your very strong, well-thought-out points on the show today.


Tom Ascott You're welcome. Any time.
