AI and Disinformation: An Algorithmic Assault on Democracy


This report originally appeared in the publication Modern Britain: Global Leader in Ethical AI for the Young Fabians on 23 September 2020


Disinformation is already altering our political landscape


Disinformation has already helped to shape more of our significant political choices than one would like to admit, and the consequences of such a sharp rise in information warfare campaigns are only starting to be fully understood. Disinformation has played a role not only in recent UK elections; the advancement of AI-driven disinformation technologies also has worrying consequences for developing countries. Recently, an allegedly AI-created fake video helped spark a failed military coup in Gabon. This technology has the extremely worrying potential to change the political landscape anywhere in the world.

This paper follows the definition used by the EU Commission's High Level Expert Group on Fake News and Online Disinformation: disinformation is 'false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit'. Disinformation, then, is not news that is merely distasteful or disagreeable. Nor is it information peddled by self-styled experts or theorists which, while untrue, they believe to be true; that is misinformation. In this context, information warfare is a way to disrupt an adversary's ability to collect, process, and disseminate information. Disinformation campaigns are used in information warfare to meddle with what people think and to manipulate their opinions. The longer disinformation persists, the more of a problem it creates. It works only because it exploits a simple but core democratic notion: that what one reads online can be trusted to be true. This is the notion that has allowed the Wikimedia Foundation to flourish. Fundamentally, disinformation corrupts the well of human knowledge - and it is not slowing down.

Disinformation campaigns can currently prove beneficial for social media platforms, which profit from unclear and lax rules that allow for opaque political advertising. Such platforms build up detailed user profiles using thousands of data points. Even if these users remain anonymous to advertisers, their data is so comprehensive that adverts, or disinformation, can be tailored specifically enough to be incredibly convincing. There are also deep concerns that advertising and social media data are enough to unmask supposedly anonymous users.


Disinformation predates AI


This is the information warfare arena into which disinformation is currently being deployed: to move the pendulum on key strategic decisions by manipulating a trusting online culture, using either paid or organic groups. The disinformation itself is made manually and is posted online by trolls. In his 2019 report on Russian interference in the 2016 US election, Special Counsel Robert S. Mueller III defined a troll as a user who will 'post inflammatory or otherwise disruptive content on social media or other websites'.

For the most part, disinformation is still produced in 'troll farms' - offices where real people clock in every day, sit down at a computer and write disinformation online as part of a coordinated campaign. These 'trolls' are paid and treat this like a normal job. Any website that allows users to submit content or comments is a site where they can spread disinformation. There are two inefficiencies in the way that disinformation is currently being produced: quality and speed. In 2016, researchers from ZeroFOX tested SNAP_R (Social Network Automated Phishing with Reconnaissance) and found that it was six times faster at finding and engaging targets on Twitter, and five times more effective at getting them to click on malicious links, than a human counterpart.


AI can supercharge disinformation


AI can make great leaps in eradicating these inefficiencies, becoming a faster and more efficient tool for spreading disinformation. The AI-powered algorithms used for targeting are already sophisticated: the online tools available to advertisers to find and market to users are the same as those used in information warfare. In this instance, however, the bots themselves do not have to be particularly smart. Bots, or software-controlled accounts, range from the simplest of designs to more sophisticated ones, and there is a low marginal cost to having more bots on a network. At present, bots spreading disinformation do not have to be sophisticated because the disinformation itself isn't.

The future, however, may bring more advanced forms of disinformation, increasingly in tune with individual user data for content tailoring. Deepak Dutt, the CEO of mobile security company Zighra, opined that AI will be used to 'mine large amounts of public domain and social network data to extract personally identifiable information like date of birth, gender, location, telephone numbers, e-mail addresses, and so on'. This information can then be analysed by AI tools to create disinformation that is tailored to individuals. Such an approach would be effective because the psychological impact of disinformation is bolstered through repetition: the more a fine-tuned statement is repeated, the more likely social network users are to believe that it is true. This is called the 'illusory truth effect'. An effective way to spread disinformation - and to ensure users' repeated exposure to it - is message boosting. Bots do not need to send new and unique pieces of disinformation; instead, they can simply retweet or share existing pieces of disinformation that fit the same narrative.


Deepfakes and disinformation


Simplistically, disinformation campaigns can be split into two basic projects: generating content intended to manipulate an audience, and distributing that content. AI will greatly improve campaigns' ability to distribute content, but it also gives them a greater ability to create it. The future of this content is deepfakes. Deepfakes are videos made by AI that replace the face, and sometimes the voice, of one person with another's. They are highly realistic, and can be easier to make than any previous form of video editing or manipulation.

Currently, the capacity of trolls who wage information warfare is limited by their ability to create visual propaganda. This includes shoddily captioned memes or photoshopped images that are often of very low quality. Memes can be thought of as captioned images one might encounter on social media sites - for example 'Pepe the Frog', the green cartoon frog that is often seen online. If trolls are unable to create either of those, they have to write text posts, and Russian trolls can be betrayed by spelling, grammar or taxonomic errors. Deepfakes, however, offer the potential for AI to generate the propaganda for them.

The spread of memes and meme culture gives an insight into how one might anticipate deepfakes spreading. Memes have travelled from online image boards such as 4chan to mainstream websites such as Facebook, Instagram and Twitter. Presently, it is easy to find deepfakes on image boards that depict unethical content: 96% of deepfakes are non-consensual pornography. Unless social media sites intervene, it is only a matter of time until more of this type of content is easily found on mainstream sites, much in the same way memes from image boards have become a staple of social media sites.
Deepfakes will only get better over time: they will become more convincing to the human eye, require less source footage, and be faster and cheaper to produce. They are yet to reach their full potential, and there is plenty of private funding interested in advancing the technology. As deepfakes get easier to make, their different applications will be better understood by each sector - and they have already made their way into politics.

In Gabon, a deepfake has already inspired a failed coup. President Ali Bongo left Gabon after suffering a stroke. Months later, and after rumours of his death had started to circulate, the Vice President announced that President Bongo had suffered a stroke. A video address was then released showing President Bongo in good health - a video alleged to be a deepfake. Its 'oddness' created doubt, and the military cited that oddness as evidence that President Bongo was not well and launched an unsuccessful coup.

During the Brexit referendum, social media platforms allowed each campaign to segment its audience into groups interested in different, specific issues. Each group could be talked to individually, and exclusively of other groups. If you cared about animal rights, you could be served adverts about how Brexit might advance animal welfare. Soon deepfakes will allow those adverts to be AI-generated messages from politicians, or other recognisable figures, designed to target ever smaller and more specific audiences. The numerous legal and productive applications of deepfakes, from mobile apps like Snapchat to blockbuster movies, make it unreasonable to suggest that their creation ought ever to be illegal.

A country's ability to fight disinformation quickly and thoroughly will soon become a metric of how likely it is to remain stable. While the UK currently has the capacity to fight disinformation campaigns, deepfakes will allow for far more realistic and convincing fake videos, which will require more capable infrastructure to fight. Additionally, the ability to fight disinformation ethically whilst preserving civil liberties will be a key test for liberal democracies such as the UK.


The AI arms race


As much as AI can be used to create and spread disinformation, it can also be used to fight it. The latter is, however, much more difficult. The first target of anti-disinformation campaigns might be what one considers 'inauthentic activity', such as spam posts by troll farm accounts. While some social networks, like Facebook, only want authentic users who represent real people, platforms like Twitter do not share this expectation. On Twitter, there is no expectation that users use their real names - a fact highlighted by the many satirical accounts that spoof real people. To remove or limit these accounts would be a direct blow against what the platform's users enjoy about it. Instead of restricting satirical and 'anonymous' accounts, Twitter must look for behavioural patterns. These can point to coordinated inauthentic activity and are often signs of disinformation campaigns.

AI can help to identify word patterns that can be indicators of disinformation and bot networks. As a technology, however, AI pattern recognition is still developing, and it does not currently provide a complete solution for detecting disinformation. It still relies on human users to identify disinformation for training data and to make more complex decisions about the content the technology flags, in order to avoid false positives. This human labour is often extremely manual, repetitive, and outsourced to developing countries. Algorithms are already being used to detect different types of content - email spam filters, for example, are incredibly efficient at detecting spam emails - and advancements in the underlying technology will enable such detection tools to thrive further.
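The behavioural-pattern approach described above can be illustrated with a toy sketch. The code below is a minimal example of one simple heuristic, not a description of any platform's actual system: it flags a message as possible coordinated boosting when near-identical text is posted by several distinct accounts within a short window. The `normalize` and `flag_coordinated` helpers and their thresholds are hypothetical.

```python
from collections import defaultdict

def normalize(text):
    # Crude fingerprint: lowercase, strip punctuation, collapse whitespace.
    kept = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return " ".join(kept.split())

def flag_coordinated(posts, min_accounts=3, window_seconds=3600):
    """posts: iterable of (account, timestamp_seconds, text) tuples.
    Returns the set of text fingerprints posted by at least
    `min_accounts` distinct accounts within `window_seconds`."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[normalize(text)].append((ts, account))
    flagged = set()
    for fingerprint, hits in by_text.items():
        hits.sort()  # order by timestamp
        for i, (start, _) in enumerate(hits):
            # accounts that posted this text in the window starting here
            accounts = {a for t, a in hits[i:] if t - start <= window_seconds}
            if len(accounts) >= min_accounts:
                flagged.add(fingerprint)
                break
    return flagged
```

A real system would use fuzzier text similarity and account-level features (creation date, posting cadence, follower graph), but the core idea is the same: look for correlated behaviour across accounts, not at content alone.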


Human moderation


Moderation of explicit content is difficult. The line between art and pornography, as established in the United States Supreme Court by Justice Potter Stewart, was simply: 'I know it when I see it'. In many cases, human moderation is still used to identify content that breaches social media guidelines, including disinformation. But human moderation has drawbacks, including its psychological cost. Moderators are frequently exposed to images of a graphic nature, and hate speech is among the most common content in disinformation campaigns; Mueller's 2019 report, for instance, highlighted how race and gender were often used as ways to exploit divisive issues in contemporary American politics. The psychological impact of constant exposure to material of this nature is demonstrated by the fact that Facebook moderators exposed to this type of graphic and hate-fuelled content are now suing the platform after developing Type 2 PTSD. Moreover, humans exposed to disinformation, even as moderators, remain subject to the illusory truth effect and may come to believe the content they are exposed to. In an investigation for The Verge, Casey Newton found that some moderators at Facebook had started to believe that the Earth is flat, to question whether 9/11 was a terrorist attack, and to deny the Holocaust.


AI is not without its weaknesses


There are significant drawbacks to using AI to fight disinformation, including the underlying issues associated with the automatic moderation of free speech. Any attempt at using AI ought to err on the side of caution and not be overzealous, as dealing with disinformation online is a persistent problem that cannot be solved, only stemmed. False positives from AI tools are a threat to platforms themselves: over-moderation is antithetical to sites that thrive on user-generated content. Perspective, the machine learning tool that Alphabet and Google use to score comments, is, unlike many of Google's products, not an open tool. There are fears that tools used to moderate speech online - such as Perspective - can be misused or biased. For example, such a tool could be used by authoritarians to control speech, or by a malicious actor to discriminate against a particular minority. If Google were more open about how the tool worked, the tool itself could be manipulated into flagging the wrong kind of speech online: fed the wrong input data, it could easily flag dissent instead of disinformation. And because disinformation affects the general public, the status quo means the general public is putting its trust in Google.


Policy can forge a better future


There have already been serious impacts from disinformation campaigns. The coronavirus pandemic is an example of how any situation can be used as narrative fuel for one. Early in the pandemic, conspiracy theories spread that 5G towers were in some way responsible for either the origin or the spread of coronavirus. These conspiracies escalated to the extent that 5G towers were burned down by individuals who believed they were protecting themselves. The scope of the damage these campaigns could do in the foreseeable future, through the use of AI, is only increasing. It is imperative that the government takes action through policy to contain the impact of these campaigns, and that it considers the benefits of using AI to fight disinformation.

With this in mind, policymakers must be careful not to stifle innovation in AI or bots. AI is a powerful tool for disseminating information and can be used for the public good as an 'early warning system for computational propaganda', stopping disinformation campaigns before they go viral. Many online bots are also useful and are not weaponised to spread disinformation. The most popular bots are often satirical and humorous: Dylan Wenzlau, founder of meme website Imgflip, used a natural language processing algorithm to generate completely artificial memes that became popular online, and other playful bots provide services like tweeting emoji aquariums or randomly generating soft landscapes or star fields. There are more serious uses, too, such as bots that tweet whenever a Wikipedia edit is made from a New York Police Department IP address or from the Houses of Parliament. These provide a level of accountability. It is not hard to imagine journalists or investigators benefiting from services that give them real-time updates of open source information as it happens or becomes available.
It would not be prudent to suggest that these bots, or any bot that does not represent a real or authentic person, should in some way fall foul of the law; some bots must be protected. But AI systems do not work independently of people, and a balance must be struck over the role of humans in AI. From selecting and generating training data to assessing the work an AI has produced, humans have a role to play. Policymakers must ensure that humans continue to double-check the results AI produces when moderating free speech online, and this must be balanced against the human and psychological cost moderators face in being exposed to disturbing and misleading content. If disinformation is seen as truly opposed to the roots of democracy, then fighting it can be viewed as a patriotic duty that must be supported.
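The accountability bots mentioned above rest on a simple piece of logic: anonymous Wikipedia edits are attributed to an IP address, which can be matched against an institution's published address blocks. Below is a minimal sketch of that matching step, assuming a made-up watch list; the institution name and address ranges are illustrative (IETF documentation ranges), not the real bots' configuration.

```python
import ipaddress

# Illustrative watch list; a real bot would load the institution's
# actual published address blocks and poll Wikipedia's recent-changes feed.
WATCHED_RANGES = {
    "Example institution": [
        ipaddress.ip_network("198.51.100.0/24"),  # documentation-only range
        ipaddress.ip_network("203.0.113.0/24"),
    ],
}

def match_edit(editor_ip):
    """Return the institution whose range contains `editor_ip`,
    or None if the edit came from elsewhere."""
    ip = ipaddress.ip_address(editor_ip)
    for name, networks in WATCHED_RANGES.items():
        if any(ip in net for net in networks):
            return name
    return None
```

When a match is found, the bot simply tweets a link to the edit's diff; the same pattern works for any organisation whose outbound address ranges are public.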


Bibliography


European Commission, A Multi-Dimensional Approach to Disinformation: Report of the Independent High Level Group on Fake News and Online Disinformation (Luxembourg: Publications Office of the European Union, 2018)


Special Counsel Robert S. Mueller, III et al., Report on the Investigation into Russian Interference in the 2016 Presidential Election, March 2019.


Christian Davies, ‘Undercover Reporter Reveals Life in a Polish Troll Farm’, The Guardian, 1 November 2019, <https://www.theguardian.com/world/2019/nov/01/undercover-reporter-reveals-life-in-a-polish-troll-farm>, accessed 3 May 2020.


John Seymour and Philip Tully, ‘Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter’, <https://www.blackhat.com/docs/us-16/materials/us-16-Seymour-Tully-Weaponizing-Data-Science-For-Social-Engineering-Automated-E2E-Spear-Phishing-On-Twitter.pdf>, accessed 27 April 2020.


George Dvorsky, ‘Hackers Have Already Started to Weaponize Artificial Intelligence’, Gizmodo, <https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425>, accessed 27 April 2020.


Lynn Hasher, David Goldstein and Thomas Toppino, ‘Frequency and the Conference of Referential Validity’, Journal of Verbal Learning and Verbal Behavior (Vol. 16, 1977), pp. 107–12.


Robert Chesney and Danielle K. Citron, ‘Disinformation on Steroids’, Council on Foreign Relations, 16 October 2018 <https://www.cfr.org/report/deep-fake-disinformation-steroids>, accessed 3 May 2020.


Sara Fischer, ‘How reporters outsmart the internet trolls’, Axios, 17 September 2019, <https://www.axios.com/reporters-trolls-news-media-misinformation-ae4a6e2a-1266-49bd-838e-d519588c66cf.html>, accessed 9 May 2020.


Giorgio Patrini, ‘Mapping the Deepfake Landscape’, Deeptrace Labs, 7 October 2019, <https://deeptracelabs.com/mapping-the-deepfake-landscape/>, accessed 18 May 2020.


Ali Breland, ‘The Bizarre and Terrifying Case of the “Deepfake” Video that Helped Bring an African Nation to the Brink’, Mother Jones, 15 March 2019, <https://www.motherjones.com/politics/2019/03/deepfake-gabon-ali-bongo/>, accessed 3 May 2020.


Peter Pomerantsev, This is Not Propaganda: Adventures in the War Against Reality (London: Faber and Faber, 2019).


Evelyn Douek, ‘Senate Hearing on Social Media and Foreign Influence Operations: Progress, But There’s A Long Way to Go’, Lawfare, 6 September 2018, <https://www.lawfareblog.com/senate-hearing-social-media-and-foreign-influence-operations-progress-theres-long-way-go>, accessed 27 April 2020.


Louk Faesen et al., ‘Understanding the Strategic and Technical Significance of Technology for Security Implications of AI and Machine Learning for Cybersecurity’, The Hague Centre for Strategic Studies (HCSS) and The Hague Security Delta, 28 August 2019.


Jacobellis v. Ohio, 378 U.S. 184 (1964), <http://cdn.loc.gov/service/ll/usrep/usrep378/usrep378184/usrep378184.pdf>, accessed 27 April 2020.


David Gilbert, ‘Bestiality, Stabbings and Child Porn: Why Facebook Moderators Are Suing the Company for Trauma’, Vice, 3 December 2019, <https://www.vice.com/en_uk/article/a35xk5/bestiality-stabbings-and-child-porn-why-facebook-moderators-are-suing-the-company-for-trauma>, accessed 27 April 2020.


Casey Newton, ‘The Trauma Floor’, The Verge, 25 February 2019, <https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona>, accessed 27 April 2020.


Emily Dreyfuss, ‘Hacking Online Hate Means Talking to the Humans Behind It’, Wired, 8 June 2017, <https://www.wired.com/2017/06/hacking-online-hate-means-talking-humans-behind/>, accessed 27 April 2020.


BBC News, ‘Mast Fire Probe Amid 5G Coronavirus Claims’, 4 April 2020, <https://www.bbc.co.uk/news/uk-england-52164358>, accessed 3 May 2020.


Samuel Woolley, The Reality Game: How the Next Wave of Technology Will Break the Truth, (New York, NY: PublicAffairs Books, 2020).


This Meme Does Not Exist, <https://imgflip.com/ai-meme>, accessed 3 May 2020.


Emoji Aquarium, <https://twitter.com/EmojiAquarium>, accessed 3 May 2020.


Joseph Brogan, ‘Some of the Best Art on Twitter Comes from these Strange Little Bots’, Ars Technica, 6 July 2017, <https://arstechnica.com/information-technology/2017/06/the-art-bots-that-make-twitter-worth-looking-at-again/>, accessed 3 May 2020.


NYPD Edits, <https://twitter.com/nypdedits>, accessed 3 May 2020.


Parliament WikiEdits, <https://twitter.com/parliamentedits>, accessed 3 May 2020.


Cailin O’Connor, ‘The Information Arms Race Can’t Be Won, But We Have to Keep Fighting’, Aeon, 12 June 2019, <https://aeon.co/ideas/the-information-arms-race-cant-be-won-but-we-have-to-keep-fighting>, accessed 27 April 2020.
