This is a transcript of Episode 1 of Synthetic Society. It has been edited for brevity and clarity.
Tom Ascott: Welcome to the first episode of Synthetic Society, the new weekly podcast that offers a deep dive into AI, technology and internet culture. I'm your host, Tom Ascott, and each week I will talk to guests at the cutting edge about the latest developments in their field. This show is in association with the Online Harms Foundation and the Fabian Society. The Online Harms Foundation is dedicated to providing tech companies with the tools and resources they need to effectively counter online harms on their platforms. Find out more about the work the Online Harms Foundation is doing by going to onlineharms.org or following @onlineharms on Twitter. Synthetic Society is also part of the Fabian Society's #AI2021 campaign, a public engagement campaign to get the general public learning, thinking and talking about AI. Have we sparked any thoughts for you? Contact us on our website, AI2021.org. For our first episode, our guest is Lord Clement-Jones, and we'll be talking about whether technology is becoming more polarised, how the government can lead on regulation of the internet and artificial intelligence, and the relevance of culture and the arts in a digital space. Thank you so much for joining us today.
Lord Clement-Jones: Pleasure. Very good to see you, Tom.
Tom Ascott: I wanted to start by looking at your work with the All-Party Parliamentary Group on Artificial Intelligence. It's a nonpartisan group that works across the board. It seems to me that the tech sector has recently become increasingly politicised. Is this something that concerns you, that AI could become more of a partisan issue?
Lord Clement-Jones: Now that's quite interesting, because I don't think politicised is necessarily the right word. I think that there may be different approaches to regulation, and that's why I'm so keen on getting the right framework for regulation in terms of risk and so on; we can go into that a little bit later. But I don't think it's political as such. I mean, for instance, I have very good relationships with parts of government, like the Office for AI, and with regulators like the Information Commissioner's Office. So, I think, actually, we do have a common goal. And the government has made it very clear that ethical, trustworthy AI is what they're aiming for. So, I would say that we may disagree about the means, but I don't think we're disagreeing about the goal.
Tom Ascott: And that's very positive to hear. Something that the group has produced is the report 'AI in the UK: ready, willing and able', which I believe came out in 2018, and which has gone on to be quite an influential paper in the field. Could you talk to us about some of the impacts the paper has had that you've been particularly satisfied with? And have there been any impacts of the paper that have surprised you?
Lord Clement-Jones: Yes, first of all, I must make it clear, as you did in the introduction, that there are two groups: the all-party group, which has its own agenda and is cross-party, and the select committee, which has a limited life, as ours did, and which produced a follow-up report very recently. The original report from the select committee was 'AI in the UK: ready, willing and able', and it's very interesting, because the ripple effect from a select committee report can last for quite a long time. That was certainly the case with our AI report in terms of setting an overall agenda. What we had just before the select committee report was the Hall-Pesenti review, which really talked about incentives for investment in AI, AI development, and so on in the research area. So, it was much more about, if you like, the opportunities. We covered those to some extent in our House of Lords report, but actually we focused quite heavily on the risks and the ethical aspects that were needed in order to mitigate those risks. And I think one of the things that absolutely has been the case is the way that agenda has influenced the private sector in particular; if anything, we've made less progress in the public sector in some respects, because different government departments have different sorts of frameworks and so on. But the private sector, the commercial sector represented by people like Tech UK, has been extremely willing to adopt this whole idea of trustworthy AI, ethical principles and the governance of AI. And that's had a knock-on effect on the way the regulators, such as the Information Commissioner's Office, have looked at it as well. So, I think that's the surprising thing: in some ways we were kicking the tyres on government policy, and they've been reasonably agreeable to our conclusions, but actually it had a bigger impact on the private sector, in my view.
Tom Ascott: I think that's a really interesting point; certainly in the past couple of years, ethical AI, and the ethical aspects and implications of AI, have become a much more prominent part of the field. One of the recommendations from the paper was for the government to build trust in AI, and you mentioned there that you are concerned that perhaps that hasn't happened as much as you'd like to have seen. I think there were some events in 2020 where this also came to the fore. One of those was the GCSE and A Level grading controversy, where an algorithm was used to determine grades for students after their exams were cancelled. Do you think this government is doing a good enough job in building trust in this area?
Lord Clement-Jones: I think some of the agencies of government are, but I don't think there's enough central push on this. For a start, they never adopted an overarching set of principles. We talked about five principles; you can look at the OECD principles, you can look at the G20 principles, you can look at what the European Union put in place, but our government has never explicitly said, 'these are the principles'. And it doesn't matter whether you look at the Committee on Standards in Public Life's report, which looked at AI and how it was being adopted by government, or whether you look at what the Information Commissioner's Office has been doing, or indeed what the Centre for Data Ethics and Innovation has been writing about: there seems to be no central compliance agenda in government. It would take something like the Cabinet Office or a Cabinet committee to make sure that happened, and we don't see that. And when we did our follow-up report, which we entitled 'No Room for Complacency' and which came out, as you saw, in December, we made that point very strongly, because a lot of the right ingredients are there. We have some fantastic institutions: the Office for AI has some really good people working for it, and the Alan Turing Institute is working extremely well. But it's bringing all those elements together to make sure that we're actually delivering that trustworthy AI that is really, really important. And let's face it, the first adopter of ethical principles should be government. You quite rightly say that we had a problem with algorithms in the Ofqual assessment last year. But we've got people who are taking, and have taken, judicial review against the Minister over the algorithm used to decide where new housing is located. We've got people worried about the Home Office using algorithms, and so on. Now, if you're going to create public trust, you really do need to get it right in government, and we're not there yet. The other aspect, which the government published something on only two days ago, rather belatedly, is live facial recognition. That's another very big area where we absolutely need trustworthiness.
Tom Ascott: Do you think the best way for government to build trust in artificial intelligence is through intelligibility? Many algorithms that we see now are black boxes, which is to say they are impenetrable and often cannot be understood from the outside, even when their code is shorter or simpler than that of transparent, white-box algorithms. This has presented an issue in the past for understanding AI, and it continues to do so. Do you see this as being a driver of ethical behaviour, and especially a driver of eliminating bias?
Lord Clement-Jones: I think it's really important to have that degree of explainability and transparency. And there's no excuse, actually, because if you look at the IEEE standards for ethically aligned design and you talk to AI developers, if you get it right up front, then there's no need for a black box. It's only if you don't really think about the design elements that you do get that black box. People have talked about the conflict, if you like, between transparency and accuracy, but I think that as we go along there's less and less of that, if you get the design right. So again, you can specify these things, but I'm a great believer in not overregulating, so don't get me wrong: I'm not trying to stamp my tiny foot and get everybody regulated and having to do this, that and the other. I would like to see the right behaviour, but without having to regulate except in the high-risk cases. And you could debate what the high-risk cases are. Are they things like deepfakes, or things like live facial recognition, or things like the use of algorithms in particular sectors, financial services I would say, where you don't particularly want your credit rating to be done by an algorithm which is wholly opaque? So, I'm very keen on, if you like, the risk-based, proportionate approach to regulation.
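To make that design point concrete, here is a minimal sketch, not drawn from the conversation itself; the scikit-learn library, synthetic data and feature names are all illustrative assumptions. It shows an intrinsically interpretable model whose full decision logic can be printed, in contrast to an opaque black box that can only be queried for its outputs:

```python
# A minimal sketch of "explainable by design": a small decision tree
# whose complete decision logic is inspectable as a set of rules.
# The data and feature names are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical credit-scoring features: income, debt ratio, missed payments
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
# Unlike a deep ensemble or neural network, every path the model can
# take is printable and auditable:
print(export_text(model, feature_names=["income", "debt_ratio", "missed_payments"]))
```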
Tom Ascott: I think my personal concern on this matter has to be that minorities, especially women, people of colour and other BAME groups, seem to be the victims of bias overwhelmingly, again and again, when it comes to AI. And as a result we're not seeing action that is fast enough or strong enough. Do you think that the lack of representation of minority individuals in these sectors is causing a problem with regulation?
Lord Clement-Jones: Yes, I think that's one of the absolutely fundamental problems, and I don't see enough evidence of enough attention being paid to it. This was a theme that ran all the way through the 'AI in the UK' report three years ago. It's something we highlighted; it's something where we said, 'look, you're going to have bias in the algorithmic decision-making, in the data sets it's trained on, in the decisions that are made and so on, unless you have people who are sensitive to the fact that bias may be inherent in the data and in the algorithmic decision-making'. So, as to the workforce and the people involved, it's absolutely crucial that we get a much better and more diverse workforce in that respect. It's one of the big agenda items which I don't think has yet been tackled. People like Tech UK, and I know some parts of government, have this in mind, but I think we need a much more concerted approach, because otherwise algorithms are seen as oppressive. The great belief that I have is that AI should be our servant, not our master; but it's the other way around if there are biased algorithms making decisions without a human in the loop, for instance, on data which goes back years and just perpetuates prejudice.
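The kind of bias described here can be checked for directly. As an illustrative sketch (the predictions and group labels below are invented, not taken from the interview), one of the simplest audits is demographic parity: comparing a model's favourable-outcome rate across demographic groups:

```python
# A minimal demographic-parity check: compare the rate of favourable
# decisions across demographic groups. All data here is invented.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])  # 1 = favourable decision
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in np.unique(groups):
    rate = predictions[groups == g].mean()
    print(f"group {g}: favourable-outcome rate = {rate:.2f}")

# A large gap between the groups' rates is one signal (not proof) that
# the model, or the historical data it learned from, encodes bias.
```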
Tom Ascott: I'd like to shift gears a bit away from AI and talk about your work on online harms. During the recent online harms consultation, you noted specifically that social media companies have failed to tackle the spread of fake news and misinformation on their platforms. On the one hand, it seems to me the algorithms are creating these problems of misinformation; on the other hand, it does also appear that the only way we're going to be able to tackle this is through AI regulation. Can AI solve its own problems?
Lord Clement-Jones: Well, I think yes, it can in some ways, because the thing that I'm very keen on is the Avaaz agenda of detoxing the algorithm. And you can only detox an algorithm in a social media platform which is directing misinformation in different directions, amplifying misinformation and, in many respects, being actively deceptive, if you've got the power of audit: the ability to inspect, and therefore a level of transparency that the regulator is able to get access to. And I've made this point: for all the duty of care there may be, for all the online harms that you may legislate against, unless you actually have the ability to say to the social media platform, 'I know how this algorithm is working, and it's working in the wrong way, and you've got to change it', then frankly I don't think the powers of the regulator mean anything at all. I've made that point on a number of occasions. It's not all about content; it's about, in a sense, how the traffic is being directed. That's the Cathy O'Neil agenda; she wrote a very good book about it called 'Weapons of Math Destruction', which was quite a good pun. But Shoshana Zuboff has also made the point that it's not about content, and not even all about the algorithm half the time; it's about the behavioural data which is actually used by the algorithm. So, you have to be vigilant on a number of fronts. This is highly complex technology. It's got to be transparent; we've got to know much more about how it's operating. But of course the commercial model of the platforms is entirely built on opacity. So, we've got to counter that.
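The 'power of audit' described here can be illustrated with a toy measurement. As a hypothetical sketch (the content labels, recommendation log and metric are all invented; a real audit would need regulator access to platform data), an auditor could compare how often flagged misinformation is recommended relative to its share of the available content, to quantify amplification:

```python
# A toy algorithmic-amplification audit: compare misinformation's share
# of recommendations with its share of the content pool. An amplification
# factor above 1 means the recommender over-serves it. All data invented.
from collections import Counter

content_pool = {  # item id -> hypothetical moderation label
    "a": "misinfo", "b": "news", "c": "news", "d": "misinfo", "e": "sport",
}
recommendation_log = ["a", "a", "d", "b", "a", "d", "c", "a"]  # items shown to users

pool_share = sum(1 for v in content_pool.values() if v == "misinfo") / len(content_pool)
rec_counts = Counter(content_pool[item] for item in recommendation_log)
rec_share = rec_counts["misinfo"] / len(recommendation_log)

print(f"amplification factor = {rec_share / pool_share:.2f}")  # 1.88 on this toy data
```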
Tom Ascott: Do you think we are prepared to counter that? It has been many years since researchers were ringing alarm bells around QAnon and other right-wing extremist behaviour, especially in America. On the sixth of January we saw that come to a head in a very violent, real and physical sense. So far, the UK has almost sidestepped a lot of that. I don't want to make it sound as if QAnon isn't a present force in the UK, because it is, and we are seeing QAnon rallies up and down the country, even in small towns that are not necessarily seen as political hubs. How long before the UK is going to have to really consider this a national security issue?
Lord Clement-Jones: Well, that's going to be very interesting, because what we've got is a government response to the white paper on online harms, which gives something of a framework, but there were certain things that were not very clearly delineated. One of those was the whole question of 'legal but harmful', and that's exactly where things like disinformation fit in, because you have the tension with freedom of expression and so on. But the one thing that the US experience has illustrated, hugely, is the harm that disinformation and misinformation can do. I've had consultations with a large number of MPs and peers from every party, and a great phalanx of former Secretaries of State, who are all now determined that misinformation is going to be caught by the duty of care, and that we've got to make sure this isn't just something we treat as 'because it's not illegal as such, we just treat it as part of the everyday life of a platform'. That is not going to happen; there's going to be a very, very strong push. You may have seen the David Puttnam report on democracy and technology, which came out about nine months ago, maybe a bit longer than that, but a really good report about the dangers of social media disinformation and misinformation to democracy. And he's asking for exactly the same things: not only the ability to inspect and audit algorithms that I'm talking about, but also making sure that the duty of care covers misinformation and disinformation. So, it's going to catch the QAnon stuff. I mean, we do not want guys with horns running around Westminster, thanks very much.
Tom Ascott: To that end, you sound very confident that this will be caught by the bill?
Lord Clement-Jones: I'm confident that, maybe after an initial debate, it will be caught by the bill, yes, because I just don't think the government will have any choice. At the end of the day, there'll be so many Conservative backbenchers who will begin to see it.
Tom Ascott: One of the solutions being kicked around in American politics right now is the repeal of Section 230. For those who don't know, Section 230 provides immunity from liability for both internet service providers and end users of interactive computer services, which is to say internet users; in this case, it would be social media users. To repeal it would open them up to a whole host of civil litigation. Do you think this is something that the UK is prepared to investigate?
Lord Clement-Jones: Well, funnily enough, that's exactly what the online harms legislation will do. I mean, a duty of care may not take a platform directly to being a publisher, but it's very, very close to it. The difference between a social media platform with user-generated content and a newspaper, where all the content is effectively curated by the editor, is obviously quite pronounced, so I think it's right to have that distinction. But nevertheless, you've got to make sure that the content of the social media platform and the operation of the algorithms are not harmful. So, we're several degrees ahead of where the Americans are in our thinking already, and I would say that we don't have this Section 230 mentality here. Funnily enough, I think the first thing that's going to go, I think the Democrats in the States are going to get rid of Section 230 before too long. I don't think it's tenable anymore for platforms just to say 'sorry, guv', especially when you see what Twitter and Facebook have done in the face of egregious tweets and postings by Donald Trump. I can see some people's objection to a social media platform saying, 'sorry, you're off our site', with no regulation or anything like that; nevertheless, they can stop you tweeting forever, potentially. But I think there should be a regulatory framework, and I think you'll find that if you interviewed Google now, or Facebook or Twitter, you would find them veering towards regulation, because the current position puts them in a very difficult place: they have to make decisions, and then they get accused of banning free speech.
Tom Ascott: Thank you so much for your time today. Before we finish, I would just like to touch on a slightly more light-hearted subject: your time as spokesperson on the creative industries. We've seen the impact that AI has had across most industries, and recently we've seen the breadth of abilities of things like DALL-E and GPT-3. I wonder what you think the future holds for AI in the creative industries, and for creative endeavours generally?
Lord Clement-Jones: Well, you see, I am optimistic about it. We've had some extraordinary things. There's 'Edmond de Belamy', a portrait produced by a generative adversarial network, and things like that, as well as GPT-3. So, there is this ability to be creative, but I would like to see it used in an augmentative way, in an assistive way, so that we use the power of technology to enhance what we can do creatively. And sadly, I don't think our powers that be in the educational world yet understand what needs to be done: we need to be able to use it creatively. If we understood more about how to use it, and we brought up our school students to do that, I think we'd be in a fantastic place. This isn't just about STEM; this isn't just about science and technology, about understanding how to write an algorithm or whatever it may be. It's about understanding creativity, and I think that's really important. But currently, you may have seen that our creative industries, with lockdown and Brexit and a whole variety of things, are in a very difficult position. I want to give them as much hope and encouragement as I can, because the creative industries, the arts, are what enhance our lives. We may talk about the economy and health and so on and so forth, but actually what an awful lot of us spend an awful lot of our time doing is things that we enjoy: sport and art and theatre and cinema and television. And I think we've got to remember that when we're making policy.
Tom Ascott: Lord Clement-Jones, thank you so much for your time today.
Lord Clement-Jones: Pleasure. Great to see you, Tom. Thank you.
Tom Ascott: Thanks for listening to this episode of Synthetic Society.