Reading time: 8 minutes

Interview by Josh Lee, Lenon Ong and Elizaveta Shesterneva | Edited by Josh Lee

TechLaw.Fest 2020 (“TLF”) will take place online from 28 September – 2 October 2020, becoming the virtual focal point for leading thinkers, leaders and pioneers in law and technology. In the weeks leading up to TLF, the LawTech.Asia team will be bringing you regular interviews and shout-outs covering some of TLF’s most prominent speakers and the topics they will be speaking about.

This week, LawTech.Asia received the exclusive opportunity to interview Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London and Director of the Centre for Commercial Law Studies. Ian will be speaking on a panel on “Global Perspectives on Tackling AI Governance” on the second day of TLF (29 September 2020).

In simple terms, could you explain what AI governance is and why it is important?

Broadly, governance is about governing relationships between people and things. In the case of AI, it’s a relationship between the system and people. This is a complicated relationship, as it involves people who manage the AI model, people who deploy the AI model, as well as the beneficiaries (and even victims) of the system. 

There is, of course, the law and regulation side of the equation: public law and private law. But it goes beyond just law. After all, Lawrence Lessig talks about a broad conception of regulation, noting that technology itself governs us, and that how people design and put that technology together regulates how we use it. It’s also about the market. There are characteristics of the market that change the way we consume things – for example, the two-sided market that comes from advertising-led services. We consume services like Facebook and Google for free, and in return, we allow our personal data to be used for advertising.

The last element is more nebulous – that of ethics. These are not strict rules, but are more normative, helping to shape ways of behaving. We have seen more work on this in recent years. This is understandable, because some aspects of AI, such as robotics, have clear ethical implications. What is surprising is that we seem to have only started talking about ethics with AI. After all, many other technologies carry ethical implications – computers, cloud computing, and so on. Ethics imbue everything; they didn’t just get discovered with AI.

The EU’s plans to regulate artificial intelligence (AI) have made waves in the AI governance world. Image credit: AI News

Around the world, it is said that governments and international organisations are moving “from principles to practice”. What does this mean – does it refer to laws? Certification programmes? Technology tools? Are there any key instances that you would highlight as good examples to watch? Who are the leaders that are at the forefront of AI governance and regulation at this point and why is that the case?

I don’t really see this as moving from principles to practice. AI has been around for a long time – in the legal space, for instance, Richard Susskind wrote his book on legal expert systems in the 1980s. We’ve known for some time that machines can replace humans; it is simply a matter of big data, computing power, and investment. AI has only come to the fore recently because it’s the next iteration of computing.

We’ve had laws that impact directly or indirectly on AI for many years. In the UK, the Copyright, Designs and Patents Act 1988 specifically provides for computer-generated works: who owns such a work? So in the 1980s, legislators were already aware that copyrighted works could be generated by computers, and since computers could not own such works, they decided that the copyright would go to the person who set the computer system up.

More recently, everybody is talking about the automated decision-making provision in the General Data Protection Regulation (“GDPR”). But what’s less well known is that the provision already existed in the 1995 Data Protection Directive. So we’ve had laws that directly impinge on AI, but what we’re seeing now is a more coherent approach to the governance of AI. It is not a movement from principles to practice: it is a movement from diverse regulatory issues into a more coherent framework. In short, it is not always about re-inventing the wheel, but about seeing how it all fits together and how traditional direct laws (e.g. on automated decision-making and copyright) and indirect laws (e.g. liability rules and concepts of autonomy) apply in an AI environment.

As for jurisdictions that have made great progress – I’ll have to say Singapore has taken the lead, as it has an opportunity to take a holistic view of the issue. The EU is taking steps in certain areas, such as its White Paper and ethical guidelines. But frankly, my concern is that we have more ethical guidelines than we know what to do with. There has been a proliferation of these around the world. These guidelines are valuable, but if there’s anything about moving from “principles to practice”, it is the ethical portions that that statement is most applicable to. That’s because simply producing ethical guidance is far too “high-level”. The key question is how to implement and operationalise them.

My concern is that we have more ethical guidelines than we know what to do with … if there’s anything about moving from “principles to practice”, it is the ethical portions … simply producing ethical guidance is far too “high-level”. The key question is how to implement and operationalise them.

Dr Ian Walden, Professor of Information and Communications Law at Queen Mary University of London; Director of Centre for Commercial Law Studies

Earlier this year, the Europeans released a White Paper on AI. This comes after the US released their own Memorandum for government agencies on AI regulation, and just as China (in late 2019) also released a white paper on AI security standardisation. At the same time, the middleweights in AI, such as the UK, Australia, Canada, Japan and Korea, appear to be taking more of a wait-and-see approach. What do you make of these different approaches to AI governance and their implications for the future?

This partly has to do with market issues. In terms of technological development and investment, the weight is clearly in the US and China. Just the scale of activity in the private and public sectors is enormous. That is not to say countries like the UK have no presence: the Alan Turing Institute, for example, gives the UK leadership in AI, and we have smart people doing good work, but it’s at the more theoretical end of the spectrum. In terms of AI development and deployment, however, the US and China are very much ahead of the pack. They have the computing power and the big data sets – the components that underpin successful AI deployment.

Europe cannot play very well in the same space due to its fragmentation. Its industrial policy is still hampered by a lack of coherence because of its member states, and it cannot do things as swiftly as the US and China. 

So in my view, for countries like the UK, taking a wait-and-see approach remains right, to a degree. If we are not at the forefront of AI development or deployment, we will have to wait and see what effects there are in order to regulate the technology properly. You could say that’s the same as what’s happening with social media – the FAANGs and the BATs – we are going to be consumers of those services, and we have had to wait and see what their effects were before taking decisive regulatory steps.

A big debate in the world of AI governance at present is whether AI regulation should be horizontal (i.e. a general piece of AI law that regulates the technology) or vertical (i.e. different laws that regulate the uses of AI in different sectors). What is your stance on this issue and why?

It has to be both. There are regulations like competition law and data protection law that are horizontal in nature, but there will also be issues in various vertical sectors, such as the application of AI in the medical sector, which will raise issues specific only to that sector. So it’s not an either-or. Going back to my previous point about AI being the next iteration of computing, the current approach of regulating horizontally and vertically will have to continue.

Let me give a deeper example. Data protection is a form of horizontal regulation. Within that, the GDPR identifies automated decision-making as an area where we may have to recognise a few rights for a data subject, such as transparency and the right to appeal. Similarly, competition law is applicable to any market where an AI system will be deployed. So the law starts with horizontal regulation, whereas sectoral regulation comes along where there is something specific about medicine or banking that applies to a particular sector (that is not applicable to others).

People also talk about “risk-based” regulation. That carries an assumption: that we understand how to measure risk. It places the onus on the controller to assess risk – the GDPR mentions “risk” no less than 57 times. This makes sense from a theoretical perspective, but evidence shows that notions of risk play with human mindsets in very interesting ways. We talk, for example, about whether AI will enhance or reduce human bias. It’s the same with risk: if we do not understand how to measure risk, it will be very subjective. While there is a lot of literature out there on how to do risk analysis, it is not very widely known, and we need to give regulatees more help in this respect. We need better tools and understanding, or we might cause a knee-jerk reaction – just like the public outcry that came with the use of algorithms to determine A-Level scores in the UK.

Another point about regulation is the need for the public to understand it. Many AI governance frameworks talk about transparency. The same notion appears in data protection regulations. You can make all your data policies transparent to me – but if I don’t do anything with that information, has transparency enhanced my protection? Digital literacy is thus very important. A lack of it means that all this information, all this research into explainability, cannot be understood. I recognise that this may not be a popular policy option, as digital literacy takes a long time to build, but the key idea is tailoring transparency to the audience.

We need to avoid the danger of politicians and legislators making laws that do not seem to make sense. Take the GDPR, for example, and how it spawned the myriad cookie banners. That regulation does not enhance my right to privacy – it just annoys me. I understand the rationale for it, but the result is not one that makes much sense in practice.

On 11 June 2020, the Economist remarked that an “AI autumn” is approaching, as AI is “running up against limits of one kind or another, and has failed to deliver on some of its promises”. Do you agree with this, and do you think the world will enter an “AI winter”? What will be the outlook for AI in this decade?

That sounds like a “Game of Thrones” reference! I hate to be lawyerly about this, but it depends. We are seeing significant moves in facial recognition, where there are serious concerns about privacy. For example, we just had a judgment from the UK courts about the use of facial recognition. We also have companies like Google and Amazon taking public positions on facial recognition systems in law enforcement contexts. So we’re seeing specific applications coming up against strong public or corporate concerns. Those systems may have to prove themselves before they get widely deployed. But as in other areas of technology, small AI developments will proliferate – those will pass unnoticed, in the same way that the algorithms already dictating our lives (in social media and advertising personalisation) arose without our noticing. In short, AI is not a technology that’s going to stop, but it is going to see “winter” in some areas.

In fact, an area where AI is going to continue to play a huge role is the medical sector. For facial recognition, there is always a tussle between personal privacy and law enforcement, so discomfort about where the balance is struck is natural. For the balance between privacy and public health, however, people seem to be more understanding: if I can use AI to diagnose and cure COVID-19 faster, you won’t see the same huge pushback from the public.

This piece of content was produced by LawTech.Asia as an official media partner for TechLaw.Fest 2020. Registration for TechLaw.Fest is free, with limited slots available.