Reading time: 8 minutes

Written by Elizabeth Thomas and Josh Lee Kok Thong

TechLaw.Fest 2025 (“TLF”) took place from 10 to 11 September 2025, bringing together participants from around the world to engage in leading-edge conversations at the intersection of law, technology, and business. This year, LawTech.Asia had the unique privilege of interviewing John Edwards, Information Commissioner at the UK’s Information Commissioner’s Office (“UK ICO”). Commissioner Edwards also delivered a keynote address on the second day of TechLaw.Fest 2025, titled “Trust as Infrastructure: Can Regulation Be the Foundation for Responsible Innovation?”.

Drawing from his distinguished career in public service and private practice, Commissioner Edwards offered valuable perspectives on a range of pressing issues, including the regulation of emerging technologies, the geopolitical dimensions of artificial intelligence, and the evolving role of legal professionals in a rapidly changing landscape. Our discussion focused on the future of AI governance, cross-border legal complexities, and the pivotal contribution of young legal minds to shaping the future of law and technology.

LTA: Commissioner John, thank you for speaking with us today. You took over the helm of the UK ICO in January 2022. Since then, global discourse about technology regulation appears to have shifted into high gear — since 2022, we have seen the regulatory discourse cover generative AI, agentic AI, quantum computing, children’s privacy, online safety, cross-border data transfers, de-regulation and more. Do you think digital regulatory discourse has indeed become supercharged, and if so, why? How do you personally make sense of the whole space?

Commissioner Edwards: It has indeed become supercharged. The most recent and dramatic of these changes has been a shift in geopolitics, particularly with US tech firms aligning with the new regime of US exceptionalism, which is widely interpreted as a desire to further export US regulatory standards into the markets in which digital products are sold.

This is not the only force that is operating in this space. The pace of change in technology has been exponential, which has left lawyers, policymakers and commentators wondering if we are able to keep up.

The third trend is what the Chief Justice, Sundaresh Menon CJ (who gave the opening keynote on Day 1 of TechLaw.Fest 2025), observed about the Collingridge Dilemma. Early on, it is hard to predict the harms that would justify intervening in or pausing a technology; but once the harms reveal themselves, it becomes very hard to regulate. We have seen this with social media, where the online environment for children has become quite toxic, while the rearguard action regulators are taking to make the online world a safer place for children has faced headwinds.

This also loops back to the fact that there is a cultural battle over the supremacy of the right to free speech. This right is enshrined in the US Constitution – which gives it a primacy that is treated with more nuance in other jurisdictions. For example, in Singapore and the UK, people accept that there are limits on how speech should be exercised online in circumstances that would put young people at risk.

So, there are several threads intertwining, which create several challenges. Both Singapore and the UK have signalled our preparedness to face these challenges — in the ways that the Chief Justice articulated. First, we are showing, through regulatory guidance, how existing laws can apply. Second, we are creating new laws to fill regulatory lacunae (as seen in Singapore’s and the UK’s approaches to online harms). Third, we are addressing normative uncertainty, as we have seen in the approaches to AI. Both Singapore and the UK seem much more aligned and measured on this – and that signifies a strong bond. If we can keep pace with the developments, we should be able to avoid the worst of the harms.

Commissioner Edwards delivering his keynote speech at Day 2 of TechLaw.Fest 2025. Image credit: LawTech.Asia

LTA: Today, regulatory thinking around privacy, data protection and AI regulation is regularly intertwined with issues of digital trade, digital sovereignty, geopolitics and geoeconomics. What insights or reflections do you have about this shift, where will it take us, and how should policymakers and regulators navigate this space?

Commissioner Edwards: There is an ongoing struggle internationally. While there is domestic sovereignty – something that technology firms must respect – it is also evident that there is a growing desire to resist this. 

On an economic level, policymakers recognise the advantage of having harmonised laws and common approaches. At the core, however, parliamentary sovereignty remains in each jurisdiction. Thus, we must find ways to reconcile the economic, cultural and political imperatives. 

While what this means in practice is yet to be determined, a reversion to sovereignty will not be the answer. It would result in a loss of economies of scale. Instead, I believe the answer lies increasingly in international agreements. Rather than identifying differences and erecting fences to protect them, we must identify commonalities and provide the circumstances in which developments can occur. For example, in what circumstances can AI be trained on proprietary intellectual property (“IP”)? That question has not been resolved. These questions cannot be resolved by individual countries, and must be elevated to the international plane.

Data protection is in a similar space: how do we respect individual rights? To your point on digital sovereignty, a European colleague shared their decision to develop an entirely domestically trained Large Language Model (“LLM”). To respect data protection, the training parameters were set such that the model could be trained on an individual’s data only if that individual had a Wikipedia page. I suspect a step like this might be impracticable.

We are seeing more of these whack-a-mole situations. There was a large Anthropic settlement just this week. The basis for the settlement was not that there was infringement of the copyrights of individual rights holders, but that Anthropic had trained their LLM on pirated sites — a subtle distinction. Resolving these issues by individual litigation may take a very long time. For instance, the New York Times v OpenAI case and the Getty Images cases have still not been resolved. There is still much to be done.

At the risk of over-simplifying, it feels like we are at a Napster moment. We must find a way that we can derive the benefits of models trained on all the knowledge in the world, while allowing content creators to still be fairly compensated for their work. 

Coming back to data protection, the UK ICO has done a lot of work on data protection and generative AI – as Singapore has. To address what the Chief Justice called “application uncertainty”, we have issued discussion papers on AI and how the UK’s version of the General Data Protection Regulation (“GDPR”) applies to these technologies. This provides certainty for developers, but it also constitutes the ICO setting a baseline for enforcement should the need arise.

The ink is barely dry on these papers, and we now see agentic AI coming in to add complexity. This is particularly so for an important but often-overlooked legal concept – the concept of “controllership”. To hold someone accountable and responsible under data protection law, we need to identify the controller of the data. In the context of LLMs, this is already complicated. It depends on the application: is it the enterprise using the LLM, or is it the developer? For agentic AI, however, it becomes even more difficult to understand who the controller is, who is responsible and accountable, where the data flows, and who has the responsibility for meeting transparency obligations. These problems are not unsolvable, but they are pressing.

On pragmatism and idealism – remember that careers are not linear. You therefore have to keep your eyes open for opportunities, and seize them where you can. Even though you may not see how they will get you to your next step, do your job to the best of your ability and identify further opportunities to move towards.

COMMISSIONER JOHN EDWARDS, UK INFORMATION COMMISSIONER’S OFFICE

LTA: The UK has traditionally been seen as a mature jurisdiction at the forefront of technology and digital regulation. What policy lessons has the UK gleaned from being a first-mover in issues like online safety and the regulation of generative AI?

Commissioner Edwards: The approach that the UK takes is outcome-based regulation. This requires a high level of engagement with industry – one cannot stand at arm’s length and expect industry to simply take up the regulations we issue. Significant asymmetry exists in the relationship between regulators (even the well-funded ones) and the tech behemoths. Regulators must thus stay informed about technology developments, and this requires engagement.

The balance we need to tread is to get the benefit of industry’s insights, but avoid regulatory capture. This is a risk we must engage with. At the same time, most digital businesses understand the benefits of engagement, even if there may be times when interests do not converge. 

In any case, one thing that industry does generally appreciate is regulatory certainty. This is an area where we think the UK – which has the third-largest digital economy in the world – has a compelling advantage: working with industry to provide that regulatory certainty. When I did a tour of the US West Coast recently, I heard feedback that it was difficult for companies to do business in the EU as they do not know if the European regulators will ban their services or products. Thus, what we offer in the UK is to welcome these companies to work with us, enter our sandboxes, and allow us to provide them a clear guide on how the UK’s and regional regulators will view their technology under the law. This then gives these companies an opportunity – and assurance – to take their products and services into the EU. The corollary is that because the UK ICO is a widely respected regulator, our EU regulatory colleagues will consider the ICO’s positions seriously – especially on critical points.

Beyond being a gateway, the UK ICO has been invited by the UK government to think about how we could assist the government’s growth agenda by promoting growth and innovation. One suggestion we had was for the UK ICO to be given authority to disapply aspects of the law to an organisation, to allow testing and sandboxing with confidence. On our part, we are keen to move from an ex post sanctions regime to an ex ante regime. While I know of regulators who tend to be averse to making declarations in advance, my view is that declining to do so could be a derogation of responsibility. There is no material distinction between the task of examining an issue after a harm has occurred and that of examining it ex ante to provide clarity and confidence upfront.

LTA: You have had a long and distinguished career in law and public service. In this day and age, what would your advice be to young legal professionals hoping to build a meaningful and lasting career in law and / or policy?

Commissioner Edwards: My answer, to be trite, is to be pragmatic and practical.

Look inside yourself, and understand and hold onto your own values. These should guide you at every step. But adherence to one’s values must be tempered by pragmatism.

On pragmatism and idealism – remember that careers are not linear. You therefore have to keep your eyes open for opportunities, and seize them where you can. Even though you may not see how they will get you to your next step, do your job to the best of your ability and identify further opportunities to move towards. The worst thing one can do is to think you have made a mistake, and not do anything about it.

We also live in an age where we are much more flexible, and where legal skills are much more fungible. From corporate spaces to the public sector, opportunities will always take you far in different ways. It is important to recognise them, grab them, and take leaps of faith. 

LTA: We are seeing a lot of sudden changes and advancements in how legal technology is being incorporated into firms, and what that means for young and senior lawyers in terms of oversight. What advice would you give students, in particular those entering the workforce in 2 to 3 years, to keep true to their values?

Commissioner Edwards: Drawing from the speeches at TechLaw.Fest – the Chief Justice spoke about the half-life of knowledge. My advice is not to despair. At the same time, the Senior Parliamentary Secretary (Eric Chua) said that AI is not going to replace people’s jobs – people who use AI are going to replace people who do not. I think there is a powerful message there: you are not going to just be swamped by this tsunami of technology, because you can get your own surfboard.