Ian Brown is one of the leading voices on interoperability, AI, and competition in Europe, and he was quite influential in shaping the Digital Markets Act. You have also been consulted in Germany on many occasions on issues related to competition and privacy policy. Ian, we are in an interesting phase in Europe, with a lot of legislation coming out.
What is your feeling on where we should go further?
Ian Brown: It's almost hard to keep track of all the legislation, including the AI Act. Of course, I hope that over the next couple of years we will see it all start to have a big impact. There is certainly scope within the DMA for the Commission to go further over time, if it wants to.
For example, the Commission has to carry out the first review of the DMA by 2026. One of the questions it has to answer is whether the messaging interoperability obligation should be extended to social networking services, which is something the European Parliament wanted right from the start. What has been happening with Twitter is perhaps a demonstration that it might be a good idea.
Twitter is smaller than the biggest gatekeepers, but under the qualitative designation criteria in the DMA Twitter could be designated as a gatekeeper. That would be a powerful way to reduce the market power which has allowed Twitter to very significantly reduce its quality, whilst keeping most of its users for now.
Alongside that, the AI Act has focused more on the safety aspects of AI than on the competition aspects. The DMA does cover cloud computing, and cloud computing is a really important part of competition in AI, because we already see Microsoft and Amazon in particular, and to a lesser extent Google, holding commanding positions in the cloud computing market. All of them, of course, are working on AI as well, and cloud capability will be really critical for the big AI actors of the future. So this might be one place where the Commission over time says: with the DMA and the Data Act, have we gone far enough in ensuring the cloud market is competitive?
Because if it is not, that will also cause problems for competition in AI.
Just a spontaneous question, because you stressed the cloud market and you know the European initiatives around Gaia-X and so on. Do you believe we need to reconsider our policies on the cloud in Europe? And should we perhaps try a more industrial-policy approach?
Ian Brown: Gaia-X has gone in a number of different directions and I've seen criticisms of it, but it now looks like some of the big US companies are going to be very involved. How far that matches up with ideas about strategic autonomy and digital sovereignty is a big question. The privacy issues are still there.
We know going back to the Edward Snowden leaks ten years ago that the US government has a lot of power in terms of surveillance via the computing systems of the big companies that are headquartered in the US. Whether European governments, for example, really want to be putting sensitive government data on services controlled by non-European businesses remains a big open question.
We've also seen how difficult it is for European competitors to break into these markets. That is partly because of the factors in digital markets we are all now very familiar with, which the Digital Markets Act is trying to address. The question is whether you can overcome those through industrial policy, and we don't have a clear answer yet. Many of the same issues will arise with semiconductors, into which the US and Europe are obviously now putting a lot of money.
But that is an industry with gigantic economies of scale. How much it is going to cost Europe as a whole to fund these fabs in Europe, with potentially higher costs, and perhaps lower quality of the products that come out of those industrial-policy-backed facilities, remains to be seen.
But are good interoperability remedies a kind of replacement for the privacy aspects?
Ian Brown: No, I don't think so. I always stress this when I talk about interoperability: it is not a replacement for other aspects of internet regulation, including privacy. And actually, making interoperability work in a way that makes individual users comfortable with it very much depends on protecting their privacy, so they are not concerned that by linking up their messaging services, or perhaps their social networking services, they will lose control of their data to other firms whose privacy policies they have not themselves explicitly signed up to.
„Actually, so far Meta is not launching Threads in Europe precisely because Meta is not certain yet that they can meet the standards of the GDPR and DMA. I've seen a small number of people say that shows that the legislation goes too far. I would say it shows the opposite.“
Ian Brown
We see that in a lot of the debate over Meta's new Twitter clone, Threads. I've seen a lot of people, especially on Mastodon, the decentralized open social network (another alternative to Twitter), say, “Mm, I feel very uncomfortable about my Mastodon server connecting to Meta’s Threads server”, which will be technically possible in a few weeks’ time, according to Meta.
And one of the things I say in response to those concerns is that in Europe, the GDPR is there protecting your privacy. So even if Meta was technically getting access to some of those users’ posts via interoperable channels, Meta would not be allowed to process them to profile those users and try to target them with adverts, for example.
Actually, so far Meta is not launching Threads in Europe precisely because Meta is not certain yet that they can meet the standards of the GDPR and DMA. I've seen a small number of people say that shows that the legislation goes too far. I would say it shows the opposite.
It shows that the EU has succeeded in making Meta think carefully about the privacy and competition aspects of their service before they launch it in Europe.
We talked to Cory Doctorow a couple of weeks ago, and he was quite critical of how interoperability is being implemented under the DMA in Europe. He thought it might be a better idea to start with social media instead of interpersonal messaging services. Is he right? Why does he have these worries?
Ian Brown: I can see that perspective. Social media is certainly different in the sense that most tweets are public. You can make your Twitter account private, but most people don't. Most people broadcast their tweets to the world. They want people to hear them. They don't have the privacy concerns with their tweets that they might have with their WhatsApp messages, for example.
So I can see why Cory and others have said interoperability would actually be easier with social networking services. The European Parliament was right to ask for both. It's a shame that in the final negotiations they couldn't persuade the EU Council of that. But the European Parliament is interested in revisiting this.
But are there no worries about interpersonal messages, which are encrypted?
Ian Brown: If you look at how Article 7 of the DMA is drafted, it's done very carefully. There are a lot of specific protections in there to preserve the level of security and integrity in messaging systems, to make sure end-to-end encryption is maintained. I know from a technical perspective, because that's actually my background, that it's perfectly possible to have interoperable end-to-end encrypted messaging.
So I look forward to seeing it next year in action, hopefully.
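To illustrate the technical point Brown makes here, the following is a minimal sketch, not the DMA's mandated mechanism nor any provider's actual protocol, of why end-to-end encryption can survive interoperability: as long as the two clients agree on a common scheme, the servers relaying the messages, even if they belong to different providers, only ever see ciphertext. The example uses Python's cryptography library; the key exchange, labels, and simplified key handling are assumptions for illustration only.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Two users on *different* messaging providers; only their client apps hold keys.
alice_key = X25519PrivateKey.generate()
bob_key = X25519PrivateKey.generate()

# Public keys are exchanged (in practice via each provider's key directory).
shared_secret = alice_key.exchange(bob_key.public_key())

# Alice derives a symmetric message key from the shared secret.
message_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"interop-demo",
).derive(shared_secret)

# Alice encrypts on her device; any server in between only relays ciphertext.
aead = ChaCha20Poly1305(message_key)
nonce = os.urandom(12)  # sent alongside the ciphertext
ciphertext = aead.encrypt(nonce, b"hello across providers", None)

# Bob's client derives the same key independently and decrypts.
bob_message_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"interop-demo",
).derive(bob_key.exchange(alice_key.public_key()))
print(ChaCha20Poly1305(bob_message_key).decrypt(nonce, ciphertext, None))
```

Real interoperable protocols, such as the IETF's Messaging Layer Security (MLS), which has been discussed as one candidate for DMA-style interoperability, add key directories, authentication, and forward secrecy; the sketch only shows the core property that encryption and decryption happen on the endpoints.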
Okay, a related issue to encryption: you are following the discussion about the child sexual abuse material regulation that is currently under discussion in Europe. The right to privacy is not an absolute right, and we are seeing that enforcement agencies are having big trouble with criminal enforcement and the protection of children in the online sphere.
What would be the middle ground? Is there any middle ground when it comes to encrypted communication and the protection of legitimate interests?
„You can't just make your encrypted software just a little bit less encrypted in a way that only the police are able to access it.“
Ian Brown
Ian Brown: Part of the problem here is that there isn't an easy trade-off. You can't just make your encrypted software just a little bit less encrypted in a way that only the police are able to access it, or build it so that companies like Apple, or Meta with WhatsApp, can scan on the phone for one type of illegal content, such as child sexual abuse imagery, without creating a potential backdoor for all kinds of other government surveillance, especially in authoritarian countries rather than in democracies.
It's a very difficult question for that reason. There are things that can be done by companies like Meta and Apple, many of which they already do internally on a voluntary basis, and putting those practices into statute is a good thing. For example, they can look at metadata, because end-to-end encrypted messaging still leaves some metadata available to the messaging providers.
They could look, for example, for individual adult users who seem to be contacting a lot of children they had previously not been connected with, and treat that as a warning flag. They can make it as easy as possible for children using these systems to report cases of harassment and abuse, or the receipt of content that is potentially illegal.
And then once the police have those reports, they can take action. But there isn't an easy answer. And of course it's an abhorrent crime; I quite understand why police and children's rights advocates are calling very strongly for this regulation. But equally, I see the point of the security advocates saying that you can't just make a tiny little change here and enable potential child sexual abuse imagery to be scanned or decrypted, but nothing else. That's the technical difficulty.
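As a concrete illustration of the metadata-based approach Brown describes, here is a hypothetical sketch, not any provider's actual system, of a heuristic that flags adult accounts contacting many minors they were not previously connected with, using only who-contacted-whom metadata rather than message content. The threshold, field names, and data structures are assumptions for the example; a real system would also need age assurance, careful tuning, and human review.

```python
from collections import defaultdict

# Hypothetical account metadata: an age flag and the set of existing contacts.
accounts = {
    "adult_1": {"is_minor": False, "known": {"adult_2"}},
    "minor_1": {"is_minor": True, "known": set()},
    "minor_2": {"is_minor": True, "known": set()},
    "minor_3": {"is_minor": True, "known": set()},
}

NEW_MINOR_CONTACT_THRESHOLD = 3  # illustrative value, not a real-world setting

def flag_suspicious_senders(message_log):
    """Return senders who message many previously unconnected minors."""
    new_minor_contacts = defaultdict(set)
    for sender, recipient in message_log:
        s, r = accounts.get(sender), accounts.get(recipient)
        if s is None or r is None:
            continue
        # Only count adult -> minor messages to accounts not already known.
        if not s["is_minor"] and r["is_minor"] and recipient not in s["known"]:
            new_minor_contacts[sender].add(recipient)
    return [user for user, minors in new_minor_contacts.items()
            if len(minors) >= NEW_MINOR_CONTACT_THRESHOLD]

log = [("adult_1", "minor_1"), ("adult_1", "minor_2"), ("adult_1", "minor_3")]
print(flag_suspicious_senders(log))  # ['adult_1'] -> a warning flag for review
```

The point of the sketch is simply that such signals can be computed without reading message content, which is what makes them compatible with end-to-end encryption.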
And basically, even if you have enough legal safeguards, the situation remains the same.
Ian Brown: The problem with legal safeguards is that even in democracies those safeguards are sometimes not fully respected. We've seen just recently in Spain a number of credible allegations that the government was eavesdropping on opposition politicians, which you would hope the law would not have allowed, but it still happened. To give you another example, there were allegations that lawful interception capabilities in mobile phone networks were being abused in Greece to spy on quite senior politicians.
It's still not entirely clear who did that; there have been allegations more recently that the Greek government has been misusing surveillance powers. So even in liberal democracies, you have to be slightly cautious about relying on legal protections alone. Of course, the bigger problem is that these technologies are available globally, and they're going to be available in all sorts of illiberal regimes.
And then that can potentially be very, very dangerous for democracy (for example, to opposition activists in authoritarian regimes).
Interestingly, in this case Facebook is cooperating quite properly when it comes to delivering all the material that is relevant for the investigators. Another issue connected to Meta that is really creating headlines these days is that Meta lost its battle before the European Court of Justice, which essentially said the Federal Cartel Office in Germany was right to enforce data privacy in competition cases as well. What would be the lesson for the rest of Europe?
„The Bundeskartellamt has been a really leading competition authority in Europe. And this is one of the cases (Bundeskartellamt vs. Meta) I think, where they've really pushed the boundaries of antitrust law. And I think in a very positive way, thinking about privacy abuse as an abuse of dominance, as well as being a breach of the GDPR.“
Ian Brown
Ian Brown: The Bundeskartellamt has been a really leading competition authority in Europe. And this is one of the cases (Bundeskartellamt vs. Meta), I think, where they've really pushed the boundaries of antitrust law. And I think in a very positive way, thinking about privacy abuse as an abuse of dominance, as well as being a breach of the GDPR.
I'm glad to see the Court of Justice has agreed with them and said that yes, it is. We have seen some similar arguments in other jurisdictions that are not strictly bound by the Court of Justice precedent. In the UK, for example, there is a claim against Meta in the Competition Appeal Tribunal with some similar features to this, which so far (more because the UK collective action legal regime is very new) has not progressed very quickly.
But we'll see if the UK courts are inspired by the Court of Justice, even if they're no longer bound by its decisions. I think in general the convergence of antitrust, privacy, and data protection with consumer protection law (and safety laws like the Digital Services Act) is really important.
It would be very positive if European countries were doing more of that as well, and the DSA and the DMA will encourage them to do so.
Maybe the very last question: you've been consulting for the UK government on AI and supply chains. We've been talking a lot these days about foundation models, with OpenAI and GPT as the best-known examples. There is a challenge in allocating responsibility. Why is that?
Ian Brown: Obviously, AI is a very broad technology. It can be used in many different ways. And regulating it close to the customer, I think, makes sense. Generally speaking, the complication that foundation models bring is that they may be several hops back up a supply chain.
You can imagine it: OpenAI, for example, is making ChatGPT directly available to the public as a chatbot, really more as a technology demonstrator than anything else. But there will be many businesses in the future that pay OpenAI directly (or perhaps via one or two businesses in between) to bring that technology into their own products and services.
Just to pick one of a million examples, a bank that is receiving applications for mortgages might use foundation models to do some of the assessment of the application. It will be the bank that is directly dealing with the customer and that will be, by and large, responsible under the AI Act and some of the similar approaches elsewhere.
But the problem with saying, ”okay, well, the foundation models are back here,” is that we are then putting the responsibility on the bank (in this case) to deal with OpenAI or Google by contract, or by some other mechanism, if there are faults in the foundation model that turn into problems for consumers.
It is the bank that is required to remedy them. There are two problems with that approach, however. One is that very often the bank or other business dealing directly with the consumer will be much smaller than the giant companies we know, the ones currently making the biggest strides with foundation models. So the balance of power between those businesses is not necessarily going to make it easy for the business that has the relationship with the end customer to have problems fixed upstream.
We see this a lot with cloud computing, where organizations using cloud services would perhaps like to change some features of the contract they have with the cloud provider, often to do with privacy, and the cloud providers just say, “We're not willing to do that. You're too small a business for us to negotiate custom contracts with you. You either accept our standard contract or you don't use our services. That's it.” I wouldn't be surprised if we see that happening with AI as well, without specific government regulatory attention. The other problem with an approach focused entirely on the business dealing with end customers is that it may be very inefficient, because there will be some problems with foundation models, as you can imagine.
A foundation model might in the future be used by thousands, tens of thousands, or even millions of businesses. And if there are problems in the foundation model, such as widespread bias against women, or against people from specific ethnic minorities, in gaining access to credit or getting jobs, all sorts of people can be very significantly affected by decisions made with the support of AI systems.
Those biases might be affecting hundreds of thousands of businesses. It is going to be very inefficient for each of those businesses individually to try to deal with such defects, rather than the people who are actually running the foundation model, who are in the best place to fix problems with it, because they obviously know much more about it than the businesses that are simply contracting with them for services.
So the last-minute amendments to the AI Act proposed by the European Parliament, which try to put some responsibilities back onto foundation model providers, make sense. I'm sure they will be tweaked in the final negotiations with the EU Council. But I don't think just saying, “sorry, the foundation models are back here, they're too difficult to deal with right now,” will do.
Do you consider them potential gatekeepers?
Ian Brown: Especially because of their link with cloud computing, and the already tipped market we see in cloud services, I would not be at all surprised if we end up seeing big AI companies as gatekeepers in that role. I wouldn't be surprised if at some point in the next few years the European Commission goes through the process in the DMA to add generative AI (or some very specific type of AI) as a new core platform service that is explicitly covered by the DMA.
Even before that, the Commission may well start making use of the fact that cloud services are explicitly covered by the DMA, and of course also by the Data Act, and try to deal with some of the problems by that route.
So let's hope the European Commission is reading “Die Politische Meinung”; I'm sure they are. And yes, it's something of a special day, because we have new legislation in Germany that also deals with market investigations, but we will comment on that on another occasion. It was really great having you, and many, many thanks.
The interview was conducted by Dr. Pencho Kuzev at the Academy of the Konrad-Adenauer-Stiftung in Berlin.