“AI made in Europe” has been the lodestar of EU AI policy since 2018. Six years later, US companies’ grip on everything digital in the EU has only tightened. But Brussels doesn’t need to breed OpenAI clones. What matters is getting these new technologies to help European societies flourish. And that won’t happen if corporate profits shape the digital age.
In the early days, making the EU into a third AI power – next to the US and China – was the mountain for Brussels policymakers to climb. That goal meant coordinating EU investment programmes, pooling resources, and cutting red tape where possible, among other things.
The AI Act, the world’s first comprehensive AI law, adopted to much fanfare earlier this year, is just one pillar of the Commission’s AI strategy. Read the early strategy documents, and you will mostly find plans to boost and diffuse AI in Europe, such as enabling pan-EU data sharing, funding research, and getting small and medium-sized enterprises ready for digital technologies. This ambition could be labelled “AI sovereignty”.
The central motivation for this project? Europe would pay a hefty price if it couldn’t stand on its own AI feet.
So far, the push for AI has borne little fruit. EU AI darling Mistral, the French start-up, has embraced deep cooperation with US tech giant Microsoft; US chipmaker AMD has bought Finland’s Silo AI; and Germany’s Aleph Alpha recently pulled back from head-on competition with the American AI juggernauts, abandoning its ambitions in large language models. If that wasn’t enough, Intel announced in September that it would halt construction (which hadn’t even begun) of its flagship chip plant in Germany. Intel chips “Made in Germany” may never see the light of day.
All this must be a tough pill to swallow for champions of European AI. Go with the tide and American firms will dominate digitalisation in Europe. Just how much of a problem is that, though?
A misguided strategy from the start
The central economic arguments for AI sovereignty rested on shaky assumptions from the get-go. In the 2010s, AI was feted as an unparalleled productivity booster. The drive was not only for “AI deployed in Europe”, but for AI actually made there. European counterparts to the OpenAIs of this world seemed crucial. Why? Because, so the idea went, that’s where the money was. AI development, not just AI deployment, held the key to future prosperity. (In the meantime, nothing has kept EU investors from putting their euros into US tech stocks to ride the AI bull market.)
But the productivity idea is questionable. If non-AI companies modernise their production lines to take advantage of AI, they should reap the benefits themselves, unless Big Tech squeezes them. What matters is competition in digital markets, not the flag under which companies sail. Now, many economists are left wondering not only when but whether that productivity boost will ever materialise. And if it does, how sure are we that it will be broadly shared across society? If anything, home-grown AI would drive up inequality, concentrating the spoils in a few hands, with little benefit for ordinary folks.
Common thinking suggests that in AI, the winners take all. That may be true for high-investment, easy-to-scale products like large language models – as with OpenAI. But thousands of firms already use many other, relatively more mundane applications that don’t require data centres the size of small cities. Think of targeted algorithms that optimise logistics or energy use, or of factory robots that learn new tasks thanks to AI. Once we look beyond the headline-grabbing top dogs, there is space enough for mid-sized players.
Finally, Brussels optimists had also counted on strong global demand for “trustworthy AI” made in Europe. That ambition sounds noble enough. But how often will consumers themselves get to choose which AI they use? Most of it will be baked into other platforms and products, invisible to the end users. And even where people do have a choice, experience suggests that they happily trade away their privacy for convenience, whether by messaging on WhatsApp or by casually accepting hundreds of online cookies.
To be sure, there are other motivations to foster EU AI sovereignty. Dependence on other countries’ tech is, and will remain, a weakness, certainly with Trump back at the helm of the US government. For the time being, it is also a given. The same is true for the military dimension of AI. Europe will need to build its own tech capacity in the long term and mend key vulnerabilities. But genuine tech decoupling from the US is unrealistic in the short run, and it wouldn’t make for a smart strategy. Decreasing tech dependency is a long-term project, requiring equally long-term financing and political will.
None of this is meant to belittle the real achievements of EU AI policy. The AI Act was a fiendishly difficult legislative nut to crack. Getting 27 member states to join forces is tricky in any policy field, and AI is no exception. Nevertheless, if successful competition in some kind of AI race was the goal, it’s hard for Team Europe not to be disappointed.
Getting priorities straight
Take a step back, though, and maybe the whole project was misdirected. There is a distinctively European goal that should serve as a lodestar, but it’s not “AI made in Europe”. Digitalisation should help create a society worth wanting; the societal implications of AI matter much more than who makes it. This goes beyond safeguarding individual rights such as privacy or non-discrimination. Instead, we also want to steer the aggregate effects AI has on society – its effect on how our children learn and grow, on how we care for people who are sick or need help, and on how we learn about each other, such that we can live in peace and with mutual respect.
Making sure AI promotes these goals requires collective sovereignty over digitalisation, not European versions of Big Tech companies. Without more forceful steering of digitalisation, how will we avoid driving deeper wedges into society and between relatively tech-savvy EU member states and the rest? As long as profits rule how and when AI is deployed, there will be unsavoury societal side effects.
Maybe “digital sovereignty” put us on the wrong track from the start. My WZB colleague Julia Pohle suggested that “self-determination” might be more to the point – and I couldn't agree more. That is what EU AI policy should be about: managing the societal effects of AI while helping individuals and societies flourish and looking out for Europe’s digital safety. How we move into the digital age is too important to let profit motives derail that journey.