Ireland can’t credibly regulate AI while also living off Big Tech’s taxes


Published in The Irish Times on 24 November 2025

Read the full article on The Irish Times website:
https://www.irishtimes.com/business/2025/11/24/ireland-cant-credibly-regulate-ai-while-also-living-off-big-techs-taxes/

When my friend Elaine, an entrepreneur, woke to find a defamatory TikTok video accusing her of scamming clients, she reported it, sought legal advice, and contacted the Data Protection Commission (DPC). Weeks passed. The video stayed online. No one was held accountable: not the person who posted it, not the platform. The DPC’s remit, it seems, is data inaccuracy, not defamation.

Elaine’s story reveals a more profound truth about Ireland, a country that has
grown rich from the profits of the very companies it is supposed to regulate. Our
reliance on Big Tech has quietly transformed not only our economy but also the remit and independence of our regulator.

Ireland is now the European hub for nearly every major technology giant: Meta,
Google, TikTok, Apple, Microsoft and X. Their glass towers dominate Dublin’s
skyline, and their taxes dominate the Exchequer. Last year, corporation tax receipts exceeded €28 billion, roughly one in every five euros collected by the State. That dependence has consequences. We call ourselves Europe’s digital capital, yet we’ve become its bottleneck for enforcement. The EU’s other regulators complain of Ireland’s “extremely slow case handling”, lenient enforcement, and endless deliberation.

On paper, the DPC has issued significant penalties: €310 million against LinkedIn and €1.2 billion against Meta. But these were not its preferred outcomes. In several cases, Ireland’s draft orders contained no fine until EU peers in France, Germany, Austria and Spain objected. The European Data Protection Board (EDPB) overruled the DPC and instructed it to impose penalties. The pattern is clear: when Ireland acts, it is usually because Europe pushes. In one case, Ireland pushed back, suing the EDPB to block an order requiring further investigation into Meta. It lost the case, and much of its credibility.

After scandals like Cambridge Analytica, governments wanted to act. But the question became what to regulate, and Big Tech made sure the answer suited them.

The companies embraced privacy laws, cookie banners and compliance forms. They shifted the debate from how lies spread to how data is stored. Privacy became the shield; identity remained the loophole. While regulators obsessed over paperwork, anonymous accounts and troll networks thrived. The danger was never surveillance; it was the collapse of accountability. For 10 years, we’ve been protecting data rather than protecting truth. GDPR made us feel secure, but did little to stop defamation or industrial-scale misinformation. We built a bureaucracy instead of a backbone.

Tech companies claim that requiring users to verify identity would endanger
dissidents and be impossible to enforce. It’s a clever argument, not entirely false, but entirely self-serving. Anonymity fuels toxicity; toxicity drives engagement; and engagement drives profit. Yet identity verification is neither new nor complex. Revolut, the online bank, has verified the identities of over 3 million Irish users under money laundering legislation. The process is mandatory and takes minutes, requiring photo ID and a short video or facial scan. If a fintech company can verify millions of users to prevent fraud, then billion-dollar social networks can do the same to prevent abuse. This is not a radical
idea but an existing principle: we already demand verification to protect our financial systems, yet refuse to apply it to safeguard public discourse. The barrier isn’t technology; it’s our lack of will to legislate.

Ireland’s regulatory culture prizes procedure over principle. Whether on defamation reform, online safety or data protection, we prefer fairness and consultation, the language of neutrality that avoids confronting power. That soft touch has served us economically. It has also served Big Tech perfectly.

Now artificial intelligence is racing ahead, and we stand at the same cliff edge. Facebook’s power came from personalisation; AI’s power comes from personality: learning who we are and reflecting it back in human form. Recommendation engines shaped our attention; replication engines will shape our emotions.

When tech companies derive revenue from emotional imitation, from engagement that feels personal, regulation can no longer remain procedural; it must become flesh, in other words, it must become law. Europe’s AI Act, expected in 2026, is hailed as a breakthrough, but enforcement will again fall to national regulators; in practice, that means Ireland. The law contains a crucial loophole: for “stand-alone high-risk AI systems”, companies can self-assess their own risk. Their duties are mostly procedural, setting up risk-management systems and producing documentation; in effect, marking their own homework. It’s GDPR all over again: ambitious in theory, timid in practice.

If the past decade was about privacy, the next must be about authenticity. Real AI regulation must rest on accountability, identity and independence. Every major AI system should have a named, legally responsible individual, someone answerable for harm, bias or defamation, just as corporate and environmental law already requires. AI-generated voices, text and images must be clearly labelled and traceable. Counterfeit authorship should be treated as seriously as counterfeit currency.

A portion of Big Tech’s tax receipts should fund a truly independent AI Authority, separate from the Department of Finance and the DPC, with the power and resources to investigate, suspend and fine. We need a single expert body capable of acting decisively. In sectors such as healthcare, education, media and justice, AI systems should face a full public-interest test, the digital equivalent of an environmental impact review. The greater a technology’s reach into human life, the higher the obligation to prove it serves the public good.

We missed the chance to shape social media before it shaped us. With AI, there will be no second bite of the apple. If we let profit logic outrun moral logic again, Ireland won’t just host the digital future, it will surrender it.
