
European Industry Pushes Back on the EU AI Act: Is Regulation Moving Too Fast?

  • Writer: Mark Addington
  • Jul 9
  • 3 min read

As the European Union prepares to implement its landmark Artificial Intelligence Act, business leaders across the continent are urging caution. Their concern is not the existence of regulation but the speed and complexity with which it is being introduced. Several industry groups argue that the framework, while aimed at protecting users and society, may unintentionally burden developers, especially in sectors where compliance guidance remains unclear.


Business Leaders Sound the Alarm

In early July, a coalition of more than 150 companies, including household names such as Airbus, Carrefour, and Siemens, sent a letter to European Commission President Ursula von der Leyen requesting urgent adjustments to the rollout of the AI Act. The signatories expressed concern that the new law imposes disproportionate burdens on European providers of general-purpose AI and fails to clearly define critical terms such as "high-risk" systems.


They argue that the current framework creates legal uncertainty and will likely disadvantage European developers in comparison to their U.S. and Chinese counterparts. The Financial Times reports that these companies view the current compliance timeline as unworkable and fear that innovation will be driven out of Europe as a result.


Call for a Delay

Reuters reports that a group of 45 European and U.S. companies has formally requested a two-year delay in the enforcement of the EU AI Act. Their argument centers on the lack of finalized guidance and infrastructure. Specifically, the long-awaited Code of Practice for general-purpose AI systems has not yet been published, and companies claim that without it, they are being asked to comply with laws that lack clear operational guidelines.  


Despite industry pushback, EU Digital Chief Henna Virkkunen has confirmed that the Code of Practice will be published before the August 2025 deadline, while also reaffirming the Commission's intention to stay on schedule.  


Compliance Risks and Competitive Pressure

The Act categorizes AI systems according to risk tiers, with "high-risk" systems subject to strict documentation, transparency, and testing requirements. Businesses argue that many enterprise uses of AI, such as fraud detection or algorithmic hiring, might be swept into these categories, creating compliance costs that are both disproportionate and unclear.


A central tension lies in balancing responsible AI development against global competitiveness. Critics worry that companies in the U.S. and China, which face less rigid oversight, will gain a commercial edge during this two-year window. There is also concern that startups and SMEs may lack the legal and technical resources to navigate compliance, even if their products are not intended for high-risk uses.


As reported in the Financial Times, some executives view the current climate as imposing a regulatory tax on European AI innovation while lacking the specificity needed to enable confident product deployment.


What to Watch Next

The final version of the Code of Practice is expected ahead of the August 2025 deadline. That document will likely clarify expectations for general-purpose AI developers and may offer simplified pathways for demonstrating compliance. Still, the fundamental structure of the Act is unlikely to change.


Until then, businesses operating or selling in the EU should assess whether their systems might fall under the Act’s high-risk classifications. They should also begin documenting training data, risk mitigation efforts, and human oversight procedures for affected systems.


The EU has made clear that enforcement will proceed on schedule. However, as companies prepare for compliance, many are left hoping that the Commission's promised guidance arrives in time to offer meaningful clarity.
