August 2, 2025, marked a significant milestone for the European Union’s AI Act. The EU’s AI Office officially became operational, and an AI Board of member‑state representatives began advising the European Commission. At the same time, member states were required to designate national market surveillance and notifying authorities and to report on those authorities’ resources. The Act also launched a scientific panel of independent experts to advise on systemic risks posed by general‑purpose AI (GPAI) models.
The new rules impose several baseline obligations on providers of GPAI models. From 2 August 2025, providers must create and maintain technical documentation describing the model’s architecture and training process, establish policies to ensure compliance with EU copyright law, and publish a sufficiently detailed summary of the content used to train the model. These requirements apply regardless of whether the model is considered to pose systemic risk.
For GPAI models with “high‑impact capabilities” – presumed under the Act when a model’s cumulative training compute exceeds 10²⁵ floating‑point operations (FLOPs) – stricter requirements apply. Providers must implement risk‑management policies, conduct model evaluations, ensure adequate cybersecurity protections and report serious incidents to the AI Office. To support compliance, the European Commission has published a GPAI Code of Practice and accompanying guidelines. Penalties under the Act are substantial: fines reach up to €35 million or 7 % of global annual turnover, whichever is higher, for the most serious violations, while GPAI providers specifically face fines of up to €15 million or 3 % of global annual turnover. Businesses developing or deploying AI in the EU should review these obligations carefully.
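To make the numeric rules concrete, here is a minimal illustrative sketch of the two calculations described above: the compute threshold at which the Act presumes high‑impact capabilities (Article 51), and the “fixed amount or percentage of turnover, whichever is higher” fine cap that applies to GPAI providers (Article 101). The function names are hypothetical and not part of any official tooling; the figures are taken from the Act’s text.

```python
# Illustrative sketch only. Thresholds and fine tiers reflect the AI Act's
# text (Art. 51 and Art. 101); the function names are hypothetical.

SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold for "high-impact capabilities"


def is_presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model is presumed to have high-impact capabilities when its
    cumulative training compute exceeds 10^25 FLOPs."""
    return training_flops > SYSTEMIC_RISK_FLOPS


def gpai_fine_cap(global_annual_turnover_eur: float) -> float:
    """Maximum fine for GPAI-provider violations: EUR 15 million or 3 % of
    global annual worldwide turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)


# Example: for a provider with EUR 2 billion turnover, 3 % (EUR 60 million)
# exceeds the EUR 15 million floor, so the higher figure applies.
print(gpai_fine_cap(2_000_000_000))  # → 60000000.0
```

The `max()` captures the “whichever is higher” wording: small firms are bounded by the fixed amount, large firms by the turnover percentage.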