What you need to know about the General-Purpose AI Code of Practice

31 July 2025 Deborah Mercier

Artificial intelligence (AI) is reshaping the regulatory landscape, and the European Union (EU) is setting the pace. The new General-Purpose AI Code of Practice, published July 10, 2025, serves as a voluntary guide for providers of certain types of AI models to align with the EU AI Act’s upcoming requirements. The Code is currently under review with the European Commission and Member States. Once they endorse it, providers can begin adhering to it.

Before we get into the details, let’s unpack two big questions: What is a general-purpose AI model? And who’s considered a provider under the AI Act?

What is a general-purpose AI model?

A general-purpose AI (GPAI) model is a model designed to perform a wide range of tasks, rather than being built for a single, narrow application. Under the AI Act, these are models that serve multiple purposes, whether used directly by end-users or integrated as a component into other AI systems or products. They may or may not pose systemic risks, as defined by the Act.

Likely GPAI models include, for example:

  • Large language models (LLMs) like GPT-4, Claude, Gemini, and LLaMA that can generate text, summarize documents, translate languages, write code, and more
  • Multimodal foundation models like GPT-4 Vision or Gemini Vision, which can process and generate content across text, images, and (in some cases) audio
  • Speech models like Whisper that can transcribe and translate audio for a wide variety of downstream applications
  • Image generation models like DALL·E 3, Midjourney, or Stable Diffusion, which can be used for creative design, marketing, product prototyping, and more
  • Robotics foundation models under development that integrate computer vision and control to enable broad physical tasks (e.g., warehouse picking, manufacturing assembly, autonomous movement)
  • Tool-using AI agents, such as open-source AutoGPT or enterprise orchestration agents, that can integrate with APIs and tools to perform multi-step workflows

None of these models is limited to a single fixed task: each serves as a building block for multiple products, services, and AI systems across sectors. That breadth of capability and reach is precisely what brings heightened risks and regulatory focus.

Who are providers?

(Or in other words, who does this apply to?)

The Code is generally intended for providers—those who originally develop general-purpose AI models and put them on the EU market, whether for free or otherwise, and regardless of where the provider is located. 

But don't tune out yet if you're not an AI developer. Organizations that use GPAI models can be considered “downstream providers” under the AI Act. That means even if you’re not building the technology, the Code—and the transparency information it facilitates—could affect your ability to meet your own compliance obligations.

Why was this Code created?

GPAI models underpin many systems used in the EU. The Code was drafted by 13 independent experts, incorporating feedback from over 1,000 stakeholders—including industry leaders, SMEs, academics, AI safety experts, rightsholders, and civil society organizations—to:

  • Promote safety, transparency, and copyright compliance
  • Provide clear documentation for regulators and downstream providers
  • Offer providers a structured way to demonstrate alignment with the AI Act

What does the GPAI Code cover?

The Code is organized into three chapters:

  1. Transparency: Requires clear documentation of model information using a standardized Model Documentation Form, simplifying compliance for providers and enabling transparency for users and downstream providers.
  2. Copyright: Provides practical solutions for AI providers to comply with EU copyright rules, especially relevant for models trained on vast datasets that may include copyrighted material.
  3. Safety and Security: Provides guidance for GPAI models with systemic risks (for example, models with a risk of loss of control or harmful manipulation) to help those providers meet their additional obligations under the AI Act. Systemic risks are those that:
    • Scale with model capability or reach
    • Emerge rapidly, potentially outpacing mitigation efforts
    • Trigger cascading harms or irreversible impacts

How will compliance be assessed?

The AI Act obligations for GPAI model providers take effect August 2, 2025. Enforcement begins:

  • August 2, 2026 for new models
  • August 2, 2027 for models already on the market

Providers who adhere to the Code will benefit from:

  • A streamlined compliance pathway
  • Reduced administrative burden
  • Regulatory consideration of their good-faith efforts, which may result in reduced fines

What does this mean for compliance teams?

  • Assess your exposure: Even if your organization is not developing AI, your suppliers or business partners may be affected. Review which AI models and tools you're using, and watch for transparency-related information from those providers that may feed into your internal risk assessments or that you may need in order to comply with your own EU AI Act obligations.
  • Review AI risk management frameworks: Ensure systemic risks—especially manipulation, misinformation, and loss of control—are part of your assessments.
  • Prepare for transparency demands: Data lineage, documentation, and explainability will become essential compliance capabilities.
  • Monitor upcoming guidelines: The European Commission will publish clarifying guidelines later this month to define scope and obligations.

The bottom line?

The GPAI Code of Practice reflects a proactive approach to AI governance. For compliance teams, it’s a signal to expand your regulatory horizon, embed AI risks into your enterprise risk management, and support your business in meeting these emerging standards with confidence.

Deborah Mercier, Senior Compliance Counsel, is a licensed attorney with over 13 years of experience in the compliance field, spanning a diverse range of sectors. She is deeply committed to developing engaging and effective ethics and compliance training programs and helping organizations align their business objectives with legal and regulatory requirements.
