Artificial intelligence is evidently no longer just a tool for technical tasks – it is quickly becoming a major part of everyday life, work, and even decision-making processes across organizations. At the center of this transformation is General-Purpose AI (GPAI), a new class of AI systems designed with adaptability in mind.
Unlike traditional AI, which is trained to perform one specific function, such as, for example, scanning invoices or classifying images, GPAI models can tackle a wide range of tasks, from writing legal summaries and drafting emails to generating code, creating images, and even reasoning through complex questions. This remarkable flexibility makes GPAI incredibly powerful and useful across countless sectors, enabling new levels of productivity, creativity, and efficiency. However, it also brings significant challenges: as these models can be used in ways their developers never fully anticipated, they are far more difficult to control, monitor, and regulate, raising questions about accountability, transparency, and potential misuse.
Content
As the European Union moves forward with the EU Artificial Intelligence Act (EU AI Act or AI Act) – the first major legal framework of its kind – it has become clear that GPAI requires dedicated rules, and as a result, it has taken center stage in the EU’s approach to AI regulation. The original idea behind the EU AI Act was to classify AI systems based on the risks they pose in specific sectors. Still, with the rise of GPAI, which can be used in many ways across different industries, the rules have had to evolve. EU lawmakers now aim to make sure that these powerful systems are developed and used responsibly, with clear rules to support safety, transparency, and fundamental rights.
What is GPAI?
As the European Union moves forward with the EU Artificial Intelligence Act (EU AI Act or AI Act) – the first major legal framework of its kind – it has become clear that GPAI requires dedicated rules, and as a result, it has taken center stage in the EU’s approach to AI regulation. The original idea behind the EU AI Act was to classify AI systems based on the risks they pose in specific sectors. Still, with the rise of GPAI, which can be used in many ways across different industries, the rules have had to evolve. EU lawmakers now aim to make sure that these powerful systems are developed and used responsibly, with clear rules to support safety, transparency, and fundamental rights.
According to Recital 98 and Recital 99 of the AI Act, GPAI models typically have at least a billion parameters and can generate or process content such as text, audio, video, or images.
Even if a model is specialized for certain tasks or operates within a single domain, such as text processing, it can still be classified as GPAI if it is capable of supporting a wide variety of uses within that area – for example, generating meeting summaries, drafting internal compliance updates, producing blog content, and creating client email drafts, all within the scope of text.
This broad applicability within one modality demonstrates the general-purpose nature of the model. However, because GPAI encompasses a diverse and rapidly evolving set of models constantly being updated, improved, and adapted for new applications, regulators are still developing more detailed and practical guidance to clearly define which models fall under GPAI and how they should be governed in practice.
Systemic Risk: Why GPAI Needs Special Rules
The EU AI Act initially focused on a risk-based framework, assigning obligations based on the sector and intended use of an AI system. However, GPAI does not fit neatly into the original risk-based approach – one model can power thousands of different applications, including both low-risk and high-risk, which makes it hard to regulate only at the application level. As these powerful models continue to improve, the risk of serious impacts, such as spreading misinformation or affecting democratic processes, has become more real.
The AI Act recognizes that general-purpose AI models are not all created equal when it comes to their potential impact on society. To address this, it introduces a crucial distinction between GPAI models that pose systemic risk and those that do not.
A model is classified as having systemic risk if it has high-impact capabilities. This includes:
- Models that have been trained using immense computing power, specifically, those exceeding a threshold of 1025 floating-point operations, which indicates their advanced scale and capability. Training at this magnitude typically involves massive datasets and powerful infrastructure, enabling the model to develop capabilities that can affect society at scale, such as generating highly realistic content, reasoning across multiple domains, or supporting critical decision-making processes with minimal human oversight.
- Additionally, even if a model does not exceed this computational threshold, it can still be designated as systemic risk if the EU AI Office determines that its capabilities are powerful enough to potentially affect society at scale. This means that the assessment is not solely based on technical measures like compute power but also takes into account the real-world influence and potential uses of the model. The AI Act explicitly sets out a list of factors to consider when determining whether a GPAI model qualifies as systemic risk, including its size, the quality and volume of its training data, performance benchmarks, input and output modalities, market impact within the EU, and other indicators of its overall capabilities and reach. For example, the model may be considered as GPAI with systemic risk if it has at least 10,000 users in the EU.
In practice, models falling under this category are those that could, for example, influence democratic processes by generating targeted disinformation, pose risks to public safety by enabling sophisticated cyber-attacks, or widely spread illegal or harmful content with insignificant human oversight. By setting these criteria, the AI Act aims to ensure that the most powerful and potentially impactful AI models are subject to stricter obligations and oversight, to protect public interests and uphold fundamental rights in an AI-driven society.
Recognizing these risks, EU lawmakers introduced dedicated rules for GPAI under the AI Act. These rules aim to ensure that GPAI developers meet core obligations such as technical documentation, transparency about training data, and prevention of unlawful content generation.
Key Obligations for all GPAI Providers
Under the EU AI Act, all providers of general-purpose AI (GPAI) models have significant obligations, regardless of whether their models are classified as posing systemic risk.
- Firstly, they must keep comprehensive technical documentation up to date, covering every stage of the model’s lifecycle – from training data and methods used, to testing processes and evaluation results. This documentation serves as proof of compliance and must be readily available to the AI Office or national competent authorities upon request.
- Additionally, providers are required to share clear, detailed, and regularly updated information with AI system developers who integrate their models into downstream applications. As a result, it is ensured that these developers understand what the model can and cannot do, supporting responsible and lawful use, while also protecting any confidential business information or trade secrets involved.
- Providers must also implement robust copyright compliance policies, using state-of-the-art technologies to identify and respect the rights of creators who have opted out of having their works used for AI training, in line with EU copyright rules. This requirement is crucial for upholding intellectual property rights and avoiding legal disputes over training data use.
- Finally, GPAI providers are obligated to publish a sufficiently detailed summary of the data used to train their models, following the template provided by the AI Office. This promotes transparency and accountability, giving the public and regulators insight into the sources and nature of the data that underpin these powerful AI systems, and building greater trust in their deployment across society.
These requirements are part of the AI Act’s phased compliance timeline, which was initiated when the Act entered into force on 1 August 2024. Under this structured approach, while the prohibitions on unacceptable-risk AI systems took effect from 2 February 2025, the key date for GPAI providers is 2 August 2025, when their specific obligations, along with governance measures, notification duties, confidentiality provisions, and most penalties, become enforceable.
This will be followed by full enforcement of general compliance measures from 2 August 2026, with the final phase requiring mandatory compliance for high-risk AI systems and existing GPAI models by 2 August 2027. This timeline ensures that GPAI providers have a clear pathway to prepare for and align with their new responsibilities under the AI Act.
Additional Duties for Systemic GPAI Models
When a general-purpose AI model is classified as posing systemic risk, its providers face a significantly extended set of obligations under the EU AI Act, beyond the baseline requirements that apply to all GPAI models.
Providers must conduct comprehensive evaluations of their models, using standardized and state-of-the-art protocols to ensure that the model operates safely and as intended. This includes carrying out adversarial testing, such as red teaming, to identify vulnerabilities or ways in which the model could be misused or manipulated to produce harmful outputs. These evaluations are crucial for understanding the risks that a powerful GPAI model might pose, both intentionally and unintentionally.
Providers are also required to assess and mitigate systemic risks at the EU level. This includes looking at potential dangers not just in isolated use cases but across the model’s entire lifecycle – from its initial development and training methods, to how it is placed on the market, and how it is ultimately used by downstream developers and end-users. The goal is to ensure that any risks that could threaten public safety, democratic processes, or economic stability are proactively identified and reduced.
In addition to these proactive measures, providers have a duty to monitor their models continuously, keeping track of their real-world performance and documenting any serious incidents, such as failures or unexpected harmful outputs. They must promptly report such incidents, along with any corrective actions taken, to the AI Office and, where appropriate, to competent national authorities. This enables regulators to remain informed of potential threats and ensure accountability.
Finally, providers of systemic GPAI models must maintain a high level of cybersecurity protection for both the model itself and its supporting infrastructure. This is essential to prevent security breaches, unauthorized access, or misuse by malicious actors who could exploit the model’s powerful capabilities to cause widespread harm.
GPAI Guidelines & Codes of Best Practice
As the deadline for the EU AI Act’s rules on general-purpose AI (GPAI) models approaches, companies are eager to understand exactly what steps they need to take to ensure compliance. To help providers understand their obligations and implement the new requirements effectively, two key initiatives have been launched: the GPAI guidelines and the upcoming codes of best practice.
a) GPAI Guidelines
In April 2025, the EU AI Office published draft guidelines to provide much-needed clarity on the upcoming obligations for providers of general-purpose AI (GPAI) models under the AI Act. These rules, set to apply to all GPAI models released after August 2, 2025, represent a significant compliance milestone for AI companies operating within the EU. This date forms part of the AI Act’s structured timeline, which gradually introduces obligations to give providers time to adapt, with full compliance for existing GPAI models required by 2 August 2027.
The guidelines not only outline the core obligations for GPAI providers but also highlight that companies modifying or fine-tuning third-party models may trigger additional compliance duties if their compute usage during modification exceeds certain thresholds. This means that even companies that are not building models from scratch but are instead adapting existing models will have to carefully assess whether their modifications result in the creation of a new GPAI model, requiring them to prepare separate technical documentation, publish updated training data summaries, and ensure full compliance with the AI Act’s requirements.
b) Codes of Best Practice
To support compliance and reduce the burden on providers, the EU AI Act promotes the development of codes of best practice tailored for GPAI models. These codes aim to bridge the gap between the Act’s complex legal requirements and the daily operational realities that AI companies face, translating high-level rules into clear, actionable measures for workflows, governance, and technical processes.
The EU AI Office coordinates this effort, working with national regulators, GPAI providers, industry representatives, and experts to ensure the codes reflect both technical capabilities and legal and societal expectations. Although voluntary, following these codes will be an important way for providers to demonstrate good-faith compliance and meet the EU’s high standards for safe and trustworthy AI. This approach promotes consistency and clarity across the EU, avoids fragmented national interpretations, and helps build trust with users, regulators, and the public by showing that providers follow recognized best practices for transparency, risk management, and responsible AI development.
The first codes of conduct were initially scheduled for May 2025, after an iterative drafting process launched with the AI Act’s entry into force in August 2024. Although finalization has taken longer due to extensive stakeholder input, the codes are expected soon, giving GPAI providers practical guidance to align their systems and compliance strategies with the new EU framework.
Non-Compliance with the GPAI Rules
The EU AI Act imposes significant penalties for non-compliance with its rules, including those specific to general-purpose AI (GPAI) models. Providers that fail to meet their obligations – whether related to technical documentation, transparency, copyright compliance, risk assessments, or systemic risk duties – may face administrative fines imposed by national authorities or the AI Office.
For GPAI-related breaches, the fines can reach up to EUR 15 million or 3% of the company’s total worldwide annual turnover, whichever is higher. The AI Act establishes these penalties to ensure that providers take their obligations seriously, reflecting the potential societal impact of powerful GPAI models. In cases involving systemic risk GPAI models, where failures could lead to large-scale harm, the risk of enforcement actions and higher fines is even greater.
Navigating the GPAI Regulatory Era
The introduction of GPAI-specific rules under the EU AI Act marks a turning point in AI governance, shaping how powerful and versatile models are developed, integrated, and used within the EU and beyond. As GPAI becomes embedded in legal, corporate, and creative workflows, its promise is matched by new responsibilities to protect fundamental rights, public safety, and societal trust.
For AI providers, these obligations are not just legal hurdles but an opportunity to build models and services grounded in transparency, security, and ethical use. By embracing the guidelines, codes of best practice, and evolving compliance expectations, companies can position themselves as trusted innovators in an AI-driven market where accountability is no longer optional – it is a business imperative.