Practical guide on how to apply the AI Act for businesses

This Guide is designed to help companies that make, market, put into service, or deploy AI systems. It introduces the reader to the key aspects of the AI Act and is for informational and guidance purposes only. Its content does not constitute, and cannot be interpreted as, legal assistance, a recommendation, or professional advice tailored to a specific situation. Although the information presented has been carefully prepared based on the legislative framework available at the time of writing, no guarantee is given as to its accuracy, timeliness, or completeness.

The application of the AI Act may vary depending on the specifics of each company, and compliance requires an individualized analysis, particularly when making legal or compliance decisions. We recommend consulting a lawyer or specialized advisor. The authors of this guide assume no responsibility for any consequences resulting from the use of the information contained herein without further verification and appropriate advice.

Introduction

Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence, referred to as the AI Act, is the act that regulates artificial intelligence within the European Union. It aims to strike a balance between promoting the adoption of artificial intelligence, increasing investment and supporting innovation in the field, while ensuring a high level of protection of health, safety and fundamental rights, including democracy, the rule of law and the environment, against the harmful effects of AI systems in the Union.

The AI Act is being implemented in stages, with requirements becoming mandatory progressively through the end of 2030. As of February 2, 2025, a number of AI practices are banned and AI literacy requirements for the safe use of AI have become mandatory. The AI Act differentiates obligations according to criteria such as the degree of risk, the role of the economic operator, and the type of AI placed on the market or put into service, assigning distinct duties to each actor in the AI ecosystem. To see how the legal framework works in practice, we will look at the key aspects of the AI Act: scope, types of AI covered, risk categories, the operators involved in the AI lifecycle and their obligations, the evaluation, certification, monitoring, and reporting procedures, internal governance, and the penalty and accountability system.

  1. AI systems application domain and classification

The AI Act sets out clear rules for the marketing, deployment, and use of AI, prohibits certain AI practices that pose unacceptable risks, establishes specific requirements for high-risk AI systems and obligations for their operators, sets criteria for transparency, monitoring, supervision, and governance, and proposes measures to support innovation.

The AI Act applies to the following entities:

  • Providers placing AI systems on the market or putting them into service, or placing general-purpose AI models on the market in the EU, regardless of their location;
  • Deployers established or located in the EU;
  • Providers and deployers established in a third country, where the output produced by the AI system is used in the EU;
  • Importers and distributors of AI systems;
  • Manufacturers of products that place an AI system on the market or put it into service together with their product and under their own name or trademark;
  • Authorized representatives of providers not established in the EU;
  • Affected persons located in the EU.

Details on the legally defined roles can be found in Chapter 2 of this Guide.

The AI Act does not apply to:

  • AI developed or used exclusively for military, defense, or national security purposes, regardless of the type of entity carrying out those activities;
  • AI systems and models, and their outputs, developed and used exclusively for scientific research and development;
  • research, testing, or development activities related to AI before it is placed on the market or put into service;
  • natural persons using AI in the course of purely personal, non-professional activities;
  • AI systems released under free and open-source licenses, unless they fall within the prohibited, high-risk, or transparency-related categories.

1.1 Legal definition of AI

The AI Act defines an AI system as follows:

“An AI system means a machine-based system that is designed to operate with varying levels of autonomy and may exhibit adaptability after deployment, and that, pursuing explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The definition takes a perspective based on the life cycle of an AI system, which comprises two main phases:

1. The pre-implementation or “design” phase of the system;

2. The post-implementation or “use” phase of the system;

Not all elements of the definition need to be present in both phases of the life of the system. Instead, the definition recognises that certain elements may occur in one phase but may not persist in both phases. This approach to defining an AI system reflects the complexity and diversity of systems, ensuring that the definition aligns with the objectives of the AI Act by including a wide range of AI systems. [Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689, Brussels, 29.7.2025]

What is the meaning behind each element of the definition?

  1. The term “machine-based” refers to the fact that AI systems are developed with machines and run on them. The term “machine” can be understood to include both the hardware and software components that enable the AI system to function. All AI systems are machine-based because they require machines to enable their operation, such as model training, data processing, predictive modeling, and large-scale automated decision-making.
  2. The phrase “varying levels of autonomy” means that AI systems are designed to operate with “a certain degree of independence of action from human involvement and certain capabilities to operate without human intervention.”
  3. Adaptability refers to self-learning capabilities that allow the system’s behavior to change during use. The new behavior of the adapted system may produce different results from the previous system for the same input data.
  4. Explicit objectives are clearly stated goals encoded directly into the system by the developer. Implicit objectives are goals that are not explicitly stated but can be inferred from the system’s behavior or underlying assumptions. These objectives may result from the training data or from the AI system’s interaction with its environment.
  5. The capacity to infer is an essential and indispensable condition that differentiates AI systems from other types of systems. It refers to the fact that the AI system must be able to infer, from the input it receives, how to generate outputs.

The legal definition covers AI systems based on machine learning, which learn from data how to achieve certain objectives, and systems based on logic and knowledge, which draw inferences from encoded knowledge or a symbolic representation of the task to be solved. Machine learning approaches include supervised learning, unsupervised learning, self-supervised learning, reinforcement learning, and deep learning. Logic- and knowledge-based approaches include knowledge representation, inductive (logic) programming, knowledge bases, inference and deduction engines, symbolic reasoning, expert systems, and search and optimization methods.

The legal definition does not cover systems for improving mathematical optimization (e.g., linear or logistic regression), simple data processing, systems designed exclusively for descriptive analysis, hypothesis testing, and visualization, systems based on classical heuristics, or simple prediction systems.

Determining whether a system is an artificial intelligence system should be based on the specific architecture and functionality of a particular system and should take into account all elements of the definition. It is not possible to automatically determine or provide an exhaustive list of systems that fall within or outside the definition of an AI system.

Only certain AI systems are subject to regulatory obligations and oversight under the AI Act. The risk-based approach means that only systems posing the most serious risks to fundamental rights and freedoms are subject to the prohibitions provided for in Article 5, the strict compliance regime for high-risk AI systems under Article 6, and the transparency requirements for the limited set of systems covered by Article 50. The vast majority of systems, although they may qualify as AI systems within the meaning of the definition, are not subject to any regulatory requirements.

1.2 Types of AI

The AI Act classifies artificial intelligence according to the risk it poses.

These categories are:

  1. AI systems with unacceptable risk: these practices are completely prohibited because they contravene the fundamental values of the Union or involve very high risks. [Chapter II of the AI Act]
  2. High-risk AI systems: practices that are permitted, but subject to compliance with all obligations: risk management, transparency, security, monitoring, human oversight. [Chapter III of the AI Act]
  3. Limited-risk systems: practices subject to lighter transparency obligations: providers and deployers must ensure that end users are aware that they are interacting with AI tools (e.g., chatbots and deepfakes);
  4. Systems with minimal or negligible risk: the least problematic AI systems from a regulatory perspective, including most everyday AI applications that do not significantly affect the fundamental rights or health and safety of users (AI-based video games, spam filters). The AI Act does not provide any special rules for these.
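As an illustration only, the four tiers can be sketched as a simple lookup. The category descriptions below paraphrase the Act, and the example systems are assumptions for this sketch, not classifications taken from the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (descriptions paraphrased, sketch only)."""
    UNACCEPTABLE = "prohibited outright (Chapter II)"
    HIGH = "permitted subject to full compliance obligations (Chapter III)"
    LIMITED = "permitted subject to transparency obligations"
    MINIMAL = "no special rules under the AI Act"

# Hypothetical examples for illustration only -- real classification
# requires a case-by-case legal assessment.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening recruitment tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Return a one-line summary of the regulatory consequence."""
    tier = EXAMPLES[system]
    return f"{system}: {tier.name} risk -> {tier.value}"
```

For example, `obligations("email spam filter")` yields "email spam filter: MINIMAL risk -> no special rules under the AI Act".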

Other requirements apply to certain AI systems and models designed to fulfill specific tasks:

  • General-purpose AI models and systems: specific information and risk minimization requirements are provided for these. A general-purpose AI model is an AI model trained on a large amount of data using self-supervision at scale, which displays significant generality and is capable of competently performing a wide range of distinct tasks, and which can be integrated into a variety of systems or applications. A general-purpose AI system is a system based on a general-purpose AI model that has the capability to serve a variety of purposes, both for direct use and for integration into other AI systems. [Chapter V of the AI Act]
  • Generative AI and Chatbots: transparency requirements are provided depending on the system’s classification in the high-risk category. Generative artificial intelligence represents AI systems or models created for the purpose of generating synthetic content in audio, image, video or text format. Chatbots are systems created for interaction. [Chapter IV of the AI Act]
2. Legally enshrined roles

To determine precisely which obligations fall on a company, it must be classified into one of the roles provided for by the Act. The AI Act distinguishes between providers, deployers, importers and distributors, each with clearly distinct obligations.

  • The provider is a person or organization that develops an AI system or a general-purpose AI model or commissions its development and places it on the market or puts it into service under its own name or trademark. Placing on the market or putting into service may occur for a fee or free of charge.
  • The deployer is a person or organization that uses an AI system under its authority. This does not include the situation where the system is used in the course of a personal, non-professional activity.
  • The importer is a person located or established in the EU who places on the market an AI system bearing the name or trademark of a person established in a third country.
  • The distributor is a person in the supply chain, other than the provider or importer, who makes an AI system available on the EU market.

Most of the obligations provided by the AI Act fall on providers.

However, there are situations where a distributor, importer, deployer or other third party will be considered a provider of a high-risk AI system, at which point the obligations incumbent on the provider will also become applicable. These situations are determined by:

1) Rebranding – putting their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating otherwise;

2) Substantial modification – making a substantial modification to a high-risk AI system already placed on the market or put into service, such that it remains high-risk;

3) Modification of the intended purpose – modifying the intended purpose of an AI system (including a general-purpose one) that was not high-risk, in such a way that it becomes high-risk.

3. Prohibited AI systems

The following AI practices are prohibited as of February 2, 2025:

  • Use of subliminal, manipulative or deceptive techniques to distort behavior and affect informed decision-making, causing significant harm;
  • Exploitation of vulnerabilities related to age, disability or socio-economic circumstances to distort behavior, causing significant harm;
  • Social scoring systems that result in unfavorable or prejudicial treatment in unjustified or disproportionate contexts;
  • Risk assessment that a person will commit offenses solely based on profiling or personality traits, except when used to complement human assessments based on objective, verifiable facts directly linked to criminal activity;
  • Creation or expansion of facial recognition databases through indiscriminate extraction of facial images from the internet or from CCTV recordings;
  • Inference of people’s emotions in the workplace or in educational settings, except for medical or safety reasons;
  • Biometric categorization systems that infer sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except for labeling or filtering legally acquired biometric datasets or when law enforcement authorities classify biometric data;
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except when used for:
      – searching for missing persons, victims of kidnapping, and victims of human trafficking or sexual exploitation;
      – preventing a substantial and imminent threat to life or a foreseeable terrorist attack; or
      – identifying suspects or defendants in serious crimes (e.g. murder, rape, illegal trafficking in substances, persons, organs, weapons and ammunition, organized crime and environmental offenses).

These exceptions permitted by the AI Act are subject to several conditions that must be met for use of the system to be justified:

  • Use is justified only where non-use of the tool would cause considerable harm, and the rights and freedoms of affected persons must be taken into account;
  • Conducting a fundamental rights impact assessment before deployment;
  • Registration of the AI system in the EU database before use, except in the case of a justified emergency, when deployment can begin without registration, provided that registration takes place subsequently without unjustified delays;
  • Obtaining authorization from an independent judicial or administrative authority, except in emergency situations, when deployment can begin without authorization, provided that authorization is requested within 24 hours; if authorization is rejected, deployment must cease immediately and all output data and results obtained must be deleted.
4. High-risk AI systems

High-risk artificial intelligence systems can pose risks to health, safety and fundamental rights, such as the right to privacy or the rights to dignity and non-discrimination. They fall into two categories: high-risk products and high-risk applications.

A product that has an AI system as a safety component, or that is itself covered by the EU legislation listed below and subject to mandatory third-party conformity assessment under that legislation, is considered a high-risk product covered by the requirements of the AI Act. This category includes the following products:

  • Machinery (Directive 2006/42/EC)
  • Toys (Directive 2009/48/EC)
  • Recreational craft and personal watercraft (Directive 2013/53/EU)
  • Elevators (Directive 2014/33/EU)
  • Equipment and protective systems intended for use in potentially explosive atmospheres (Directive 2014/34/EU)
  • Radio equipment (Directive 2014/53/EU)
  • Pressure equipment (Directive 2014/68/EU)
  • Cableway installations (Regulation (EU) 2016/424)
  • Personal protective equipment (Regulation (EU) 2016/425)
  • Appliances burning gaseous fuels (Regulation (EU) 2016/426)
  • Medical devices (Regulation (EU) 2017/745)
  • In vitro diagnostic medical devices (Regulation (EU) 2017/746)

In addition, the Act lists a further series of products and areas that are likewise considered high-risk but are not subject to direct requirements under its provisions; the rules applicable to these products will be specified at a later stage. These products and areas are:

  • Civil aviation security (Regulation (EC) No 300/2008, Regulation (EU) 2018/1139)
  • Two or three-wheel vehicles and quadricycles (Regulation (EU) No 168/2013)
  • Agricultural and forestry vehicles (Regulation (EU) No 167/2013)
  • Marine equipment (Directive 2014/90/EU)
  • The European railway system (Directive (EU) 2016/797)
  • Motor vehicles and trailers and systems, components intended for them (Regulation (EU) 2018/858, Regulation (EU) 2019/2144)

High-risk applications are listed exhaustively in the Act and cover the following areas:

a. Non-prohibited biometrics:

– remote biometric identification systems, excluding 1-to-1 biometric verification;

– biometric categorization systems that infer sensitive or protected attributes or characteristics;

– emotion recognition systems.

b. Critical infrastructure: safety components in the management and operation of critical digital infrastructure, road traffic and supply of water, gas, heating and electricity;

c. Education and vocational training:

– AI systems that determine access, admission or allocation to educational and vocational training institutions at all levels;

– Systems for assessing learning outcomes, including those used to guide the student’s learning process;

– Systems for assessing the appropriate level of education for a person;

– Systems for monitoring and detecting prohibited behavior of students during tests;

d. Employment, workers management and access to self-employment:

– Recruitment and selection systems, in particular for placing targeted job advertisements, analyzing and filtering applications, and evaluating candidates;

– Systems for decisions on promotion and termination of contracts, for task allocation based on behavior or personal traits and characteristics, and for monitoring and evaluating performance and behavior;

e. Access to and enjoyment of essential public and private services:

– Systems used by public authorities to assess eligibility for benefits and services, including allocation, reduction, revocation or recovery thereof;

– Creditworthiness assessment systems, except for financial fraud detection;

– Systems for assessing and classifying emergency calls, including prioritizing the dispatch of emergency first response services such as police, firefighters and medical aid, and for urgent patient triage;

– Risk assessment and pricing systems in health and life insurance;

f. Law enforcement:

– Systems used to assess the risk of a person becoming a victim of a crime;

– Polygraphs;

– Assessment of the reliability of evidence during criminal investigations or prosecutions;

– Assessment of the risk of a person committing a crime or recidivism not exclusively based on profiling or assessment of personality traits or previous criminal behavior;

– Profiling during detection, investigation or criminal prosecution;

g. Migration, asylum and border control management:

– Polygraphs;

– Assessment of risks of irregular migration or health risks;

– Examination of applications for asylum, visas and residence permits, as well as related complaints regarding eligibility;

– Detection, recognition or identification of persons, except for verification of travel documents;

h. Administration of justice and democratic processes:

– Systems used in researching and interpreting facts and in applying the law to concrete facts, or used in alternative dispute resolution;

– Systems for influencing the outcome of elections and referendums or voting behavior, excluding systems whose outputs people are not directly exposed to, such as tools used to organize, optimize and structure political campaigns;

The applications listed above are considered high-risk unless the system:

  • performs a narrow procedural task;
  • improves the result of a previously completed human activity;
  • detects decision-making patterns or deviations from previous decision-making patterns and is not intended to replace or influence a previously completed human assessment without adequate human review; or
  • performs a task preparatory to an assessment relevant for the listed use cases.

Applications are always considered high-risk if they profile natural persons, that is, if they automatically process personal data to evaluate aspects of a person’s life such as work performance, economic situation, health, preferences, interests, reliability, behavior, location or movements.

Providers whose AI system falls within the list of applications, but who consider that their application does not present a high risk, must document this assessment before placing the system on the market or putting it into service.

4.1 Requirements

The requirements enter into force for providers of high-risk applications on August 2, 2026, and for providers of high-risk products on August 2, 2027. Companies therefore still have time to ensure compliance with the legal requirements.

Risk management system

A risk management system must be established, implemented, documented and maintained in connection with the AI system. The system must comply with the following rules:

1. The risk management system must be a continuous process, kept up to date throughout the entire lifecycle of the AI system, covering both the design and use phases.

2. The targeted companies must identify and assess known and foreseeable risks to health, safety and fundamental rights, taking into account the intended use and reasonably foreseeable misuses.

3. Risks must be assessed and appropriate measures applied to reduce them to an acceptable level, proportionate to the potential impact.

4. The most appropriate management measures must ensure the elimination or reduction of risks where technically feasible, the mitigation and control of risks that cannot be eliminated, and the provision of the necessary information and training to deployers.

5. AI systems are tested for the purpose of identifying the most appropriate specific management measures. Testing can be done under real conditions.

6. Testing is mandatory before placing on the market or putting into service and can take place at any time during the development process.

7. Whether the AI system is likely to have a negative impact on persons under 18 years of age and, where appropriate, on other vulnerable groups must be analyzed.

Data and data governance

Training, validation and testing datasets must comply with a series of governance and management practices, appropriate to the purpose of the system. These practices refer to:

  • Design choices and data collection processes and their origin;
  • Processing operations for data preparation;
  • Formulation of assumptions and assessment of the availability, quantity and adequacy of necessary datasets;
  • Identification of possible biases likely to affect health, safety, fundamental rights or lead to discrimination, as well as identification of measures to detect, prevent and mitigate possible biases found. Subject to compliance with certain conditions, the use of special categories of personal data is permitted to detect and correct biases.
  • Identification of relevant gaps that prevent compliance.

Datasets must be relevant, sufficiently representative, and, as far as possible, error-free and complete. They must also take into account the characteristics specific to the geographical, contextual, behavioral or functional setting in which the system will be used.
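As a practical aid, the governance practices listed above can be tracked in an internal record kept per dataset. The following sketch is illustrative; the field names and the example values are our own and are not prescribed by the AI Act:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetGovernanceRecord:
    """Internal record of the governance practices required for training,
    validation and test datasets (illustrative sketch, fields our own)."""
    name: str
    role: str                 # "training", "validation" or "testing"
    origin: str               # collection process and provenance
    preparation_steps: list = field(default_factory=list)   # labeling, cleaning...
    assumptions: list = field(default_factory=list)         # formulated assumptions
    bias_checks: list = field(default_factory=list)         # biases found + mitigations
    known_gaps: list = field(default_factory=list)          # gaps preventing compliance

# Hypothetical example record.
record = DatasetGovernanceRecord(
    name="loan-applications-2024",
    role="training",
    origin="internal CRM export, consented records only",
    preparation_steps=["deduplication", "anonymization of direct identifiers"],
    bias_checks=["approval-rate disparity checked across age bands"],
)
```

Keeping one such record per dataset gives auditors a single place to check each of the practices enumerated above.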

Technical documentation

Technical documentation must demonstrate the compliance of the high-risk AI system with the requirements of the AI Act. It must be prepared before the system is placed on the market or put into service and must be kept up to date.

Technical documentation must contain:

1. general description of the system, including its intended purpose, the provider’s name, instructions for use, hardware and software used, the interface used;

2. detailed description of the system elements and the development process: development stages, design specifications, system architecture, datasets used, human oversight and cybersecurity measures, validation and testing procedure;

3. information regarding monitoring, operation and control of the system, performance indicators, risk management system, modifications made throughout the life cycle;

4. applicable standards;

5. EU declaration of conformity.

SMEs, including start-ups, may provide the elements of technical documentation in a simplified manner, through a simplified form to be developed by the Commission.

For high-risk products that are also subject to other compliance regulations, a single set of technical documentation is prepared.

Traceability and logging

High-risk AI systems must allow the automatic recording of events (logs) relevant to identifying situations that may generate risk or substantially modify the system, and to its continuous monitoring.

High-risk biometric applications must log the period of each use of the system, the reference database against which input data was checked, the input data that produced a match, and the identities of the persons involved in verifying the results.
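The four mandatory log items map naturally onto a structured log entry. The sketch below is illustrative only; the JSON format, field names, and example values are assumptions, since the Act prescribes the content of the log, not its format:

```python
import json
from datetime import datetime, timezone

def biometric_log_entry(start, end, reference_db, match_input_ref, verifiers):
    """Build one log record covering the four items required for each use
    of a high-risk biometric system (illustrative format)."""
    return json.dumps({
        "use_period": {"start": start.isoformat(), "end": end.isoformat()},
        "reference_database": reference_db,
        "matching_input_ref": match_input_ref,  # pointer to input that produced a match
        "result_verified_by": verifiers,        # persons who verified the result
    })

# Hypothetical example entry.
entry = biometric_log_entry(
    start=datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc),
    end=datetime(2026, 3, 1, 9, 5, tzinfo=timezone.utc),
    reference_db="watchlist-v12",
    match_input_ref="frame-000482",
    verifiers=["verifier-1", "verifier-2"],
)
```

Whatever format is chosen, the record must remain retrievable for the retention periods discussed later in this guide.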

Transparency and information of users

The high-risk AI system must be sufficiently transparent to allow deployers to interpret its results and use them appropriately. To this end, the system must be accompanied by instructions for use containing concise, complete, correct and clear information that is relevant, accessible and easy to understand.

The instructions must contain at least the following main elements:

  • Identity and contact details of the provider or authorized representative;
  • Characteristics, capabilities and performance limitations of the system:

– intended purpose;

– level of accuracy, robustness and cybersecurity, their indicators and factors likely to influence them;

– any known or foreseeable risk of harm to health, safety or fundamental rights;

– technical capabilities and characteristics of the system;

– its performance;

– any relevant information regarding the training, validation and testing datasets used;

– information that allows deployers to interpret the system’s results and use them appropriately;

  • modifications made to the system and its performance, predetermined by the provider at the time of initial conformity assessment;
  • human oversight measures, including technical measures established to facilitate interpretation of results by deployers;
  • necessary computing resources, hardware, expected lifetime of the system, maintenance and care measures, their frequency, necessary for proper functioning;
  • description of mechanisms included in the system that allow deployers to collect, store and interpret log files.

Human oversight

High-risk AI systems must be designed and developed to include appropriate human-machine interface tools that allow effective human oversight. Oversight measures must be proportionate to the specific risks, the level of autonomy and the context of use. They are ensured through one or both of the following methods:

i. Measures identified and incorporated, when technically feasible, into the system by the provider before placing it on the market or putting it into service;

ii. Measures identified by the provider before placing it on the market or putting it into service appropriate to be implemented by the deployer;

Persons who oversee the system must possess, at least, the following abilities:

a) to understand the system’s capabilities and limitations, to monitor its operation;

b) to be aware of possible automation biases;

c) to correctly interpret the system’s results;

d) to decide not to use or to ignore, cancel or reverse the system’s results;

e) to stop the system safely.

In the case of remote biometric identification systems, it is prohibited to make any decision based on the identification produced by the system unless it has been separately verified and confirmed by at least two persons with the necessary competence, training and authority.

Robustness, accuracy and cybersecurity

The high-risk AI system must achieve an adequate level of accuracy, robustness and cybersecurity throughout its entire lifecycle. To meet this requirement, the following measures must be respected:

  • Accuracy levels and indicators must be found in the instructions for use;
  • The system must be as resilient as possible to errors, failures or inconsistencies;
  • System robustness can be ensured through backup or emergency operation plans;
  • Systems with continuous learning must include mechanisms to eliminate or reduce as far as possible the risk of biased feedback loops;
  • The system must be resilient to attempts by unauthorized third parties to exploit its vulnerabilities;
  • The system’s cybersecurity must be appropriate to the risks and circumstances, with measures against data poisoning, model poisoning, adversarial examples, model evasion, confidentiality attacks and model flaws.

The Commission will develop benchmarks and methodologies for measuring performance indicators.

5. Obligations

The AI Act establishes obligations for all operators involved in the lifecycle of AI systems.

The requirements for high-risk AI systems, together with the obligations of providers of general-purpose AI models, including those with systemic risk, are expected to be standardized in the coming years [AI Standards Work Programme]. Compliance with future harmonised standards will create a presumption of conformity with the corresponding provisions of the AI Act.

5.1 Obligations of providers of high-risk AI systems

Providers of high-risk AI systems are obliged:

  • To comply with the requirements of the AI Act;
  • To indicate on the system, on the packaging or accompanying documents, their name, their registered trade name or trademark, their contact address;
  • To have a quality management system, systematically and orderly documented in the form of written policies, procedures and instructions, a system proportional to the size of the organization;
  • To keep technical documentation, quality management system, documentation regarding modifications, decisions and documents approved by notified bodies, EU declaration of conformity for 10 years after placing on the market or putting into service of the system;
  • To keep automatically generated log files, when they are under their control for a period appropriate to the intended purpose, of at least 6 months;
  • To undergo the conformity assessment procedure before being placed on the market or put into service;
  • To prepare the EU conformity documentation;
  • To apply the CE marking on the system, on the packaging or in the accompanying documentation, to indicate conformity;
  • To register in the EU database;
  • To take necessary corrective measures in case of system non-conformity and to inform the competent market surveillance authorities and the notified body;
  • To demonstrate to competent national authorities the conformity of the system, by making available to them, at their reasoned request, all necessary information, documentation and log files;
  • To ensure compliance with the system’s accessibility requirements.

5.2 Obligations of deployers of high-risk AI systems

Deployers of high-risk AI systems are obliged:

  • To take technical and organizational actions to ensure that they use systems in accordance with instructions for use;
  • To entrust human oversight to competent natural persons, with necessary training and authority;
  • To ensure that input data are relevant and sufficiently representative in relation to the intended purpose, when they exercise control over them;
  • To monitor the system’s operation;
  • To inform the provider or distributor and the market surveillance authority, without undue delay, when the system presents a risk, and to suspend its use immediately; likewise to inform them when they have identified a serious incident;
  • To keep automatically generated log files, when these are under their control, for a period appropriate to the intended purpose, for at least 6 months;
  • To inform workers’ representatives and affected workers, before putting a high-risk AI system into service or using it, that they will be subject to its use, where the deployer is an employer;
  • To perform a data protection impact assessment;
  • To request authorization for the use of the remote biometric identification system, within 48 hours at most, from an authority whose decision has binding effect, in the context of an investigation to search for a suspect or convicted person, except where the system is used for the initial identification of a potential suspect based on objective, directly verifiable facts linked to the crime; if authorization is refused, use of the system must stop immediately and the personal data used must be deleted;
  • To document each use of the remote biometric identification system in the relevant police file and to make that documentation available to the supervisory authority and the data protection authority upon request;
  • To present to supervisory authorities and national data protection authorities annual reports regarding the use of remote biometric identification systems;
  • To inform natural persons who are subject to the use of high-risk AI systems;
  • To cooperate with authorities in actions taken by them in connection with high-risk AI systems.

5.3 Obligations of importers

Importers have the following obligations under the AI Act:

  • To verify, before placing the system on the market, that it complies with the AI Act, namely that:

1) the conformity assessment procedure has been carried out,

2) the provider has prepared technical documentation and placed the CE marking on the system,

3) the system is accompanied by the declaration of conformity,

4) the provider has designated an authorized representative.

  • Not to place the system on the market if it is non-compliant, falsified, or accompanied by falsified documentation;
  • To inform the provider, authorized representatives and market surveillance authorities when the system presents a risk;
  • To indicate their name, registered trade name or registered trademark and contact address on the system, packaging or in accompanying documents;
  • To ensure compliance with storage or transport conditions, throughout the entire period when the system is under their responsibility;
  • To keep, for 10 years after the system is placed on the market or put into service, a copy of the certificate issued by the notified body, the instructions for use and the EU declaration of conformity;
  • To provide competent authorities, based on a reasoned request, with all necessary information and documentation;
  • To cooperate with authorities in actions taken by them in connection with high-risk AI systems.

5.4 Obligations of distributors

Distributors have the following obligations:

  • To verify, before making the system available on the market, whether it bears the CE marking and whether it is accompanied by a copy of the EU declaration of conformity and by instructions for use;
  • To verify whether the provider and importer:

1. have indicated on the system, packaging or documentation, the name, registered trade name or trademark and address at which they can be contacted,

2. have a quality management system.

  • Not to make available on the market a non-compliant system;
  • In case of system non-conformity, to take one of the following actions:

1. Take the corrective measures necessary to bring the system into conformity;

2. Withdraw the system from the market;

3. Recall the system;

4. Ensure that the provider, importer or any relevant operator takes those corrective measures.

  • To inform the provider or importer and the competent authorities when the system presents a risk;
  • To ensure that storage or transport conditions do not jeopardize the system’s conformity;
  • To provide competent authorities, based on a reasoned request, with all necessary information and documentation, to demonstrate the system’s conformity;
  • To cooperate with authorities in actions taken by them in connection with high-risk AI systems.

5.5 Transparency obligations for providers and deployers of certain AI systems

For certain AI systems considered to present limited (transparency) risk, such as AI chatbots or generative AI systems, the AI Act provides transparency and information obligations for providers and deployers.

  • AI Chatbots

Providers of AI systems intended to interact directly with people (commonly called AI chatbots) must design and develop them in a way that ensures the persons concerned are informed that they are interacting with an AI system. This obligation applies even when the system is made available to the public for reporting a crime.

  • Generative AI systems

Providers of AI systems that generate synthetic content in audio, image, video or text format must ensure that the system’s outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. In fulfilling this obligation, providers must ensure the effectiveness, interoperability and reliability of the technical solutions used. The choice of technical solution should take into account the following factors:

  • Particularities and limitations of different types of content;
  • Implementation costs;
  • Generally recognized state of advancement of technology.
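The Act does not prescribe a specific marking technique; metadata tags and watermarks are common approaches. Below is a minimal, purely illustrative sketch that attaches a machine-readable provenance tag to generated content as a JSON sidecar; the function name and every field in the schema are invented for illustration and do not come from the AI Act or any standard.

```python
import hashlib
import json
from datetime import datetime, timezone


def make_provenance_tag(content_bytes: bytes, generator: str) -> str:
    """Build a machine-readable marker declaring content as artificially
    generated. The JSON schema here is hypothetical; real deployments
    typically rely on standardized provenance metadata or watermarking,
    chosen according to the factors listed above."""
    tag = {
        "ai_generated": True,                                 # the core disclosure
        "generator": generator,                               # which system produced it
        "sha256": hashlib.sha256(content_bytes).hexdigest(),  # binds the tag to the content
        "created": datetime.now(timezone.utc).isoformat(),    # generation timestamp
    }
    return json.dumps(tag)


sidecar = make_provenance_tag(b"example synthetic image bytes", "example-model-v1")
print(sidecar)
```

A sidecar file is only one option; embedding the same fields in the content’s native metadata (e.g. image EXIF) or using a robust watermark may better survive copying and re-encoding.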

This obligation is not applicable if the AI system:

  • Assists with standard editing,
  • Does not substantially modify the input data provided by the deployer or their semantics,
  • Is authorized by law to detect, prevent, investigate or prosecute criminal offenses.

  • Artificially generated or manipulated content

Deployers of an AI system must disclose, in a clear and distinguishable manner and at the latest at the time of first interaction or exposure, that the respective content has been artificially generated or manipulated, in the following two situations:

1. The system generates or manipulates images, audio content or video that constitute deepfakes;

The obligation is limited to disclosing the existence of generated or manipulated content when it is part of an artistic, creative, satirical, fictional or similar work.

2. The system generates or manipulates texts published with the purpose of informing the public regarding matters of public interest.

The obligation is not applicable when the content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication.

The obligation is likewise not applicable when use is authorized by law for the detection, prevention, investigation or prosecution of criminal offenses.

6. General-purpose AI models

A general-purpose AI model is an AI model, including where it is trained with a large volume of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way it is placed on the market, and that can be integrated into other systems or applications.

A general-purpose AI system is a system based on a general-purpose AI model that has the capability to serve a variety of purposes, both for direct use and for integration into other AI systems.

The AI Act distinguishes between general-purpose AI models and general-purpose AI models with systemic risk. Providers of general-purpose AI models with systemic risk must comply with a series of additional obligations. Providers of general-purpose AI models can demonstrate compliance with the obligations set out in the AI Act by adhering to the Code of Practice: by signing the form [Form for adherence to the Code of Practice] made available by the AI Office and transmitting it to the address indicated on the site [EU-AIOFFICE-CODE-SIGNATURES@ec.europa.eu]. Proof of conformity by adherence to the Code of Practice is valid until the publication of a harmonized standard.

Classification of a general-purpose AI model as a general-purpose AI model with systemic risk occurs when either of the following non-cumulative conditions is met:

1. The cumulative amount of computation used for training the model, measured in floating-point operations (FLOPs), is greater than 10²⁵;

2. A Commission decision, taken ex officio or following an alert from the scientific panel, based on criteria such as:

  • number of model parameters;
  • quality or size of the dataset;
  • amount of computation used for training the model, measured in floating-point operations or indicated by a combination of other variables, such as estimated training cost, estimated time needed for training or estimated energy consumption for training;
  • input and output modalities of the model, such as text-to-text (large language models), text-to-image and multimodality, and thresholds according to the most advanced state of technology for determining high-impact capabilities for each modality, as well as the specific type of input and output data (for example, biological sequences);
  • benchmarks and assessments of model capabilities, including taking into account the number of possible tasks without additional training, adaptability to learning new and distinct tasks, its level of autonomy and scalability, tools it has access to;
  • whether it has a high impact on the internal market due to its reach, which is presumed when it has been made available to at least 10,000 registered business users established in the Union;
  • number of registered end users.
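The quantitative elements of this classification can be sketched in a few lines of code. Only the two numeric thresholds (10²⁵ FLOPs and the 10,000-business-user presumption) come from the text above; every function and parameter name is hypothetical, not an official API or test.

```python
# Illustrative sketch of the non-cumulative classification test for
# general-purpose AI models with systemic risk. The thresholds are taken
# from the criteria above; the names are invented for illustration.

FLOP_THRESHOLD = 10 ** 25               # cumulative training compute, in FLOPs
EU_BUSINESS_USER_PRESUMPTION = 10_000   # reach at which high internal-market impact is presumed


def meets_compute_threshold(training_flops: float) -> bool:
    """Condition 1: cumulative training compute above 10^25 FLOPs."""
    return training_flops > FLOP_THRESHOLD


def high_impact_reach_presumed(eu_business_users: int) -> bool:
    """One of the criteria feeding a Commission decision (condition 2):
    high impact on the internal market is presumed from 10,000 registered
    business users established in the Union."""
    return eu_business_users >= EU_BUSINESS_USER_PRESUMPTION


def classified_as_systemic_risk(training_flops: float,
                                commission_designation: bool) -> bool:
    """The two conditions are non-cumulative: either one alone triggers
    classification. Note that the reach presumption only informs the
    Commission's decision; it is not a standalone trigger."""
    return meets_compute_threshold(training_flops) or commission_designation


print(classified_as_systemic_risk(3e25, False))  # compute threshold exceeded
print(classified_as_systemic_risk(1e24, True))   # Commission designation
print(classified_as_systemic_risk(1e24, False))  # neither condition met
```

The sketch makes the key structural point visible: a provider cannot escape classification by arguing against the criteria once the compute threshold is crossed, since either condition suffices on its own.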

6.1 Obligations of providers of general-purpose AI models

Providers of general-purpose AI models have the following obligations:

  • To create and update the model’s technical documentation, to be provided, upon request, to the AI Office and competent national authorities;
  • To prepare, update and make available information and documentation intended for providers of AI systems who intend to integrate the general-purpose AI model into their systems. Without prejudice to intellectual property rights, confidential business information or trade secrets, the information and documentation:

– Enable providers to understand the capabilities and limitations of the general-purpose model and to comply with the obligations incumbent upon them under the AI Act;

– Provide a general description of the general-purpose AI model (e.g. the tasks the model is intended to perform; the type and nature of AI systems in which it can be integrated; release date and distribution methods; model license) and a description of the model’s elements and its development process (the technical means necessary for integrating the general-purpose AI model into AI systems; the manner and format of input and output data and their maximum size; information regarding the data used for training, testing and validation, as appropriate, including the type and provenance of the data and curation methodologies).

  • To implement a policy aimed at respecting copyright and related rights;
  • To prepare and make available to the public a sufficiently detailed summary regarding content used for training the model;
  • To cooperate with the Commission and competent national authorities.

Providers established in third countries must designate, by written mandate, before making a general-purpose AI model available on the EU market, an authorized representative established in the Union. The authorized representative performs the tasks specified in the mandate received from the provider, and a copy of the mandate is provided to the AI Office at its request.

This mandate empowers the representative:

  • To verify that the technical documentation has been drawn up and that all other obligations provided by the AI Act have been fulfilled;
  • To keep at the disposal of the AI Office and competent national authorities a copy of the technical documentation, for 10 years after the model is placed on the market, together with the provider’s contact details;
  • To provide the AI Office, at its reasoned request, with all information and documentation necessary to demonstrate conformity;
  • To cooperate with the AI Office and competent authorities, at their reasoned request, in any action in connection with a general-purpose AI model, including when the model is integrated into systems placed on the market or put into service in the EU;
  • To be contacted, in addition to the provider or in their place, by the AI Office or the authorities, on all aspects related to compliance.

The authorized representative terminates the mandate if they consider that the provider is acting contrary to the AI Act. In this case, they immediately inform the AI Office of the termination and the reasons for it.

6.2 Obligations of providers of general-purpose AI models with systemic risk

In addition to the obligations listed in point 6.1, providers of general-purpose AI models with systemic risk must also comply with the following obligations:

  • Perform model evaluations in accordance with standardized protocols and tools that reflect the state of the art, with a view to identifying and mitigating systemic risks;
  • Assess and mitigate possible systemic risks at Union level, including their sources, which may arise from the development, placing on the market or use of general-purpose AI models with systemic risk;
  • Track, document and report, without undue delay, to the AI Office and competent national authorities relevant information about serious incidents and possible corrective measures to address them;
  • Ensure an adequate level of cybersecurity protection for the models and their physical infrastructure.

Conclusions

The Artificial Intelligence Act represents an essential step in the responsible regulation of emerging technologies, establishing a balanced framework between innovation and protection of fundamental rights. Its implementation requires companies to take a proactive approach to risk management, transparency and solid internal governance, adapted to the specifics of each organization. Compliance with legal requirements becomes not only an obligation, but also an opportunity to strengthen the trust of customers and business partners. For effective compliance, interdisciplinary collaboration between legal, technical and ethics experts is essential. Continuous education and training in the responsible use of AI will also contribute to a sustainable transition toward a safe and competitive digital economy. Thus, companies that align early with the provisions of the Act will be better prepared to capitalize on the potential of this technology in an ethical manner and in accordance with the principles that have been at the foundation of the European community.

Schedule your Corporate Legal Consultation

Hategan Attorneys offers comprehensive legal solutions tailored to your business needs, with specialized focus on technology-driven industries and emerging sectors. Our multidisciplinary approach combines technical excellence with deep understanding of the Romanian and regional business environment.

Contact us to schedule a consultation with our team.