More and more machines, plants and items of electrical equipment use self-learning algorithms to improve usability for customers and users. For product safety this is a novelty: a product's behaviour is no longer fixed to a certain functional range at the moment it is placed on the market. Self-learning software components can modify or extend this functional range afterwards.
To address these challenges, the EU Commission submitted a legislative proposal for a European approach to Artificial Intelligence (AI). The proposal promotes the use of AI while seeking to take into account the risks that may emerge from AI systems. The outcome of this legislative initiative is a regulation with harmonised rules governing the development, placing on the market and use of AI systems in the EU.
“On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted”, states Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age.
In recent years, AI systems have undergone rapid technical development. The EU Commission called for a harmonised European regulation to meet the challenges posed by AI systems: a common European framework that takes into account both the positive aspects and the possible risks of such systems. The goals are to protect fundamental rights and users alike and to establish a legal basis for the rapidly developing field of AI.
With the new regulation, the legislator takes a risk-based approach: "the nature and content of such rules shall be tailored to the intensity and scale of the risks that AI systems may pose". This means that AI systems posing an unacceptable risk to human safety are strictly prohibited. This includes systems that use subliminal or intentionally manipulative techniques, exploit human vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).
AI systems with an unacceptable risk are therefore banned as they pose a safety threat to users. If a system poses a high risk, strict requirements must be met before it can be authorised on the market. In the case of a low risk, the regulation merely points out possible dangers; in the case of a minimal risk, there should be as little interference as possible in the free use of such systems. In the case of a specific transparency risk, users must be informed if biometric categorisation or emotion recognition systems are used.
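The four risk tiers described above can be sketched as a simple lookup. This is purely illustrative: the tier names and the wording of the obligations paraphrase the article, and the enum and function names are my own, not terms from the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"      # banned outright
    HIGH = "high"                      # strict requirements before market access
    TRANSPARENCY = "transparency"      # users must be informed
    LOW_OR_MINIMAL = "low_or_minimal"  # little to no interference

# Consequence of each tier, paraphrasing the article's description
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited - may not be placed on the market",
    RiskTier.HIGH: "strict requirements must be met before market authorisation",
    RiskTier.TRANSPARENCY: "users must be informed that an AI system is in use",
    RiskTier.LOW_OR_MINIMAL: "free use with as little interference as possible",
}

def obligation_for(tier: RiskTier) -> str:
    """Return the regulatory consequence for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligation_for(RiskTier.UNACCEPTABLE))
# prohibited - may not be placed on the market
```

In practice, of course, classifying a concrete system into one of these tiers is the hard legal question; the mapping above only captures the consequences once a tier is established.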
In the new compromise proposal of May 2023, MEPs expanded the classification of high-risk areas to include hazards to health, safety, fundamental rights or the environment. They also added AI systems used to influence voters in political campaigns and recommendation systems used by social media platforms (with more than 45 million users under the Digital Services Act) to the list of high-risk areas.
Companies that do not comply with the regulations will face fines. The fines amount to EUR 35 million or 7 % of annual global turnover (whichever is higher) for violations of prohibited AI applications, EUR 15 million or 3 % for violations of other obligations and EUR 7.5 million or 1 % for providing false information. For SMEs and start-ups, more proportionate upper limits for fines are provided for offences against the AI Act.
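The "whichever is higher" rule above is simple arithmetic and can be sketched as follows. The tier values are taken from the article; the function name and dictionary structure are illustrative, not part of the Regulation, and the SME-specific caps mentioned above are not modelled.

```python
# Upper fine limits per violation type: (fixed amount in EUR, share of
# annual global turnover). Values as stated in the article.
FINE_TIERS = {
    "prohibited_ai_practice": (35_000_000, 0.07),
    "other_obligation":       (15_000_000, 0.03),
    "false_information":      (7_500_000,  0.01),
}

def maximum_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the upper fine limit: whichever of the two amounts is higher."""
    fixed, share = FINE_TIERS[violation]
    return max(fixed, share * annual_turnover_eur)

# A company with EUR 1 billion global turnover violating a prohibition:
# 7 % of turnover (EUR 70 million) exceeds the fixed EUR 35 million.
print(maximum_fine("prohibited_ai_practice", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount dominates: at EUR 100 million turnover, 3 % is only EUR 3 million, so the cap for "other obligations" stays at EUR 15 million.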
The AI Act introduces special rules for general purpose AI models to ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations in terms of risk management and monitoring of serious incidents. These new obligations will be operationalised through codes of conduct developed by industry, academia, civil society and other stakeholders together with the Commission.
The regulation envisages a key role for harmonised standards in ensuring compliance with the provisions of the AI Regulation. According to Article 40 (1), harmonised standards are to be developed specifically for high-risk systems or general-purpose AI models, which will subsequently be published as references in the Official Journal of the European Union. On 14 January 2025, the European Commission published a draft standardisation mandate to the standardisation organisations CEN and CENELEC. This document already communicates details for future technical standards of this kind to ensure security, transparency and protection of fundamental rights in connection with the application of the AI Act throughout the EU.
With regard to the requirements profile, harmonised standards must therefore cover the following technical aspects:
You can open and download the draft standardisation mandate (so far only available in English) for the AI Act via the following link:
Draft standardisation request for the AI Act
The Technical Committee CEN/CLC JTC 21 ‘Artificial Intelligence’ and its subgroups, led by Danish Standards, are primarily responsible for developing the harmonised standards. The schedule for developing the harmonised standards provides the following rough framework:
The following European standards, which are expected to be used to meet the requirements of the AI Act, have been published to date:
Artificial Intelligence in mechanical engineering
According to the new Machinery Regulation (EU) 2023/1230, ‘artificial intelligence [...] raises new challenges in terms of product safety’ (Recital 12). Annex III, Part B (‘General Principles’) addresses self-evolving behaviour as follows: ‘The risk assessment and risk reduction shall include hazards that might arise during the lifecycle of the machinery […] that are foreseeable at the time of placing the machinery […] on the market as an intended evolution of its fully or partially self-evolving behaviour or logic as a result of the machinery […] designed to operate with varying levels of autonomy. […]’
It was already clear from the first draft of the AI Act that the AI Regulation and the Machinery Regulation were intended to complement each other. The AI Regulation primarily covers safety risks emanating from AI systems that control safety functions of a machine. As a complement, the Machinery Regulation is intended to ensure that the AI system is integrated into the overall machine in a way that does not jeopardise machine safety as a whole. Furthermore, the European Commission underlines that manufacturers need to issue only one declaration of conformity covering both regulations.
Discussions of the Expert Group on Machinery and planned guidance
The EU Commission's Expert Group on Machinery also discussed the interaction between the AI Act and the new Machinery Regulation at one of its meetings. The focus was on the classification of machines as ‘high-risk AI systems’ under Article 6 of the AI Act. The experts concluded that AI components of machines should only be classified as high-risk systems if they are subject to mandatory third-party certification in accordance with Annex I (Part A) of the Machinery Regulation, based on the wording of paragraphs 5 and 6 (safety components or machines with fully or partially self-evolving behaviour).
Another topic of discussion was the term ‘self-evolving behaviour’ in the Machinery Regulation. The decisive question is whether the actions or reactions of a machine are predictable: if they are not, the function should be considered an AI-supported safety function. If the AI does not perform a safety function, it would not fall within the scope of the Machinery Regulation.
However, as there are still numerous unresolved issues in this area, the EU Commission intends to publish a set of horizontal guidelines by February 2026 explaining in detail how the two pieces of legislation interact.
Standards and specifications for AI in mechanical engineering
An initial attempt to address the limits of machine learning in machinery is the recently published ISO/TR 22100-5:2021-01 ‘Safety of machinery – Relationship with ISO 12100 – Part 5: Implications of artificial intelligence machine learning’. The contents of this technical report have also been incorporated into the draft of the new EN ISO 12100, which addresses the unintended, self-evolving behaviour of AI in the ‘risk assessment’ section.
The technical report ISO/IEC TR 5469:2024 ‘Artificial intelligence – Functional safety and AI systems’ establishes for the first time a basis for the development and control of AI-based safety-related functions. It describes the properties, associated risk factors, and available methods and processes for using AI within a safety-related function to realise the functionality. The document also describes the use of AI systems to design and develop safety-related functions.
The regulation was published in the Official Journal of the EU on 12 July 2024 and entered into force 20 days later, on 1 August 2024. It becomes generally applicable two years after entry into force, on 2 August 2026, although there are exceptions for individual provisions:
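The dates above can be checked with simple date arithmetic: entry into force follows 20 days after publication in the Official Journal, and the general date of application stated in the article falls roughly two years later. The variable names below are illustrative.

```python
from datetime import date, timedelta

published = date(2024, 7, 12)                      # Official Journal publication
entry_into_force = published + timedelta(days=20)  # 20 days after publication
print(entry_into_force)  # 2024-08-01

# General applicability, as stated in the article: 2 August 2026
applicable = date(2026, 8, 2)
print((applicable - entry_into_force).days)  # 731 (two years plus one day)
```

Note that the transitional periods for individual provisions (e.g. for prohibitions or general-purpose AI obligations) deviate from this general two-year period, which is why the timeline linked below is worth consulting.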
For a detailed overview of the various implementation dates, please refer to the timeline of the artificial intelligence act.
You can open and download the full text of the AI Regulation via the following link, and the full text of the legislation can also be found as a bibliographic data set in the Safexpert StandardsManager:
Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence
Posted on: 2025-06-25 (Last amendment)
Daniel Zacek-Gebele, MSc, product manager at IBF for additional products and data manager for updating standards data on the Safexpert Live Server. He studied economics in Passau (BSc) and Stuttgart (MSc), specialising in International Business and Economics. Email: daniel.zacek-gebele@ibf-solutions.com