More and more machines, websites and electrical devices use self-learning algorithms to improve usability for customers and users. For product safety this represents a novelty: products no longer necessarily have a fixed functional range when they are placed on the market. Self-learning software components can modify or extend this functional range over time.
To address these challenges, the EU Commission has submitted a legislative proposal for dealing with Artificial Intelligence (AI) at European level. The proposal promotes the use of AI while seeking to take into account the risks that AI systems may pose. The initiative is to result in a regulation with harmonised requirements governing the development, placing on the market and use of AI systems in the EU.
“On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” states Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age.
In recent years, AI systems have undergone rapid technical development. The EU Commission has called for a harmonised European regulation to counter the challenges posed by AI systems. This involves a common European framework that considers the positive aspects as well as the possible risks of such systems. The aims are to protect fundamental rights and users alike, and to establish a legal basis for the rapidly developing field of AI.
With the new regulation, the legislator is taking a risk-based approach, under which "the nature and content of such rules shall be tailored to the intensity and scale of the risks that AI systems may pose". AI systems that pose an unacceptable risk to human safety would therefore be strictly prohibited. This includes systems that use subliminal or intentionally manipulative techniques, exploit human vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).
AI systems with an unacceptable risk are therefore banned as they pose a safety threat to users. If a system poses a high risk, strict requirements must be met before it can be authorised on the market. In the case of a low risk, the regulation merely points out possible dangers; in the case of a minimal risk, there should be as little interference as possible in the free use of such systems. In the case of a specific transparency risk, users must be informed if biometric categorisation or emotion recognition systems are used.
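The tiered logic above can be summarised in a small sketch. The tier names and the obligation texts are illustrative paraphrases of this article, not official terminology or an official API:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers as described in the article."""
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # strict requirements before market authorisation
    TRANSPARENCY = "transparency"   # specific transparency risk: users must be informed
    LOW = "low"                     # regulation points out possible dangers
    MINIMAL = "minimal"             # as little interference as possible

# Hypothetical mapping from tier to regulatory consequence (paraphrased)
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "strict requirements before authorisation on the market",
    RiskTier.TRANSPARENCY: "duty to inform users (e.g. emotion recognition)",
    RiskTier.LOW: "pointer to possible dangers",
    RiskTier.MINIMAL: "free use, minimal interference",
}

print(OBLIGATIONS[RiskTier.UNACCEPTABLE])  # prohibited
```

The key design point of the risk-based approach is that the obligation follows from the tier, not from the technology used.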
In the Compromise proposal of May 2023, MEPs expanded the classification of high-risk areas to include hazards to health, safety, fundamental rights or the environment. They also added to the list of high-risk areas AI systems used to influence voters in political campaigns, as well as recommendation systems used by social media platforms (those with more than 45 million users under the Digital Services Act).
Companies that do not comply with the regulations will face fines: EUR 35 million or 7 % of annual global turnover (whichever is higher) for violations of prohibited AI applications, EUR 15 million or 3 % for violations of other obligations, and EUR 7.5 million or 1.5 % for providing false information. For SMEs and start-ups, the AI Act provides for more proportionate caps on fines.
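The "whichever is higher" rule can be sketched as a one-line calculation. The function name and the example turnover figure are assumptions for illustration only:

```python
def fine_cap(fixed_eur: float, pct_of_turnover: float, annual_turnover_eur: float) -> float:
    """Return the applicable fine cap: the higher of a fixed amount
    and a percentage of annual global turnover."""
    return max(fixed_eur, pct_of_turnover * annual_turnover_eur)

# Hypothetical example: a prohibited-AI violation by a company with
# EUR 1 billion annual global turnover.
# max(EUR 35 million, 7 % of EUR 1 billion) = EUR 70 million
print(fine_cap(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount dominates: with EUR 100 million turnover, 7 % is only EUR 7 million, so the EUR 35 million figure applies.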
The AI Act introduces special rules for general purpose AI models to ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations in terms of risk management and monitoring of serious incidents. These new obligations will be operationalised through codes of conduct developed by industry, academia, civil society and other stakeholders together with the Commission.
The future AI Regulation and Machinery Regulation 2023/1230 are intended to complement each other. The AI Regulation primarily covers safety risks emanating from AI systems that control safety functions of a machine. As a complement, the Machinery Regulation is intended to ensure the integration of the AI system into the overall machine so as not to jeopardize machine safety as a whole.
Furthermore, the European Commission underlines that manufacturers need issue only one declaration of conformity covering both regulations.
Harmonised standards will play a key role in meeting the requirements of the AI Regulation; they are expected to provide users with concrete technical solutions.
An initial attempt to address the implications of machine learning for machinery is the recently published ISO/TR 22100-5:2021-01 "Safety of machinery - Relationship with ISO 12100 - Part 5: Implications of artificial intelligence machine learning".
Further standards and technical specifications are certain to follow once the European standardisation organisations are tasked with developing such dedicated AI standards.
Following the political agreement on 13 March 2024, the final vote can now take place in Parliament. This is also scheduled for March 2024. The regulation will then enter into force 20 days after publication in the Official Journal of the EU; it will become binding two years after publication, i.e. from April 2026 at the earliest.
Posted on: 2024-03-19 (Last amendment)
Daniel Zacek-Gebele, MSc, product manager at IBF for additional products and data manager for updating standards data on the Safexpert Live Server. He studied economics in Passau (BSc) and Stuttgart (MSc), specialising in International Business and Economics. Email: daniel.zacek-gebele@ibf-solutions.com | www.ibf-solutions.com