With the development of generative AI models such as Bard and ChatGPT, AI has gone mainstream, reshaping global business and our daily lives. While companies race to implement new AI models and rework their standard processes, authorities are discussing ways to protect individual rights and safety.
Governments around the world have been leading discussions on AI regulation for some time now. From Japan to Israel, from China to the UAE, authorities are preparing laws to limit the risks posed by AI. The European Union, however, got there first: its AI Act was approved in March 2024. What will the new legislation bring, and how will it influence the development of new AI applications?
What Is the European Union AI Act?
From the initial proposal to regulate AI back in 2021 to the final vote in the Parliament on March 13, 2024, the EU has produced the first comprehensive AI law of its kind. The European Parliament AI Act will come into force later this year. It aims to regulate AI and steer its future in a human-centric direction.
The main aim of the act is to protect the fundamentals of democracy and human rights. The EU Parliament wants to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly”. At the same time, since AI remains one of the key drivers of the EU’s digital transformation, the Act leaves room for research and development.
Risks That the European Parliament AI Act Covers
The EU AI Act will impose different rules on different groups of AI service providers, depending on the type of AI service. The EU Parliament singles out two categories: unacceptable-risk and high-risk AI systems:
Unacceptable-risk AI systems. These systems pose a threat to people, particularly to their safety and privacy. The category covers social scoring, cognitive behavioral manipulation, real-time remote biometric identification, and biometric categorization systems. It also includes predictive policing systems and emotion recognition systems in the workplace and in educational institutions.
High-risk AI systems. The assessment of high risk is based on the threats an AI system poses to people’s safety, health, environment, and fundamental rights. The regulation classifies as high-risk any AI systems used in products covered by the EU’s product safety legislation. It also highlights AI applications in critical infrastructure, employment and worker management, education, and law enforcement. Other examples include applications in migration and border control, and in access to essential private services and public benefits.
According to the EU AI Regulation, systems with unacceptable risk will be prohibited, while high-risk ones will need to be assessed before being placed on the market. In addition, citizens will be able to file complaints about AI systems with the relevant authorities.
AI Applications under EU AI Regulation Ban
As soon as the European Parliament AI Act enters into force, a number of AI applications across various use cases will be banned in the EU. Developers working on new AI products will need to assess risks and comply with the law before bringing them to market.
Before voting on the European Union AI Act, the EU Parliament agreed to ban applications in the following areas:
Biometric systems. In particular, categorization systems that infer sensitive characteristics such as race, sexual orientation, or political, religious, and philosophical beliefs.
Social scoring. Systems that classify people based on their personal characteristics or social behavior.
Emotion recognition. For now, the European Union AI Act covers emotion recognition in educational and training institutions and in the workplace, but more areas may follow.
Facial recognition. The use of datasets created through untargeted scraping of facial images from open sources, such as CCTV footage or the internet.
Human behavior. Applications that manipulate human behavior in order to circumvent people’s free will.
People’s vulnerabilities. Systems that exploit people’s vulnerabilities, such as age, disability, or economic and social situation.
However, the EU AI Act, the first regulation of its kind on artificial intelligence, leaves space for some exemptions for real-time remote biometric identification (RBI). With prior judicial authorization, such applications will be allowed for law enforcement purposes, such as the investigation of serious crimes.
What Does EU AI Regulation Mean for Tech Businesses?
Whether you want to start an AI company or are already hiring AI developers for an existing project, risk assessment is one of the extra steps you’ll need to add to your workflow. This especially applies if you’re based in the EU or plan to launch a new AI application on the European market.
The regulation imposes transparency requirements that all types of AI applications should meet. In particular, these target general-purpose AI (GPAI) systems and the models they are based on. This means you will need to adhere to EU copyright law and provide detailed summaries of the content used to train your AI models.
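To make the transparency requirement more concrete, here is a minimal sketch of what a training-data summary record could look like. The field names and structure are our own illustrative assumptions, not a format prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class TrainingDataRecord:
    """One entry in a training-data summary (illustrative fields only)."""
    dataset_name: str             # e.g. "news-articles-2023" (hypothetical)
    source: str                   # where the content was obtained
    license: str                  # license or copyright status of the content
    contains_personal_data: bool  # flags data that needs extra care

def summarize(records: list[TrainingDataRecord]) -> str:
    """Render a plain-text summary of the training corpus."""
    lines = ["Training data summary:"]
    for r in records:
        lines.append(
            f"- {r.dataset_name}: source={r.source}, license={r.license}, "
            f"personal_data={'yes' if r.contains_personal_data else 'no'}"
        )
    return "\n".join(lines)

print(summarize([TrainingDataRecord("news-articles-2023",
                                    "licensed publisher feed",
                                    "commercial license", False)]))
```

Whatever format regulators ultimately require, keeping provenance and license metadata per dataset from day one makes producing such a summary straightforward.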
The more risk an AI application poses, the more assessment obligations come into play. For example, you may be required to assess and mitigate systemic risks, perform model evaluations, and report incidents.
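As an illustration of how a first-pass risk screening could be automated, the sketch below maps a use-case description onto the Act’s risk tiers. The keyword lists are purely hypothetical placeholders; the real classification follows the Act’s annexes and needs legal review:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # assessment required before market entry
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # no extra obligations

# Hypothetical keyword buckets for demonstration; the legal test is far richer.
PROHIBITED_USES = {"social scoring", "behavioral manipulation"}
HIGH_RISK_USES = {"critical infrastructure", "education", "employment",
                  "border control", "law enforcement"}
TRANSPARENCY_USES = {"chatbot", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Rough first-pass mapping from a use-case description to a risk tier."""
    text = use_case.lower()
    if any(term in text for term in PROHIBITED_USES):
        return RiskTier.UNACCEPTABLE
    if any(term in text for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    if any(term in text for term in TRANSPARENCY_USES):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("resume screening for employment decisions"))  # RiskTier.HIGH
```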
At the same time, the EU Parliament keeps in mind the digital transformation that is one of Europe’s strategies for the coming years. It was also one of the discussion points at the Conference on the Future of Europe (COFE) held in 2021 and 2022. That’s why there are plans for real-world testing and regulatory sandboxes for startups and SMEs, which will help them assess their AI products before launching them on the market.
For example, you’ll be able to use the EU AI Act Compliance Checker, which helps assess whether your AI application falls under the new obligations. The checker is expected to be updated as new regulations come into effect.
Finally, as soon as the law comes into force, non-compliance will lead to fines. These range from 7.5 million Euro (or 1.5% of a company’s global turnover) up to 35 million Euro (or 7% of turnover), depending on the violation and the size of the company.
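For a concrete sense of the arithmetic, the snippet below shows how a penalty cap combines a fixed amount with a share of turnover. It assumes the “whichever is higher” rule (and, for SMEs, the lower of the two) described in the Act’s penalty provisions; treat it as an illustration, not legal guidance:

```python
def fine_cap(fixed_cap_eur: float, turnover_share: float,
             global_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of a fine for one violation tier (illustrative only).

    Assumes each tier caps the fine at a fixed amount or a share of
    worldwide annual turnover, whichever is higher; for SMEs, the lower
    of the two applies.
    """
    proportional = turnover_share * global_turnover_eur
    return (min if is_sme else max)(fixed_cap_eur, proportional)

# Prohibited-practice tier: 35M EUR or 7% of turnover.
print(fine_cap(35_000_000, 0.07, 1_000_000_000))            # 70000000.0
print(fine_cap(35_000_000, 0.07, 10_000_000, is_sme=True))  # 700000.0
```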
Outstaff Your Team is a certified partner. While finding IT specialists for your team, we comply with all international regulations and laws, ensuring the safety of your team’s expansion.
Next Steps
The law must still clear its final checks before becoming effective: a final lawyer-linguist review and formal endorsement by the European Council. Once published, it enters into force 20 days after publication, and most of its provisions become fully applicable two years later.
More milestones of the law are available on the official website of the European Parliament, on a page dedicated specifically to the AI Act.
Summing up
The new EU AI Act is proof that concrete steps are being taken to limit AI, specifically with the aim of protecting people’s rights. Authorities around the world are trying to strike a balance between managing risks and encouraging innovation. As this is the first comprehensive AI regulation of its kind, we may expect more to come.
The new Act establishes limitations on use cases and on the data used to feed AI models. Such measures may constrain some AI applications and ease concerns about programmers being replaced by AI. With the new regulations, the EU demonstrates its long-term position on protecting Europe’s key values and its vision for AI.
FAQ
What spheres does the EU AI Act target?
The newly approved EU Act on artificial intelligence aims to protect safety, people’s rights, and democracy in areas such as health, the environment, and law. It targets AI systems that use sensitive data for AI models, e.g. biometric data and facial images, as well as systems that manipulate behavior or otherwise threaten people’s safety.
Does the EU AI Act impose any ban on general-purpose AI (GPAI) applications?
No outright ban, but after the Act enters into force, GPAI systems will need to follow the transparency requirements it imposes. These include preparing technical documentation and detailed summaries with information on the data used to train the models.
What is a high-risk AI application?
The EU AI Act defines high-risk AI systems as those that pose potential threats to public and individual safety, health, the environment, fundamental rights, and democracy. Examples include AI systems used in critical infrastructure, public services such as banking or healthcare, employment, and democratic processes (e.g. influencing the outcomes of elections).