Artificial intelligence (AI)

6 MAY 2024 · 5 MIN READ

The EU Ushers in a New Era of AI Governance with the AI Act


Pioneering Regulation for Responsible Innovation

After years of deliberation, the European Union has taken a monumental step towards regulating the rapidly evolving field of artificial intelligence. On March 13, 2024, the European Parliament formally adopted the AI Act, a landmark piece of legislation that establishes the world's first comprehensive legal framework governing AI systems.

As AI technologies continue to permeate various aspects of our lives, from healthcare and transportation to employment and education, the EU recognized the pressing need to ensure these systems are developed and deployed responsibly. The AI Act aims to strike a delicate balance between fostering innovation and mitigating potential risks posed by AI systems.

Key Principles and Provisions of the AI Act

1. Focus on Safety and Fundamental Rights

At its core, the AI Act prioritizes ensuring AI systems are safe, respect fundamental rights, and adhere to ethical principles. It seeks to prevent AI from causing physical or psychological harm and to uphold values like privacy, non-discrimination, and human dignity.

2. Risk-Based Approach

The act takes a risk-based approach, categorizing AI systems by their potential level of risk. High-risk AI applications, such as those used in critical infrastructure, employment processes, or law enforcement, face the strictest regulations. These systems will be subject to rigorous testing, certification processes, and ongoing monitoring to mitigate potential risks.

3. Banned Applications

Certain AI practices deemed unacceptable are outright banned under the act. These include social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement), and emotion recognition systems used in contexts like workplaces or educational institutions. Additionally, AI that manipulates human behavior or exploits the vulnerabilities of specific groups is prohibited.

4. Transparency and Explainability

Transparency is a key principle emphasized in the AI Act. Developers must ensure that humans know when they are interacting with an AI system and must be able to explain how the system arrives at its decisions. This requirement aims to promote accountability and enable users to make informed choices.

5. Human Oversight

The act underscores the importance of human oversight in the development and deployment of AI systems. It mandates that humans remain involved in monitoring, and ultimately retain control over, high-risk AI applications, ensuring that critical decisions are not solely automated.

6. Fines for Non-Compliance

To ensure compliance, the AI Act introduces significant penalties for violators. Failure to adhere to the act's requirements can result in fines of up to €35 million or 7% of a company's global annual revenue, whichever is higher.
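The "whichever is higher" rule means the €35 million figure acts as a floor, not a cap, for large companies. A minimal sketch of that arithmetic (the function name and the example revenue figures are our own illustration, not from the act):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of an AI Act fine for the most serious violations:
    EUR 35 million or 7% of worldwide annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# EUR 400M revenue: 7% is EUR 28M, so the EUR 35M floor applies.
print(max_fine_eur(400_000_000))    # 35000000.0
# EUR 1B revenue: 7% is EUR 70M, which exceeds the floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

In other words, for any company with more than €500 million in global annual revenue, the percentage-based figure is the binding one.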

The official text of the AI Act as proposed by the European Commission is available on EUR-Lex: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

Impact on SaaS Companies and Custom AI Developers

The AI Act will have far-reaching implications for companies developing and deploying AI-powered software, especially those operating within the EU or serving European customers. Here's how it may impact different types of AI software:

1. High-Risk AI Software - Companies providing AI solutions for high-risk applications will face the most stringent regulations. This includes extensive documentation, testing, certification processes, and ongoing monitoring. Developers of AI software for sectors like healthcare, transportation, employment, or law enforcement will need to prioritize compliance with the act's requirements.

2. Low-Risk AI Software - While low-risk AI applications like chatbots or basic recommendation engines may face fewer requirements, companies will still need to adhere to principles of transparency, fairness, and responsible data practices outlined in the act.

3. Custom AI Development - Companies building custom AI models or software for specific clients within the EU will need to collaborate closely with those clients to ensure compliance with the AI Act's requirements. This may involve adapting development processes, implementing necessary safeguards, and providing documentation to demonstrate adherence to the act.

Increased Compliance Burden and Potential Benefits

Across the board, AI software companies can expect an increased compliance burden due to the AI Act. Key challenges include:

  • Risk Assessment: Classifying AI systems based on their risk level and implementing appropriate safeguards.
  • Transparency and Explainability: Developing user-friendly interfaces that explain how AI arrives at decisions and providing options for human review.
  • Data Governance: Demonstrating responsible data collection, storage, and usage practices for training and operating AI systems.
  • Documentation and Auditing: Maintaining comprehensive documentation and being prepared for potential audits or inspections to verify compliance.
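The risk-assessment step above amounts to a triage exercise: decide which of the act's tiers a system falls into before deciding which safeguards apply. A simplified sketch follows; the tier names mirror the act's risk-based approach, but the trigger lists are illustrative assumptions, not the legal criteria set out in the act and its annexes.

```python
# Illustrative AI Act risk triage. The trigger lists below are simplified
# examples, not the act's legal definitions.
PROHIBITED_USES = {"social_scoring", "behavior_manipulation"}
HIGH_RISK_DOMAINS = {"critical_infrastructure", "employment",
                     "law_enforcement", "healthcare"}
TRANSPARENCY_USES = {"chatbot", "content_generation"}

def classify_risk(use_case: str, domain: str) -> str:
    """Map an AI system to an AI Act risk tier (illustrative only)."""
    if use_case in PROHIBITED_USES:
        return "prohibited"       # banned outright
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"        # testing, certification, monitoring
    if use_case in TRANSPARENCY_USES:
        return "limited-risk"     # transparency obligations
    return "minimal-risk"

print(classify_risk("cv_screening", "employment"))  # high-risk
```

A real assessment would, of course, follow the act's own classification rules and annexes, but maintaining an explicit mapping like this makes the later documentation and auditing steps much easier to evidence.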

While navigating the AI Act's requirements may be challenging, it also presents potential benefits for software companies:

  • Standardized Approach: A clear framework for developing and deploying AI reduces regulatory uncertainty across EU member states, providing a level playing field for companies operating in multiple European countries.
  • Enhanced Trust: By focusing on transparency and human oversight, the act can build user trust in AI technologies, potentially driving wider adoption and increasing the demand for compliant AI solutions.
  • Responsible Innovation: The act encourages the development and use of AI systems that mitigate risks like bias or discrimination, promoting responsible innovation and potentially reducing long-term liabilities.

Looking Ahead

The EU AI Act is still in its early stages. As an EU regulation, it will apply directly in all member states, with its obligations phasing in over a multi-year transition period after entry into force, so the timelines for different requirements will vary. Companies should start preparing for compliance as soon as possible to avoid potential disruptions to their operations.

As the first comprehensive AI governance framework, the EU AI Act is likely to influence and shape regulations in other parts of the world. Companies with a global reach might need to consider how the act can inform their overall AI development practices, even for products and services not directly targeting the European market.

Consulting with legal and AI compliance experts, staying updated on regulatory developments, and fostering a culture of responsible AI development will be crucial for software companies navigating this new era of AI governance. While the road ahead may be challenging, the EU AI Act presents an opportunity for companies to prioritize ethical and trustworthy AI practices, positioning themselves as leaders in this rapidly evolving field.

For more information, refer to the resources provided by the EU Commission (https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence) and the European Parliament (https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law).

Hani Zahirović / Chief Technology Officer


Author

Hani Zahirović

With over a decade in software development, Hani sees himself as a problem solver. He's led teams in planning, coding, testing, and fostering growth. Now, as a CTO, he shapes Bloomteq's tech direction for the future.


/ Kolodvorska 12, 71000 Sarajevo, BiH

/ E-mail: info@bloomteq.com

/ Call: +387 33 82 18 22

© 2024 Bloomteq. All Rights Reserved.
