When Machines Take the Lead: The Case for Rules


India, as the world’s largest digitally connected democracy, is moving forward at an incredible pace, but growth brings its own set of challenges. Take the case of Sunil Bharti Mittal, Chairman of Bharti Enterprises. He recently shared a startling experience, reported by The Economic Times: a scammer used AI to clone his voice and called one of his executives in Dubai, trying to authorize a large money transfer. The voice was so convincing that even Mittal himself was left stunned when he heard the recording. Stories like this show just how urgently we need laws to protect us from the darker side of AI and other digital technologies, so that they can truly serve us for the better.

Why the need for legislation

AI is already part of our everyday lives, often working quietly behind the scenes without us even noticing. While this powerful technology brings incredible opportunities, it also comes with risks we can’t afford to ignore. From biased algorithms in hiring processes to surveillance misuse and AI-driven scams like voice cloning, the darker side of AI shows how easily it can be misused. To fully benefit from AI and prevent such harm, clear rules and safeguards are essential. That’s where legislation like the Digital India Act 2023 and the EU AI Act steps in: to protect users and ensure AI serves us responsibly.

The EU AI Act: Shaping a Future of Safe and Responsible AI

The EU AI Act is a law recently passed by the European Union. It sets clear rules to ensure AI is used responsibly, protecting both businesses and individuals from legal issues while fostering trust and innovation. In this article, I’ve briefly covered the key aspects of the Act to help you understand its impact. I hope it makes for an interesting read!

How is the EU AI Act organised?

The Act is organised into 13 chapters. Chapters work like the chapters of a book, broadly outlining the main topics; each addresses a specific aspect of AI regulation, such as risk classification, compliance requirements, or enforcement mechanisms. Each chapter is divided into articles, 113 in all. Articles are the individual provisions that detail specific legal requirements and obligations. The articles are to be read together with the recitals, of which there are 180. Recitals are introductory statements that provide context, rationale, and guidance for interpreting the articles of the regulation. They can be thought of as the "why" behind the "what" of the legislation.


Sorting AI by risk

(Articles 5-7)

Not all AI systems carry the same level of risk, so a one-size-fits-all approach doesn’t work here. It’s crucial to address the dangers posed by high-risk AI systems while also fostering innovation. Regulations need to strike the right balance: protecting people from harm without discouraging developers who are creating AI systems that genuinely benefit society. To achieve this balance, the Act classifies AI systems based on their level of risk to individuals and businesses. The tiered approach recognizes that AI is not inherently good or bad; its potential impact varies dramatically across applications, so the obligations are proportionate to the potential harm. Based on this rationale, the Act classifies systems into four categories (sketched in code after the list):

1. Prohibited AI practices

These practices are outright banned because they pose serious threats to individuals or society, such as undermining human dignity, safety, or democracy. This is to prevent AI misuse that can cause irreversible harm or societal disruption. Some examples of unacceptable AI practices are systems that manipulate people, spy on them unfairly, rank citizens’ behavior, or secretly influence decisions in harmful ways.

2. High-Risk AI systems

These AI systems have significant potential to impact fundamental rights or safety if not used responsibly, like in critical sectors (healthcare, law enforcement, etc.). This is to ensure that systems in sensitive areas are safe, unbiased, and transparent to protect citizens and build trust. High-risk AI systems include tools used in hiring, medical diagnosis, loan approvals, or facial recognition for law enforcement.

3. Limited-Risk AI systems

AI systems with moderate impacts are subject to transparency requirements but fewer controls, since they are unlikely to cause serious harm. The idea is to inform users without imposing burdensome regulations on low-impact systems, keeping innovation alive. Limited-risk AI systems include chatbots and AI tools that generate simple text or images.

4. Minimal-Risk AI systems

These systems pose little to no risk, like spam filters or recommendation engines, and are not actively regulated.
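
Here is that sketch: a minimal, purely illustrative Python model of the four tiers as a simple data structure. The tier names follow the Act; the example systems and the lookup-table approach are my own simplification, not a classification tool.

    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "banned outright"
        HIGH = "strict obligations before and after deployment"
        LIMITED = "transparency requirements"
        MINIMAL = "no active regulation"

    # Illustrative examples only: real classification depends on the Act's
    # annexes and on legal analysis, not on a lookup table like this one.
    EXAMPLES = {
        "social scoring of citizens": RiskTier.PROHIBITED,
        "CV-screening tool for hiring": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "email spam filter": RiskTier.MINIMAL,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.name} ({tier.value})")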


High-Risk, High Responsibility: Obligations to ensure safety and fairness

(Articles 8-15)

The EU AI Act imposes specific obligations on providers and deployers of high-risk AI systems to ensure safety, fairness, transparency, and accountability in their use. This reflects the recognition that high-risk AI systems can have significant societal impacts and must therefore be subject to rigorous regulation. Articles 8 to 15 set clear rules for high-risk AI systems; a toy checklist after this list shows how a team might track them.

  • Article 8 requires a check that the system meets safety and performance standards.

  • Article 9 asks companies to run a risk management system to find and address problems throughout the system’s life cycle.

  • Article 10 focuses on using high-quality, unbiased data for training the AI.

  • Article 11 requires detailed documentation about the system’s design, purpose, and compliance.

  • Article 12 ensures that operational logs are kept to track and verify how the system works.

  • Article 13 makes sure users are informed about the system’s capabilities and limitations.

  • Article 14 stresses the importance of human oversight to prevent mistakes and ensure safety.

  • Article 15 demands that AI systems be robust, accurate, and secure to protect against failures and risks.
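
Here is that toy checklist: a hypothetical self-assessment sketch in Python. The one-line summaries follow the article descriptions above; the structure and function names are my own illustration, not legal advice.

    # Hypothetical self-assessment checklist for a high-risk AI system,
    # mapping Articles 8-15 to yes/no questions. Illustrative only.
    CHECKLIST = {
        "Art. 8": "Does the system meet safety and performance standards?",
        "Art. 9": "Is a risk management system in place across the life cycle?",
        "Art. 10": "Is the training data high-quality and checked for bias?",
        "Art. 11": "Is documentation of design, purpose, and compliance complete?",
        "Art. 12": "Are operational logs kept so behaviour can be traced?",
        "Art. 13": "Are users informed of capabilities and limitations?",
        "Art. 14": "Is meaningful human oversight built in?",
        "Art. 15": "Is the system robust, accurate, and secure against failures?",
    }

    def open_items(answers: dict) -> list:
        """Return the articles whose checks are still failing."""
        return [article for article, passed in answers.items() if not passed]

    answers = {article: True for article in CHECKLIST}
    answers["Art. 12"] = False  # e.g. logging not yet implemented
    print("Open items:", open_items(answers))  # -> Open items: ['Art. 12']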


Ensuring Safety and Compliance Throughout the AI Product Lifecycle (Articles 16-29)

Every AI system, like any other software, goes through a journey, from initial development to deployment and ongoing use. For high-risk AI, this journey needs careful checks to keep it safe, transparent, and compliant at every stage. Articles 16-29 of the EU AI Act set rules that developers, sellers, and buyers alike must follow throughout the lifecycle of the AI system.

  • Market Readiness (Articles 16-20): Developers of high-risk AI systems must follow all legal standards before selling them (Article 16), monitor safety and fix any issues (Article 17), register their systems in the EU database to ensure transparency (Article 18), and make sure importers (Article 19) and distributors (Article 20) check and maintain compliance.

  • Post-Market Safety (Articles 21-23): Everyone involved must work with regulators during inspections (Article 21), report any serious risks or failures (Article 22), and keep track of how the system performs in the real world after it’s launched (Article 23).

  • Compliance and Oversight (Articles 24-27): High-risk systems must go through strict safety checks (Article 24) using internal reviews or third-party assessments (Article 25). Major updates need to be reported (Article 26), and independent auditors must meet high standards (Article 27).

  • Enforcement and Standards (Articles 28-29): Penalties are enforced for rule violations (Article 28), while unified EU standards ensure consistency and make compliance easier (Article 29).

Transparency for the people: Protecting the end users (Articles 52-57)

These provisions focus on transparency to ensure people understand when and how they are interacting with AI systems. They affect all businesses using or deploying AI, ensuring customers are informed about AI usage. Article 52 requires that users are informed when they are interacting with an AI system rather than a human, ensuring clarity and avoiding deception. Article 53 mandates disclosure when AI systems generate content, such as deepfakes, unless clearly identified for authorized purposes. Article 54 requires that people be notified if AI systems are used to detect emotions or categorize individuals based on biometric data. These rules help build trust by making AI interactions clear and transparent.
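
As a small, hypothetical illustration of this disclosure idea (not a compliance recipe), a chatbot could attach an explicit notice to every reply and label any generated media. The wording and the function below are my own assumptions:

    AI_NOTICE = "Note: you are chatting with an AI system, not a human."

    def with_disclosure(reply: str, generated_media: bool = False) -> str:
        """Prefix an AI-interaction notice and, where applicable, label
        AI-generated content such as synthetic images. Illustrative only."""
        label = "[AI-generated content] " if generated_media else ""
        return f"{AI_NOTICE}\n{label}{reply}"

    print(with_disclosure("Your order has shipped."))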


Keeping AI in Check: Market Surveillance and Enforcement (Articles 63-70)

Articles 63-70 set up a system to monitor and enforce compliance for AI systems in the market. National authorities can check, test, and ask for documents to make sure AI products follow the rules. If there are problems, they can take actions like suspending products or requiring fixes. These rules also encourage cooperation between EU countries, ensuring that enforcement is consistent and effective across the entire European market, keeping AI systems safe and fair for everyone.


Consequences of Breaking the Rules: Penalties for Non-Compliance (Article 71)

The Act lays down severe penalties for non-compliance. The idea behind such large fines is to incentivise compliance, discourage violations, and ensure businesses take their obligations seriously. Penalties for infringement are outlined in Article 71 of the EU AI Act. For serious breaches, such as engaging in prohibited practices or failing to meet high-risk system requirements, companies can face fines of up to €30 million or 6% of their global annual turnover, whichever is higher. Less critical violations, like failing to meet transparency or documentation obligations, may result in fines of up to €20 million or 4% of global turnover. Smaller breaches, such as not cooperating with regulatory authorities, can incur penalties of up to €10 million or 2% of global turnover.
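
The "whichever is higher" rule is easiest to see with a toy calculation. The caps and percentages below mirror the figures quoted above; the company's turnover is hypothetical:

    def max_fine(cap_eur: float, share: float, global_turnover_eur: float) -> float:
        """Maximum fine: the higher of a fixed cap or a share of global turnover."""
        return max(cap_eur, share * global_turnover_eur)

    turnover = 2_000_000_000  # hypothetical EUR 2 billion global annual turnover

    print(max_fine(30e6, 0.06, turnover))  # serious breach: 120,000,000.0
    print(max_fine(20e6, 0.04, turnover))  # transparency breach: 80,000,000.0
    print(max_fine(10e6, 0.02, turnover))  # lesser breach: 40,000,000.0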

Why businesses should care

Whether you’re a developer or a business using AI, the EU AI Act affects you. I’ve split the responsibilities based on your role—developers must ensure their systems are safe and compliant, while businesses need to adopt trustworthy AI. With severe penalties for non-compliance, understanding both perspectives will help you navigate the evolving AI landscape and avoid costly mistakes and legal issues.

For AI System Developers:

  • Ensure Safety and Transparency: Developers must make sure their AI systems meet safety standards and are transparent, even during the development or beta stages.

  • Compliance with EU Standards: The EU AI Act sets the foundation for future regulations, making it crucial for developers to stay compliant not only within the EU but also for cross-border sales.

  • Balance Innovation with Responsibility: Developers need to continue innovating while ensuring that new AI systems align with ethical and regulatory standards to avoid legal risks and build consumer trust.

For Businesses Using AI Systems:

  • Ensure Safe and Compliant Systems: Businesses must ensure the AI systems they adopt comply with the EU AI Act to protect themselves from legal issues.

  • Improve Customer and Employee Experience: Businesses using AI to enhance experiences need to ensure their AI systems are safe, ethical, and transparent, in line with EU regulations.

  • Build Trust and Reputation: By using compliant AI systems, businesses can build trust with customers, employees, and stakeholders, positioning themselves as responsible and forward-thinking.

The EU AI Act is more than just a set of regulations—it’s a signal of how AI will be governed globally in the future. For businesses, it’s an opportunity to lead with transparency, build trust with customers, and gain a competitive edge. By understanding and aligning with these guidelines, you’re not just ensuring compliance—you’re preparing your business to thrive in an AI-driven world. Start today by evaluating your AI tools and practices, and position yourself as a responsible and forward-thinking leader in your industry.

Global implications

The EU AI Act sets a global precedent for AI regulation, influencing how countries around the world manage AI technologies. As the first comprehensive legal framework, it may inspire similar laws in regions like the US, UK, and Asia, impacting cross-border business and global market standards. Companies that comply with the EU Act could find it easier to enter international markets and gain trust, as the Act emphasizes transparency and ethical AI use. By balancing safety with innovation, the Act encourages responsible development of AI, potentially shaping the future of AI worldwide.

My thoughts

The EU AI Act is a big step in the right direction: an effort that aims to strike a balance between safety, fairness, and encouraging innovation in the AI world. It's definitely commendable. But when we look at the bigger picture, there are still some important things that aren’t addressed. For example, the Act doesn’t touch on how AI affects sustainability: its carbon footprint and its impact on climate change, which are becoming huge issues. And while technologies like AI bring the world closer together, they also seem to create a paradox: even though we're more connected, we are at the same time more isolated and lonely than ever. These feelings are hard to measure, let alone regulate, but they are a very real part of the human experience.

We have concepts like the Happiness Quotient to try and understand well-being, but it's not officially recognized in law, which makes it hard for regulations to address these subtler aspects of life. As AI laws continue to evolve, it would be interesting to see whether future regulations will start to focus on these broader, more human concerns—things that truly impact our quality of life.

References and readings

  1. "Sunil Mittal exposes AI scam: 'My voice was perfectly articulated in cloning attempt'", The Economic Times: https://economictimes.indiatimes.com/news/india/sunil-mittal-exposes-ai-scam-says-my-voice-was-perfectly-articulated-in-cloning-attempt/articleshow/114430557.cms

  2. EU AI Act Explorer: https://artificialintelligenceact.eu/ai-act-explorer/



For those who wish to have a quick look at what is covered, here's a link to the YouTube video: https://youtu.be/QoZuji3Y2YA

By Rajashree Rajadhyax
CoFounder, Cere Labs
