Risk-Based AI Regulation: The New Gold Standard for Ethical AI Governance?

As artificial intelligence (AI) continues to advance and becomes embedded in everyday life, stronger governance is needed to ensure robust data protection, digital rights, and ethical standards. One promising approach is risk-based AI regulation, which seeks to address the varied and evolving impacts of AI systems. Rather than applying a one-size-fits-all regulatory framework, risk-based regulation tailors oversight to the level of risk posed by a specific AI system.

Definition and Rationale for Risk-Based Regulation

At the core of risk-based regulation is the notion that AI systems should be regulated according to the potential harm they can cause, rather than according to the technology itself. For example, a chatbot or spam filter that poses minimal risk to society should face far fewer regulatory hurdles than a biometric identification system or an AI-driven credit scoring model, which can have significant social and economic implications. This tiered approach ensures that low-risk applications are not bogged down by heavy-handed regulation, while more regulatory resources are focused on high-risk systems that could cause serious harm if left unchecked.

This tailored approach not only supports proportional regulation but also encourages innovation. By distinguishing between high-risk and low-risk applications, it incentivizes developers to prioritize safety and ethical considerations while enjoying a more efficient regulatory environment, allowing AI technology to continue advancing without unnecessary barriers to progress.

The EU AI Act as a Global Benchmark

One of the most significant advancements in AI regulation is the EU AI Act, which introduces a comprehensive risk-based framework for regulating AI. This regulation sets a four-tier risk classification for AI systems: unacceptable, high, limited, and minimal/no risk. Systems that present clear threats to safety or fundamental rights are outright banned. High-risk systems, such as those used in law enforcement, face strict requirements around data quality, transparency, and human oversight to ensure that they are deployed responsibly and ethically. The Act was enacted in July 2024 and entered into force on August 1, 2024. Its provisions are being phased in over time—for example, the ban on unacceptable AI systems took effect on February 2, 2025, with key governance and high-risk compliance obligations scheduled through August 2026 and August 2027.

(Figure: EU AI Act timeline)

The EU AI Act serves as a global benchmark, not only shaping EU policy but also influencing international regulatory efforts. The Act is often cited as an example of how nations can effectively regulate AI without stifling its growth. It offers a framework for countries to consider as they work toward their own AI governance models, aligning efforts across borders to manage AI's risks while fostering global collaboration.

Benefits of Risk-Based Regulation

Risk-based regulation offers several key benefits, making it a compelling approach to AI governance.

  1. Promotes Ethical Use Without Barriers to Innovation: By focusing on the risks specific to each AI system, this approach enables developers to innovate without unnecessary regulatory interference. High-risk systems can be thoroughly regulated to mitigate harm, while lower-risk systems retain flexibility and adaptability. This fosters a more efficient regulatory environment.

  2. Improves Public Trust in AI: By ensuring that high-risk AI systems are subject to robust oversight and ethical considerations, risk-based regulation can increase public confidence in AI technologies. When people understand that potentially harmful AI systems are closely monitored and governed, they are more likely to trust AI’s broader applications.

Challenges and Critiques

While risk-based regulation has its merits, it also faces several challenges and critiques:

  1. Ambiguity in Defining and Measuring "Risk": A central issue with risk-based regulation is the lack of clarity in how to define and measure "risk." Different stakeholders may hold differing views on what constitutes a significant risk, leading to inconsistencies in how AI systems are categorized. Independent oversight is therefore needed to ensure that assessments are accurate and unbiased. Assessments should also consider the cumulative impact of multiple AI systems operating together.

  2. Global Disparities in Enforcement and Compliance: Enforcement of AI regulations can vary significantly across different regions, with some countries lacking the resources or infrastructure to ensure compliance with international standards. This discrepancy may undermine the effectiveness of global AI governance.

Conclusion

Risk-based AI regulation offers a promising framework to ensure that AI systems are developed and deployed responsibly. By focusing regulatory efforts on the potential harm of each system, it provides a balanced approach that supports innovation while safeguarding public welfare. However, the challenges mentioned above must be addressed for this model to truly succeed. As global AI governance continues to evolve, the EU AI Act and other regulatory frameworks will likely play a crucial role in shaping the future of AI ethics, with risk-based regulation potentially becoming the new gold standard for AI governance.

For more details, you can explore the full text of the EU AI Act.

It’s important to note that AI regulation is evolving globally, and approaches vary significantly between countries. While the EU AI Act represents one of the most comprehensive and structured frameworks to date, other regions, including the United States, the UK, China, and Japan, are pursuing different regulatory models, each shaped by their unique legal systems, cultural values, and policy priorities. Understanding these diverse approaches will be crucial for organizations operating across borders. In our next article, we’ll explore how AI regulation is taking shape around the world.

Do you think Malaysia should adopt a risk-based framework like the EU AI Act?

June 18, 2025 | Written by Shinyi Tan, Freddy Loo