Singapore pioneers framework for responsible Agentic AI
In the corporate world, when things go wrong, the investigation almost always boils down to a single, anchoring question: “Who approved this?”
For decades, this question worked. Decisions were made by people, systems merely executed them, and responsibility could be traced back to a specific role or signature. But we are witnessing a fundamental shift in technology that unsettles this basic logic. We are moving from AI that advises to Agentic AI – systems that can plan, decide, and act on their own.
Singapore has just launched the Model AI Governance Framework for Agentic AI, the first in the world to specifically address the responsible deployment of these autonomous systems. It offers a pragmatic, albeit uncomfortable, answer to the accountability question: You can delegate the work, but you cannot delegate the blame.
To understand why this framework is necessary, we must distinguish Agentic AI from the tools we are used to. Traditional AI largely sat on the “advisory” side – flagging risks or generating insights while humans retained control over execution.
Agentic AI changes this dynamic. These systems do not just recommend; as the sketch after this list illustrates, they are capable of:
- Chaining Actions: Deciding which tools to invoke and interacting with other agents.
- Independent Execution: Triggering transactions, updating records, and sending communications without waiting for a human prompt.
- Operating at Speed: Acting at machine speed, often outside the visibility of any individual operator.
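To make the shift concrete, here is a minimal sketch of such an agent loop in Python. Everything in it is hypothetical (the `plan_next_step` planner, the `TOOLS` registry, the tool names); it illustrates the pattern rather than anything the framework specifies: a planner repeatedly chooses the next tool, the agent executes it, and no human prompt appears anywhere in the loop.

```python
# Hypothetical tool implementations; stand-ins for real enterprise systems.
def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"

def update_record(record_id: str, field: str, value: str) -> str:
    return f"record {record_id} updated"

TOOLS = {"send_email": send_email, "update_record": update_record}

def plan_next_step(goal: str, history: list) -> dict | None:
    """Stand-in for an LLM planner: returns the next tool call, or None when done."""
    if not history:
        return {"tool": "update_record",
                "args": {"record_id": "inv-42", "field": "status", "value": "paid"}}
    if len(history) == 1:
        return {"tool": "send_email",
                "args": {"to": "supplier@example.com", "body": "Invoice inv-42 settled."}}
    return None  # goal satisfied

def run_agent(goal: str) -> list:
    history = []
    # The loop chains actions at machine speed, with no approval step in between.
    while (step := plan_next_step(goal, history)) is not None:
        result = TOOLS[step["tool"]](**step["args"])
        history.append((step, result))
    return history

print(run_agent("settle invoice inv-42 and notify the supplier"))
```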
This creates “organizational ambiguity”. If an AI agent negotiates a contract or executes a payment that goes sideways, who is responsible? The developer? The vendor? The manager who deployed it?
One of the most insightful aspects of Singapore’s new framework is its rejection of the “human-in-the-loop” safety net as a catch-all solution.
For years, governance discussions have relied on the idea that a human should always be watching. The framework argues that this is no longer realistic. Expecting humans to approve every action of high-speed agentic systems does not scale. Worse, it creates a “false sense of control” and encourages automation bias, where overwhelmed humans simply rubber-stamp system decisions.
Instead, the framework advocates for risk-based oversight. It suggests that human intervention should be reserved for high-stakes moments, such as:
- Executing financial transactions.
- Deleting critical data.
- Communicating externally on behalf of the organization.
For everything else, the system acts, and the organization accepts the risk.
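As an illustration of what such a risk-based gate might look like in practice, consider the following Python sketch. The `HIGH_STAKES` categories mirror the framework's three examples; the `Action` type and all function names are assumptions made for the example, not anything the framework prescribes.

```python
from dataclasses import dataclass

# High-stakes categories that always route to a human (the framework's examples).
HIGH_STAKES = {"financial_transaction", "delete_critical_data", "external_communication"}

@dataclass
class Action:
    category: str
    description: str

def requires_human(action: Action) -> bool:
    return action.category in HIGH_STAKES

def execute(action: Action, human_approves) -> str:
    if requires_human(action):
        if not human_approves(action):
            return f"BLOCKED: {action.description}"
        return f"EXECUTED (human-approved): {action.description}"
    # Low-stakes: the agent acts autonomously; the organization accepts the risk.
    return f"EXECUTED (autonomous): {action.description}"

# A routine record update runs on its own; a payment waits for a person.
print(execute(Action("update_record", "refresh CRM entry"), human_approves=lambda a: True))
print(execute(Action("financial_transaction", "pay invoice inv-42"), human_approves=lambda a: False))
```

The design choice worth noticing: gating happens by category, not per action. The list of high-stakes categories becomes the governance artifact that leaders must define and defend.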
If humans are not watching every step, how do we ensure safety? The framework introduces the concept of “Meaningful Accountability”.
It is not enough for accountability to exist on paper. Organizations must be able to explain why an agent was given certain permissions and how its boundaries were defined. This treats AI agents less like software tools and more like employees with delegated authority.
Key governance measures include:
- Defined Identity: Agents must have clear identities and scoped permissions.
- Least Privilege: An agent should only access the data and tools strictly necessary for its function.
- Traceability: Organizations must be able to distinguish between actions taken by a human and those initiated independently by an agent.
This distinction is critical. If an incident occurs, the audit trail must reveal whether the system acted on its own or on behalf of a human.
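Here is a minimal sketch of how these three measures could fit together in code, assuming hypothetical types (`AgentIdentity`, `AuditEntry`) and tool names: each agent carries a scoped identity, out-of-scope tool calls fail, and every action is logged with its initiator so an auditor can separate human-triggered actions from autonomous ones.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_tools: frozenset  # least privilege: only what the function requires

@dataclass
class AuditEntry:
    timestamp: str
    agent_id: str
    tool: str
    initiator: str  # "human:<user-id>" or "agent" (acted independently)

AUDIT_LOG: list[AuditEntry] = []

def invoke(identity: AgentIdentity, tool: str, initiator: str = "agent") -> None:
    # Defined identity + least privilege: reject anything outside the agent's scope.
    if tool not in identity.allowed_tools:
        raise PermissionError(f"{identity.agent_id} is not scoped for '{tool}'")
    # Traceability: record who or what initiated the action.
    AUDIT_LOG.append(AuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        agent_id=identity.agent_id,
        tool=tool,
        initiator=initiator,
    ))

invoicing_agent = AgentIdentity("invoicing-agent-01",
                                frozenset({"read_invoice", "update_record"}))
invoke(invoicing_agent, "update_record")                          # autonomous action
invoke(invoicing_agent, "read_invoice", initiator="human:alice")  # on a human's behalf
# invoke(invoicing_agent, "send_email")  # raises PermissionError: out of scope
```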
Perhaps the hardest pill for enterprises to swallow is the framework’s stance on third-party vendors. Agentic systems rarely exist in isolation; they rely on LLMs from one provider, platforms from another, and APIs from a third.
The framework is explicit: Accountability cannot be outsourced.
Organizations are expected to understand the limitations of their vendors and assess the controls available to them. If a gap in visibility exists that makes accountability impossible to establish, the framework suggests that deploying the agent may itself be irresponsible.
Singapore’s framework does not pretend that Agentic AI can be made perfectly safe. It operates on the assumption that systems will fail and behaviors will surprise us.
Instead of chasing perfect safety, it insists on clarity. As AI agents take on more responsibility within the enterprise, the framework forces leaders to confront a hard truth: If an outcome is unacceptable when a human causes it, it does not become acceptable just because an AI did it.
When the “Approve” button presses itself, the organization must still be ready to answer for the result.
