The framework covers four key steps: assessing risks and setting limits on agents’ powers, ensuring human accountability at critical points, applying technical controls throughout the agent’s lifecycle, and promoting end-user responsibility through transparency and training.
Josephine Teo, Minister for Digital Development and Information, announced the launch of Singapore’s new Model AI Governance Framework for Agentic AI (MGF for Agentic AI) at the World Economic Forum (WEF) on 22 January 2026. Developed by the Infocomm Media Development Authority (IMDA), this first-of-its-kind framework builds on the governance foundations of the 2020 Model AI Governance Framework and provides guidance to organisations on deploying AI agents responsibly, while emphasising that humans remain ultimately accountable.
Unlike traditional AI, agentic AI can reason and act on behalf of users, automating repetitive tasks and freeing employees for higher-value work. But with access to sensitive data and the ability to make changes, such as updating databases or processing payments, AI agents also introduce new risks, including errors, unauthorised actions, and over-reliance on automation.
To address these challenges, the MGF for Agentic AI offers a structured approach covering four key areas:
- Assessing and bounding the risks upfront by selecting appropriate agentic use cases and placing limits on agents’ powers, such as their autonomy and their access to tools and data,
- Making humans meaningfully accountable for agents by defining significant checkpoints at which human approval is required,
- Implementing technical controls and processes throughout the agent lifecycle, such as baseline testing and controlling access to whitelisted services, and
- Enabling end-user responsibility through transparency and education/training.
Assessing and bounding the risks upfront
When planning for the use of agentic AI, organisations should take the following into consideration:
- Determine suitable use cases for agent deployment by considering agent-specific factors that can affect the likelihood and impact of the risk.
- Make design choices that bound the risks upfront by applying limits on agents’ access to tools and systems and defining a robust identity and permissions framework.
Risk identification and assessment is the first step when considering whether an agentic use case is suitable for development or deployment. A non-exhaustive set of agent-specific factors, such as the agent’s degree of autonomy and its access to tools, systems and data, can affect the level of risk of an agentic use case.
Threat modelling also makes risk assessment more rigorous by systematically identifying the specific paths an attacker may take to compromise the system. Common security threats to agentic systems include memory poisoning, tool misuse, and privilege compromise.
In developing the framework, IMDA gathered feedback from government agencies and private sector organisations, ensuring it reflects practical, real-world needs. It will remain a living document, welcoming feedback and case studies that demonstrate responsible agentic AI deployment.
Upon selecting an appropriate agent use case, organisations can further bound the risks by defining appropriate limits and permission policies for each agent. Some considerations include:
- Defining policies that give agents only the minimum tools and data access needed to complete their tasks; a minimal policy sketch follows this list. For instance, a coding assistant may not require access to a web search tool, especially if it already has curated access to the latest software documentation.
- For process-driven tasks, standard operating procedures (SOPs) and protocols are frequently used to improve consistency and reduce unpredictability. Define similar SOPs for agentic workflows that an agent is constrained to follow, rather than giving the agent the freedom to define every step of the workflow.
- Design mechanisms and procedures to take agents offline and limit their potential scope of impact when they malfunction. This can include running agents in self-contained environments with limited network and data access, particularly when they are carrying out high-risk tasks such as code execution.
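To make the idea of bounding an agent’s powers concrete, here is a minimal, hypothetical Python sketch of how a least-privilege policy for a coding-assistant agent might be expressed. The tool names, data scopes and the is_action_permitted helper are illustrative assumptions, not part of the framework itself.

```python
# Hypothetical sketch: a least-privilege policy for a coding-assistant agent.
# Tool names, data scopes and the helper below are illustrative assumptions,
# not part of the MGF for Agentic AI itself.

ALLOWED_TOOLS = {"read_docs", "edit_repo_files", "run_unit_tests"}   # no web search, no payments
ALLOWED_DATA_SCOPES = {"repo:project-x", "docs:internal-sdk"}        # curated documentation only
MAX_AUTONOMOUS_STEPS = 10                                            # bound on the agent's autonomy

def is_action_permitted(tool: str, data_scope: str, step_count: int) -> bool:
    """Check a proposed agent action against the upfront limits before executing it."""
    if step_count > MAX_AUTONOMOUS_STEPS:
        return False  # workflow has exceeded its bounded length; hand over to a human
    if tool not in ALLOWED_TOOLS:
        return False  # tool is not on the agent's allowlist
    if data_scope not in ALLOWED_DATA_SCOPES:
        return False  # data is outside the agent's curated access
    return True
```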
Some best practices that organisations can consider to enable agent control and traceability include:
- Identification: each agent should have its own unique identity, so that it can identify itself to the organisation, its human user, or other agents. This identity should be linked to a supervising agent, a human user, or an organisational department to enable accountability and tracking. The role or capacity in which the agent operates, such as acting independently or on behalf of a specific human user, should also be clearly recorded.
- Authorisation: an agent should operate only within approved permissions. These permissions may be predefined based on the agent’s role or task, dynamically assigned by an authorised human user, or a combination of both. As a guiding principle, a human user should not be able to grant an agent permissions greater than those they are themselves authorised to exercise. Such delegations of authority should be clearly recorded. A combined sketch of agent identity and delegated permissions follows below.
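The following is a minimal sketch of how per-agent identity and delegated permissions could be recorded, assuming a simple Python data model. The AgentIdentity fields and the delegate_permissions helper are illustrative assumptions; the framework does not prescribe a specific implementation.

```python
# Hypothetical sketch of per-agent identity and delegated permissions.
# Field names and the permission sets are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str          # unique identity the agent presents to systems, users and other agents
    acting_for: str        # the human user or department accountable for the agent
    role: str              # capacity in which the agent operates, e.g. "on_behalf_of_user"
    permissions: set = field(default_factory=set)

def delegate_permissions(agent: AgentIdentity, human_permissions: set, requested: set) -> set:
    """Grant the agent only permissions the delegating human already holds, and record the delegation."""
    granted = requested & human_permissions   # the agent can never exceed the human's own authority
    agent.permissions |= granted
    print(f"[delegation log] {agent.acting_for} granted {sorted(granted)} to {agent.agent_id}")
    return granted

# Usage: the user holds "read_invoices" and "issue_refund", so "close_account" is not granted.
bot = AgentIdentity("finance-agent-01", acting_for="jane.tan", role="on_behalf_of_user")
delegate_permissions(bot, {"read_invoices", "issue_refund"}, {"read_invoices", "close_account"})
```

The key design choice is that the agent’s granted permissions are the intersection of what is requested and what the delegating human already holds, and every delegation is logged for accountability.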
Making humans meaningfully accountable
The organisations that deploy agents and the humans who oversee them remain accountable for the agents’ behaviours and actions. But it can be challenging to fulfil this accountability when agent actions emerge dynamically and adaptively from interactions instead of fixed logic.
To address these challenges to human accountability, organisations can consider:
- Clear allocation of responsibilities within and outside the organisation, by establishing chains of accountability across the agent value chain and lifecycle, and by emphasising adaptive governance so that the organisation can quickly understand new developments and update its approach as the technology evolves.
- Measures to enable meaningful human oversight of agents, such as requiring human approval at significant checkpoints, auditing the effectiveness of human approvals, and complementing these measures with automated monitoring.
As deployers, organisations and the humans within them remain accountable for the decisions and actions of agents. However, as with other AI systems, the value chain for agentic AI involves multiple actors. Organisations should consider the allocation of responsibility both within the organisation and vis-à-vis other organisations along the value chain.
Within the organisation, responsibilities should be allocated to different teams across the agent lifecycle. While each organisation is structured differently, responsibilities should be mapped to the relevant teams at each stage of the lifecycle.
Externally, organisations may also need to work with external parties when deploying agents. In these cases, organisations should consider measures to fulfil their own accountability. Some considerations include:
- Organisations should consider provisions to address any security arrangements, performance guarantees, or data protection and confidentiality. Where there are gaps, the organisation should reassess if the agentic deployment meets its risk tolerance.
- They should also evaluate whether external solutions provide sufficient security and control. This includes strong authentication measures, such as scoped API keys and per-agent identity tokens, as well as robust monitoring through detailed logging of tool usage and access history. Where such features are lacking, organisations should consider alternative or in-house solutions, or scoping down the agentic use case, such as restricting access to sensitive data.
Organisations should define significant checkpoints or action boundaries that require human approval, especially before sensitive actions are executed. This can include:
- High-stakes actions and decisions e.g. editing of sensitive data, final decisions in high-risk domains (such as healthcare or legal), actions that may trigger liability,
- Irreversible actions e.g. permanently deleting data, sending communications, making payments,
- Outlier or atypical behaviour e.g. when an agent accesses a system or database outside of its work scope, or when an agent selects a delivery route that is twice as long as the median distance, or
- User-defined boundaries. Agents may act on behalf of users who have different risk appetites. Beyond organisation-defined boundaries, users may be given the option to define their own, e.g. requiring approval for purchases above a certain amount.
Apart from considering when approvals are required, organisations should also consider what form approvals should take. This includes:
- Keeping approval requests contextual and digestible, and
- Considering the form of human input required. A minimal sketch of such an approval checkpoint follows below.
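As an illustration of such a checkpoint, the following hypothetical Python sketch gates sensitive actions behind human approval. The action fields, the list of irreversible action types and the default spending limit are assumptions made for illustration only.

```python
# Hypothetical sketch of an approval checkpoint before an agent executes a sensitive action.
# The action fields, the irreversible-action list and the spending limit are illustrative assumptions.

IRREVERSIBLE_ACTIONS = {"delete_records", "send_email", "make_payment"}

def needs_human_approval(action: dict, user_payment_limit: float = 100.0) -> bool:
    """Decide whether an action crosses an organisation- or user-defined boundary."""
    if action["type"] in IRREVERSIBLE_ACTIONS:
        return True                                  # irreversible actions always need sign-off
    if action.get("touches_sensitive_data", False):
        return True                                  # high-stakes edits of sensitive data
    if action.get("amount", 0.0) > user_payment_limit:
        return True                                  # user-defined spending boundary
    return False

def execute_with_checkpoint(action: dict) -> None:
    if needs_human_approval(action):
        # Keep the approval request contextual and digestible: what the agent wants to do and why.
        answer = input(f"Agent requests approval for {action['type']} "
                       f"({action.get('summary', 'no summary provided')}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print(f"Blocked {action['type']}; decision logged for audit")
            return
    print(f"Executing {action['type']}")
```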
Organisations should also consider implementing the following measures to ensure continued effectiveness of human oversight, particularly as humans remain susceptible to alert fatigue and automation bias:
- Training humans to identify common failure modes e.g. inconsistent agent reasoning, agents referring to outdated policies, and
- Regularly auditing the effectiveness of human oversight.
Finally, human oversight should be complemented with automated real-time monitoring to escalate any unexpected or anomalous behaviour. This can be done by implementing alerts for certain logged events (e.g. attempted unauthorised access or multiple failed attempts to call a tool), using data science techniques to identify anomalous agent trajectories, or using agents to monitor other agents.
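One possible way to wire such alerts to logged events is sketched below; the event names, the threshold and the escalate helper are assumptions, not prescribed by the framework.

```python
# Hypothetical sketch of automated alerting on logged agent events, complementing human oversight.
# The event names, threshold and escalate() helper are illustrative assumptions.
from collections import Counter

FAILED_TOOL_CALL_THRESHOLD = 3
failed_tool_calls = Counter()

def escalate(message: str) -> None:
    print(f"[ALERT] {message}")   # in practice: notify the supervising team and pause the agent

def on_logged_event(agent_id: str, event: str) -> None:
    """React to individual log entries in real time."""
    if event == "unauthorised_access_attempt":
        escalate(f"{agent_id} attempted to access a resource outside its permissions")
    elif event == "tool_call_failed":
        failed_tool_calls[agent_id] += 1
        if failed_tool_calls[agent_id] >= FAILED_TOOL_CALL_THRESHOLD:
            escalate(f"{agent_id} has failed {failed_tool_calls[agent_id]} tool calls")
```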
Implementing technical controls and processes
The agentic components that differentiate agents from simple LLM-based applications necessitate additional controls during the key stages of the implementation lifecycle.
Organisations should consider:
- During design and development, build in the necessary technical controls.
- Pre-deployment, test agents for safety and security.
- When deploying, gradually roll out agents and continuously monitor them in production.
Organisations should design and implement technical controls in the agentic AI system to mitigate identified risks. For agents specifically, in addition to baseline software and LLM controls, consider adding controls for:
- New agentic components, such as planning and reasoning modules, and tools
- Increased security concerns from the larger attack surface and new protocols
Organisations should also test agents for safety and security prior to deployment in order to provide confidence that the agents work as expected and controls are effective. At the same time, organisations should adapt their testing approaches for agents. Some considerations include:
- Testing for new risks, such as:
- Overall task execution,
- Policy compliance,
- Tool calling, and
- Robustness.
- Testing entire agent workflows, as agents can at times take multiple steps in sequence without human involvement.
- Testing agents individually and together to understand any emergent risks and behaviours when agents collaborate, such as competitive behaviours or the impact on other agents when one agent has been compromised.
- Testing in real or realistic environments, as agents may be expected to navigate real-world situations. Testing should therefore occur in a properly configured execution environment that mirrors production as closely as possible.
- Testing repeatedly and across varied datasets as agent behaviour is inherently stochastic and context-dependent.
- Evaluating test results at scale. Organisations may consider using different evaluation methods for different parts of the agentic workflow (e.g. deterministic tests for structured tool calls vs LLM or human evaluation for unstructured agent reasoning); a simple deterministic-test sketch follows this list. However, there is still a need to evaluate agents holistically, so that agent patterns across steps can be evaluated.
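As an example of the deterministic-test approach, the sketch below repeatedly checks that an agent produces a well-formed, policy-compliant tool call. The run_agent function, its output format and the policy cap are hypothetical assumptions.

```python
# Hypothetical sketch of a deterministic test for a structured tool call, run repeatedly
# because agent behaviour is stochastic. run_agent() and its output format are assumptions.

def check_refund_tool_call(run_agent, n_runs: int = 20, min_pass_rate: float = 0.95) -> None:
    """The agent should call the refund tool with a valid, policy-compliant amount every time."""
    passes = 0
    for _ in range(n_runs):
        tool_call = run_agent("Refund the customer for a damaged item")  # returns the structured call
        ok = (
            tool_call["tool"] == "issue_refund"
            and tool_call["args"]["currency"] == "SGD"
            and 0 < tool_call["args"]["amount"] <= 500        # policy cap assumed for illustration
        )
        passes += ok
    assert passes / n_runs >= min_pass_rate, f"pass rate {passes / n_runs:.0%} below threshold"
```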
Organisations should continuously monitor and log agent behaviour post-deployment, and establish reporting and failsafe mechanisms for agent failures or unexpected behaviours. This allows the organisation to:
- Intervene in real time. When potential failures are detected, stop the agent workflow and escalate to a human supervisor.
- Debug when incidents happen, as tracing each step of an agent workflow and of agent-to-agent interactions helps to identify points of failure.
- Audit at regular intervals to ensure that the system is performing as expected.
Key considerations when setting up a monitoring system include:
- Determining the objectives for monitoring (e.g. real-time intervention, debugging, integration between components) to identify what to log. In doing so, prioritise monitoring for high-risk activities such as updating database records or financial transactions.
- How to effectively monitor logs: Organisations can consider approaches such as the following (a simple outlier-detection sketch follows this list):
- Defining programmatic, threshold-based alerts,
- Outlier / anomaly detection, and
- Agents monitoring other agents.
- Defining specific interventions: Consider what the level of intervention should be. Some degree of human review should be incorporated, proportionate to the risk level.
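Here is a simple, hypothetical illustration of outlier detection on logged agent trajectories: the metric used (the number of tool calls an agent needed per task) and the z-score threshold are assumptions chosen for illustration.

```python
# Hypothetical sketch of outlier detection over logged agent trajectories.
# The metric (tool calls per task) and the z-score threshold are illustrative assumptions.
from statistics import mean, stdev

def find_anomalous_trajectories(tool_calls_per_task: list[int], z_threshold: float = 2.5) -> list[int]:
    """Flag tasks whose tool-call count is far from the historical norm, for human review."""
    mu, sigma = mean(tool_calls_per_task), stdev(tool_calls_per_task)
    return [
        i for i, n in enumerate(tool_calls_per_task)
        if sigma > 0 and abs(n - mu) / sigma > z_threshold
    ]

# Usage: flags the run that needed 40 tool calls when most tasks take around 5.
print(find_anomalous_trajectories([5, 6, 4, 5, 40, 5, 6, 5, 4, 5]))   # -> [4]
```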
Finally, continuously test the agentic system even post-deployment to ensure that it works as expected and is not affected by model drift or other changes in the environment.
Enabling end-user responsibility
Ultimately, end users are the ones who use and rely on agents, and human accountability also extends to these users. Organisations should provide sufficient information to end users to promote trust and enable responsible use.
Organisations should take note that:
- Users should be informed of the agents’ capabilities (e.g. the scope of an agent’s access to the user’s data, actions the agent can take) and the contact points to whom users can escalate if the agent malfunctions.
- Users should be educated on proper use and oversight of agents (e.g. training should be provided on an agent’s range of actions, common failure modes like hallucinations, and usage policies for data), as well as on the potential loss of tradecraft: as agents take over more functions, basic operational knowledge could be eroded.

For users who interact with agents, the agent acts on behalf of the organisation. For these users, focus on transparency. Organisations should share pertinent information to foster trust and facilitate proper usage of agents. Such information can include:
- User’s responsibilities: Clearly define the user’s responsibilities, such as asking the user to double-check all information provided by the agent.
- Interaction: Declare upfront that the users are interacting with agents.
- Agents’ range of actions and decisions: Inform the users on the range of actions and decisions that the agent is authorised to perform and make.
- Data: Be clear on how user data is collected, stored, and used by the agents, in accordance with the organisation's data privacy policies. Where necessary, obtain explicit consent from users before collecting or using their data for the agents.
- Human accountability and escalation: Provide users with the respective human contact points who are responsible for the agents, whom the users can alert if the agents malfunction or if they are dissatisfied with a decision.
For users who integrate agents into their work processes, the agent acts for and on behalf of the user. For these users, in addition to the information in the previous section, layer on education and training so that users can use the agents responsibly. Key aspects include education and training on:
- Foundational knowledge on agents
- Relevant use cases, so that the users understand how to best integrate the agents into their day-to-day work.
- Instructing the agents e.g. general best practices in prompting, glossary of keywords to elicit specific responses.
- Agents’ range of actions, so that the user is aware of their capabilities and potential impact.
- Effective oversight of agents
- Common agent failure modes, such as hallucinations or getting stuck in loops after errors, so that the user can identify and flag issues.
- Ongoing support, such as regular refreshers to update users on latest features and common user mistakes.
- Potential impact on tradecraft
- As agents take over entry level tasks, which typically serve as the training ground for new staff, this could lead to loss of basic operational knowledge for the users.
- Organisations should identify core capabilities of each job and provide sufficient training and work exposure so that users retain foundational skills.