How To Navigate the AI Agent Governance Gap

by Boomi
Published Oct 27, 2025

Companies have guardrails to help ensure a quality workforce. Rigorous background checks, training, performance reviews, and periodic drug testing are some of the ways that firms evaluate employees for ethics, accountability, and value to the business.

As organizations ratchet up use of AI agents with autonomous or semi-autonomous decision-making capabilities, it would make sense that AI agents be held to the same standards as the human workforce — but that is not the case.

Shockingly, just 2% of tech leaders say their AI agents are held fully accountable for their actions and governed in an always-on and consistent manner, according to new research conducted by FT Longitude on behalf of Boomi.

This governance gap introduces substantial risk. Without proper governance, erratic AI agent actions could go undetected, with potential implications for finance and operations, customer and partner relations, and public trust.

As agentic AI usage grows, ensuring sound governance of AI agents is emerging as one of the greatest challenges facing IT and business leaders.

What Is an AI Agent?

An AI agent is autonomous or semi-autonomous software that can make decisions without human input. These agents can “reason” based on data collected from internal or external sources — and, crucially, they can act on their conclusions.
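The "reason, then act" loop described above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration; the function names and the decision rule are assumptions for the example, not any vendor's API.

```python
# Minimal sketch of an agent's sense-reason-act loop (illustrative only).

def gather_data(sources):
    """Collect observations from internal or external data sources."""
    return [source() for source in sources]

def reason(observations, threshold=100):
    """Decide on an action from the collected data (toy rule)."""
    total = sum(observations)
    return "reorder" if total < threshold else "hold"

def act(decision):
    """Carry out the decision without further human input."""
    return f"action taken: {decision}"

# Example: an inventory agent watching two stock-level feeds.
sources = [lambda: 40, lambda: 35]   # stand-ins for real data feeds
decision = reason(gather_data(sources))
print(act(decision))                 # -> action taken: reorder
```

The governance concern in the rest of this article stems from that last step: the agent does not just recommend, it acts.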

The rise of agentic AI is on a par with other transformative trends such as cloud computing in the early 2000s, and digital transformation more recently. Our survey found that 73% of tech leaders believe that AI agents will be the biggest game-changer their organizations have seen in the past five years. As of today:

  • 45% of tech leaders are piloting AI agents with some decision-making capabilities
  • 10% have integrated AI agents into multiple business processes
  • 4% have fully embedded agents handling complex tasks across their business

AI agent use cases vary widely, with significant numbers of tech leaders saying that AI can be used “entirely” or “mostly” in these processes:

  • 31% — creating marketing content
  • 28% — monitoring regulatory or policy compliance
  • 26% — managing cybersecurity risk
  • 25% — forecasting demand or resource needs
  • 20% — screening job applicants for interviews

AI Agent Sprawl Poses a Risk

More than two-thirds (70%) of tech leaders say they’ve lined up multiple use cases for AI agents before putting in place a framework to govern and manage them responsibly. The Boomi survey found that less than one-third (32%) have an AI governance framework in place, and just 29% are regularly training staff and leadership on how to use AI agents responsibly.

As a result, leadership could be in the dark as to whether AI agents are using sensitive data, resulting in potential security or regulatory compliance breaches. Agents could be making counterproductive decisions that go unnoticed until damage becomes painfully apparent.

The risk inherent in ungoverned AI agent usage is exacerbated by the speed of development and adoption, and by the scattershot complexity that arises when organizations run AI agents from multiple providers. Agent sprawl can happen quickly, and so can the consequences of unanticipated behavior.

“You can define guardrails and policies at an agent level, and an agent development company can help you govern those agents,” says Ed Macosky, Boomi’s chief product and technology officer. “The issue comes when you have 10 different providers, because AI agents are developing and iterating too fast. How do you manage all of that?”
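One common pattern for the multi-provider problem Macosky describes is to route every proposed agent action, whatever its origin, through a single central policy check. The sketch below is a hypothetical illustration of that idea; the provider names, field names, and rules are invented for the example and are not a Boomi implementation.

```python
# Hypothetical centralized guardrail: all agents, from any provider,
# pass one policy check before an action is executed.

SENSITIVE_FIELDS = {"ssn", "salary"}            # data agents may not touch
APPROVED_PROVIDERS = {"vendor_a", "vendor_b"}   # vetted agent providers

def policy_check(provider, action, data_fields):
    """Return (allowed, reason) for a proposed agent action."""
    if provider not in APPROVED_PROVIDERS:
        return False, f"unapproved provider: {provider}"
    touched = SENSITIVE_FIELDS.intersection(data_fields)
    if touched:
        return False, f"sensitive data access: {sorted(touched)}"
    return True, "ok"

# Agents from different providers are vetted identically.
print(policy_check("vendor_a", "update_record", ["name", "ssn"]))
# -> (False, "sensitive data access: ['ssn']")
print(policy_check("vendor_b", "forecast_demand", ["region", "units"]))
# -> (True, 'ok')
```

Centralizing the check, rather than configuring guardrails separately per provider, is what keeps policy consistent as the number of agents grows.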

The Boomi Approach to AI Agent Governance

Boomi provides a robust AI agent framework with Boomi Agentstudio, a fully integrated environment for AI agent design, governance, and orchestration at scale. We have deployed more than 50,000 AI agents for customers to date — and unlike other vendors, we provide tools to ensure they operate securely, ethically, and efficiently.

Boomi Agentstudio includes:

Agent Designer: Enables users to create and deploy AI agents using intuitive no-code templates, ensuring they are grounded on trusted enterprise data with built-in security guardrails.

Agent Control Tower: Provides proactive, always-on monitoring, ensuring full visibility and control over both Boomi AI agents and third-party agents. Agent Control Tower integrates with Amazon Bedrock to help customers easily discover and manage their agents.

Agent Garden: A personal, unified space for users to interact with AI agents using natural language, streamlining collaboration and execution of AI-driven tasks. It supports AI agent design, testing, deployment, and tool development.

Agent Marketplace: Residing in Boomi Marketplace, Agent Marketplace is a central hub for organizations to access and discover off-the-shelf and customizable AI agents from Boomi and trusted AI partners.

With Boomi Agentstudio, you're equipped to apply robust checks, balances, and controls to your AI agent usage, much as you do with your human workforce. As Sameer Vuyyuru, chief AI and product officer at Capita, a business process outsourcer, says in our survey report:

“You spend so much time choosing the most ethical, principled people for your business, and you wouldn’t have untrained and untested people interacting with critical systems and processes. So it’s absolutely critical that you treat digital workers the same way.”

Download the Boomi research report, “Navigating the AI Agent Governance Gap,” for more survey results, and explore the capabilities of Boomi Agentstudio.
