2025 is set to be an interesting year for technology. AI agents are on the rise and have tremendous potential to revolutionize enterprise processes. However, without proper oversight, these digital workers could quickly turn from valuable assets into significant liabilities.
AI Readiness in Action
In 2023, many generative AI projects were hindered by a critical oversight: the need for high-quality data to enable effective AI operation. Consequently, organizations encountered significant hurdles, including data privacy concerns, biases in AI models, and the risk of misleading AI outputs (known as hallucinations). This is not surprising, especially in the APJ region. In Australia, for example, 68% of company data remains unused, according to the Seagate study, “Rethink Data: Put More of Your Data to Work From Edge to Cloud.”
As 2024 unfolded, businesses began to formulate comprehensive strategies for successful AI implementation. These strategies encompassed developing frameworks for Responsible and Ethical AI and establishing robust data governance and management practices. For instance, the Australian Red Cross implemented its own AI governance framework to balance trust and compliance for new AI initiatives within the business.
Many countries in the APJ region also began to launch AI governance initiatives, typically focusing on minimizing the risk of personal information misuse. These initiatives include the National AI Strategy and Smart Nation 2.0 in Singapore and guidelines such as Australia’s voluntary “10 AI Guardrails.”
With successful AI deployment relying on strong data governance, organizations also started addressing other key challenges. These included ensuring data liquidity, which refers to the seamless access and analysis of data from diverse sources, and upholding data quality, as legacy systems frequently undermine accuracy. Recognizing the critical role of data integration in preparing data for AI applications, Boomi acquired Rivery to ensure that the data informing these AI applications is current, delivering data changes at high volume and speed.
Consequently, I believe that in 2025 we will see many organizations thrive with generative AI and potentially AI agents, outpacing their competitors, delivering personalized customer experiences, and accelerating the launch of innovative products. This shift in focus will find them prioritizing practical AI applications that align with strategic business objectives to drive results, meticulously scoped to mitigate risks and free from the noise of exaggerated marketing claims.
AI Agents: The Next Evolution
The pace of AI innovation is rapidly accelerating, with AI agents emerging as the next phase in this evolution. Acting as virtual assistants, these agents are designed to work alongside us, operating independently, learning from data, and making decisions in real time. One day they may even operate fully autonomously.
As their prevalence grows, effective management and governance of AI agents becomes essential. Without it, these digital workers could turn from assets to liabilities. In Australia, we’ve already seen the issues that can arise from allowing technology to run without monitoring or oversight, such as with the Robodebt scandal.
A report by Boomi, “A Playbook for Crafting AI Strategy,” produced in partnership with MIT Technology Review Insights, reveals that 45% of organizations see governance, security, and privacy issues as significant barriers to rapid AI deployment.
Boomi itself is not immune to this pace of change, having released seven Boomi AI Agents that autonomously perform tasks within the Boomi Enterprise Platform, simplifying and accelerating integration and automation for developers. Our agents also allow users to install and use AI functions from Boomi’s network of partners, or from AI agents they create themselves.
Governance and Oversight
AI agents will eventually operate at a scale that current tools cannot effectively monitor. In 2025 we can expect the introduction of AI governance platforms that allow organizations to centrally oversee the operational aspects of their AI systems.
These advanced tools will be instrumental in establishing, managing, and enforcing policies to ensure that AI agents are used transparently and responsibly. They will monitor the lifecycle of these agents, from deployment to decommissioning, while providing insights into model construction, data utilization, and the rationale behind their outputs. Given that many users currently express distrust in AI systems, fostering transparency through robust governance will be crucial to restoring confidence in these technologies.
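To make this more tangible, here is a simplified sketch of how a governance platform might track an agent’s lifecycle from deployment to decommissioning. The states, transition rules, and field names are illustrative assumptions, not a description of any particular product.

```typescript
// Hypothetical sketch of agent lifecycle tracking in a governance platform.
// States and allowed transitions are illustrative assumptions only.

type LifecycleState = "registered" | "deployed" | "suspended" | "decommissioned";

// Transitions a governance platform might permit between states.
const allowedTransitions: Record<LifecycleState, LifecycleState[]> = {
  registered: ["deployed", "decommissioned"],
  deployed: ["suspended", "decommissioned"],
  suspended: ["deployed", "decommissioned"],
  decommissioned: [],
};

interface AgentRecord {
  id: string;
  state: LifecycleState;
  history: { state: LifecycleState; at: Date }[]; // audit trail for transparency
}

function transition(agent: AgentRecord, next: LifecycleState): AgentRecord {
  if (!allowedTransitions[agent.state].includes(next)) {
    throw new Error(`Transition ${agent.state} -> ${next} is not permitted`);
  }
  return {
    ...agent,
    state: next,
    history: [...agent.history, { state: next, at: new Date() }],
  };
}
```

Keeping the full transition history alongside the current state is one simple way a platform could surface the audit trail that transparency reporting depends on.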
AI governance platforms are also part of Gartner’s Top 10 Strategic Technology Trends for 2025¹ with an assumption that “by 2028, enterprises using AI governance platforms will achieve 30% higher customer trust ratings and 25% better regulatory compliance scores than their competitors, along with 40% fewer AI-related ethical incidents compared to those without such systems.”
Capabilities for Effective Governance
The initial use case for an AI governance platform will be to identify and register all AI agents operating within an organization. This could be effectively achieved through APIs that facilitate seamless communication and provide a standardized method for collecting the necessary registration information. Existing API management concepts can be adapted for AI governance, enabling developers to discover and integrate API-enabled AI agents into their applications.
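As a rough illustration of what API-based registration could collect, the sketch below defines a hypothetical registration payload, a simple in-memory registry, and a discovery helper. The field names and risk tiers are assumptions for the example, not an actual platform schema.

```typescript
// Hypothetical registration payload an AI governance platform might collect.
// Field names are illustrative assumptions, not a real product schema.
interface AgentRegistration {
  name: string;
  owner: string;              // team or person accountable for the agent
  endpoint: string;           // API endpoint where the agent can be reached
  model: string;              // underlying model, for transparency reporting
  dataSources: string[];      // data the agent is permitted to use
  riskTier: "low" | "medium" | "high";
}

// A simple in-memory registry standing in for the platform's agent inventory.
const registry = new Map<string, AgentRegistration>();

function registerAgent(id: string, reg: AgentRegistration): void {
  if (registry.has(id)) {
    throw new Error(`Agent ${id} is already registered`);
  }
  registry.set(id, reg);
}

// Discovery: let developers find registered, API-enabled agents by risk tier.
function discoverAgents(maxRisk: AgentRegistration["riskTier"]): AgentRegistration[] {
  const order = { low: 0, medium: 1, high: 2 };
  return [...registry.values()].filter((a) => order[a.riskTier] <= order[maxRisk]);
}
```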
Furthermore, an AI Agent Catalog can serve as an authoritative source for certified AI agents, potentially allowing for monetization through their API interfaces. Once these AI agents are identified and registered, a dedicated portal can track their activities, ensuring governance and responsible use. Operational features could include maintaining activity logs and providing insights into the decisions made by the AI agents, along with access control, policy management, risk management, and other measures to ensure transparency and build trust and accountability.
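Continuing the illustration, a governance portal’s activity log might pair each agent action with the rationale behind it and apply a basic access-control rule before recording it. The certification and data-source labels below are hypothetical.

```typescript
// Hypothetical activity log entry a governance portal might record per agent
// action, including the rationale behind the decision for auditability.
interface ActivityLogEntry {
  agentId: string;
  timestamp: Date;
  action: string;          // e.g. "approved_refund", "routed_ticket"
  rationale: string;       // explanation surfaced for transparency
  dataUsed: string[];      // which data sources informed the decision
}

const activityLog: ActivityLogEntry[] = [];

// Illustrative access-control rule: only certified agents may act on
// regulated data. The "certified" and "regulated" labels are assumed.
const certifiedAgents = new Set<string>(["invoice-triage-agent"]);
const regulatedSources = new Set<string>(["customer-pii"]);

function recordAction(entry: ActivityLogEntry): void {
  const touchesRegulatedData = entry.dataUsed.some((d) => regulatedSources.has(d));
  if (touchesRegulatedData && !certifiedAgents.has(entry.agentId)) {
    throw new Error(`Agent ${entry.agentId} is not certified for regulated data`);
  }
  activityLog.push(entry);
}
```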
Security and Compliance
Compliance capabilities will also play a crucial role in highly regulated environments. It is possible that countries in the APJ region will eventually adopt regulations similar to those emerging elsewhere, such as the EU AI Act, enforcement of which begins in February 2025. The act imposes fines of up to 7% of global turnover and may introduce additional obligations for APJ organizations identified as “providers” or “deployers” of AI systems if they operate in the EU. Regulations like this will further drive investments in technologies that offer AI governance to mitigate risks.
As deployments of multi-agent systems become more common, the associated security risks will also increase. These systems communicate through APIs, which could be exploited by malicious actors. Therefore, robust security monitoring must also be an integral part of any AI governance platform to enable businesses to limit or halt any adverse agent behavior.
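One simple form such monitoring could take is rate-based anomaly detection that suspends an agent whose API activity deviates from an expected baseline. The threshold and time window in this sketch are arbitrary illustrative values.

```typescript
// Hypothetical security monitor that suspends an agent whose API call rate
// exceeds a baseline. The threshold and window are illustrative assumptions.
const MAX_CALLS_PER_MINUTE = 100;

const callTimestamps = new Map<string, number[]>(); // agentId -> recent call times
const suspendedAgents = new Set<string>();

function recordApiCall(agentId: string, now: number = Date.now()): void {
  if (suspendedAgents.has(agentId)) {
    throw new Error(`Agent ${agentId} is suspended; call rejected`);
  }

  // Keep only calls made within the last minute.
  const windowStart = now - 60_000;
  const recent = (callTimestamps.get(agentId) ?? []).filter((t) => t >= windowStart);
  recent.push(now);
  callTimestamps.set(agentId, recent);

  // Halt the agent if its behavior exceeds the allowed rate.
  if (recent.length > MAX_CALLS_PER_MINUTE) {
    suspendedAgents.add(agentId);
    throw new Error(
      `Agent ${agentId} exceeded ${MAX_CALLS_PER_MINUTE} calls/min and was suspended`
    );
  }
}
```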
The Human Element
AI governance also faces obstacles that cannot be addressed by technology alone. Organizations must tackle cultural resistance, and gaining support from all stakeholders requires understanding and navigating how different cultures perceive new technologies. This is crucial for standardizing AI governance practices across a diverse and fragmented environment.
The Importance of Technology in AI Readiness
Utilizing technologies that enhance AI governance is vital for effectively managing the risks associated with AI agents. Organizations must also remain mindful of evolving guidelines and approaches to AI governance. Smaller enterprises, despite having limited resources, can still establish robust AI governance by partnering with technology providers.
Furthermore, it is essential not to overlook data governance, which plays a critical role in minimizing bias in AI models by promoting transparency in data sources. Training AI systems on high-quality, representative data is crucial for generating fair and accurate recommendations that support informed decision-making. This approach fosters transparency and ultimately builds trust with stakeholders.
Read “A Playbook for Crafting AI Strategy” for insights into how businesses like yours are advancing on their AI adoption journeys.
——–
1. Gartner, “Top 10 Strategic Technology Trends for 2025,” Gene Alvarez, 21 October 2024. https://www.gartner.com.au/en/articles/top-technology-trends-2025