
A report from International Data Corp. (IDC) projects that spending on agentic artificial intelligence initiatives will reach $1.3 trillion by 2029. Over the next five years, developers and organizations will increase the number and complexity of third-party and custom-built AI agents tenfold.
The rapid increase in autonomous AI agents, essentially a new class of digital end users, has profound implications for managed service providers (MSPs). These providers are already struggling to support the growing number of human users accessing applications and services, and the added complexity will only intensify the challenge. In the near future, enterprise networks may host more than a dozen AI agents for every human end user. This dramatic shift will redefine how organizations manage digital interactions and system oversight.
AI agent protocols and policies
Most of these AI agents are expected to access data via the Model Context Protocol (MCP). Originally developed by Anthropic, MCP is rapidly emerging as a de facto standard that, to varying degrees, is already supported by nearly every IT vendor.
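At a high level, MCP messages are JSON-RPC 2.0 requests and responses, and an agent invokes a server-side tool with the `tools/call` method. The sketch below shows roughly what such a request looks like on the wire; the helper function, tool name, and arguments are illustrative, not part of any official SDK.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 request in the shape of an MCP tools/call.

    The tool name and arguments are hypothetical examples; a real MCP
    client would send this over stdio or HTTP to an MCP server.
    """
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(message)

request = build_tool_call(1, "search_tickets", {"query": "open incidents"})
parsed = json.loads(request)
```

In practice, agents use an MCP client library rather than hand-building messages, but the wire format is what guardrail and logging tooling will ultimately inspect.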
Additionally, there is a broad push to spur adoption of the agent-to-agent (A2A) protocol developed by Google. Designed to provide a communications framework between AI agents, A2A is now being advanced under the auspices of The Linux Foundation.
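A2A discovery is built around an "Agent Card": JSON metadata an agent publishes (conventionally at a well-known URL) so that other agents can find it and learn what it can do. The sketch below is a loose illustration of that idea; the specific field values and the validation helper are assumptions, not the normative schema.

```python
# Illustrative A2A-style Agent Card: metadata one agent publishes so
# other agents can discover its capabilities. Values are hypothetical.
agent_card = {
    "name": "invoice-processor",
    "description": "Extracts line items from uploaded invoices",
    "url": "https://agents.example.com/invoice-processor",
    "version": "1.0.0",
    "skills": [
        {"id": "extract-line-items", "name": "Extract line items"},
    ],
}

def discoverable(card: dict) -> bool:
    """Minimal sanity check before advertising a card to other agents."""
    required = {"name", "url", "version", "skills"}
    return required.issubset(card) and bool(card["skills"])
```

For MSPs, the practical point is that agent capabilities become machine-readable inventory: something that can be catalogued, audited, and policed per customer.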
IT teams must authenticate each AI agent before deployment, just as they would with any human end user. After authentication, they apply guardrail policies to ensure the agent operates within its assigned permissions. Without clearly defined policies, AI agents will attempt to incorporate all accessible data into their workflows. This unchecked behavior could lead to the inclusion of large volumes of sensitive information, potentially violating numerous existing compliance mandates.
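The authenticate-then-guardrail flow above can be sketched in a few lines. The policy model here, a per-agent allowlist of data scopes held in an in-memory registry, is an assumption for illustration; real deployments would back this with an identity provider and a credential store rather than hardcoded tokens.

```python
import hmac

# Hypothetical agent registry: each deployed agent gets credentials and
# an explicit allowlist of data scopes it may touch.
REGISTERED_AGENTS = {
    "agent-support-bot": {
        "token": "s3cr3t",  # illustrative only; never hardcode credentials
        "scopes": {"tickets:read", "kb:read"},
    },
}

def authenticate(agent_id: str, token: str) -> bool:
    """Verify the agent's credentials before it is allowed to run."""
    agent = REGISTERED_AGENTS.get(agent_id)
    # Constant-time comparison avoids leaking token contents via timing.
    return agent is not None and hmac.compare_digest(agent["token"], token)

def authorized(agent_id: str, scope: str) -> bool:
    """Guardrail check: the agent may only touch scopes it was granted."""
    return scope in REGISTERED_AGENTS.get(agent_id, {}).get("scopes", set())
```

Default-deny is the key property: a scope the agent was never granted, such as customer PII, is refused even if the data is otherwise reachable, which is exactly the unchecked behavior the paragraph above warns about.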
AI agents raise urgent security concerns
Many of these AI agents will operate with a high level of autonomy. As a result, it’s only a matter of time before cybercriminals begin targeting them. Cybercriminals no longer need to stop at stealing data from an IT environment. By targeting AI agents, they can compromise entire business processes and manipulate outcomes at scale. For example, someone could theoretically reprogram an AI agent managing a customer service interaction to redirect sales to a rival organization.
Despite the concerns highlighted in the IDC report, organizations assume they will resolve these issues on their own. Most will soon realize that the scale at which AI agents access data and interact in near real time presents serious security and management challenges. To address these complexities, they will need to rely more heavily on MSPs. The need for external expertise is quickly becoming a matter of when, not if. According to IDC and other reports, organizations will begin adopting AI agents more pervasively starting next year. Shortly thereafter, a catastrophic event involving one or more AI agents seems inevitable. There have already been examples where a rogue AI agent deleted an entire database in a production environment. The potential damage from a compromised agent could be incalculable.
MSPs face both a challenge and an opportunity: they must begin discussing agentic AI with customers now. AI agents may pose serious risks if deployed without oversight, but MSPs have a unique opportunity to guide organizations toward secure, scalable adoption.
Photo: Stock-Asso / Shutterstock
This post originally appeared on Smarter MSP.