MCP is becoming the plug-and-play standard for agentic AI apps to pull in data in real time from multiple sources. The Pulse MCP server directory currently lists more than 4,300 active MCP servers allowing LLMs to connect with data feeds ranging from Spotify and YouTube to Salesforce and GitHub. The momentum behind the protocol will likely accelerate adoption as developers coalesce around a common standard, and alternatives struggle to gain traction.
However, this also makes it more attractive to malicious actors looking to exploit weaknesses in how MCP has been deployed. This is especially relevant where MCP is used to access external third-party data sources. Although not related to this particular technology, the recent cyberattack on UK retailer Marks & Spencer stemmed from a weakness in one of its suppliers’ IT systems. The breach wiped nearly a billion dollars (£750 million) off the company’s market capitalization and is expected to knock nearly half a billion off operating profits this year.
A question of security
So what are some of the key vulnerabilities that MCP presents, and how might they be addressed? MCP is designed to operate more dynamically than traditional APIs, where manual oversight in setting up data feeds is the norm. For agentic AI to truly benefit from the advantages of MCP, dynamic discovery of data sources and real-time access will often be required. Unfortunately, in its current form, MCP doesn’t have sufficient security capabilities baked in for enterprises to deploy it without taking additional precautions.
MCP also operates on a client-server basis, with MCP clients connecting to data sources via MCP servers to allow agentic AI apps to perform autonomous or semi-autonomous actions. But this architecture also opens up a range of possible exploits, many of which will be familiar to IT and security managers.
MCP servers are an obvious target for attacks as they act as gateways between the clients operating within enterprises and the data sources being pulled into the LLMs on the client side. Dor Sarig, CEO at Pillar Security, points out that the authentication tokens typically stored on MCP servers, and their ability to execute actions across connected services, present a potential “keys to the kingdom” scenario in which a single compromised MCP server could expose an enterprise’s core digital assets. Using token-based authentication to restrict data access to authorized users of MCP servers is one way to increase security, but it isn’t a requirement of the protocol. Even when it is implemented, such tokens are relatively easy to intercept if the LLM being used is trained to generate tokens with appropriate permissions.
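To make the “keys to the kingdom” risk concrete, the sketch below shows one way an MCP server could gate tool calls on a caller’s token and scope before acting on its behalf. It is only an illustration: the names (TOOL_SCOPES, handle_tool_call) and the toy in-memory token store are assumptions for the example, not part of the MCP specification.

```python
# Illustrative sketch: scope-checked access to MCP tool calls.
# TOOL_SCOPES, ISSUED_TOKENS and handle_tool_call are assumed names
# for illustration, not part of the MCP specification.
import time

# Map each exposed tool to the minimum scope a caller must hold.
TOOL_SCOPES = {
    "read_customer_record": "crm:read",
    "update_customer_record": "crm:write",
}

# Toy in-memory token store: token -> (granted scopes, expiry in epoch seconds).
ISSUED_TOKENS = {
    "token-abc": ({"crm:read"}, time.time() + 900),  # 15-minute lifetime
}

def authorize(token: str, tool_name: str) -> bool:
    """Reject calls carrying missing, expired, or under-scoped tokens."""
    record = ISSUED_TOKENS.get(token)
    if record is None:
        return False
    scopes, expires_at = record
    if time.time() > expires_at:
        return False
    required = TOOL_SCOPES.get(tool_name)
    return required is not None and required in scopes

def handle_tool_call(token: str, tool_name: str, args: dict):
    if not authorize(token, tool_name):
        raise PermissionError(f"Call to {tool_name} denied")
    # ...dispatch to the real tool implementation here...
    return {"tool": tool_name, "args": args}
```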
AI assistants present another vulnerability: they may act on natural language commands that contain hidden, embedded instructions. These prompt injection attacks could result in the assistant inadvertently leaking sensitive data to third parties.
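On the defensive side, one rough illustration is a gateway that screens content retrieved from third-party sources for obviously suspicious instructions before it reaches the model. The patterns below are assumptions chosen for the example; keyword matching alone is far from a complete defense against prompt injection.

```python
# Illustrative sketch: a naive heuristic screen for injected instructions
# in content retrieved through an MCP tool. The pattern list is an
# assumption for the example and would not stop a determined attacker.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"send .* to (this|the following) (address|url|email)",
]

def screen_retrieved_content(text: str) -> str:
    """Raise if content looks like it is trying to instruct the model."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            raise ValueError(f"Possible prompt injection matched: {pattern!r}")
    return text
```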
So-called rug-pull updates, familiar to anyone managing multiple application plug-ins, are another cause for concern: safe and legitimate MCP servers, and the data and applications they connect to, may become compromised after an update.
MCP server spoofing, where a fake server adopts the name and tool list of a trusted one, can also trick less vigilant developers. The spoofed server can then harvest a range of sensitive enterprise data to be used in future attacks.
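One mitigation is to pin the identity of every server a client is allowed to reach. The sketch below checks a discovered server against a pre-approved allowlist; the fields and values are illustrative assumptions, not an MCP requirement.

```python
# Illustrative sketch: verify a discovered MCP server against a pinned
# allowlist before connecting, to reduce the risk of a spoofed server
# reusing a trusted name. The allowlist format is an assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class RegisteredServer:
    name: str
    url: str
    tls_fingerprint: str  # certificate fingerprint recorded at vetting time

ALLOWLIST = {
    "github-connector": RegisteredServer(
        name="github-connector",
        url="https://mcp.internal.example.com/github",
        tls_fingerprint="sha256:3f6a...",  # truncated example value
    ),
}

def is_trusted(name: str, url: str, observed_fingerprint: str) -> bool:
    entry = ALLOWLIST.get(name)
    return (
        entry is not None
        and entry.url == url
        and entry.tls_fingerprint == observed_fingerprint
    )
```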
Safer agentic AI
While some of the vulnerabilities that MCP exposes are unique to the technology, the solutions are often the tried-and-tested approaches already practiced by any sensible organization. Security starts with awareness and a rigorous, documented set of rules and procedures.
Undertaking an audit of where MCP is being used, or where it is in the planning stages, is key, as is understanding which data sources and LLMs are being connected via MCP. All MCP endpoints should be secured with strong authentication, and the context tokens and API keys used by MCP should be issued with short lifespans and the minimum scope needed.
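A minimal sketch of that last point, using the PyJWT library, might look like the following. The scope names and the 15-minute lifetime are assumptions for illustration, not values mandated by MCP.

```python
# Illustrative sketch with PyJWT: issue tokens that expire quickly and
# carry only the scopes a given MCP client needs.
import datetime
import jwt  # pip install PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"

def issue_token(client_id: str, scopes: list[str], lifetime_minutes: int = 15) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "sub": client_id,
        "scope": " ".join(scopes),  # minimum scope needed
        "iat": now,
        "exp": now + datetime.timedelta(minutes=lifetime_minutes),  # short lifespan
    }
    return jwt.encode(payload, SIGNING_KEY, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError once the short lifetime has passed.
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
```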
Data is the lifeblood of agentic AI systems, and any data used should be carefully validated to prevent injection attacks. All third-party tools should also be vetted and then monitored for changes and updates to protect against rug-pull attacks.
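Vetting and then monitoring a tool can be as simple as recording a digest of its manifest at approval time and refusing to use it if that digest changes. The manifest structure and digest store below are assumptions for the example.

```python
# Illustrative sketch: detect silent changes to a third-party MCP tool by
# hashing its advertised manifest (name, description, input schema) and
# comparing against the digest recorded when the tool was vetted.
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Digests captured at vetting time, e.g. stored in version control.
APPROVED_DIGESTS = {
    "send_report": "0c1d2e...",  # truncated example value
}

def check_for_rug_pull(tool_name: str, current_manifest: dict) -> None:
    expected = APPROVED_DIGESTS.get(tool_name)
    if expected is None or manifest_digest(current_manifest) != expected:
        raise RuntimeError(
            f"Tool {tool_name!r} has changed since it was vetted; re-review before use"
        )
```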
In addition, a program of ongoing logging and monitoring of MCP activity should be undertaken, covering data access requests, authentication failures, and changes to configuration settings. Regular inspection of these logs is essential to identify suspicious activity and make appropriate changes.
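In practice that can start with structured audit events emitted wherever MCP traffic passes through, as in this illustrative sketch; the event names and fields are assumptions rather than any standard.

```python
# Illustrative sketch: structured audit logging of MCP activity, covering
# data access requests, authentication failures, and configuration changes.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("mcp.audit")

def log_event(event_type: str, **fields) -> None:
    record = {"ts": time.time(), "event": event_type, **fields}
    audit_log.info(json.dumps(record))

# The kinds of events worth capturing:
log_event("data_access", client="reporting-agent", tool="read_customer_record")
log_event("auth_failure", client="unknown", reason="expired token")
log_event("config_change", setting="allowed_servers", changed_by="admin@example.com")
```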
MCP will be a key element of the coming agentic AI revolution, and it’s essential that all those involved build in appropriate security from the outset of any deployment. Gil Feig, co-founder and CTO of API integrator Merge, sees MCP as driving a paradigm shift in data-driven applications, but urges caution with any deployment: “There are risks in letting AI choose where to take and send data,” he says. His company’s recent launch of an MCP server that allows any LLM to access its integrations’ endpoints was built with security in mind, including a data loss prevention (DLP) tool.
The benefits offered by MCP as a common standard for plumbing the emerging agentic AI world outweigh the risks, but like SMTP and HTTP before it, secure deployments will be vital.