MCP Protocol Reveals Design-Level RCE Vulnerability; Anthropic Refuses Architecture Changes

iconKuCoinFlash
Share
Share IconShare IconShare IconShare IconShare IconShare IconCopy
AI summary iconSummary

expand icon
A design-level RCE vulnerability has been disclosed in the Model Context Protocol (MCP), an open protocol led by Anthropic. The flaw enables attackers to execute arbitrary commands on systems using vulnerable implementations. The issue arises from the default behavior of Anthropic’s official SDK when handling STDIO transmission, impacting multiple programming languages. OX Security reported over 150 million downloads of affected packages and thousands of exposed instances. Anthropic has declined to modify the protocol or SDK defaults, stating the behavior is “by design.” This vulnerability underscores the risks associated with current protocol updates.

ME News reports that on April 21 (UTC+8), according to monitoring by Beating, security firm OX Security recently disclosed a design-level remote code execution vulnerability in the Model Context Protocol (MCP), an open protocol led by Anthropic and serving as the de facto standard for AI agents to invoke external tools. Attackers can execute arbitrary commands on any system running a vulnerable MCP implementation, gaining access to user data, internal databases, API keys, and chat logs. The vulnerability does not stem from implementation coding errors, but from the default behavior of Anthropic’s official SDK when handling STDIO transport—affected across Python, TypeScript, Java, and Rust versions. STDIO is one of MCP’s transport methods, enabling local processes to communicate via standard input and output. The StdioServerParameters in the official SDK directly spawns child processes using command parameters defined in configuration; if developers do not perform additional input sanitization, any user input reaching this stage becomes executable system commands. OX Security categorized the attack surface into four types: direct command injection via configuration interfaces; bypassing sanitization by appending flags to whitelisted commands (e.g., `npx -c `); injecting prompts within IDEs to rewrite MCP configuration files, enabling tools like Windsurf to launch malicious STDIO services without user interaction; and covertly embedding STDIO configurations via HTTP requests through the MCP marketplace. OX Security reported that affected packages have been downloaded over 150 million times collectively, with more than 7,000 publicly accessible MCP servers exposing up to 200,000 instances across more than 200 open-source projects. The team has submitted over 30 responsible disclosures and obtained more than 10 high-severity or critical CVEs, covering AI frameworks and IDEs such as LiteLLM, LangFlow, Flowise, Windsurf, GPT Researcher, Agent Zero, and DocsGPT; of the 11 MCP package repositories tested, nine could be compromised using this method. Following disclosure, Anthropic responded that this is “by design,” asserting that the STDIO execution model constitutes a “secure default design,” and shifted responsibility for input sanitization onto developers, refusing to modify the protocol or official SDK. Vendors such as DocsGPT and LettaAI have released their own patches, but Anthropic’s reference implementation remains unchanged. MCP has become the de facto standard for AI agents connecting to external tools, with OpenAI, Google, and Microsoft all adopting it. Without addressing the root issue, any MCP service relying on the official SDK’s default STDIO handling—even if not a single line of code was written incorrectly—may still serve as an entry point for attacks. (Source: BlockBeats)

Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information. Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.