Google patches vulnerability in Antigravity AI coding platform

icon币界网
Share
Share IconShare IconShare IconShare IconShare IconShare IconCopy
AI summary iconSummary

expand icon
Google has patched a vulnerability in its Antigravity AI coding platform that could allow attackers to execute malicious code on developers’ machines. The issue, discovered by Pillar Security, involved the find_by_name tool passing unvalidated input to a command-line utility, enabling remote code execution. Risk-on assets experienced a slight decline as the vulnerability underscored potential risks in AI tools. The flaw was reported on January 7, 2026, and resolved by February 28. MiCA regulations may encourage firms to address similar security gaps. A test script triggered via the tool opened the calculator application, demonstrating real-world execution risks.
CoinDesk reports:

Google has patched a vulnerability in its Antigravity AI coding platform, which researchers say could allow attackers to execute commands on developers' computers via a Quick Injection attack.

According to report by cybersecurity firm Pillar Security, Antigravity’s find_by_name file search tool contains a vulnerability that directly passes user input to underlying command-line utilities without any validation. This allows malicious input to turn file searches into command execution tasks, enabling remote code execution.

Combined with Antigravity’s ability to create files, this allows the attack chain to be fully executed: deploying a malicious script first, then triggering it through what appears to be a legitimate search; once prompt injection succeeds, no further user interaction is required.

Antigravity, launched in November last year, is Google’s AI-powered development environment designed to help programmers write, test, and manage code using autonomous software agents. Pillar Security disclosed the issue to Google on January 7, and Google confirmed receipt of the report the same day, marking the issue as fixed on February 28.

Google has not yet responded to this matter.Decrypt.

Prompt injection attacks involve hidden instructions embedded in content that cause AI systems to perform unintended actions. Since AI tools often process external files or text as part of normal workflows, the system may interpret these instructions as legitimate commands, allowing attackers to trigger operations on a user’s computer without direct access or additional interaction.

Last summer, an incident involving OpenAI, the developer of ChatGPT, reignited concerns about large language models falling victim to prompt injection attacks. Warning its new ChatGPT agent may have been compromised.

OpenAI wrote in a blog post: “When you log a ChatGPT agent into a website or enable connectors, it will be able to access sensitive data from these sources, such as emails, files, or account information.”

To demonstrate the gravity-defying issue, researchers created a test script in the project workspace and triggered it using the search tool. After the script executed, the computer’s calculator application opened, demonstrating that the search function could be converted into a command execution mechanism.

The report states: "The key point is that the vulnerability bypassed Antigravity's security mode, the most restrictive security configuration of this product."

The findings highlight the broader security challenges faced by AI-driven development tools when beginning to autonomously execute tasks.

Pillar Security states: "The industry must move beyond cleanup-based controls and adopt execution isolation. Every native tool parameter reaching a shell command can become an injection point. Auditing for such vulnerabilities is no longer optional—it is a prerequisite for securely releasing agent functionality."

Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information. Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.