Google has patched a vulnerability in its Antigravity AI coding platform, which researchers say could allow attackers to execute commands on developers' computers via a Quick Injection attack.
According to report by cybersecurity firm Pillar Security, Antigravity’s find_by_name file search tool contains a vulnerability that directly passes user input to underlying command-line utilities without any validation. This allows malicious input to turn file searches into command execution tasks, enabling remote code execution.
Combined with Antigravity’s ability to create files, this allows the attack chain to be fully executed: deploying a malicious script first, then triggering it through what appears to be a legitimate search; once prompt injection succeeds, no further user interaction is required.
Antigravity, launched in November last year, is Google’s AI-powered development environment designed to help programmers write, test, and manage code using autonomous software agents. Pillar Security disclosed the issue to Google on January 7, and Google confirmed receipt of the report the same day, marking the issue as fixed on February 28.
Google has not yet responded to this matter.Decrypt.
Prompt injection attacks involve hidden instructions embedded in content that cause AI systems to perform unintended actions. Since AI tools often process external files or text as part of normal workflows, the system may interpret these instructions as legitimate commands, allowing attackers to trigger operations on a user’s computer without direct access or additional interaction.
Last summer, an incident involving OpenAI, the developer of ChatGPT, reignited concerns about large language models falling victim to prompt injection attacks. Warning its new ChatGPT agent may have been compromised.
OpenAI wrote in a blog post: “When you log a ChatGPT agent into a website or enable connectors, it will be able to access sensitive data from these sources, such as emails, files, or account information.”
To demonstrate the gravity-defying issue, researchers created a test script in the project workspace and triggered it using the search tool. After the script executed, the computer’s calculator application opened, demonstrating that the search function could be converted into a command execution mechanism.
The report states: "The key point is that the vulnerability bypassed Antigravity's security mode, the most restrictive security configuration of this product."
The findings highlight the broader security challenges faced by AI-driven development tools when beginning to autonomously execute tasks.
Pillar Security states: "The industry must move beyond cleanup-based controls and adopt execution isolation. Every native tool parameter reaching a shell command can become an injection point. Auditing for such vulnerabilities is no longer optional—it is a prerequisite for securely releasing agent functionality."
