The post Google Fixes AI Coding Tool Flaw That Let Attackers Execute Malicious Code: Report appeared on BitcoinEthereumNews.com. In brief Researchers found a promptThe post Google Fixes AI Coding Tool Flaw That Let Attackers Execute Malicious Code: Report appeared on BitcoinEthereumNews.com. In brief Researchers found a prompt

Google Fixes AI Coding Tool Flaw That Let Attackers Execute Malicious Code: Report

For feedback or concerns regarding this content, please contact us at crypto.news@mexc.com

In brief

  • Researchers found a prompt injection vulnerability in Google’s Antigravity AI coding platform.
  • The flaw could allow attackers to execute commands even with the platform’s Secure Mode enabled.
  • Google fixed the issue Feb. 28 after researchers disclosed it in January, Pillar Security said.

Google has patched a vulnerability in its Antigravity AI coding platform that researchers say could allow attackers to run commands on a developer’s machine through a prompt injection attack.

According to a report by Cybersecurity firm Pillar Security, the flaw involved Antigravity’s find_by_name file search tool, which passed user input directly to an underlying command-line utility without validation. That allowed malicious input to convert a file search into a command execution task, enabling remote code execution.

“Combined with Antigravity’s ability to create files as a permitted action, this enables a full attack chain: stage a malicious script, then trigger it through a seemingly legitimate search, all without additional user interaction once the prompt injection lands,” Pillar Security researchers wrote.

Launched last November, Antigravity is Google’s AI-powered development environment designed to help programmers write, test, and manage code with the assistance of autonomous software agents. Pillar Security disclosed the issue to Google on January 7, and Google acknowledged the report the same day, marking the issue as fixed on February 28.

Google did not immediately respond to a request for comment by Decrypt.

Prompt injection attacks occur when hidden instructions embedded in content cause an AI system to perform unintended actions. Because AI tools often process external files or text as part of normal workflows, the system may interpret those instructions as legitimate commands, allowing an attacker to trigger actions on a user’s machine without direct access or additional interaction.

The threat of prompt injection attacks for large language models came into renewed focus last summer when ChatGPT developer OpenAI warned that its new ChatGPT agent could be compromised.

“When you sign ChatGPT agent into websites or enable connectors, it will be able to access sensitive data from those sources, such as emails, files, or account information,” OpenAI wrote in a blog post.

To demonstrate the Antigravity issue, the researchers created a test script inside a project workspace and triggered it through the search tool. When executed, the script opened the computer’s calculator application, showing that the search function could be turned into a command execution mechanism.

“Critically, this vulnerability bypasses Antigravity’s Secure Mode, the product’s most restrictive security configuration,” the report said.

The findings highlight a broader security challenge facing AI-powered development tools as they begin to execute tasks autonomously.

“The industry must move beyond sanitization-based controls toward execution isolation. Every native tool parameter that reaches a shell command is a potential injection point,” Pillar Security said. “Auditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely.”

Daily Debrief Newsletter

Start every day with the top news stories right now, plus original features, a podcast, videos and more.

Source: https://decrypt.co/365068/google-fixes-ai-coding-tool-flaw-attackers-execute-malicious-code

Market Opportunity
Prompt Logo
Prompt Price(PROMPT)
$0.03397
$0.03397$0.03397
-5.40%
USD
Prompt (PROMPT) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

USD1 Genesis: 0 Fees + 12% APR

USD1 Genesis: 0 Fees + 12% APRUSD1 Genesis: 0 Fees + 12% APR

New users: stake for up to 600% APR. Limited time!