Researchers have identified a widespread class of vulnerabilities, collectively named “IDEsaster,” affecting modern AI-powered developer tools and coding assistants. These flaws impact products such as GitHub Copilot, Gemini CLI, Claude Code, Cursor, Zed.dev, and several other AI-integrated IDE solutions. The vulnerabilities arise from how AI agents interact with underlying IDE features, enabling attackers to manipulate the agent’s behavior using malicious inputs. If exploited, these weaknesses can lead to sensitive data leakage, unauthorized file modifications, or even remote code execution on developer systems. The findings highlight systemic security gaps stemming from the rapid integration of AI automation within traditional development environments. IDEsaster vulnerabilities originate from an architectural issue: AI assistants often assume that IDE tools and workspace operations are inherently safe, when in reality these features were designed for deliberate human driven actions. When an attacker introduces malicious content through project files, configuration data, file names, documentation, or deceptive prompts the AI assistant may process it as legitimate instructions. Because these tools possess capabilities such as reading and writing files, editing settings, fetching schemas, or interacting with external resources, a prompt injection can escalate into a full tool-abuse scenario. The attack chain typically follows three stages. First, the attacker injects malicious text or metadata into resources the AI tool will analyze. Second, the AI assistant interprets this embedded instruction and uses its built-in capabilities to perform unsafe operations, such as modifying workspace configuration files or initiating unintended network requests. Finally, the underlying IDE features designed without strong security boundaries for automated use process the altered files or triggered actions, enabling information exfiltration or code execution. The research demonstrates that this issue is not isolated to one vendor but is a systemic result of giving AI systems broad autonomous access to core development functions. Virtually all tested AI-IDE integrations exhibited exploitable behavior, proving that current designs do not sufficiently sandbox AI-driven operations or validate their intent.
SonicWall has released security fixes for an actively exploited vulnerability, CVE-2025-40602, affecting its Secure Mobile Access (SMA) 100 series appliances. The flaw allows attac...
French prosecutors and intelligence agencies are probing a suspected cyberattack on the Fantastic, a passenger ferry operated by Italian company GNV, after malware was discovered a...
Cisco disclosed an actively exploited zero-day vulnerability affecting its AsyncOS software used in Secure Email Gateway (SEG) and Secure Email and Web Manager (SEWM) appliances. T...