MagnaNet Network
Google Fixes CVSS 10 Gemini CLI CI RCE and Cursor Flaws Enable Code Execution

Cahyo Dewo, May 4, 2026

The Gemini CLI Vulnerability: A Maximum Severity Threat

The most pressing of these recent revelations involves a critical security flaw in Google’s Gemini CLI—specifically within the @google/gemini-cli npm package and the associated google-github-actions/run-gemini-cli GitHub Actions workflow. This vulnerability, which does not yet have a CVE identifier but has been assigned a maximum CVSS score of 10.0, could have permitted unprivileged external attackers to execute arbitrary commands on host systems. A perfect score indicates that the flaw is remotely exploitable with low complexity, requires no privileges or user interaction, and has a complete impact on confidentiality, integrity, and availability.

Overview of the Flaw

According to a detailed report published by Novee Security on Wednesday, the vulnerability stemmed from an inadequate trust mechanism within the Gemini CLI. "The vulnerability allowed an unprivileged external attacker to force their own malicious content to load as Gemini configuration," Novee Security stated. This critical oversight bypassed crucial security layers. The report further elaborated, "This triggered command execution directly on the host system, bypassing security before the agent’s sandbox even initialized." In other words, an attacker could inject malicious configuration files that would be processed by the CLI before any protective sandboxing mechanisms came into play, granting them immediate control over the host system.

Impact on CI/CD Workflows

Google, in its security advisory published last week, clarified that the impact of this flaw was primarily limited to workflows utilizing Gemini CLI in "headless mode." This operational mode is particularly prevalent in Continuous Integration/Continuous Deployment (CI/CD) environments, where automated processes run without direct human oversight. In previous versions, the Gemini CLI, when operating in headless mode within CI environments, automatically trusted the current workspace folders for loading configuration files and environment variables.

This automatic trust mechanism, while convenient for developers, introduced a severe security risk. Google acknowledged this, stating, "This is potentially risky in situations where Gemini CLI runs on untrusted folders in headless mode (e.g., CI workflows that review user-submitted pull requests). If used with untrusted directory contents, this could lead to remote code execution via malicious environment variables in the local .gemini/ directory." An attacker could exploit this by submitting a pull request containing a specially crafted .gemini/ directory with malicious configuration or environment variables. When the CI pipeline processed this pull request using the vulnerable Gemini CLI, the malicious content would be automatically trusted and executed, effectively turning the CI/CD pipeline into a vector for a supply-chain attack. Such attacks are highly prized by adversaries as they allow them to inject malicious code into widely used software at its source, affecting potentially thousands or millions of users downstream.
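To make the attack shape concrete, the fragment below sketches a hypothetical `.gemini/.env` file an attacker might include in a pull request. This is illustrative only, not the reported exploit: `NODE_OPTIONS` is a standard Node.js environment variable that preloads a script into any Node process, so a CLI that automatically loaded such a file from an untrusted checkout would execute attacker-controlled code before its sandbox ever initialized.

```
# Hypothetical .gemini/.env planted via a pull request (illustrative, not the
# reported exploit). NODE_OPTIONS is a standard Node.js variable: any Node
# process started with it preloads the named script, so auto-loading this file
# would run attacker code inside the CLI's own process.
NODE_OPTIONS=--require ./.gemini/payload.js
```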

Google’s Remediation and Enhanced Security

To address this critical flaw, Google has implemented significant changes, primarily focusing on enforcing explicit trust. The update now mandates that folders must be explicitly trusted before configuration files within them can be accessed by the Gemini CLI. This crucial change requires users to review and modify their existing workflows. Google has outlined two primary approaches for users to adopt this new trust mechanism, ensuring that all configurations are consciously approved rather than implicitly trusted.

Furthermore, the tech giant has taken steps to harden tool allowlisting when Gemini CLI is configured to run in --yolo mode. The --yolo (You Only Live Once) mode is designed for scenarios where the CLI operates with minimal user interaction, often in automated scripts. Previously, this mode could ignore any allowlist specified in ~/.gemini/settings.json, automatically running all tool calls, including potentially dangerous ones like run_shell_command, without requiring user confirmation. This behavior could have been exploited through prompt injection, where untrusted inputs (e.g., user-submitted GitHub issues processed by an AI agent) could trick the system into executing arbitrary commands.


In version 0.39.1, Google states, "the Gemini CLI policy engine now evaluates tool allowlisting under --yolo mode, which is useful for CI workflows that allowlist a few safe commands to run when processing untrusted inputs." This enhancement means that even in --yolo mode, only explicitly allowed commands will be executed, significantly reducing the risk of remote code execution via malicious prompts. Google also cautioned that some existing workflows might silently fail if their tool allowlists are not updated to conform to the new, stricter policy.
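As a sketch of what a conservative allowlist might look like, the fragment below scopes shell execution to two read-only Git commands. The `coreTools` field name and the `run_shell_command(...)` scoping syntax are assumptions based on our reading of the Gemini CLI settings documentation and may differ between versions; verify them against the docs for your installed release.

```
{
  "coreTools": [
    "run_shell_command(git status)",
    "run_shell_command(git diff)"
  ]
}
```

With a list like this in ~/.gemini/settings.json, any other shell invocation requested by a prompt would be rejected rather than silently executed, even under --yolo mode.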

Chronology and Disclosure

The vulnerability was first detailed by Novee Security in their report, with Google subsequently publishing its advisory on GitHub last week. This rapid disclosure and remediation highlight the collaborative efforts between security researchers and major tech companies to secure the software ecosystem. The absence of a specific CVE identifier for such a critical vulnerability is unusual but not unheard of, often occurring when the fix is deployed very quickly, or the scope is highly specific to a particular package/action.

The Broader Implications for Software Supply Chain Security

The Gemini CLI vulnerability serves as a stark reminder of the inherent risks in modern software supply chains. The reliance on open-source packages (like npm packages) and automated CI/CD pipelines, while accelerating development, simultaneously expands the attack surface. Attackers increasingly target these stages, knowing that a single successful compromise can propagate malicious code across numerous downstream projects and organizations.

The concept of "folder trust" or "workspace trust" is paramount in this context. Implicit trust, as demonstrated by the Gemini CLI’s previous behavior, is a critical weakness. Best practices now dictate an explicit "zero-trust" approach, where every component, every input, and every execution environment must be verified before being granted permissions. This incident underscores the necessity for developers and organizations to meticulously review third-party dependencies, scrutinize automated workflows for untrusted inputs, and enforce stringent security policies throughout their development lifecycle. The hardening of --yolo mode also highlights a growing concern: the security of AI agents themselves and how they interpret and execute commands based on potentially malicious prompts, blurring the lines between data input and command execution.

AI-Powered Cursor IDE Under Scrutiny

In parallel to the Google Gemini CLI issues, the AI-powered development tool Cursor has also been grappling with multiple high-severity vulnerabilities, signaling a broader trend of security challenges emerging with the integration of AI into developer tools. Cursor aims to revolutionize coding by leveraging AI to assist developers, but these advanced capabilities also introduce novel attack vectors.

CVE-2026-26268: Git Hook Sandbox Escape

Novee Security also brought to light a high-severity vulnerability in Cursor IDE, identified as CVE-2026-26268, affecting versions prior to 2.5. This flaw, carrying a CVSS score of 8.1, could lead to arbitrary code execution through a sophisticated prompt injection technique.

Cursor, in its own alert released in February 2026, described the vulnerability as a "sandbox escape through .git configurations." The attack mechanism is particularly insidious: a rogue AI agent could be tricked into setting up a bare repository (.git) with a malicious Git hook. Git hooks are scripts that Git automatically executes before or after events like commit, push, or receive. The critical aspect of this vulnerability is that this malicious hook could be automatically fired every time a commit operation runs within the embedded repository context, without requiring any explicit user interaction or even the user’s awareness.


Security researcher Assaf Levkovich of Novee Security elaborated on the root cause, stating, "The root cause is not a flaw in Cursor’s core product logic, but rather a consequence of a feature interaction in Git, one that becomes exploitable the moment an AI agent starts autonomously executing Git operations inside a repository it doesn’t control." The sequence of actions leading to auto-approved arbitrary code execution involves:

  1. An attacker planting a specially crafted bare Git repository within a project.
  2. This repository contains a malicious pre-commit hook.
  3. The AI agent, as part of its routine operations (e.g., executing git checkout), interacts with this repository.
  4. The malicious pre-commit hook is silently triggered, executing arbitrary code on the victim’s machine.

Levkovich highlighted the stealthy nature of the attack: "When the agent runs git checkout as part of fulfilling a routine request, it is not doing anything the user didn’t implicitly authorize. But neither the user nor the agent has visibility into what the repository’s Cursor Rules have set in motion. A malicious pre-commit hook embedded in a nested bare repository executes silently, outside the agent’s reasoning chain and outside the user’s field of view." This emphasizes the challenge of securing autonomous AI agents that perform actions on behalf of the user, as their operational logic might not fully account for all potential side effects or "feature interactions" within complex underlying systems like Git.
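The hook mechanism at the heart of this chain can be reproduced in a few lines of shell. The sketch below is not the Cursor exploit (which involved a nested bare repository and Cursor Rules steering the agent); it only demonstrates the underlying Git behavior the researchers describe: an executable hooks/pre-commit script runs automatically, with no prompt, on every commit.

```shell
#!/bin/sh
# Sketch of the underlying Git behavior, not the actual exploit: Git runs an
# executable hooks/pre-commit script automatically, with no prompt, on commit.
set -eu
work=$(mktemp -d)
marker="$work/hook-ran"

git init -q "$work/victim"

# An attacker-planted hook; writing a marker file stands in for arbitrary code.
cat > "$work/victim/.git/hooks/pre-commit" <<EOF
#!/bin/sh
echo executed > "$marker"
EOF
chmod +x "$work/victim/.git/hooks/pre-commit"

# A routine commit (the kind an AI agent runs on the user's behalf) silently
# fires the hook before the commit is recorded.
cd "$work/victim"
echo change > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "routine commit"

cat "$marker"   # prints "executed": the hook ran without any confirmation
```

Nothing in this sequence asks the user anything; from the agent's perspective it simply performed a commit, which is exactly the visibility gap Levkovich describes.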

CursorJacking: Unpatched API Key Theft

Adding to Cursor’s security woes, another high-severity access control vulnerability (CVSS score: 8.2) was discovered by LayerX, dubbed "CursorJacking." This flaw could allow any installed extension to access sensitive API keys and credentials stored locally in an SQLite database. Exploitation of this vulnerability could lead to severe consequences, including account takeover, unauthorized data exposure, and financial losses from illicit API usage. Crucially, this issue remains unpatched.

LayerX researcher Roy Paz explained the core problem: "Cursor does not enforce access control boundaries between extensions and this database." This lack of segmentation means that a rogue or compromised extension, once installed, could freely access a treasure trove of sensitive information. Paz warned that "Exploitation of this vulnerability can lead to exposure of session tokens and API keys, unauthorized access to Cursor backend services, and data theft via user impersonation."

Cursor has acknowledged the vulnerability but maintains that the access is limited to the local machine where the user has already installed and granted permissions to the extension. Their position is that any rogue extension with local file system access could potentially extract valuable information from various application data stores, not just Cursor’s. To mitigate this threat, Cursor advises users to exercise extreme caution and only download and install extensions from trusted sources. However, the fact that an internal database holding such critical credentials lacks robust access control, even against other installed components of the same application, is a significant security concern. It underscores the principle of least privilege, which dictates that even internal components should only have access to the resources they absolutely need, and no more.

The Evolving Threat Landscape for AI in Software Development

These recent security incidents involving both Google Gemini CLI and Cursor IDE illuminate a critical juncture in cybersecurity: the security implications of integrating artificial intelligence into core development workflows. AI tools, designed to enhance productivity and automate complex tasks, also introduce entirely new attack surfaces and vectors.

Prompt Injection: The hardening of Gemini CLI’s --yolo mode and the Git hook vulnerability in Cursor are direct examples of prompt injection attacks, albeit in different forms. In traditional software, inputs are typically data; in AI systems, inputs can be interpreted as instructions, leading to unintended and malicious command execution. This paradigm shift requires a re-evaluation of how user inputs are sanitized, validated, and processed by AI agents.


Autonomous Actions and Implicit Trust: Both vulnerabilities exploit scenarios where AI agents or automated tools operate with implicit trust or make autonomous decisions based on untrusted inputs. The Git hook vulnerability, in particular, showcases how an AI agent performing "routine" operations can inadvertently trigger malicious code due to complex interactions within the underlying system (Git). This raises fundamental questions about the level of autonomy granted to AI agents and the need for rigorous auditing of their decision-making processes and interactions with the host environment.

Supply Chain Risks Extended to AI: The vulnerabilities extend the well-known risks of software supply chain attacks to AI-driven tools. Malicious npm packages, compromised GitHub Actions, or rogue IDE extensions can now leverage AI capabilities to achieve more sophisticated and stealthy compromises. This necessitates a holistic approach to supply chain security that encompasses not only traditional code dependencies but also AI models, prompts, and the tools that facilitate their integration.

Mitigation Strategies and Best Practices for Developers

In light of these disclosures, developers and organizations leveraging AI tools and automated CI/CD pipelines must adopt a proactive and multi-layered security posture:

  1. Prioritize Updates: Immediately update Google Gemini CLI to version 0.39.1 or later to implement the explicit folder trust mechanism and the hardened --yolo mode. Cursor IDE users should update to version 2.5 or later to mitigate the Git hook vulnerability, and closely monitor for a patch addressing the CursorJacking flaw.
  2. Review CI/CD Workflows: Conduct a thorough audit of all CI/CD pipelines that use Gemini CLI in headless mode. Explicitly configure folder trust as recommended by Google. Ensure that --yolo mode is used with carefully defined tool allowlists to prevent prompt injection-based RCE.
  3. Exercise Caution with Extensions and Dependencies: For tools like Cursor IDE, strictly adhere to the principle of "trusted sources" for extensions. Before installing any extension, verify its publisher, review its permissions, and assess its necessity. Regularly audit installed extensions for any suspicious behavior.
  4. Implement Least Privilege: Ensure that automated tools and AI agents operate with the minimum necessary permissions. Limit their access to sensitive files, network resources, and commands to only what is absolutely required for their function.
  5. Enhance Input Validation and Sanitization: Develop robust mechanisms to validate and sanitize all inputs, especially when dealing with AI agents that can interpret inputs as instructions. This includes user-submitted code, prompts, and configuration files.
  6. Adopt Zero-Trust Principles: Extend zero-trust principles to development environments. Never implicitly trust any component, whether it’s a code dependency, an external API, or an internal configuration file.
  7. Regular Security Audits: Conduct regular security audits and penetration testing of AI-powered tools and CI/CD pipelines to identify and address vulnerabilities proactively.
  8. Stay Informed: Continuously monitor security advisories and reports from vendors and security researchers. The landscape of AI security is evolving rapidly, and staying informed is crucial for timely response.
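Several of these recommendations can be automated. As one small illustration of item 2, a CI job could refuse to run an AI CLI inside a checkout that ships its own workspace configuration. The directory name checked below is based on this article's description of Gemini CLI; this is a hypothetical guard, not an official mitigation from Google.

```shell
# Hypothetical CI pre-flight guard (not an official mitigation): refuse to run
# an AI CLI in a checkout that carries its own workspace configuration.
check_workspace() {
  # Fail if the checkout contains a .gemini directory anywhere in its tree.
  if find "$1" -type d -name ".gemini" | grep -q .; then
    echo "refusing: embedded workspace configuration found in $1" >&2
    return 1
  fi
  echo "clean"
}

# Example usage in a pipeline step:
#   check_workspace "$GITHUB_WORKSPACE" || exit 1
```

A guard like this complements, rather than replaces, the explicit folder-trust mechanism introduced in the patched CLI.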

Conclusion

The recent critical vulnerabilities discovered in Google Gemini CLI and Cursor IDE serve as a potent reminder that innovation, particularly in AI, must be accompanied by rigorous security practices. While AI-driven development tools promise unprecedented gains in productivity, they also introduce sophisticated new attack vectors that challenge traditional security paradigms. The maximum severity RCE in Gemini CLI highlights the persistent threat of supply chain attacks, while Cursor’s vulnerabilities underscore the unique risks associated with autonomous AI agents and the critical need for robust access controls within integrated development environments. As AI continues to embed itself deeper into software development, a collective commitment from developers, vendors, and security researchers will be essential to build secure and resilient digital ecosystems.

