Description

One of the more significant weaknesses was found in GitLab Duo, the AI coding assistant built on Anthropic's Claude models. Researchers at Legit Security discovered that GitLab Duo Chat was vulnerable to indirect prompt injection: attackers could embed hidden instructions in places such as commit messages, issue descriptions, and source code. These concealed prompts could steer Duo's responses, exfiltrate confidential source code, or inject hostile HTML, potentially redirecting users to fake login pages or dangerous URLs.

The attack succeeded because GitLab Duo analyzes a project's entire context (comments, source code, and metadata) without sufficient input sanitization. Attackers could exploit this with encoding tricks, such as Base16 encoding or Unicode smuggling, to conceal prompts that stole data or corrupted code suggestions. In addition, GitLab Duo's streaming markdown rendering allowed HTML embedded in those prompts to execute in users' browsers, increasing the risk of credential theft and code tampering.

To defend against such threats, organizations must strengthen input validation and sanitization across all AI tools used in their development workflows. Restricting the context an assistant can read and rendering AI responses safely (for example, disabling execution of embedded HTML) is essential. Users should also be trained to recognize prompt injection techniques and apply rigorous review procedures to project content so that malicious prompts are caught before they reach the assistant. GitLab has since fixed the bug following responsible disclosure in February of 2025.
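The two mitigations described above, stripping invisible characters used for Unicode smuggling and escaping HTML before AI output is rendered, can be sketched in a few lines. This is an illustrative example, not GitLab's actual fix: the function names and the specific character set filtered here are assumptions chosen for demonstration.

```python
import html
import re
import unicodedata

# Hypothetical sanitizer sketch. Filters characters commonly abused for
# "Unicode smuggling": zero-width characters, the word joiner, the BOM,
# and the Unicode tag block (U+E0000..U+E007F), which can encode hidden
# ASCII-like instructions invisible to human reviewers.
INVISIBLE = re.compile(
    r"[\u200b\u200c\u200d\u2060\ufeff"   # zero-width chars, word joiner, BOM
    r"\U000E0000-\U000E007F]"            # Unicode tag characters
)

def sanitize_context(text: str) -> str:
    """Normalize project content and strip invisible characters that
    could hide injected prompts before it reaches the AI assistant."""
    return INVISIBLE.sub("", unicodedata.normalize("NFKC", text))

def render_safe(ai_response: str) -> str:
    """Escape HTML in the assistant's output so embedded markup is
    displayed as text instead of executing in the user's browser."""
    return html.escape(ai_response)

# A commit message with a zero-width space and hidden tag characters
# reads as plain text after sanitization:
msg = "fix typo\u200b\U000E0041\U000E0042"
clean = sanitize_context(msg)          # "fix typo"

# An AI response carrying injected markup is neutralized on render:
safe = render_safe('<img src=x onerror=alert(1)>')
```

Real deployments would go further (allowlist-based markdown rendering, Content Security Policy headers, and limiting which project fields are fed into the model's context at all), but the core idea is the same: treat every piece of project content as untrusted input, both on the way into the model and on the way out to the browser.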