Description

Cybersecurity researchers have identified a high-severity security flaw in the Vanna.AI library, tracked as CVE-2024-5565 with a CVSS score of 8.1, that can be exploited for remote code execution via prompt injection techniques. JFrog, a supply chain security firm, disclosed that the vulnerability lies in the "ask" function of Vanna, a Python-based machine learning library used to query SQL databases using natural language prompts. This flaw allows attackers to execute arbitrary commands by manipulating the prompt inputs. Prompt injection, a type of AI jailbreak, poses significant risks as malicious actors can bypass safety mechanisms built into AI models by providing adversarial inputs. These attacks can occur indirectly, such as through data controlled by third parties, or through multi-turn strategies like Crescendo and Skeleton Key, where the attacker gradually manipulates the AI model to disregard its guardrails and execute prohibited commands. The Skeleton Key technique is particularly dangerous as it puts the model in a mode where it complies with any instructions, regardless of the ethical and safety guidelines. Following responsible disclosure, Vanna has issued a hardening guide advising users to run the Plotly integration in a sandboxed environment to mitigate the risk. Shachar Menashe, senior director of security research at JFrog, emphasized the need for robust security mechanisms when interfacing large language models (LLMs) with critical resources, warning that the dangers of prompt injection are not yet widely recognized but are easy to exploit.