Description

A critical security vulnerability has been identified in LangChain, a widely used open-source framework for building Large Language Model (LLM) agents. The flaw, tracked as CVE-2025-68664 with a CVSS score of 9.3, allows attackers to manipulate LangChain's serialization and deserialization process. Because of improper handling of the internal dictionary key "lc", malicious user-controlled data can be misinterpreted as a trusted LangChain object, causing unintended internal logic to execute during deserialization and potentially exposing sensitive information.

The root cause lies in LangChain's dumps() and dumpd() functions, which fail to escape dictionaries containing the "lc" key, the marker LangChain uses internally to identify serialized objects. Attackers can exploit this weakness through prompt injection, steering LLM responses so that fields such as additional_kwargs or response_metadata carry specially crafted JSON structures. When the application later serializes and deserializes this data, those structures are treated as legitimate LangChain objects rather than untrusted input.

Successful exploitation can disclose environment variables, including API keys and other secrets, particularly when legacy configurations such as secrets_from_env=True are enabled. In some cases, attackers can also instantiate arbitrary classes, triggering side effects such as network calls or file access.

To mitigate this risk, developers should immediately upgrade to the patched versions: LangChain 1.2.5 and LangChain Core 0.3.81. Organizations should also harden LLM pipelines against prompt injection, restrict deserialization of untrusted data, and review secret-handling configurations. Proactive patching and secure input handling are essential to prevent exploitation of this high-impact vulnerability.
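The key-collision pattern can be sketched with a small self-contained toy. This is not LangChain code: the function names (toy_dumpd, toy_load), the "secret" type tag, and the "__lc_escaped__" marker are simplified stand-ins invented for illustration. It shows how a deserializer that trusts any dict carrying an "lc" marker can be tricked into resolving a secret from attacker-supplied JSON, and how escaping that marker during serialization (the kind of fix described above) keeps the same input inert.

```python
# Toy illustration of the "lc" key-collision pattern; NOT LangChain's API.

def toy_dumpd(obj):
    """Vulnerable serializer: plain dicts pass through unchanged, so a
    user-supplied {"lc": 1, ...} is indistinguishable from a genuine
    serialized object."""
    return obj

def toy_load(data, secrets=None):
    """Deserializer: any dict shaped like {"lc": 1, "type": "secret", ...}
    is trusted and triggers internal secret resolution."""
    if isinstance(data, dict):
        if data.get("lc") == 1 and data.get("type") == "secret":
            # Trusted-object path: resolves an attacker-chosen secret id
            return (secrets or {}).get(data["id"][0], "<missing>")
        return {k: toy_load(v, secrets) for k, v in data.items()}
    return data

# Attacker-controlled JSON smuggled in via an LLM response field
# (analogous to additional_kwargs / response_metadata):
malicious = {"metadata": {"lc": 1, "type": "secret",
                          "id": ["OPENAI_API_KEY"]}}
secrets = {"OPENAI_API_KEY": "sk-demo-123"}

leaked = toy_load(toy_dumpd(malicious), secrets)
print(leaked["metadata"])  # secret value leaks instead of the raw dict

def toy_dumpd_patched(obj):
    """Patched-style serializer: escapes the "lc" key in plain dicts so
    the deserializer no longer mistakes them for trusted objects."""
    if isinstance(obj, dict):
        return {("__lc_escaped__" if k == "lc" else k): toy_dumpd_patched(v)
                for k, v in obj.items()}
    return obj

safe = toy_load(toy_dumpd_patched(malicious), secrets)
print(safe["metadata"])  # stays inert data; no secret is resolved
```

The mechanics mirror the advisory: the fix is applied at serialization time, so downstream deserialization never sees an unescaped "lc" marker in untrusted data.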