Three LangChain and LangGraph Flaws Leak Files, Secrets, and Chat History
Technology · March 28, 2026 · 5 min read


By Omar Rashid · AI-Generated Analysis · Auto-published · 5 sources cited

Three security vulnerabilities in LangChain and LangGraph, disclosed today by Cyera researcher Vladimir Tokarev, expose filesystem data, environment secrets, and conversation history through independent attack paths. The affected packages were downloaded more than 84 million times on PyPI last week. Patches are available now.

What happened

Tokarev published research Thursday under the name "LangDrained," detailing three classic vulnerability classes hiding inside the most popular AI framework family on the planet. These aren't exotic AI-specific attacks. They're path traversal, deserialization injection, and SQL injection: old-school bugs living in new-school infrastructure.

Here's the breakdown:

CVE-2026-34070 (CVSS 7.5, High): A path traversal bug in langchain-core's prompt-loading API. The legacy load_prompt() function reads files from paths in deserialized config dicts without validating against directory traversal or absolute path injection. An attacker who controls prompt configuration can read arbitrary .txt, .json, and .yaml files on the host filesystem. That includes Docker configs, Azure access tokens, Kubernetes manifests, and cloud credentials. The affected functions are undocumented legacy APIs, but they still ship in every langchain-core install.
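To see why this class of bug bites, here is a minimal sketch of the pattern. This is hypothetical illustration code, not LangChain's actual implementation: a loader that reads a file path straight out of a config dict, next to a hardened variant that pins resolved paths inside an allowed directory.

```python
# Hypothetical sketch of the path traversal class (not LangChain's real code).
from pathlib import Path

def load_template_unsafe(config: dict) -> str:
    # Vulnerable pattern: the path comes from attacker-controlled config
    # and is used without any validation, so "../../" or an absolute
    # path reads arbitrary files the process can access.
    return Path(config["template_path"]).read_text()

def load_template_safe(config: dict, base_dir: str = "prompts") -> str:
    base = Path(base_dir).resolve()
    candidate = (base / config["template_path"]).resolve()
    # Reject absolute paths and ../ traversal by requiring the resolved
    # path to stay inside the allowed prompt directory.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes prompt directory: {candidate}")
    return candidate.read_text()
```

Note that `Path.is_relative_to` requires Python 3.9 or later; the key point is that validation happens on the *resolved* path, after symlinks and `..` segments are collapsed.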

CVE-2025-68664 (CVSS 9.3, Critical): A serialization injection flaw in LangChain's dumps() and dumpd() functions. These functions don't escape dictionaries containing lc keys, which LangChain uses internally to mark serialized objects. An attacker can inject data structures through user-controlled fields like metadata or response_metadata that get treated as legitimate LangChain objects during deserialization rather than plain user data. This leaks API keys and environment secrets. The attack surface is broad: astream_events(version="v1"), Runnable.astream_log(), RunnableWithMessageHistory, InMemoryVectorStore.load(), and several other common code paths are all vulnerable. Cyata first flagged this vulnerability in December 2025, giving it the name LangGrinch.
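The mechanics can be illustrated with a toy serializer. This is an assumed, simplified model of the bug class, not LangChain's real internals: any dict carrying the internal lc marker gets treated as a framework object on the way back in, so a serializer that escapes the marker in user data keeps attacker-supplied dicts from being promoted.

```python
# Simplified model of "lc"-marker confusion (assumed shape, not LangChain's
# real serializer): the deserializer treats any dict with an "lc" key as one
# of its own serialized objects, so unescaped user data can smuggle one in.
import json

def deserialize(obj):
    if isinstance(obj, dict) and obj.get("lc") == 1:
        # Framework-internal path: would reconstruct an object here,
        # potentially resolving secrets from the environment.
        return f"<reconstructed {obj.get('id')}>"
    return obj  # plain user data passes through untouched

# Attacker-controlled metadata round-tripped through a naive dumps():
payload = json.loads(json.dumps({"metadata": {"lc": 1, "id": "Secret"}}))
print(deserialize(payload["metadata"]))  # treated as a framework object

def dumps_escaped(obj) -> str:
    # Mitigation sketch: wrap user dicts that carry the internal marker
    # so they round-trip as plain data instead of serialized objects.
    def escape(o):
        if isinstance(o, dict):
            if "lc" in o:
                return {"__user_data__": {k: escape(v) for k, v in o.items()}}
            return {k: escape(v) for k, v in o.items()}
        return o
    return json.dumps(escape(obj))
```

The patched LangChain releases take the stricter route described later in this article: escaping on serialization plus an allowlist on deserialization.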

CVE-2025-67644 (CVSS 7.3, High): An SQL injection vulnerability in LangGraph's SQLite checkpoint implementation. An attacker can manipulate SQL queries through metadata filter keys and run arbitrary queries against the checkpoint database, which stores conversation histories. If your agent workflows handle sensitive data, that data sits in those checkpoints.
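The underlying pattern looks roughly like this (a sketch of the vulnerability class, not the actual checkpoint code): parameterizing values is not enough when the filter *keys* are interpolated into the query string, so the fix is to validate keys before they ever touch SQL.

```python
# Sketch of key-based SQL injection (assumed shape, not the real checkpointer).
import sqlite3

def search_unsafe(conn, filters: dict):
    # Vulnerable pattern: values are parameterized, but the keys go
    # straight into the query string, so a malicious key injects SQL.
    where = " AND ".join(f"json_extract(metadata, '$.{k}') = ?" for k in filters)
    return conn.execute(f"SELECT data FROM checkpoints WHERE {where}",
                        tuple(filters.values())).fetchall()

def search_safe(conn, filters: dict):
    # Mitigation: allow only simple identifier-like keys before
    # interpolating them into the WHERE clause.
    for k in filters:
        if not k.replace("_", "").isalnum():
            raise ValueError(f"illegal filter key: {k!r}")
    where = " AND ".join(f"json_extract(metadata, '$.{k}') = ?" for k in filters)
    return conn.execute(f"SELECT data FROM checkpoints WHERE {where}",
                        tuple(filters.values())).fetchall()
```

A key like `a') OR 1=1 --` would turn the unsafe query into a dump of every checkpoint row; the allowlist check rejects it outright.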

Timeline

  • Late 2025: Cyera begins auditing LangChain and LangGraph frameworks
  • December 2025: CVE-2025-68664 details first shared by Cyata under the "LangGrinch" name
  • March 27, 2026: Tokarev publishes full LangDrained research covering all three CVEs
  • March 27, 2026: Patches released for all three vulnerabilities

How bad is it, really?

The blast radius is significant but bounded. LangChain, LangChain-Core, and LangGraph collectively saw over 84 million PyPI downloads last week (52 million for LangChain, 23 million for LangChain-Core, 9 million for LangGraph). With 847 million lifetime downloads, this framework is embedded in enterprise AI stacks everywhere, often as a transitive dependency teams don't even know they're running.

But context matters. CVE-2026-34070 targets legacy undocumented APIs that LangChain has already deprecated. If you're using the newer dumpd/dumps/load/loads serialization APIs with the allowlist-based security model, you're not exposed to that specific path traversal. CVE-2025-68664 is the one that warrants real urgency at CVSS 9.3, because the attack surface spans multiple commonly used APIs including streaming event handlers and message history. CVE-2025-67644 requires the attacker to reach the LangGraph SQLite checkpoint layer, which limits the attack surface to deployments using that specific storage backend.

As Cyera noted in their research: "LangChain doesn't exist in isolation. It sits at the center of a massive dependency web that stretches across the AI stack. When a vulnerability exists in LangChain's core, it doesn't just affect direct users. It ripples outward through every downstream library, every wrapper, every integration that inherits the vulnerable code path."

This disclosure also lands during a rough stretch for the AI supply chain. Days earlier, a critical Langflow flaw (CVE-2026-33017, CVSS 9.3) came under active exploitation within 20 hours of public disclosure. And the same day these LangChain CVEs dropped, researchers found that TeamPCP had pushed malicious versions of the telnyx Python package to PyPI, hiding a credential stealer inside WAV files using audio steganography. The pattern is clear: AI infrastructure is getting hit from every angle right now.

What to do right now

  1. Patch immediately. Update to:

    • langchain-core >= 1.2.22 (fixes CVE-2026-34070)
    • langchain-core 0.3.81 or 1.2.5 (fixes CVE-2025-68664)
    • langgraph-checkpoint-sqlite 3.0.1 (fixes CVE-2025-67644)
  2. Audit your dependency tree. LangChain often shows up as a transitive dependency. Run pip list | grep langchain and pip list | grep langgraph across all environments. You may have it in places you don't expect.

  3. Migrate off legacy APIs. If you're still using load_prompt() or load_prompt_from_config(), move to the dumpd/dumps/load/loads serialization APIs in langchain_core.load. The legacy functions are deprecated and will be removed in 2.0.0.

  4. Review serialization handling. After patching CVE-2025-68664, note that load() and loads() now default to secrets_from_env=False and enforce an allowlist via allowed_objects="core". If you were relying on the old defaults, test your deserialization paths.

  5. Check your streaming code. If you use astream_events(version="v1"), migrate to version="v2", which is not vulnerable to the serialization injection.

  6. Inventory your checkpoint storage. If you're running LangGraph with SQLite checkpoints, understand what conversation data lives in those databases and who can reach the metadata filter interface.
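As a complement to grepping pip list in step 2, a short script can flag unpatched installs directly. This is a sketch using the patched floors from step 1; it assumes the 1.x line of langchain-core (swap in 0.3.81 as the floor if you're on 0.3.x), and the naive version parser only handles plain X.Y.Z strings.

```python
# Audit sketch: compare installed versions against the patched floors
# from step 1. Assumes the 1.x line of langchain-core; use 0.3.81 as
# the floor for CVE-2025-68664 if you're pinned to 0.3.x.
from importlib import metadata

PATCHED = {
    "langchain-core": "1.2.22",
    "langgraph-checkpoint-sqlite": "3.0.1",
}

def parse(v: str) -> tuple:
    # Naive numeric parse; good enough for simple X.Y.Z version strings.
    return tuple(int(p) for p in v.split(".") if p.isdigit())

for pkg, floor in PATCHED.items():
    try:
        installed = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    status = "OK" if parse(installed) >= parse(floor) else "VULNERABLE - upgrade"
    print(f"{pkg}: {installed} ({status})")
```

Run it in each virtualenv and container image, not just your workstation; the transitive-dependency problem from step 2 means the vulnerable copy is often in an environment you don't open day to day.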

Three vulnerability classes. Three data types exposed. All patched. The fix is straightforward. Do it today.

Omar Rashid covers cybersecurity for The Daily Vibe.

This article was AI-generated.
