LiteLLM Supply Chain Attack: 97 Million Monthly Downloads Compromised via GitHub Account Takeover
In early 2026, LiteLLM — a Python package downloaded 97 million times per month — was compromised when attackers took over the GitHub owner's account and pushed version 1.82.8 containing credential-stealing malware. The malicious code hijacked Python's .pth startup mechanism to exfiltrate SSH keys, cloud credentials, crypto wallets, and database passwords from every infected machine. The incident exposed multiple systemic failures: transitive dependency risk, AI-generated bot spam suppressing the vulnerability report, and meaningless compliance badges from a provider accused of faking reports.
LiteLLM, a Python package providing a uniform API layer across multiple LLM providers (OpenAI, Anthropic, etc.), was compromised via a GitHub owner account takeover in early 2026. The malicious version (1.82.8) contained credential-stealing malware that affected an estimated 97 million monthly downloads.

## The Attack

Attackers gained control of the repository owner's GitHub account and pushed a malicious release. The payload hijacked Python's `.pth` file mechanism — a startup file that executes when *any* Python process starts, not just when LiteLLM is imported. On activation, it collected SSH keys, AWS/GCP/Azure credentials, Kubernetes configs, git credentials, environment variables, shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords. All data was double-base64 encoded and exfiltrated to an attacker-controlled server.

## Transitive Dependency Risk

Direct installation of LiteLLM was not required for compromise. Any package depending on LiteLLM as a transitive dependency — including MCP plugins for AI coding tools — pulled in the malicious version automatically. This is the core supply chain threat: trust relationships cascade through dependency trees.

## Discovery

The compromise was discovered by accident. A developer's MCP plugin in Cursor pulled LiteLLM as a transitive dependency, and the malware's `.pth` file inadvertently fork-bombed the machine (spawning copies of itself repeatedly), crashing it by exhausting RAM. The crash prompted investigation. Andrej Karpathy noted the irony that "vibe coding saved the day."

## Suppression Attempt

When the vulnerability was reported via GitHub issue, over 363 AI-generated bot comments flooded the thread — "Great explanation!", "Thanks for sharing!", "Fantastic fix!" — burying the legitimate discussion. The issue was closed as "not planned" amid the spam. AI-generated comment flooding is an emerging technique for suppressing vulnerability disclosures.
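The `.pth` startup hook abused here is easy to audit: the interpreter executes any line in a site-packages `.pth` file that begins with `import`. A minimal defensive sketch — the function name `audit_pth_files` is illustrative, not from any tooling used in the incident:

```python
import site
from pathlib import Path

def audit_pth_files(dirs=None):
    """Report .pth lines beginning with 'import', which the interpreter
    executes at the start of every Python process (the mechanism the
    LiteLLM payload abused). Returns (path, line number, line) tuples."""
    if dirs is None:
        dirs = site.getsitepackages() + [site.getusersitepackages()]
    findings = []
    for d in dirs:
        d = Path(d)
        if not d.is_dir():
            continue
        for pth in sorted(d.glob("*.pth")):
            for lineno, line in enumerate(
                pth.read_text(errors="ignore").splitlines(), 1
            ):
                # Per the site module's rules, only lines starting with
                # "import" (followed by space or tab) are executed.
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, line in audit_pth_files():
        print(f"{path}:{lineno}: {line}")
```

Note that legitimate packages (e.g. setuptools' `distutils-precedence.pth`) also use this hook, so hits need manual review rather than automatic deletion.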
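The double-base64 wrapping on the exfiltrated data is obfuscation, not encryption: no key is involved, and anyone triaging captured traffic can reverse it in one call. A sketch using made-up example bytes (the credential string is hypothetical):

```python
import base64

def unwrap_double_b64(blob: bytes) -> bytes:
    # Reverse the two base64 layers applied before exfiltration.
    # Obfuscation only -- there is no secret key to recover.
    return base64.b64decode(base64.b64decode(blob))

# Round-trip with a hypothetical captured credential string.
secret = b"AWS_SECRET_ACCESS_KEY=EXAMPLEKEY"
wrapped = base64.b64encode(base64.b64encode(secret))
assert unwrap_double_b64(wrapped) == secret
```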
## Compliance Theater

LiteLLM displayed SOC 2 Type 1 and ISO 27001 compliance badges, provided by Delve ("AI-native compliance"). Delve was concurrently accused of faking compliance reports. Both companies are Y Combinator-backed.

## Scale

The threat actors ("Team PCP") claimed 500,000 stolen credentials from the LiteLLM compromise alone — 300 GB of compressed credential data — and were actively extorting several multi-billion-dollar companies.

## Lessons

- Supply chain attacks through package managers are among the most consequential security threats in software development.
- Python's `.pth` file mechanism is a particularly dangerous attack vector: it executes on every Python process start, not just on package import.
- Compliance certifications are only as trustworthy as the certifying body.
- AI-generated spam is now an active tool for suppressing vulnerability reports.
- Every dependency is a trust relationship that extends through the entire transitive dependency tree.