SAN FRANCISCO — An extortion hacking crew breached AI recruiting startup Mercor by first compromising LiteLLM, the widely used open-source proxy tool that sits between applications and large language models, the company confirmed late Monday. The attackers claim they walked out with company data. Mercor says it is investigating.
The hit lands like a brick through a plate-glass window. Mercor is no back-alley operation — the startup has raised north of $100 million to match job candidates with employers using AI-driven assessments. Now its name sits on a hacker crew's trophy shelf, and the weapon of choice wasn't some exotic zero-day. It was a poisoned link in the open-source supply chain that half the AI industry depends on every morning before coffee.
LiteLLM acts as a universal switchboard. Developers use it to route calls to OpenAI, Anthropic, Cohere, and dozens of other model providers without rewriting code. Thousands of companies have it wired into their stacks. A compromise at that level is not a picked lock — it is a master key.
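The switchboard pattern is simple to picture in code. The sketch below is illustrative only, using hypothetical names rather than LiteLLM's actual API, but it shows why a compromise at the routing layer is so damaging: one small component sees every model, every request, and, in real deployments, every provider credential.

```python
# Hypothetical sketch of the "universal switchboard" pattern a proxy
# like LiteLLM implements: one call path, many providers. The names
# and mapping here are illustrative, not LiteLLM's real code.

PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "command-": "cohere",
}

def route(model: str) -> str:
    """Map a model name to the provider that serves it."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"no provider registered for model {model!r}")
```

Because the proxy typically also holds the API keys for each downstream provider, subverting it hands an attacker not a single lock but, as above, the whole switchboard.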
Mercor has not disclosed what data the attackers accessed or how many users are affected. The extortion crew, whose identity has not been independently verified, posted claims of the theft online in the manner now standard for ransomware outfits looking to pressure victims into paying. Mercor said it engaged outside security consultants and notified law enforcement.
The breach raises hard questions for every AI company shipping code built on open-source foundations. LiteLLM has been downloaded tens of thousands of times. Most shops treat it as furniture: always there, rarely inspected. That trust just became a liability.
For firms in the AI-powered hiring game — a crowded field that includes Trilogy International's Crossover platform, which operates across 130-plus countries — the incident is a flashing red signal. Recruiting platforms sit on oceans of personal data: resumes, assessments, compensation figures, identity documents. A breach doesn't just embarrass a company. It exposes the people who trusted it with their careers.
Security researchers have warned for years that the open-source AI toolchain was growing faster than anyone could audit it. LiteLLM is maintained by a small team. The project's popularity outstripped its resources long ago. That gap between adoption and oversight is exactly where attackers like to work.
Meanwhile, Anthropic is nursing its own operational bruises — the Claude maker confirmed a second human-caused incident this week, capping a rough month for a company that sells itself on safety and reliability.
The lesson from the Mercor breach is older than the transistor: a chain is only as strong as its weakest link. In 2026, that link is open-source code nobody bothered to audit.
Mercor declined further comment. The investigation continues.