- Impacted versions (v1.82.7, v1.82.8) have been deleted from PyPI
- All maintainer accounts have been changed
- All keys for github, docker, circle ci, pip have been deleted
We are still scanning our project to see if there are any more gaps.
If you're a security expert and want to help, email me - krrish@berri.ai
This must be super stressful for you, but I do want to note your "I'm sorry for this." It's really human.
It is so much better than, you know... "We regret any inconvenience and remain committed to recognising the importance of maintaining trust with our valued community and following the duration of the ongoing transient issue we will continue to drive alignment on a comprehensive remediation framework going forward."
Kudos to you. Stressful times, but I hope it helps to know that people are reading this appreciating the response.
After my suggestion, the developer made a new GitHub account, linked the new account to their Hacker News account, and linked their Hacker News "about" back to the GitHub account, to verify that the GitHub account is legitimate.
Perhaps it's too obvious but ... just running the publish process locally, instead of from CI, would help. Especially if you publish from a dedicated user on a Mac where the system keychain is pretty secure.
I'm not sure how. Their local system seems just as likely to get compromised through a `pip install` or whatever else.
In CI they could easily have moved `trivy` to its own dedicated worker that had no access to the PYPI secret, which should be isolated to the publish command and only the publish command.
The decision to block all downloads is pretty disruptive, especially for people on pinned known-good versions. It's breaking a bunch of my systems that are all launched with `uv run`.
> It's breaking a bunch of my systems that are all launched with `uv run`
From a security standpoint, you would rather pull in a library that is compromised and run a credential stealer? It seems like this is the exact intended and best behavior.
You should be using build artifacts, not relying on `uv run` to install packages on the fly. Besides the massive security risk, it also means that you're dependent on a bunch of external infrastructure every time you launch. PyPI going down should not bring down your systems.
This is the right answer. Unfortunately, this is very rarely practiced.
More strangely (to me), this is often addressed by adding loads of fallible/partial caching (in e.g. CICD or deployment infrastructure) for package managers rather than building and publishing temporary/per-user/per-feature ephemeral packages for dev/testing to an internal registry. Since the latter's usually less complex and more reliable, it's odd that it's so rarely practiced.
There are so many advantages to deployable artifacts, including auditability and fast roll-back. Also, you can block so many risky endpoints from your compute's outbound networks, which means that even if you are compromised, it doesn't do the attacker any good if their C&C is not allowlisted.
That's a good thing (disruptive "firebreak" to shut down any potential sources of breach while info's still being gathered). The solve for this is artifacts/container images/whatnot, as other commenters pointed out.
That said, I'm sorry this is being downvoted: it's unhappily observing facts, not arguing for a different security response. I know that's toeing the rules line, but I think it's important to observe.
Increasing the (social) pressure on maintainers to get PRs merged seems like the last thing you should do if the goal is to prevent malicious code from ending up in dependencies like this.
i'd much rather see a million open PRs than a single malicious PR sneak through due to lack of thorough review.
We just can't trust dependencies and dev setups. I wanted to say "anymore", but we never could. Dev containers were never good enough: too clumsy and too little isolation. We need to start working in full sandboxes with defence in depth and real guardrails and UIs (VM isolation plus container primitives, allow lists, egress filters, seccomp, gVisor and more) but with much better usability. It's the same set of requirements we have for agent runtimes, so let's use this momentum to make our dev environments safer! In such an environment the container would crash, we'd see the violations, delete it, and not have to worry about it. We should treat this as an everyday possibility, not as an isolated security incident.
This is the security shortcuts of the past 50 years coming back to bite us. Software has historically been a world where we all just trust each other. I think that’s coming to an end very soon.
We need sandboxing for sure, but it’s much bigger than that. Entire security models need to be rethought.
The NIH syndrome becoming best practice (a commenter below already says they "vibe-coded replacements for many dependencies") would also save quite a few jobs, I suspect. Fun times.
I've been thinking the same thing. And it's somewhat parallel to what happened to meditation vs. drugs. In the old world the dangerous insights required so many years of discipline that you could sort of trust that the person getting the insight would be ok. But then any idiot can get the insight by just eating some shrooms and oops, that's a problem. Mostly self-harm problem in that case. But the dynamic is somewhat similar to what's happening now with LLMs and coding.
Software people could (mostly) trust each other's OSS contributions because we could trust the discipline it took in the first place. Not any more.
This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.
Which sounds great, but the way things work now tend to be the exact opposite of that, so there will be no trustable platform to run the untrusted code in. If the sandbox, or the operating system the sandbox runs in, will get breaking changes and force everyone to always be on a recent release (or worse, track main branch) then that will still be a huge supply chain risk in itself.
The secure boot "shim" is a project like this. Perhaps we need more core projects that can be simple and small enough to reach a "finished" state where they are unlikely to need future upgrades for any reason. Formal verification could help with this ... maybe.
> This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.
For the most part you can. Just version pin slightly-stale versions of dependencies, after ensuring there are no known exploits for that version. Avoid the latest updates whenever possible. And keep aware of security updates, and affected versions.
Don't just update every time the dependency project updates. Update specifically for security issues, new features, and specific performance benefits. And even then avoid the latest version when possible.
Sure, and that is basically what sane people do now, but that only works until something needs a security patch that was not provided for the old version, and changing one dependency is likely to cascade so now I am open to supply chain attacks in many dependencies again (even if briefly).
To really run code without trust, you would need something more like a microkernel that is the only thing in the system you have to trust, with everything running on top of it forced to behave and isolated from everything else. Ideally a kernel so small, popular, and rarely modified that it can be well tested and trusted.
Virtual machines are that - tiny surfaces to access the host system (block disk device, ...). Which is why virtual machine escape vulnerabilities are quite rare.
>Which sounds great, but the way things work now tend to be the exact opposite of that, so there will be no trustable platform to run the untrusted code in.
This is the problem with software progressivism. Some things really should just be what they are, you fix bugs and security issues and you don't constantly add features. Instead everyone is trying to make everything have every feature. Constantly fiddling around in the guts of stuff and constantly adding new bugs and security problems.
I would split the agent loop entirely from the main tightbeam project; no one wants yet another agent harness, and we need to focus on the operational problems. Airlock seems interesting in theory, but it's really hard to believe it could capture every single behaviour of the native local binaries; we need the native tools with native behaviour, otherwise we might as well use something like MCP. I would bet more on a git protocol proxy and native solutions for each of these.
Java had that from v1.2 in the 1990s. It got pulled out because nobody used it. The problem of how to make this usable by developers is very hard, although maybe LLMs change the equation.
Just sandbox the interpreter (in this case), the package manager, and the binaries.
You can run in a chroot jail and it wouldn't have accessed SSH keys outside of the jail...
There are many more similar technologies already existing, for decades.
Doing it on a per-language basis is not ideal; any new language would have to reinvent the wheel.
Better to do it at the system level, with the already existing tooling.
OpenBSD has pledge/unveil, Linux has chroot, namespaces, and cgroups, FreeBSD has capsicum, or whatever. There are many of these things.
(I am not sure how well they play within these scenarios, just triggering on the sandboxing comment, but there are plenty of ways to do it as far as I can tell...)
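For concreteness, a rough sketch of that "use the existing system tooling" idea, assuming bubblewrap (`bwrap`) is available on a Linux box: run the package install inside a throwaway namespace sandbox that cannot see the real $HOME or ~/.ssh. The paths and flags are illustrative, not a hardened profile.

    import subprocess

    # Everything outside /usr, /etc/ssl and the project directory is invisible;
    # $HOME is an empty tmpfs, so ~/.ssh, ~/.aws, browser profiles etc. are not there.
    cmd = [
        "bwrap",
        "--ro-bind", "/usr", "/usr",
        "--symlink", "usr/lib", "/lib",
        "--symlink", "usr/lib64", "/lib64",
        "--symlink", "usr/bin", "/bin",
        "--ro-bind", "/etc/resolv.conf", "/etc/resolv.conf",
        "--ro-bind", "/etc/ssl", "/etc/ssl",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/home",
        "--bind", "./workdir", "/work",   # the only writable host directory
        "--chdir", "/work",
        "--unshare-all", "--share-net",   # network stays on only because pip needs it
        "python3", "-m", "pip", "install", "--target", "/work/deps", "some-package",  # placeholder
    ]
    subprocess.run(cmd, check=True)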
What if I wanted to write a program that uses untrusted libraries, but also does some very security sensitive stuff? You are probably going to suggest splitting the program into microservices. But that has a lot of problems and makes things slow.
The problem is that programs can be entire systems, so "doing it at the system level" still means that you'd have to build boundaries inside a program.
So... I'm working on an open source technology to make a literal virtual machine shippable, i.e. freezing everything inside it, isolated by the VM/hypervisor for sandboxing, with support for containers too since it's a real Linux VM.
The problems you mentioned resonated a lot with me and why I'm building it, any interest in working to solve that together?: https://github.com/smol-machines/smolvm
Thanks for the pointer! Love the premise of the project. Just a few notes:
- A security-focused project should NOT default to training people to install by piping to bash. If I try previewing the install script in the browser it forces a download instead of showing it as plain text. The first thing I see is an argument
# --prefix DIR Install to DIR (default: ~/.smolvm)
that the script later uses in an `rm -rf` of a lib folder. So if I accidentally pick a directory that contains ANY lib folder, it will be deleted.
- I'm not sure what the comparison to colima with krunkit machines amounts to, except that you don't use VM images; how this works or why it is better is not 100% clear.
- Just a minor thing, but people don't have much attention; I saw AWS and fly.io in the description and nearly closed the project. It needs to be easier to see that this is a local sandbox built on libkrun, NOT a wrapper for a remote sandbox like so many of the projects out there.
Will try reaching you on some channel; would love to collaborate, especially on devX. I would be very interested in something more reliable and a bit more lightweight in place of colima once libkrun can fully replace vz.
Love this feedback, agree with you completely on all of it - I'll be making those changes.
1. In comparison with colima with krunkit, I ship smolvm with custom built kernel + rootfs, with a focus on the virtual machine as opposed to running containers (though I enable running containers inside it).
What is the alternative to bash piping? If you don't trust the project install script, why would you trust the project itself? You can put malware in either.
It turns out that it's possible for the server to detect whether it is being run via "| bash" or just being downloaded. Downloading the script, inspecting it, and then running that specific download is safer than piping it straight to bash; note that downloading and inspecting it, then re-downloading and piping to a shell, gives you no such protection.
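A generic sketch of the "run exactly the bytes you reviewed" alternative (the URL and checksum are placeholders, not anything from this project):

    import hashlib, subprocess, urllib.request

    URL = "https://example.com/install.sh"                # placeholder
    EXPECTED_SHA256 = "put-the-published-checksum-here"    # placeholder

    data = urllib.request.urlopen(URL).read()
    open("install.sh", "wb").write(data)
    digest = hashlib.sha256(data).hexdigest()
    print(digest)  # read install.sh, compare the digest, and only then:
    if digest == EXPECTED_SHA256:
        subprocess.run(["bash", "install.sh"], check=True)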
The server can also put malware in the .tar.gz. Are you really checking all the files in there, even the binaries? If you don't what's the point of checking only the install script?
Probably on the side of your project, but did you try SmolBSD? <https://smolbsd.org>
It's a meta-OS for microVMs that boots in 10–15 ms.
It can be dedicated to a single service (or a full OS), runs a real BSD kernel, and provides strong isolation.
Overall, it fits into the "VM is the new container" vision.
Disclaimer: I'm following iMil (the developer of smolBSD and a contributor to NetBSD) through his twitch streams, and I truly love what he is doing. I haven't actually used smolBSD in production myself since I don't have a need for it (but I participated in his live streams by installing and running his previews), and my answer might be somewhat off-topic.
At a glance, it's a matter of compatibility: most software has first-class support for Linux. But this is very interesting work and I'm going to follow it closely.
Runs locally on Macs, is much easier to install/use, and is designed to be "portable", meaning you can package a VM to preserve its state and run it somewhere else.
I worked at AWS, specifically with Firecracker in the container space, for 4 years; we had a very long onboarding doc just to develop on Firecracker for containers... so I made sure to focus on ease of use here.
> We just can't trust dependencies and dev setups.
In one of my vibe coded personal projects (Python and Rust project) I'm actually getting rid of most dependencies and vibe coding replacements that do just what I need. I think that we'll see far fewer dependencies in future projects.
Also, I typically only update dependencies when either an exploit is known in the current version or I need a feature present in a later version - and even then not to the absolute latest version if possible. I do this for all my projects under the many-eyes principle. Finding exploits takes time, and new updates are riskier than slightly-stale versions.
Though, if I'm filing a bug with a project, I do test and file against the latest version.
> In one of my vibe coded personal projects (Python and Rust project) I'm actually getting rid of most dependencies and vibe coding replacements that do just what I need. I think that we'll see far fewer dependencies in future projects.
No free lunch. LLMs are capable of writing exploitable code and you don’t get notifications (in the eg Dependabot sense, though it has its own problems) without audits.
strongly agree. we keep giving away trust to other entities in order to make our jobs easier. trusting maintainers is still better than trusting a clanker but still risky. We need a sandboxed environment where we can build our software without having to worry about these unreliable factors.
On a personal note, I have been developing and talking to a clanker ( runs inside ) to get my day to day work done. I can have multiple instances of my project using worktrees, have them share some common dependencies and monitor all of them in one place. I plan to opensource this framework soon.
That's no solution. If you can't trust and/or verify dependencies, and they are malicious, then you have bigger problems than what a sandbox will protect against. Even if it's sandboxed and your host machine is safe, you're presumably still going to use that malicious code in production.
That's exactly what a sandbox is designed for. I think you're overly constraining your view of what sort of sandboxing can exist. You can, for example, sandbox code such that it can't do anything but read/write to a specific segment of memory.
I'm supportive of going further - like restricting what a library is able to do. e.g. if you are using some library to compute a hash, it should not make network calls. Without sub-processes, it would require OS support.
It's a language/compiler/function call stack feature, not existing as far as I know, but it would be awesome - the caller of a function would specify what resources/syscalls could be made, and anything down the chain would be thusly restricted. The library could try to do its phone home stats and it would fail. Couldn't be C or a C type language runtime, or anything that can call to assembly of course. @compute_only decorator. Maybe could be implemented as a sys-call for a thread - thread_capability_remove(F_NETWORK + F_DISK)? Wouldn't be able to schedule any work on any thread in that case, but Go could have pools of threads for coroutines with varying capabilities. Something to put the developer back in charge of the mountain of dependencies we are all forced to manage now.
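A toy sketch of those ergonomics in today's Python. Monkeypatching `socket.socket` is trivially bypassable, so this only shows the interface a `@compute_only` decorator might have, not real enforcement (which, as the comment says, would need runtime/OS support):

    import socket
    from contextlib import contextmanager
    from functools import wraps

    @contextmanager
    def no_network():
        real_socket = socket.socket
        def blocked(*args, **kwargs):
            raise PermissionError("network access disabled in this call stack")
        socket.socket = blocked          # not a security boundary, just a tripwire
        try:
            yield
        finally:
            socket.socket = real_socket

    def compute_only(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            with no_network():
                return fn(*args, **kwargs)
        return wrapper

    @compute_only
    def hash_data(data: bytes) -> str:
        import hashlib
        return hashlib.sha256(data).hexdigest()   # pure computation: fine

    @compute_only
    def phone_home(data: bytes) -> None:
        import urllib.request                     # any socket use below raises
        urllib.request.urlopen("http://example.com", data=data)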
Except that LiteLLM probably got pwned because they used Trivy in CI. If Trivy ran in a proper sandbox, the compromised job could not publish a compromised package.
(Yes, they should better configure which CI job has which permissions, but this should be the default or it won't always happen)
Containers can mean many things; if you mean plain Docker containers with the default configuration, then no, they are a packaging mechanism, not a safe environment by themselves.
Just because this attack example did not contain container escape exploits does not mean this is safe. It's better than nothing, but not something that will save us.
This stuff already exists - mobile phone sandboxed applications with intents (allow Pictures access, ...)
But mention that on HN and watch it get downvoted into oblivion: the war against general computation, walled gardens, devices locked down against their owners...
You are not being downvoted because the core premise is wrong, but because framing it as a choice between being locked out of general-purpose computing and security repeats the brainwashing that companies like Apple and Meta do to justify their rent-seeking lock-out of competitors and user agency. We have all the tools to build safe systems that don't require up-front manifest declarations and app store review by the lord, but instead give tools for control, dials, and visibility to the users themselves in the moment. And yes, many of these UIs might look like intent sheets. The difference is who ultimately controls how these interfaces look and behave.
This is tied to the TeamPCP activity over the last few weeks. I've been responding, and keeping an up to date timeline. I hope it might help folks catch up and contextualize this incident:
Over the last ~15 years I have been shocked by the amount of spam on social networks that could have been caught with a Bayesian filter. Or in this case, a fairly simple regex.
Well, large companies/corporations don't really care about spam, because in a way they benefit from it: it boosts their engagement metrics.
It just can't get so spammy that advertisers leave the platform, and I think they sort of succeed in staying below that line.
Think about it: if Facebook shows you AI-slop ragebait or rage-inducing comments from bots designed to farm attention (or for malicious purposes in general), and you fall for it and engage, which lets it show you ads, does it really have an incentive to take a stance against that kind of spam?
Maintainers need to keep a wall between the package publishing and public repos. Currently what people are doing is configuring the public repo as a Trusted Publisher directly. This means you can trigger the package publication from the repo itself, and the public repo is a huge surface area.
Configure the CI to make a release with the artefacts attached. Then have an entirely private repo that can't be triggered automatically as the publisher. The publisher repo fetches the artefacts and does the pypi/npm/whatever release.
This kind of compromise is why a lot of orgs have internal mirrors of repos or package sources, so they can stay a few versions behind the latest and avoid this sort of compromise. I've seen it with internal pip repos, apt repos, etc.
Some will even audit each package in there (kind of a crap job, but it works fairly well as mitigation).
It will only take one agent-led compromise to get some Claude-authored underhanded C into llvm or linux or something and then we will all finally need to reflect on trusting trust at last and forevermore.
Reflect in what way? The primary focus of that talk is that it’s possible to infect the binary of a compiler in a way that source analysis won’t reveal and the binary self replicates the vulnerability into other binaries it generates. Thankfully that particular problem was “solved” a while back [1] even if not yet implemented widely.
However, the broader idea of supply chain attacks remains challenging and AI doesn’t really matter in terms of how you should treat it. For example, the xz-utils back door in the build system to attack OpenSSH on many popular distros that patched it to depend on systemd predates AI and that’s just the attack we know about because it was caught. Maybe AI helps with scale of such attacks but I haven’t heard anyone propose any kind of solution that would actually improve reliability and robustness of everything.
I believe the issue is if an exploit is somehow injected into AI training data such that the AI unwittingly produces it and the human who requested the code doesn't even know.
That’s a separate issue and specifically not what OP was describing. Also highly unlikely in practice unless you use a random LLM - the major LLM providers already have to deal with such things and they have decent techniques to deal with this problem afaik.
To slightly rephrase a citation from Demobbed (2000) [1]:
The kernel is not just open source, it's a very fast-moving codebase. That's how we win all wars against AI-authored exploits. While the LLM trains on our internal APIs, we change the APIs — by hand. When the agent finally submits its pull request, it gets lost in unfamiliar header files and falls into a state of complete non-compilability. That is the point. That is our strategy.
If that happened, my worry would be all the sensitive government servers around the world that might then be exploited, and the amount of damage that could be caused silently by such a threat actor; or something like AWS/GCP and the other massive hyperscalers, which are also used by governments around the globe.
The possibilities for a capable threat actor could be catastrophic, especially if we assume nation-states are interested in sponsoring hacking attacks (which many already do) to attack enemy nations or gain leverage. We are looking at damage in the trillions at that point.
But I would assume Linux is safe for now; it might be the most looked-at code there is.
LLVM might be a bit more interesting, as something there might go a little more unnoticed, but hopefully the people working on LLVM are funded well enough to look at everything carefully and not have such a slip-up.
Yeah, and they can write code with vulnerabilities by accident. But this is a new class of problem, where a known trusted contributor can accidentally allow a vulnerability that was added on purpose by the tooling.
But now you have compromise _at scale_. Before, poor plebs like us had to artisanally craft every back door. Now we have a technology to automate that mundane exploitation process! Win!
I just installed Harbor, and it instantly pegged my CPU... I was lucky to see my processes before the system hard-locked.
Basically it forkbombed `grep -r rpcuser\rpcpassword` processes trying to find crypto wallets or something. I saw that they spawned from harness, and killed it.
Got lucky; no backdoor installed here, from what I could make out of the binary.
Same experience with browser-use; it installs litellm as a dependency. Rebooted my Mac as nothing was responding; luckily only GitHub and Hugging Face tokens were saved in .git-credentials, and I have invalidated them. This was inside a conda env; should I reinstall my OS in case of any potential backdoors?
A general question - how do frontier AI companies handle scenarios like this in their training data? If they train their models naively, then training data injection seems very possible and could make models silently pwn people.
Do the labs label code versions with an associated CVE to label them as compromised (telling the model what NOT to do)? Do they do adversarial RL environments to teach what's good/bad? I'm very curious since it's inevitable some pwned code ends up as training data no matter what.
Everyone’s (well, except Anthropic, they seem to have preserved a bit of taste) approach is the more data the better, so the databases of stolen content (erm, models) are memorizing crap.
This was a compromise of the library owners' GitHub accounts, apparently, so this is not a related scenario to dangerous code in the training data.
I assume most labs don't do anything to deal with this, and just hope that it gets trained out because better code should be better rewarded in theory?
Interesting tool, will definitely try - just curious, is there a tool (hexora checker) that ensures that hexora itself and its dependencies are not compromised ?
And of course if there is one, I'll need another one for the hexora checker....
This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies. It was the same in Trivy’s case.
This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.
This is secure bug impacting PyPi v1.82.7, v1.82.8. The idea of bracketing r-w-x mod package permissions for group id credential where litellm was installed.
> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically
AI is not about fancy models, it is about plain old software engineering. I strongly advised our team of "not-so-senior" devs not to use LiteLLM or LangChain or anything like that and just stick to `requests.post('...')`.
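For what it's worth, the requests-only route really is tiny for a single OpenAI-style provider; a minimal sketch (endpoint, model name, and env var are placeholders for whatever you actually use):

    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"   # or your provider's endpoint
    API_KEY = os.environ["OPENAI_API_KEY"]

    def chat(prompt: str, model: str = "gpt-4o-mini") -> str:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]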
Valid, but for all the crap that LangChain gets it at least has its own layer for upstream LLM provider calls, which means it isn't affected by this supply chain compromise (unless you're using the optional langchain-litellm package). DSPy uses LiteLLM as its primary way to call OpenAI, etc. and CrewAI imports it, too, but I believe it prefers the vendor libraries directly before it falls back to LiteLLM.
This is bad, especially from a downstream dependency perspective. DSPy and CrewAI also import LiteLLM, so you could not be using LiteLLM as a gateway, but still importing it via those libraries for agents, etc.
Yep, I think the worst impact is going to be from libraries that were using LiteLLM as just an upstream LLM provider library vs for a model gateway. Hopefully, CrewAI and DSPy can get on top of it soon.
I completely removed nanobot after I found that. Luckily, I only used it a few times and inside a docker container. litellm 1.82.6 was the latest version I could find installed, not sure if it was affected.
Does anyone know a good alternative project that works similarly (sharing multiple LLMs across a set of users)? LiteLLM has been getting worse and trying to get me to upgrade to a paid version. I also had issues with creating tokens for other users, etc.
Virtual Keys is an Enterprise feature. I am not going to pay for something like this in order to provide my family access to all my models. I can do without cost control (although it would be nice), but I need users to be able to generate a key and use this key to access all the models I provide.
Only tangentially related: Is there some joke/meme I'm not aware of? The github comment thread is flooded with identical comments like "Thanks, that helped!", "Thanks for the tip!", and "This was the answer I was looking for."
Since they all seem positive, it doesn't seem like an attack but I thought the general etiquette for github issues was to use the emoji reactions to show support so the comment thread only contains substantive comments.
> It also seems that attacker is trying to stifle the discussion by spamming this with hundreds of comments. I recommend talking on hackernews if that might be the case.
If it had not spun up so many Python processes and overwhelmed the system with them (friends noticed it was consuming too much CPU from the fan noise!), it would have been much more successful. Similar to the xz attack in that respect.
it does a lot of CPU intensive work
spawn background python
decode embedded stage
run inner collector
if data collected:
write attacker public key
generate random AES key
encrypt stolen data with AES
encrypt AES key with attacker RSA pubkey
tar both encrypted files
POST archive to remote host
Not just as a gateway in a lot cases, but CrewAI and DSPy use it directly. DSPy uses it as its only way to call upstream LLM providers and CrewAI falls back to it if the OpenAI, Anthropic, etc. SDKs aren't available.
Do you think people will update litellm without seeing this discussion (or have it update automatically), which would then lead to loss of crypto wallets and especially AI API keys?
Now, I am not worried about the AI API keys doing much damage, but thinking one step further (and I am not sure how many of these corporations follow their privacy policies, so perhaps someone more experienced can tell me): wouldn't these applications keep logs for legal purposes, and couldn't those logs contain sensitive information about businesses, and perhaps private individuals too?
I wonder at what point ecosystems just force a credential rotation. Trivy and now LiteLLM have probably cleaned out a sizable number of credentials, and now it's up to each person and/or team to rotate. TeamPCP is sitting on a treasure trove of credentials and based on this, they're probably carefully mapping out what they can exploit and building payloads for each one.
It would be interesting if Python, NPM, Rubygems, etc all just decided to initiate an ecosystem-wide credential reset. On one hand, it would be highly disruptive. On the other hand, it would probably stop the damage from spreading.
A question from a non-python-security-expert: is committing uv.lock files for specific versions, and only infrequently updating versions a reasonable practice?
But one counterargument I saw online is that when a security researcher finds a bug and reports it to the OSS project/company, they often fix the code silently, ship the fix in a new version, and only make the information public some time later.
So if you run infrequently updated versions, you also run the risk of leaving a known (but undisclosed) hole open for attackers.
(A good example I can think of is OpenCode, which had an issue that could allow RCE. The security researchers contacted OpenCode privately, got no response, and after some time disclosed it publicly; OpenCode quickly made a patch to fix the issue, but if you were running the older code you would have been vulnerable to RCE.)
I was running it (as a proxy) in my homelab with docker compose using the litellm/litellm:latest image (https://hub.docker.com/layers/litellm/litellm/latest/images/...). I don't think this was compromised, as it is from 6 months ago, and I checked that it is version 1.77.
I guess I am lucky as I have watchtower automatically update all my containers to the latest image every morning if there are new versions.
I also just added it to my homelab this sunday, I guess that's good timing haha.
github, pypi, npm, homebrew, cpan, etc. should adopt a multi-person, multi-factor authentication approach for releases. Maybe have it kick in as a requirement after X amount of monthly downloads.
Basically, have all releases require multi-factor auth from more than one person before they go live.
A single person being compromised, either technically or by being hit on the head with a wrench, should not be able to release something malicious that affects so many people.
They would have to find someone else if they grew too big.
Though, the secondary doesn't necessarily have to be a maintainer or even a contributor on the project. It just needs to be someone else to do a sanity check, to make sure it is an actual release.
Heck, I would even say that as the project grows in popularity, the amount of people required to approve a release should go up.
So if I'm developing something I want to use and the community finds it useful but I take no contributions and no feature requests I should have to find another person to deal with?
How do I even know who to trust, and what prevents two people from conspiring together with a long con? Sounds great on the surface but I'm not sure you've thought it through.
It wouldn't prevent a project whose goal is to be purposely malicious; it would just stop "releases" from going out that aren't actually intended releases.
As far as who to trust, I could imagine the maintainers of different high-level projects helping each other out in this way.
Though, if you really must allow a single user to publish releases to the masses using existing shared social infrastructure, then you could mitigate this type of attack by adding a time delay, with the ability for users to flag a release. So instead of it immediately going live, add a release date, and maybe even force them to announce the release date on an external system as well. The downside of that approach is that it would also limit the ability to push out fixes quickly.
But I think I am OK with saying if you're a solo developer, you need to bring someone else on board or host your builds yourself.
When something like this happens, do security researchers instantly contact the hosting companies to suspend or block the domains used by the attackers?
The first line of defense is the git host and the artifact host scrubbing the malware clean (in this case GitHub and PyPI).
Domains might get added to a list for things like 1.1.1.2 but as you can imagine that has much smaller coverage, not everyone uses something like this in their DNS infra.
Edit: ignore this silliness, as it sidesteps the real problem. Leaving it here because we shouldn't remove our own stupidity.
It's pretty disappointing that safetensors has existed for multiple years now but people are still distributing pth files. Yes it requires more code to handle the loading and saving of models, but you'd think it would be worth it to avoid situations like this.
Yeah, fair enough, the problem here is that the credentials were stolen, the fact that the exploit was packaged into a .pth is just an implementation detail.
Yep, DSPy and CrewAI have direct dependencies on it. DSPy uses it as its primary library for calling upstream LLM providers and CrewAI falls back to it I believe if the OpenAI, Anthropic, etc. SDKs aren't available.
Our modern economy/software industry truly runs on eggshells nowadays: engineers' accounts are getting hacked to mount supply-chain attacks, all at the same time that threat actors are getting more advanced, partly with the help of LLMs.
First Trivy (which got compromised twice), now LiteLLM.
I have created a comment to hopefully steer the discussion towards Hacker News, in case the threat actor is stifling genuine comments on GitHub by spamming that thread with hundreds of accounts.
I work with security researchers, so we've been on this since about an hour ago. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this you need to find out whether an exact version of a package has ever been installed on your machine, and all I can say is good luck.
The Python ecosystem provides too many nooks and crannies for malware to hide in.
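Not a complete answer to "has it ever been installed here" (deleted venvs, containers and pip caches all complicate it), but a rough first-pass sweep you could adapt; the roots below are guesses, add your own venv locations:

    import os
    from pathlib import Path

    BAD_DIRS = {"litellm-1.82.7.dist-info", "litellm-1.82.8.dist-info"}
    ROOTS = [Path.home(), Path("/usr/lib"), Path("/usr/local/lib"), Path("/opt"), Path("/srv")]

    for root in ROOTS:
        for dirpath, dirnames, filenames in os.walk(root):
            for d in dirnames:
                if d in BAD_DIRS:
                    print(os.path.join(dirpath, d))
            for f in filenames:
                if f == "litellm_init.pth":          # the dropped startup hook
                    print(os.path.join(dirpath, f))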
Am I the only one with the feeling that in the LLM era we now have a larger amount of malicious software, say parsers/fetchers of credentials/SSH/private keys?
And that it is easier to produce them and then include them in some third-party open-source software? Or is it just that our attention gets focused on such things?
LangChain at least has its own layer for upstream LLM provider calls, which means it isn't affected by this supply chain compromise. DSPy uses LiteLLM as its primary way to call OpenAI, etc. and CrewAI imports it, too, but I believe it prefers the vendor libraries directly before it falls back to LiteLLM.
I hope that everyone's course of action will be uninstalling this package permanently, and avoiding the installation of packages similar to this.
To reduce supply chain risk, you need to evaluate not only the vendor (even if gratis and open source) but also the advantage it provides.
Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it.
Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.
And even if you weren't using this specific dependency, check your deps; you might have shit like this in your requirements.txt and were merely saved by chance.
An additional note is that the dev will probably post a post-mortem, what was learned, how it was fixed, maybe downplay the thing. Ignore that, the only reasonable step after this is closing a repo, but there's no incentive to do that.
> Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.
Programming against different LLM APIs is a hassle; this library made it easy by giving you one single API to call, and behind the scenes it handled all the different API calls you need for the different LLM providers.
I'd get it if it were a hassle that could be avoided, but it feels like you are trying to avoid the very work you are being paid for, like if a MCD employee tried to pay a kid with Happy Meal toys to work the burger stand.
Another red flag, although a bit more arguable, is that by 'abstracting' the API into a more generic one you achieve vendor neutrality, yes, but you also integrate much more loosely with your vendors and possibly lose unique features (or can only access them with even more 'hassle' through custom options). Strategically, your end product will veer into commodity territory, which is not a place you usually want to be.
Comparing this project to is-odd seems very disingenuous to me. My understanding is this was the only way you could use llama.cpp with Claude Code for example, since llama.cpp doesn't support the Anthropic compatible endpoint and doing so yourself isn't anywhere near as trivial as your comparison. Happy to be corrected if I'm wrong.
That's a correct example, and I agree, it is disingenuous to just trivially call this an `is-odd` project.
Back in the days of GPT-3.5, LiteLLM was one of the projects that helped provide a reliable adapter for projects to communicate across AI labs' APIs and when things drifted ever so slightly despite being an "OpenAI-compatible API", LiteLLM made it much easier for developers to use it rather than reinventing and debugging such nuances.
Nowadays, that gateway of theirs isn't also just a funnel for centralizing API calls but it also serves other purposes, like putting guardrails consistently across all connections, tracking key spend on tokens, dispensing keys without having to do so on the main platforms, etc.
There's also more to just LiteLLM being an inference gateway too, it's also a package used by other projects. If you had a project that needed to support multiple endpoints as fallback, there's a chance LiteLLM's empowering that.
Hence, supply chain attack. The GitHub issue literally has mentions all over other projects because they're urged to pin to safe versions since they rely on it.
1. Looks like this originated from the trivy used in our ci/cd - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09
2. If you're on the proxy docker, you were not impacted. We pin our versions in the requirements.txt
3. The package is in quarantine on pypi - this blocks all downloads.
We are investigating the issue, and seeing how we can harden things. I'm sorry for this.
- Krrish
- Impacted versions (v1.82.7, v1.82.8) have been deleted from PyPI
- All maintainer accounts have been changed
- All keys for github, docker, circle ci, pip have been deleted
We are still scanning our project to see if there are any more gaps.
If you're a security expert and want to help, email me - krrish@berri.ai
What about the compromised accounts(as in your main account)? Are they completely unrecoverable?
Were you not aware of this in the short time frame that it happened in? How come credentials were not rotated to mitigate the trivy compromise?
It is so much better than, you know... "We regret any inconvenience and remain committed to recognising the importance of maintaining trust with our valued community and following the duration of the ongoing transient issue we will continue to drive alignment on a comprehensive remediation framework going forward."
Kudos to you. Stressful times, but I hope it helps to know that people are reading this appreciating the response.
After my suggestion, the developer made a new GitHub account, linked the new account to their Hacker News account, and linked their Hacker News "about" back to the GitHub account, to verify that the GitHub account is legitimate.
Worth following this thread as they mention that: "I will be updating this thread, as we have more to share." https://github.com/BerriAI/litellm/issues/24518
EDIT: no, it's compromised, see proxy/proxy_server.py.
Was your account completely compromised? (Judging from the commit made by TeamPCP on your accounts)
Are you in contact with all the projects that use litellm downstream, and do you know whether they are safe or not (I am assuming not)?
I am also unable to understand how the exploit from trivy being used in CI/CD compromised your account itself.
We have deleted all our pypi publishing tokens.
Our accounts had 2fa, so it's a bad token here.
We're reviewing our accounts, to see how we can make it more secure (trusted publishing via jwt tokens, move to a different pypi account, etc.).
In CI they could easily have moved `trivy` to its own dedicated worker that had no access to the PYPI secret, which should be isolated to the publish command and only the publish command.
Token in CI could've been way too broad.
Write a detailed postmortem, share it publicly, continue taking responsibility, and you will come out of this having earned an immense amount of respect.
From a security standpoint, you would rather pull in a library that is compromised and run a credential stealer? It seems like this is the exact intended and best behavior.
More strangely (to me), this is often addressed by adding loads of fallible/partial caching (in e.g. CICD or deployment infrastructure) for package managers rather than building and publishing temporary/per-user/per-feature ephemeral packages for dev/testing to an internal registry. Since the latter's usually less complex and more reliable, it's odd that it's so rarely practiced.
That said, I'm sorry this is being downvoted: it's unhappily observing facts, not arguing for a different security response. I know that's toeing the rules line, but I think it's important to observe.
No one initially knows how much is compromised
i'd much rather see a million open PRs than a single malicious PR sneak through due to lack of thorough review.
or pyproject.toml (not possible to filter based on absence of a uv.lock, but at a glance it's missing from many of these): https://github.com/search?q=path%3A*%2Fpyproject.toml+%22%5C...
or setup.py: https://github.com/search?q=path%3A*%2Fsetup.py+%22%5C%22lit...
Software people could (mostly) trust each other's OSS contributions because we could trust the discipline it took in the first place. Not any more.
Which sounds great, but the way things work now tend to be the exact opposite of that, so there will be no trustable platform to run the untrusted code in. If the sandbox, or the operating system the sandbox runs in, will get breaking changes and force everyone to always be on a recent release (or worse, track main branch) then that will still be a huge supply chain risk in itself.
https://wiki.debian.org/SecureBoot#Shim
Don't just update every time the dependency project updates. Update specifically for security issues, new features, and specific performance benefits. And even then avoid the latest version when possible.
To really run code without trust, you would need something more like a microkernel that is the only thing in the system you have to trust, with everything running on top of it forced to behave and isolated from everything else. Ideally a kernel so small, popular, and rarely modified that it can be well tested and trusted.
This is the problem with software progressivism. Some things really should just be what they are, you fix bugs and security issues and you don't constantly add features. Instead everyone is trying to make everything have every feature. Constantly fiddling around in the guts of stuff and constantly adding new bugs and security problems.
https://github.com/calebfaruki/tightbeam https://github.com/calebfaruki/airlock
This is literally the thing I'm trying to protect against.
You can run in a chroot jail and it wouldn't have accessed SSH keys outside of the jail...
There are many more similar technologies already existing, for decades.
Doing it on a per-language basis is not ideal; any new language would have to reinvent the wheel.
Better to do it at the system level, with the already existing tooling.
OpenBSD has pledge/unveil, Linux has chroot, namespaces, and cgroups, FreeBSD has capsicum, or whatever. There are many of these things.
(I am not sure how well they play within these scenarios, just triggering on the sandboxing comment, but there are plenty of ways to do it as far as I can tell...)
The problem is that programs can be entire systems, so "doing it at the system level" still means that you'd have to build boundaries inside a program.
You can use OS APIs to isolate the thing you want to use just fine...
And yes, if you mix privilege levels in a program by design, then you will have to design your program for that.
This is simple logic.
A programming language cannot decide for you who and what you trust.
For the sake of the argument, what if I wanted to isolate numpy from scipy?
Would you run numpy in a separate process from scipy? How would you share data between them?
Yes, you __can__ do all of that without programming language support. However, language support can make it much easier.
We should have sandboxing in Rust, Python, and every language in between.
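For what it's worth, here is roughly what that separate-process route costs today with only the stdlib (numpy assumed installed): the untrusted work runs in a child process and the array travels via shared memory. It works, but it is exactly the kind of ceremony that language-level sandboxing could remove.

    import numpy as np
    from multiprocessing import Process
    from multiprocessing.shared_memory import SharedMemory

    def worker(shm_name, shape, dtype):
        # In a real setup this process would be the sandboxed one, and the only
        # place the untrusted library ever gets imported.
        shm = SharedMemory(name=shm_name)
        arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
        arr *= 2                      # stand-in for the untrusted library's work
        shm.close()

    if __name__ == "__main__":
        data = np.arange(10, dtype=np.float64)
        shm = SharedMemory(create=True, size=data.nbytes)
        view = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
        view[:] = data
        p = Process(target=worker, args=(shm.name, data.shape, data.dtype))
        p.start(); p.join()
        print(view)                   # values doubled by the isolated worker
        shm.close(); shm.unlink()     # (some Python versions print a harmless
                                      #  resource_tracker warning here)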
The problems you mentioned resonated a lot with me and why I'm building it, any interest in working to solve that together?: https://github.com/smol-machines/smolvm
- A security-focused project should NOT default to training people to install by piping to bash. If I try previewing the install script in the browser it forces a download instead of showing it as plain text. The first thing I see is an argument
# --prefix DIR Install to DIR (default: ~/.smolvm)
that the script later uses in an `rm -rf` of a lib folder. So if I accidentally pick a directory that contains ANY lib folder, it will be deleted.
- I'm not sure what the comparison to colima with krunkit machines amounts to, except that you don't use VM images; how this works or why it is better is not 100% clear.
- Just a minor thing, but people don't have much attention; I saw AWS and fly.io in the description and nearly closed the project. It needs to be easier to see that this is a local sandbox built on libkrun, NOT a wrapper for a remote sandbox like so many of the projects out there.
Will try reaching you on some channel; would love to collaborate, especially on devX. I would be very interested in something more reliable and a bit more lightweight in place of colima once libkrun can fully replace vz.
1. In comparison with colima with krunkit, I ship smolvm with custom built kernel + rootfs, with a focus on the virtual machine as opposed to running containers (though I enable running containers inside it).
The customizations are also opensource here: https://github.com/smol-machines/libkrunfw
2. Good call on that description!
I've reached out to you on linkedin
It can be dedicated to a single service (or a full OS), runs a real BSD kernel, and provides strong isolation.
Overall, it fits into the "VM is the new container" vision.
Disclaimer: I'm following iMil (the developer of smolBSD and a contributor to NetBSD) through his twitch streams, and I truly love what he is doing. I haven't actually used smolBSD in production myself since I don't have a need for it (but I participated in his live streams by installing and running his previews), and my answer might be somewhat off-topic.
More here <https://hn.algolia.com/?q=smolbsd>
At a glance, it's a matter of compatibility: most software has first-class support for Linux. But this is very interesting work and I'm going to follow it closely.
I worked at AWS, specifically with Firecracker in the container space, for 4 years; we had a very long onboarding doc just to develop on Firecracker for containers... so I made sure to focus on ease of use here.
Also, I typically only update dependencies when either an exploit is known in the current version or I need a feature present in a later version - and even then not to the absolute latest version if possible. I do this for all my projects under the many-eyes principle. Finding exploits takes time, and new updates are riskier than slightly-stale versions.
Though, if I'm filing a bug with a project, I do test and file against the latest version.
No free lunch. LLMs are capable of writing exploitable code and you don’t get notifications (in the eg Dependabot sense, though it has its own problems) without audits.
On a personal note, I have been developing and talking to a clanker ( runs inside ) to get my day to day work done. I can have multiple instances of my project using worktrees, have them share some common dependencies and monitor all of them in one place. I plan to opensource this framework soon.
Making this work on a per-library level … seems a lot harder. The cost for being very paranoid is a lot of processes right now.
(Yes, they should better configure which CI job has which permissions, but this should be the default or it won't always happen)
But mention that on HN and watch it get downvoted into oblivion: the war against general computation, walled gardens, devices locked down against their owners...
https://ramimac.me/trivy-teampcp/#phase-09
I've been through SOC2 certifications in a few jobs and I'm not sure it makes you bullet proof, although maybe there's something I'm missing?
I would expect better spam detection system from GitHub. This is hardly acceptable.
Worked like a charm, much appreciated.
This was the answer I was looking for.
Thanks, that helped!
Thanks for the tip!
Great explanation, thanks for sharing.
This was the answer I was looking for.
It just can't get so spammy that advertisers leave the platform, and I think they sort of succeed in staying below that line.
Think about it: if Facebook shows you AI-slop ragebait or rage-inducing comments from bots designed to farm attention (or for malicious purposes in general), and you fall for it and engage, which lets it show you ads, does it really have an incentive to take a stance against that kind of spam?
Configure the CI to make a release with the artefacts attached. Then have an entirely private repo that can't be triggered automatically as the publisher. The publisher repo fetches the artefacts and does the pypi/npm/whatever release.
Some will even audit each package in there (kind of a crap job, but it works fairly well as mitigation).
However, the broader idea of supply chain attacks remains challenging and AI doesn’t really matter in terms of how you should treat it. For example, the xz-utils back door in the build system to attack OpenSSH on many popular distros that patched it to depend on systemd predates AI and that’s just the attack we know about because it was caught. Maybe AI helps with scale of such attacks but I haven’t heard anyone propose any kind of solution that would actually improve reliability and robustness of everything.
[1] Fully Countering Trusting Trust through Diverse Double-Compiling https://arxiv.org/abs/1004.5534
The kernel is not just open source, it's a very fast-moving codebase. That's how we win all wars against AI-authored exploits. While the LLM trains on our internal APIs, we change the APIs — by hand. When the agent finally submits its pull request, it gets lost in unfamiliar header files and falls into a state of complete non-compilability. That is the point. That is our strategy.
1 - https://en.wikipedia.org/wiki/Demobbed_(2000_film)
The possibilities for a capable threat actor could be catastrophic, especially if we assume nation-states are interested in sponsoring hacking attacks (which many already do) to attack enemy nations or gain leverage. We are looking at damage in the trillions at that point.
But I would assume Linux is safe for now; it might be the most looked-at code there is.
LLVM might be a bit more interesting, as something there might go a little more unnoticed, but hopefully the people working on LLVM are funded well enough to look at everything carefully and not have such a slip-up.
Basically it forkbombed `grep -r rpcuser\rpcpassword` processes trying to find crypto wallets or something. I saw that they spawned from harness, and killed it.
Got lucky; no backdoor installed here, from what I could make out of the binary.
how do you do that? have Activity Monitor up at all times?
Do the labs label code versions with an associated CVE to label them as compromised (telling the model what NOT to do)? Do they do adversarial RL environments to teach what's good/bad? I'm very curious since it's inevitable some pwned code ends up as training data no matter what.
I assume most labs don't do anything to deal with this, and just hope that it gets trained out because better code should be better rewarded in theory?
https://news.ycombinator.com/item?id=47475888
Run all your new dependencies through static analysis and don't install the latest versions.
I implemented static analysis for Python that detects close to 90% of such injections.
https://github.com/rushter/hexora
This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.
> ### Software Supply Chain is a Pain in the A*
> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically
AI is not about fancy models, it is about plain old software engineering. I strongly advised our team of "not-so-senior" devs not to use LiteLLM or LangChain or anything like that and just stick to `requests.post('...')`.
[0] https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-soft...
LiteLLM wouldn't be my top choice, because it installs a lot of extra stuff. https://news.ycombinator.com/item?id=43646438 But it's quite popular.
Since they all seem positive, it doesn't seem like an attack but I thought the general etiquette for github issues was to use the emoji reactions to show support so the comment thread only contains substantive comments.
> It also seems that attacker is trying to stifle the discussion by spamming this with hundreds of comments. I recommend talking on hackernews if that might be the case.
[1]: https://pypi.org/project/litellm/#history
https://inspector.pypi.io/project/litellm/1.82.8/packages/fd...
it does a lot of CPU intensive work
We are looking at similar attack vectors (pth injection), signatures etc. in other PyPI packages that we know of.
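For anyone who wants to eyeball their own environments: the `site` module executes any line in a site-packages `.pth` file that starts with `import `, which is what makes a dropped `litellm_init.pth` run on every interpreter start. A small sketch that lists such lines in the active environment (some legitimate packages also ship import lines, so treat this as a review aid, not a verdict):

    import site
    import sysconfig
    from pathlib import Path

    candidates = set(site.getsitepackages())
    candidates.add(site.getusersitepackages())
    candidates.add(sysconfig.get_paths()["purelib"])

    for sp in candidates:
        sp = Path(sp)
        if not sp.is_dir():
            continue
        for pth in sp.glob("*.pth"):
            for lineno, line in enumerate(pth.read_text(errors="replace").splitlines(), 1):
                if line.startswith(("import ", "import\t")):
                    print(f"{pth}:{lineno}: {line[:120]}")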
Now, I am not worried about the AI API keys doing much damage, but thinking one step further (and I am not sure how many of these corporations follow their privacy policies, so perhaps someone more experienced can tell me): wouldn't these applications keep logs for legal purposes, and couldn't those logs contain sensitive information about businesses, and perhaps private individuals too?
Irrevocable transfers... What could go wrong?
It would be interesting if Python, NPM, Rubygems, etc all just decided to initiate an ecosystem-wide credential reset. On one hand, it would be highly disruptive. On the other hand, it would probably stop the damage from spreading.
But one of the arguments I saw online about this is that when a security researcher finds a bug and reports it to the OSS project/company, the fix often lands silently in a new version, and only after some time is the information made public.
So if you run infrequently updated versions, you also run the risk of leaving known holes open to attackers.
(A good example I can think of is OpenCode, which had an issue that could allow RCE. The security research team contacted OpenCode privately, but after some time with no response they disclosed it publicly, and OpenCode quickly made a patch. If you were still running the older code, you would've been vulnerable to RCE.)
I guess I am lucky as I have watchtower automatically update all my containers to the latest image every morning if there are new versions.
I also just added it to my homelab this Sunday, I guess that's good timing haha.
EDIT: here's what I did, would appreciate some sanity checking from someone who's more familiar with Python than I am, it's not my language of choice.
find / -name "litellm_init.pth" -type f 2>/dev/null
find / -path '*/litellm-1.82.*.dist-info/METADATA' -exec grep -l 'Version: 1.82.[78]' {} \; 2>/dev/null
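A quicker cross-check from inside each Python environment (a sketch assuming you run it with that environment's interpreter; the two bad versions are the ones pulled from PyPI):

```python
# Report the installed litellm version and flag the known-bad releases.
from importlib.metadata import PackageNotFoundError, version

try:
    v = version("litellm")
except PackageNotFoundError:
    print("litellm is not installed in this environment")
else:
    bad = {"1.82.7", "1.82.8"}
    print(f"litellm {v}: {'KNOWN-BAD VERSION' if v in bad else 'not a known-bad version'}")
```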
No, I don't let it connect to the web...
The package was directly compromised, not “by supply chain attack”.
If you use the compromised package, your supply chain is compromised.
Basically, have all releases require multi-factor auth from more than one person before they go live.
A single person being compromised, either technically or by being hit on the head with a wrench, should not be able to release something malicious that affects so many people.
Though, the secondary doesn't necessarily have to be a maintainer or even a contributor on the project. It just needs to be someone else to do a sanity check, to make sure it is an actual release.
Heck, I would even say that as the project grows in popularity, the number of people required to approve a release should go up.
How do I even know who to trust, and what prevents two people from conspiring together with a long con? Sounds great on the surface but I'm not sure you've thought it through.
As far as who to trust, I could imagine the maintainers of different high-level projects helping each other out in this way.
Though, if you really must allow a single user to publish releases to the masses using existing shared social infrastructure, then you could mitigate this type of attack by adding a time delay, with the ability for users to flag a release. So instead of it immediately going live, add a release date, and maybe even force them to announce the release date on an external system as well. The downside with that approach is that it would also limit the ability to push out fixes quickly.
But I think I am OK with saying if you're a solo developer, you need to bring someone else on board or host your builds yourself.
Domains might get added to a blocklist used by things like 1.1.1.2, but as you can imagine that has much smaller coverage; not everyone uses something like that in their DNS infra.
If you have tips, I am sure they are welcome. Snarky remarks are useless; don't be a sourpuss. If you know better, help the remediation effort.
This would also disable site imports, so it's not viable generically for everyone without testing.
https://github.com/krrishdholakia/blockchain/commit/556f2db3...
It's pretty disappointing that safetensors has existed for multiple years now but people are still distributing pth files. Yes it requires more code to handle the loading and saving of models, but you'd think it would be worth it to avoid situations like this.
That's why I'm building https://github.com/kstenerud/yoloai
First Trivy (which got compromised twice), now LiteLLM.
https://github.com/BerriAI/litellm/issues/24512#issuecomment...
[1]: https://pypi.org/project/litellm/
The Python ecosystem provides too many nooks and crannies for malware to hide in.
The previous version triggers on `import litellm.proxy`
Again, all according to the issue OP.
[1] https://docs.python.org/3/library/site.html
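For anyone unfamiliar with the mechanism [1] describes: any line in a `.pth` file under site-packages that starts with `import` is executed by the `site` module at interpreter startup, before your own code runs. A benign sketch (hypothetical file name; run it only in a throwaway virtualenv):

```python
# Benign demo of .pth startup execution -- use a disposable virtualenv.
# Lines in a .pth file that start with "import" are exec'd by the site
# module every time the interpreter starts (unless run with -S).
import sysconfig
from pathlib import Path

site_packages = Path(sysconfig.get_paths()["purelib"])
demo = site_packages / "zzz_demo.pth"  # hypothetical file name
demo.write_text('import sys; sys.stderr.write("ran before your code\\n")\n')
# Every later `python` run in this environment now prints that message
# before any user code executes, whether or not litellm is ever imported.
```

That is why a dropped `.pth` file is such an attractive persistence trick: it doesn't need the victim to import the package at all.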
I'm sensing a pattern here, hmm.
I hope that everyone's course of action will be uninstalling this package permanently, and avoiding the installation of packages similar to this.
To reduce supply chain risk, you need to evaluate not only the vendor (even if it's gratis and open source) but also the advantage it provides.
Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it.
Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.
And even if you weren't using this specific dependency, check your deps; you might have shit like this in your requirements.txt and were merely saved by chance.
An additional note: the dev will probably post a post-mortem, what was learned, how it was fixed, and maybe downplay the thing. Ignore that; the only reasonable step after this is closing the repo, but there's no incentive to do that.
Programming against different LLM APIs is a hassle; this library made it easy by exposing one single API you call, and behind the scenes it handled all the different API calls you need for the different LLM providers.
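Roughly, the appeal was one call shape with the provider selected by the model string, something like this (a sketch based on litellm's documented `completion` helper; model names are illustrative and details may have drifted across versions):

```python
# Sketch of the unified-interface pattern LiteLLM popularised: one call
# signature, with the backend chosen by the model string.
from litellm import completion

messages = [{"role": "user", "content": "Summarise supply chain risk in one line."}]

# Same call shape, different providers behind the scenes.
openai_resp = completion(model="gpt-4o-mini", messages=messages)
claude_resp = completion(model="claude-3-5-sonnet-20240620", messages=messages)

print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```

That convenience is also exactly why a compromise here fans out so widely.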
That's what they pay us for
I'd get it if it were a hassle that could be avoided, but it feels like you are trying to avoid the very work you are being paid for, like an MCD employee trying to pay a kid in Happy Meal toys to work the burger stand.
Another red flag, although a bit more arguable, is that by 'abstracting' the API into a more generic one you achieve vendor neutrality, yes, but you also integrate much more loosely with your vendors, possibly lose unique features (or can only access them through even more 'hassle' custom options), and strategically your end product will veer into commodity territory, which is not a place you usually want to be.
This is like a couple hours of work even without vibe coding tools.
Back in the days of GPT-3.5, LiteLLM was one of the projects that provided a reliable adapter for projects communicating across AI labs' APIs, and when things drifted ever so slightly despite being an "OpenAI-compatible API", LiteLLM made it much easier for developers to use it than to reinvent and debug those nuances themselves.
Nowadays, that gateway of theirs isn't just a funnel for centralizing API calls; it also serves other purposes, like applying guardrails consistently across all connections, tracking key spend on tokens, dispensing keys without having to do so on the main platforms, etc.
There's also more to LiteLLM than just being an inference gateway; it's also a package used by other projects. If you had a project that needed to support multiple endpoints as a fallback, there's a chance LiteLLM is powering that.
Hence, supply chain attack. The GitHub issue literally has mentions from all over other projects, because they're being urged to pin to safe versions since they rely on it.