Three large-scale campaigns have targeted Docker Hub users, planting millions of repositories designed to push malware and phishing sites since early 2021.
Given the number of posts I’ve seen online about adding PPAs or RPM repos, or installing things from source, I’d say that number is a lot higher than 0.
Docker contains that nonsense in a way that’s easy to update. And it allows developers to test production dependencies on a system without those production libs.
something we call a package manager.
Package managers don’t provide a sandbox. You can get one with SELinux, but that requires proper configuration, and then we’re back to the original issue.
Docker provides a decent default configuration. So I think the average user who doesn’t run updates consistently, may add sketchy dependencies, and doesn’t audit things would be better off with Docker.
So yeah, you may have more vulnerabilities with Docker, but they’re less likely to cause widespread issues since each is in its own sandbox. And due to the way it’s designed, it’s often easier to do things “the right way” (separate concerns) than the wrong way (relax security until it works).
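For illustration, here’s roughly what I mean by “contains that nonsense”: the sketchy tool runs against a throwaway filesystem, with no network and only the one directory it needs mounted in, instead of being installed straight onto the host. The image name below is just a placeholder.

```
# Minimal sketch: run an untrusted tool in a disposable container.
# "some/sketchy-tool" is a placeholder image, not a real project.
docker run --rm -it \
  --network none \
  -v "$PWD/input:/data:ro" \
  some/sketchy-tool:latest
```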
adding PPAs or RPM repos, or installing things from source, I’d say that number is a lot higher than 0.
Nothing wrong with that. Unlike Docker, that’s a cryptographically protected toolchain/build chain/dependency chain. Thus, a PPA owner is much less likely to get compromised.
Installing things from source in a secure environment is about as safe as you can get, provided you obtain the source securely.
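By “obtaining the source securely” I mean something fairly mechanical, like the following (project name, URLs, and filenames are placeholders):

```
# Hypothetical example: verify a release tarball before building it.
curl -fsSLO https://example.org/releases/example-1.2.3.tar.gz
curl -fsSLO https://example.org/releases/example-1.2.3.tar.gz.asc
curl -fsSLO https://example.org/releases/SHA256SUMS
sha256sum --ignore-missing -c SHA256SUMS                      # matches what upstream published?
gpg --verify example-1.2.3.tar.gz.asc example-1.2.3.tar.gz    # signed by the maintainer's key?
tar xf example-1.2.3.tar.gz                                   # then build as usual
```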
Docker contains that nonsense in a way that’s easy to update.
Really? Is there already a built-in way to update all installed Docker containers?
What’s not easy about apt full-upgrade?
Package managers don’t provide a sandbox.
I didn’t say that.
average user who doesn’t run updates consistently, may add sketchy dependencies, and doesn’t audit things would be better off with Docker.
That’s false.
but they’re less likely to cause widespread issues since each is in its own sandbox.
Also false. Sandbox escape is very easy, and the next local privilege-escalation (PE) kernel vulnerability is only weeks away. VM escape is also a thing.
Basically one compromised container giving local execution is enough to pwn your complete host.
Thus, a PPA owner is much less likely to get compromised.
Again, I have yet to see evidence of a Docker repo being compromised.
Repos are almost never compromised. Most issues are from users making things insecure because it’s easier that way. As in:
- opening too many ports
- enabling passwordless sudo
- removing SELinux rules wholesale instead of adjusting just the parts they need
And so on. Docker makes it really easy to throw away the entire VPS and redeploy, which means those bad configs are less likely to persist in production.
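For example, a redeploy is basically three commands against a known-good compose file, so “throw it away and start clean” ends up being less work than hand-patching a live box (assuming the stack is defined in a docker-compose.yml and the images come from a registry):

```
# Rough sketch of the throw-away-and-redeploy flow.
docker compose down     # stop and remove the old containers
docker compose pull     # fetch the current images from the registry
docker compose up -d    # recreate the whole stack from the compose file
```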
Installing things from source in a secure environment is about as safe as you can get, provided you obtain the source securely.
Sure, but that’s only true at the moment of release.
Source-compiled packages rarely get updated, at least in my experience. Putting those source builds in a Docker container improves the odds, though: it’s really easy to roll back if something goes wrong, and if you upgrade the base image, the source gets rebuilt against the new libraries, even if it’s statically compiled.
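A rough sketch of that flow (the image names and date tags are made up, not an actual setup):

```
# --pull refreshes the base image, so the source gets recompiled against
# the updated base even though nothing in the Dockerfile changed.
docker build --pull -t myapp:2024-06-02 .
docker tag myapp:2024-06-02 myapp:latest
docker compose up -d myapp

# Rolling back is just pointing :latest at the previous build:
# docker tag myapp:2024-06-01 myapp:latest && docker compose up -d myapp
```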
If you’re talking about repository packages built from source (I’m not, I’m talking about side-loading packages), that’s not really a thing anymore with reproducible builds.
Is there already a built-in way to update all installed Docker containers?
In our CI pipeline, yes: I just run a “rebuild all” script and it’s ready for the next deploy to production. The script run could even be totally automated (I’m currently fighting our DevOps to enable it). We pin images with the “:<version>-<codename>” tag pattern, so worst case we bump from “bullseye” to “bookworm” or whatever separately from the version and ship that. The total process is like 20 seconds per repo (edit the Dockerfile or docker-compose.yml, make a PR, then ship), plus whatever time it takes to build and deploy.
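To be clear, the script itself is nothing clever; hypothetically it’s just this shape, where the registry name and repo layout are placeholders:

```
#!/bin/sh
# Sketch of a "rebuild all" script: rebuild every service image against a
# refreshed base and push it, ready for the next deploy.
set -eu
REGISTRY=registry.example.com   # placeholder
tag=$(date +%Y%m%d)
for dir in services/*/ ; do
  name=$(basename "$dir")
  docker build --pull -t "$REGISTRY/$name:$tag" "$dir"
  docker push "$REGISTRY/$name:$tag"
done
```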
But I don’t necessarily want to upgrade everything. If I upgrade the OS version, I need to check that each process I use is compatible, so I’m more likely to delay doing it. With Docker, I can update part of my stack and it won’t impact anything else. At work, we have dozens of containers, all at various stages of updates (process A is blocked by dependency X, B is blocked by Y, etc), so we’re able to get most of them updated independently and let the others lag until we can fix whatever the issues are.
When installing to the host, I’d need to do all of it at once, which means I’m more likely to stay behind on updates.
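Concretely, bumping one piece of the stack while everything else keeps running looks something like this (“api” is a placeholder service name from a compose file):

```
docker compose pull api              # fetch just that service's new image
docker compose up -d --no-deps api   # recreate only that container
```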
That’s false.
Source? I’m speaking from ~15 years of experience, and I can say that servers rarely see updates. Maybe it happens in larger firms, but not in smaller shops. But then, larger firms can also run security audits of Docker images and whatnot.
Basically one compromised container giving local execution is enough to pwn your complete host.
Maybe. That depends on whether the attacker is able to get out of the sandbox. If the vulnerability were in a process running directly on the host, there’d be no sandbox to escape in the first place.
So while your Docker containers will probably lag a bit, they come with a second layer of protection. If your host lags a bit, there’s probably no second layer of protection.
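To illustrate what that second layer looks like: Docker already applies a default seccomp profile and drops most capabilities, and you can opt into tightening things further. The image name below is a placeholder.

```
# A few of the opt-in hardening flags on top of Docker's defaults.
docker run -d \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges:true \
  --pids-limit=256 \
  some/app:latest
```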
*ouch*