I’ll start:
When I was first learning to use Docker, I didn’t realize that most tutorials that include a database don’t configure the database to persist. Imagine my surprise when I couldn’t figure out why the database kept getting wiped!
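For context, persistence comes down to mounting a named volume at the database's data directory. A minimal compose sketch (the service name, image, and password here are placeholders):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a secret in practice
    volumes:
      # Named volume: survives container removal and rebuilds. Without it,
      # the data lives in the container's writable layer and is discarded
      # when the container is deleted.
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Note that `docker compose down -v` (or pruning volumes) will still delete the named volume.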
- Docker swarm does not respect its own compose spec, exposes services on all interfaces and bypasses firewall rules [1], [2]
- 1 million SLOC daemon running as root [1]
- Buggy network implementation, sometimes requires restarting the daemon to release bridges [1]
- Requires frequent rebuilds to keep up to date with security patches [1] [2] [3]
- No proper support for external config files/templating, not possible to do graceful reloads, requires full restarts/downtime for simple configuration changes [1]
- Buggy NAT implementation [1]
- Buggy overlay network implementation, causes TCP resets [1]
- No support for PID limits/fork bomb protection [1], no support for I/O limits [2]
- No sane/safe garbage collection mechanism: `docker system prune --all` deletes all unused volumes, including named volumes that are "unused" only because the container/swarm service using them happens to be stopped at that moment. Eats disk space like mad [1] [2]
- Requires heavy tooling if you’re serious about it (CI, container scanning tools, highly-available registry…) [1]
- Docker development and infrastructure is fully controlled by Docker Inc. [1] [2] [3] [4] [5] [6]
The biggest footgun I encounter every time I set up a Raspberry Pi or other Linux host for a side project is forgetting that Docker doesn’t rotate containers’ logs by default, which results in the service going down and a sweat-inducing ENOSPC error when you SSH in to check it out.
You can configure this by creating `/etc/docker/daemon.json` and either setting up log rotation with `log-opts` or using the `local` logging driver (the default is `json-file`) if you’re not shipping container logs anywhere and just read them locally. The `local` driver compresses logs and rotates them automatically:

```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
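One caveat: the log driver is fixed when a container is created, so `daemon.json` changes only apply to containers created after the daemon restart. If you use compose, the same rotation settings can also be set per service (the service name and image here are placeholders):

```yaml
services:
  web:
    image: nginx   # placeholder
    logging:
      driver: local            # same driver as the daemon-wide setting
      options:
        max-size: "10m"        # rotate after 10 MB
        max-file: "3"          # keep at most 3 rotated files
```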
Using Docker Desktop at work without a license. Use Rancher Desktop instead. It’s essentially like what Oracle did with the Oracle JDK: to my knowledge they haven’t gone after anyone, but it is technically a license violation to use it for work without a license. I could not (easily) find a way to install Docker on a Mac without using Docker Desktop, but Rancher Desktop worked fine.
Also, podman exists as a drop-in replacement for Docker, for the curious. I haven’t tried it myself though, so this isn’t a recommendation.
I can vouch for podman. It can run daemonless and rootless, it can symlink to docker.sock, and the UI works with both Kubernetes (kind & minikube) and most of the Docker Desktop extensions.
Podman is great and is now compatible with the Docker engine API. Having rootless containers by default is awesome! There’s also a utility called podman-compose that I highly recommend.
Interesting — coming from the Linux world where docker is an ‘apt install’ away, I struggled with docker installation on Mac and settled on their client because of various “gotchas” I saw online. And even then got pissed because the client overwrote my local ‘kubectl’ bin.
Guess I’ll have to reevaluate.
Podman is just as easy to install, though admittedly they give far more support for Ubuntu and Fedora than other platforms (unfortunately). But once you’ve switched, you won’t go back; it really is a ‘seamless’ transition, and you can use the same Dockerfiles and docker-compose files with it.
I still don’t really know how to get mounted folders not to wreck permissions on the files. You can use a non-root user, but that requires users to have UID 1000 when you distribute an image.
The closest thing I’ve found is to let users specify the UID and GID to run with, but unfortunately there’s no good way to auto-detect that at runtime.
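One workaround sketch for the detection part: have the entrypoint `stat` the bind mount and drop privileges to whoever owns it. The `/data` mount path and the `su-exec` helper are assumptions here (gosu works the same way):

```shell
#!/bin/sh
# Entrypoint sketch: detect who owns the bind-mounted directory, then
# drop privileges to that UID:GID so files written back to the host
# keep the host user's ownership.

detect_owner() {
    # GNU stat: print the numeric UID:GID owning the given path.
    stat -c '%u:%g' "$1"
}

# "/data" would be the container-side mount path; a real entrypoint
# would finish with something like:
#   OWNER="$(detect_owner /data)"
#   exec su-exec "$OWNER" "$@"    # su-exec (Alpine) or gosu (Debian)
#
# Demo against a directory that exists everywhere:
OWNER="$(detect_owner /tmp)"
echo "detected owner $OWNER"
```

This keeps the image itself root-agnostic: no baked-in UID 1000 assumption, at the cost of starting the entrypoint as root long enough to switch users.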