I’ve had an Ubuntu 22.04 setup going for around a year, and over that year I’ve had to increase the size of the partition holding my /var folder multiple times. I’m now up to 20GB and running into problems again, mainly when installing new apps, because that partition is nearly full once more. I’ve used `sudo apt clean` and `sudo journalctl --vacuum-size=500` to temporarily clear up some space, but it doesn’t take long to fill back up, and these get less effective over time, until I have no choice but to expand the partition again.
Am I doing something wrong? Is it normal to need 20GB+ for var? Is there a way to safely reclaim space I don’t know about?
If you suspect that the issue is `journald`, you can use the following command to check how much space it is using:

```
journalctl --disk-usage
```
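It prints a one-line summary; the figure below is purely illustrative, not output from the OP’s machine:

```
$ journalctl --disk-usage
Archived and active journals take up 3.8G in the file system.
```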
Rather than periodically running `journalctl --vacuum-size=500` to free up space, you can just limit the journal by adding the following to a new file such as `/etc/systemd/journald.conf.d/size.conf`:

```
[Journal]
SystemMaxUse=512M
```
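journald only picks up the new limit after a restart, so follow up with (standard systemd unit name):

```
sudo systemctl restart systemd-journald
```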
This will limit the journal from using more than 512MB. That said, if `journald` is filling up fast, then something is spamming your logs, and you could run `journalctl -a -f` to get a sense of what is being written to them.
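If `-a -f` is too much of a firehose, the journal can also be filtered by boot and priority (standard `journalctl` flags):

```
journalctl -f             # follow new messages as they arrive
journalctl -p warning -b  # only warnings and worse, from the current boot
```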
We can’t really help until you tell us what’s using the space. Could be databases, could be docker, could be logs.
I use… `du -xh --max-depth 2 /var/ | sort -h`
Install `ncdu`, then:

```
sudo ncdu -x /var
```

It’ll tell you what is taking up space; then, if you tell us, we can help you identify how to minimize it and keep it low.

You can use `du -sh` to figure out what’s using most of the space. Something along the lines of:

```
sudo -i
du -sh /home /usr /var
du -sh /var/*
du -sh /var/log/*
# etc
```
If it’s one of your log files (likely), you can run something like `tail -n 100 /var/log/[culprit]` or `tail -F /var/log/[culprit]` to see what exactly is flooding that log file. Then you can try to fix it.

There’s a way to figure out what is responsible for using up all of that space. A couple of ways, really. Here’s the one I use, though:

```
du -s -h -x /path/to/ | sort -h -r | head -n 10
```
- `du`
  - `-s` - display only a total for each argument
  - `-h` - human-readable values
  - `-x` - do not cross file systems (in case you have another file system mounted under `/var`, which’ll complicate figuring out what’s in there for this purpose)
- `sort`
  - `-h` - compare human-readable numbers (e.g., 1G, 2T)
  - `-r` - reverse sort (biggest first)
- `head`
  - `-n 10` - first ten lines or less
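As a concrete instance of that pipeline for the OP’s case (the path is just an example):

```
sudo du -shx /var/* | sort -hr | head -n 10
```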
`ncdu` makes it even easier if you want to interactively browse through folders to see which files exactly are eating up space.
There’s probably some program that’s filling up `/var/log`. Check that directory.
If I had to guess, I’d say it’s probably snaps. I’ve had the same issue, and they’ve slowly been taking up more and more of my space, often with new GNOME snaps being installed but the old ones not removed.

Try `snap list` to see what’s installed as a snap.
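If stale revisions are the culprit, snapd keeps old disabled revisions around after refreshes. A minimal cleanup sketch, assuming a standard snapd install (check the listing before removing anything; `refresh.retain` is snapd’s setting for how many revisions to keep, minimum 2):

```
# Keep only 2 revisions of each snap going forward
sudo snap set system refresh.retain=2

# List the disabled (old) revisions, then remove each one explicitly
snap list --all | awk '/disabled/ {print $1, $3}' |
  while read name rev; do
    sudo snap remove "$name" --revision="$rev"
  done
```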
See what’s using the space. This will list any dirs using >100MiB:

```
sudo du -h -d 5 -t 100M /var
```
If vacuuming the journal helps, something is spamming the logs. That can be something silly (a GNOME extension managed to fill up the logs for me once), but it can also be a hard drive or motherboard on the verge of death throwing tons of errors everywhere.

If it’s a log file, find it and see what’s being spammed. You may need to check the connections inside your PC or start saving up for replacement hardware.
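If hardware is the suspect, two quick checks (assumes the `smartmontools` package for `smartctl`; swap `/dev/sda` for your actual drive):

```
journalctl -k -p err -b    # kernel-level errors from the current boot
sudo smartctl -a /dev/sda  # SMART health report for the drive
```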
It’s also possible that you use docker. Docker works by downloading entire copies of OS file systems (the “it works on my machine, so we’re shipping my machine” approach of software deployment). A few updates can easily eat up gigabytes of space.
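If that’s the case here, `docker system df` (a standard Docker CLI command) shows how much space images, containers, and volumes are actually taking:

```
docker system df      # space used by images, containers, local volumes, build cache
docker system df -v   # verbose per-image / per-container breakdown
```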
I’ll occasionally run `docker system prune` once I’ve verified that all my containers are running; that gets rid of most of the excess.

Do you know what takes up the space? Something like `gdu` or `ncdu` will help you analyze the problem.
I had the same problem on my PC. Journald was spamming PCI errors all the time and the disk was filling up quite quickly. I ended up disabling journald and rsyslogd, and the problem was fixed. You can delete the log files if you find them.
> and the problem was fixed
Umm…
This is how my friend fixed her check engine light. Just put the official Car Talk electrical tape over it and problem solved.
And the problem was fixed. FIXED. No more error logs.
Are you using docker on BTRFS?
Docker makes use of BTRFS snapshots, but it snapshots the whole volume. That means as other programs delete or rewrite files, the old copies still exist in the snapshot. I’ve ended up putting `/var/lib/docker` on its own filesystem.

What is the filesystem type? If it’s btrfs, maybe it’s the snapshots.
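To answer that, the filesystem type backing /var is a one-liner away (`findmnt` and `df -T` are standard util-linux/coreutils tools):

```
findmnt -no FSTYPE /var   # prints just the type, e.g. ext4 or btrfs
df -T /var                # same information with the full mount line
```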
20GB for a Linux workstation is embarrassingly tiny in 2023… Hell, it’s barely passable for a phone.