Alt Text
A screenshot of a file manager preview window for my ~/.cache folder, which takes up 164.3 GiB and has 246,049 files and 15,126 folders. The folder was first created about 1.75 years ago with my system
You don’t have to clean your ~/.cache every now and then. You have to figure out which program eats so much space there, ensure that it is not misconfigured, and file a bug report.
So OP’s headline should instead say:
Reminder to CHECK your ~/.cache folder every now and then
just symlink ~/.cache to /dev/null
Lmao, some malicious-ass advice here
Cache exists for a reason, and that sounds like it’d break programs. A safer method is probably having it be a ramdisk.
Check? Why?
% du -sh ~/.cache
1,6G	/home/bizdelnick/.cache
I don’t remember if I ever cleaned it up. Probably a couple of years ago when I moved my old HDD to a new PC with a freshly installed OS. It does not grow accidentally, only in some very rare cases, as do some other dirs under ~ and /var. If it is a critical system, set up monitoring of free filesystem space. If not, you will notice if it becomes full (I can’t remember when this last happened to me, maybe ~15 years ago when some log file started to grow because of endless error messages).
Because some users experienced accidental growth like OP’s 160 GByte. So general advice for Linux users can be stated as: check your ~/.cache every now and then
Critical systems/servers are better monitored as you suggest.
Some users experienced accidental growth of /var/log. Some users experienced accidental growth of /var/cache. Some users experienced accidental growth of /var/lib. Some users experienced accidental growth of ~/.xsession-errors. Shall I continue?
Does every user need to begin their day checking all those places? No, they do not. It is a waste of time; such situations are extremely rare. If you are paranoid, check df to see if you have enough free space, and only if it has unpredictably shrunk start investigating which directory has grown.
I don’t get your point. Why should somebody do this every day?
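As a rough sketch of that kind of check (plain GNU coreutils, paths as examples):

df -h ~                                   # free space on the filesystem holding your home
du -xh --max-depth=1 ~ | sort -h | tail   # if it shrank unexpectedly, see which top-level dir grew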
Judging by the experience of other users in this thread, it seems not extremely rare to have an overgrown ~/.cache/ folder. So checking it from time to time is good advice. If we all do this for a while and create bug tickets for software that is not cleaning up, then this problem will hopefully go away with future software releases.
Why should somebody do this every day?
Why should somebody do this ever?
Judging by the experience of other users in this thread, it seems not extremely rare to have an overgrown ~/.cache/ folder.
It is the first thread about an overgrown ~/.cache directory I have seen since I started using Linux (~16 years or so). But, as I wrote above, this sometimes (rarely) happens with log files and some other directories. Checking each of them is a waste of time if not automated; checking just one or a few of them makes sense only if you are testing some app and looking for the files it creates.
That’s not very cache money of you
I did this and now my games have no icons in Lutris, some of my GNOME settings got reset and my Proton Mail bridge stopped working
Time to write some bug reports. ~/.cache is supposed to be disposable.
So the apps are broken. Cache is meant to be deleted at any time
not necessarily during runtime
But a restart of an app should fix it.
For some reason devs can’t wrap their head around cache being temporary.
You shouldn’t have done that, Dave.
Couldn’t this be caused by deleting the folder itself and not just everything inside it?
It’s likely. mkdir fails to create a subdirectory such as ~/.cache/mozilla/ if ~/.cache/ doesn’t exist, unless -p is explicitly passed to mkdir. Of course, not everything is a shell script, but I imagine the directory creation functions in many languages work similarly.
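A quick illustration of that behaviour, assuming ~/.cache was removed entirely:

mkdir ~/.cache/mozilla      # fails with "No such file or directory" because the parent is missing
mkdir -p ~/.cache/mozilla   # succeeds: -p creates ~/.cache and then the subdirectory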
The contents were deleted
Even better: mount ~/.cache as ramfs. It will also speed up some apps significantly.
I always felt that there should be some user directory like /tmp/ which would be wiped regularly.
/run/ contains such a directory
/tmp and /var/tmp are writable by regular users on most distributions
Because I have RAM to spare, I symlink ~/.cache to /tmp. Additionally, installing zramswap helps in this scenario. The benefits are faster access, automatic purging between reboots, and no wear on the NVMe drive.
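A minimal sketch of that symlink setup (the per-user subdirectory name is my own addition, to avoid clashing with other files in /tmp):

mkdir -p "/tmp/cache-$USER"        # private staging dir; /tmp is tmpfs-backed on many distros
rm -rf ~/.cache                    # drop the on-disk cache
ln -s "/tmp/cache-$USER" ~/.cache  # cache writes now land in /tmp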
Yes, this is a single user scenario.
Isn’t most of what’s in there just files downloaded from the internet? Python packages, browser cache, etc.? Doesn’t that condemn your system to redownloading everything all the time?
This seems like a filename conflict waiting to happen. Why not just mount a tmpfs there?
Like I said, it’s a cheap solution for a single-user system. Ofc tmpfs would be better, but it has to be done for every user again.
You: It’s a single-user system
Also you: Tmpfs would have to be done for every user
And a /tmp/ symlink would have to be created for every user too, so I don’t get your point
Tmpfs is just as easy as making a symlink, but without the filename conflicts between files in ~/.cache/ and /tmp/. You just need to add a line to /etc/fstab
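Such an fstab line could look roughly like this (the path, size and uid/gid are placeholders to adapt per user):

tmpfs   /home/userX/.cache   tmpfs   rw,nosuid,nodev,size=2G,mode=0700,uid=1000,gid=1000   0 0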
/usr/local/sbin/adduser.local
One line in there and you can have it append an appropriate /home/userX/.cache tmpfs line to fstab.
Or, maybe a cleaner way, you could make an init/systemd service that, when booting, would run something like
for each dir in /home do
    mount dir/.cache -type tmpfs
done
I’m not at the computer now and I’m too lazy to Google it, so the above is just pseudocode and probably won’t run.
Neat, thanks for sharing
Here’s the above pseudocode in bash:
find /home/ -mindepth 1 -maxdepth 1 -type d -exec mount none {}/.cache/ -t tmpfs -o size=16G \;
for doesn’t work here because it uses spaces to delimit strings, which could cause issues with filenames that contain spaces.
You can also create a systemd user service, which is useful if you don’t have root access. The above mount command requires root, but the following doesn’t and is more robust than symlinking to /tmp/:
ln -s "$(mktemp -dp /var/tmp/)" ~/.cache
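A sketch of such a systemd user unit (the unit name is made up; it reuses the mktemp idea above):

# ~/.config/systemd/user/cache-redirect.service
[Unit]
Description=Point ~/.cache at a fresh directory under /var/tmp

[Service]
Type=oneshot
RemainAfterExit=yes
# %h expands to the home directory; $$ passes a literal $ through to the shell
ExecStart=/bin/sh -c 'rm -rf "%h/.cache" && ln -s "$$(mktemp -dp /var/tmp)" "%h/.cache"'

[Install]
WantedBy=default.target

Enable it with systemctl --user enable --now cache-redirect.service.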
Once I get more than 16 GB of RAM I’ll definitely try that
That’s not very secure. /tmp/ is usually 777
I don’t think I’ve ever seen .cache get bigger than 10GB
It looks like yay was storing AUR build files there; that folder took up about 160 of the 164 GiB
You can use yay -Sc to clean the cache. It’ll also ask you if you want to clean the pacman cache, which I’m assuming you also haven’t cleaned (check the size of /var/cache/pacman).
One would just need to modify the pacman cache hook for yay. I’m too lazy tho.
If that is true, it is a bug in yay. Cache should not grow without limit.
You should try using paru, might be better off with it.
It doesn’t matter if you use paru, yay, or heck, even makepkg, if you are compiling packages with hilariously large sources, for example web browsers (librewolf, brave, ungoogled-chromium, firedragon each take around ~30 GB), without pruning the build cache afterwards.
Something I noticed was that in this case it was mostly binary AUR programs taking up the space.
I think it may be because yay/the AUR use cloned git repos: old versions of the binaries get stored in the git history and add up, since different versions of a binary are basically full copies instead of just changes to the source code.
Paru’s cache is huge and you have to delete it manually with something like paru -Sc, I think.
My update script handles mirrors, updates and cleans the cache automatically. I’d definitely recommend creating one. It’s aliased to sysupdate for me and I also check if it’s a debian or arch based distro so the command works on my servers and desktop
What is your update script? Where did you post it?
I don’t think I’ve posted it before, but here it is. If you use different utilities you’d have to swap those out. Also excuse the comments, I had GH Copilot generate this script
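For anyone who wants the general idea, a minimal sketch of such a script might look like the following (reflector, paru and the exact clean-up commands are assumptions about the utilities in use, not the poster’s actual script):

#!/usr/bin/env bash
# sysupdate sketch: detect the package manager, refresh mirrors where
# applicable, update, and clean the caches.
set -euo pipefail

if command -v pacman >/dev/null 2>&1; then
    # Arch-based
    if command -v reflector >/dev/null 2>&1; then
        sudo reflector --latest 20 --sort rate --save /etc/pacman.d/mirrorlist
    fi
    if command -v paru >/dev/null 2>&1; then
        paru -Syu
        paru -Sc --noconfirm          # clean the AUR build and pacman caches
    else
        sudo pacman -Syu
        sudo paccache -rk2            # keep only the two most recent versions
    fi
elif command -v apt-get >/dev/null 2>&1; then
    # Debian-based
    sudo apt-get update
    sudo apt-get -y dist-upgrade
    sudo apt-get -y autoremove
    sudo apt-get clean
else
    echo "unsupported distro" >&2
    exit 1
fi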
I highly recommend topgrade. You can add custom commands so clearing paru’s cache shouldn’t be a problem. I just do it by hand as I’m ok with it.
I’ve heard of tools like that, but this works fine for me. This way I’m not dependent on it being packaged for my distro and having to install it through other means. I’m fine running things manually, this is just for convenience
Shouldn’t it store that stuff in data-home or state-home? Pikaur compiles in cache and stores it in data-home after.
Depends on the distribution and default settings. On Arch, by default, pacman doesn’t delete its cache.
Pacman’s cache isn’t in ~/.cache though, it’s in /var/cache. So whatever is taking up this much space isn’t the package manager.
That being said, I think the Arch devs should add a config option to automatically delete old packages without having to run paccache manually, and have it default to keeping the last 2 versions of a package or so. It can grow quite big over time.
You can set a hook to do it automatically or use this, but I agree that this should be default behaviour
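For reference, such a hook is just a small alpm hook file; a minimal sketch (the file name is an assumption, and paccache comes from pacman-contrib):

# /etc/pacman.d/hooks/paccache.hook
[Trigger]
Operation = Upgrade
Operation = Install
Operation = Remove
Type = Package
Target = *

[Action]
Description = Keeping only the two most recent versions of each cached package...
When = PostTransaction
Exec = /usr/bin/paccache -rk2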
You can also just do
systemctl enable paccache.timer
to automatically run paccache once a week.
Your distro should normally do that for you.
Advising this means people will delete random caches and then just redownload everything all the time.
Are there multiple files in there? If yes, you could add a script that only deletes files of a certain age.
I’m not aware of any distro that automatically clears a user’s .cache in their home directories. Maybe you’re thinking of /var/cache?
No way. If I clean up my .cache directory, my precious sccache-cached Rust deps would be very upset. >:[
Question: could you have cron/crontab do it monthly or something? By monthly I mean delete everything in ~/.cache every month or so?
[This comment has been deleted by an automated system]
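Since the original config is gone, here is a rough stand-in sketch of a systemd user service plus timer for this kind of cleanup (the unit names, the 30-day threshold and the monthly schedule are assumptions, not the deleted comment’s content):

# ~/.config/systemd/user/clean-cache.service
[Unit]
Description=Delete ~/.cache files not accessed in 30 days

[Service]
Type=oneshot
ExecStart=/usr/bin/find %h/.cache -type f -atime +30 -delete

# ~/.config/systemd/user/clean-cache.timer
[Unit]
Description=Monthly ~/.cache cleanup

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target

Enable with systemctl --user enable --now clean-cache.timer.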
This is the good shit I miss from reddit. Thank you for posting a systemd service config, I’m going to implement this.
Thanks for this! I’ve been meaning to start getting into learning more about systemd and making services, this is super detailed and gives me a pretty good starting point!
Don’t. You don’t need to clean it unless the cache of some buggy program grows uncontrollably.
I just found this today. I don’t really know anything about cron jobs, but this will probably incentivize me to learn lol
Did you happen to see which subdirectory was using up this much space? I don’t think I’ve ever seen .cache go above 10GB, so this may be a bug in a piece of software you use.
Running ncdu on it would’ve been cool to see.
Looks like yay is storing every previous binary for AUR bin packages (also excuse the unreadable terminal theme, it doesn’t play very well with a lot of TUI apps unless they support custom theming)
You should run yay -Sc from time to time. This cleans a) your pacman cache (which is normally done by executing pacman -Sc) and b) your AUR build cache, which is what’s taking up the 160 GB. But this one seems rather unusual; I use paru (which also has the command paru -Sc), so I can’t really tell if this is normal with yay. The command also asks you for every directory whether you want to delete it or not, so it’s completely safe to run.
Something I noticed was that it was mostly the binary packages that were taking up so much space. It may be because of how yay stores the programs (does it use git?). The ones that were compiled from source code usually took up the least amount of space, while the binary programs were the ones taking up tens of gigabytes.
Indeed, yay utilizes the AUR, which essentially serves as a Git repository for each package. These repositories typically include a PKGBUILD file and a .SRCINFO file, along with possible additional files like patches, desktop, or service files.
For example, take a look at IntelliJ Ultimate: [https://aur.archlinux.org/cgit/aur.git/tree/?h=intellij-idea-ultimate-edition]. It contains the .SRCINFO and PKGBUILD, as well as a .desktop file. These files themselves do not occupy much space.
The PKGBUILD specifies the sources for dependencies. For instance:
source=("https://download.jetbrains.com/idea/ideaIU-$pkgver.tar.gz" "jetbrains-idea.desktop")
The PKGBUILD is essentially a Bash script with predefined functions and variables. You can learn more about it here: [https://wiki.archlinux.org/title/PKGBUILD].
This script primarily downloads and extracts the tar file. In this specific case, it only relocates the files to their intended installation locations, like moving the desktop file to /usr/share/applications.
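For illustration, a stripped-down PKGBUILD of that shape could look like this (the package name, URL and checksums are made up, not the real IntelliJ one):

pkgname=example-bin
pkgver=1.0
pkgrel=1
pkgdesc="Example prebuilt package repackaged from an upstream tarball"
arch=('x86_64')
url="https://example.com"
license=('custom')
source=("https://example.com/example-$pkgver.tar.gz"
        "example.desktop")
sha256sums=('SKIP'
            'SKIP')

package() {
    # just relocate the extracted files to their install locations
    install -d "$pkgdir/opt/example"
    cp -r "$srcdir/example-$pkgver/." "$pkgdir/opt/example/"
    install -Dm644 "$srcdir/example.desktop" \
        "$pkgdir/usr/share/applications/example.desktop"
}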
With such packages, there’s a possibility of wasting significant space since the tar file is downloaded and possibly retained in the cache.
However, other packages, especially those compiled from source, usually involve Git clones. These clones bring the Git repository into a subdirectory of the already cloned AUR package Git repo. Some might also have source tarballs. These types of packages generally do not consume much space in the cache, as they are often just text files, like C source code or Python scripts. These packages frequently rely on external libraries and packages, which are not included in this package’s cache.
Binary packages, on the other hand, often bundle all necessary libraries and other components in their source tarballs.
The AUR cache is mostly beneficial if you’re rebuilding the same version or can reuse components from a previous version. For example, a package might depend on a large, static file that doesn’t change often.
In Paru, I’ve enabled the “CleanAfter” option to prevent my cache from overflowing. Given my relatively fast internet speed, redownloading large files isn’t a major concern for me.
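For reference, that option goes in paru’s config file, roughly like this (the exact path may vary):

# /etc/paru.conf or ~/.config/paru/paru.conf
[options]
# remove untracked build files after each successful install
CleanAfter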
Wow, I’ve never seen something like this.
Is it" allowed"? I mean, there are quotas for user homes.
Haven’t deleted it yet actually, looks like most of it is from yay
You could have a cronjob run something like
find /home/user/.cache -type f -atime +30 -delete
which would find files that haven’t been accessed in the last 30 days and delete them. Make sure your home partition is not mounted with the noatime option though.
Just mount it into your RAM
You can also set up a cron job to periodically clean the oldest files for you.
Example: @weekly find ~/.cache -type f -mtime +7 -delete
This will delete every file older than 7 days inside your cache folder.
I guess you could also mount a tmpfs on that directory
Doesn’t Steam store the game library there?
No, .cache is similar to a temporary directory (at least in theory): important data isn’t supposed to be stored there, only temporary files that might speed things up (e.g. images in a browser or thumbnails in a file manager). In this case it looks like all of my AUR packages had their source files cached, which added up over the ~1.75 years that I’ve been running this distro
Yep my bad! I mis-remembered .local/share/steam as .cache/share/steam. :)
it stores it in ~/.steam
Ah I was getting it confused. At one point Steam stored everything in ~/.local/share/steam and symlinked ~/.steam to it. Doesn’t appear to be the case on Ubuntu 22.04, though I used to use Debian and grab the .deb from Valve’s website. My bad! :)
Seems like a bug in one of the programs you’re using. Most software automatically manages its cache…
Are you using build-caching tools such as Mozilla sccache? These tend to create 20 GB+ cache directories, especially if used with debug builds…
Yeah, let me go check that…
13,574 files totaling 1.7 GB, not too bad. Hey OP, how do you get to this view? It looks like we both use nautilus, but when I select “properties” on the .cache folder it looks different.
the screenshot does not look like nautilus, maybe xfce?
I use thunar (with ePapirus-Dark icons, which is probably what makes it look like nautilus). I liked nautilus when I used it, but thunar has a bit more functionality that I like.
Ah thanks!
NEVER
du -sh ~/.cache/* | sort -h
ncdu ~/.cache/