Turns out we’re already doing a lot of compression at the file and filesystem level.
Not necessarily. For example, you can’t really compress encrypted files. You can certainly try, but the result will likely be what the meme portrays.
Turns out pseudo-random byte streams don’t really repeat that often.
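A quick way to see this for yourself: feed a compressor some genuinely random bytes (which is what good ciphertext looks like) alongside some repetitive bytes and compare the output sizes. Here’s a minimal sketch using Python’s standard zlib module; the data and sizes are illustrative, not exact:

import os
import zlib

# 1 MiB of OS-provided random bytes -- statistically similar to ciphertext
random_data = os.urandom(1024 * 1024)

# 1 MiB of highly repetitive text, similar to a chatty log file
repetitive_data = b"GET /index.html 200 OK\n" * (1024 * 1024 // 23)

for name, data in [("random", random_data), ("repetitive", repetitive_data)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(data):.1%} of original)")

# Typical outcome: the random buffer comes out slightly *larger* than it went in
# (compression overhead with nothing to gain), while the repetitive buffer
# shrinks to a tiny fraction of its size.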
Well, yeah. The real advantage is only having a single file to transfer, which makes e.g. SFTP a lot less annoying at the command line.
Lossless compression works by storing redundant information more efficiently. If you’ve got 50 GB in a directory, it’s going to be mostly pictures and videos, because that would be an incredible amount of text or source code. Those are already stored with lossy compression, so there’s just not much more you can squeeze out.
I suppose you might have 50 GB of logs, especially if you’ve got a log server for your network? But most modern logging stores in a binary format, since it’s quicker to search and manipulate, and doesn’t use up such a crazy amount of disk space.
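Logs are close to the best case for lossless compression, since most lines share the same structure. A rough illustration with Python’s standard-library codecs on a synthetic log (real ratios will vary with the data):

import bz2
import lzma
import zlib

# Synthetic log: thousands of lines that differ only in a counter and a
# status code, which is roughly what real application logs look like
lines = [
    f"2024-05-01T12:{i % 60:02d}:{i % 60:02d} INFO request id={i} status={200 + (i % 3)}\n"
    for i in range(50_000)
]
log = "".join(lines).encode()

codecs = {
    "zlib": lambda d: zlib.compress(d, level=9),
    "bz2": lambda d: bz2.compress(d, compresslevel=9),
    "lzma": lambda d: lzma.compress(d, preset=9),
}

print(f"original: {len(log)} bytes")
for name, fn in codecs.items():
    out = fn(log)
    print(f"{name}: {len(out)} bytes ({len(out) / len(log):.1%} of original)")

# A repetitive log like this typically shrinks to a few percent of its size;
# a directory of JPEGs or H.264 video run through the same codecs barely changes.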
Haha. I spent so much time looking for an efficient compression algorithm for audio and movies … until I finally understood they’re already compressed X) It did, however, lead me to discover zstd, which is incredible with text:
35M   test.log
801K  test.7z
32    test.log.7z
46    test.log.tar.bz2
45    test.log.tar.gz
108   test.log.tar.xz
22    test.log.tar.zst
2,1M  test.tar.bz2
8,1M  test.tar.gz
724K  test.tar.xz
1,1M  test.tar.zst
8,1M  test.zip

I don’t remember him crushing a tank. Is this an edit, or did they release another Watchmen movie?
It’s from the 2009 movie, during the montage where he is on Mars and is recounting to himself how he got his powers and his relationship with Janey. Timestamp is 1:10:08.