Zstandard (ZSTD) is a relatively new compression method introduced to ZFS.
Below are test results from a Proxmox host running kernel 5.4.106-1-pve. The pool consisted of 8 drives in a RAID-Z3 configuration.
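As a rough sketch of that layout (the pool name and device paths are assumptions, not taken from this post), such a pool could be created like this:

```
# 8 drives in a single RAID-Z3 vdev; in practice /dev/disk/by-id paths are preferable.
zpool create tank raidz3 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    /dev/sde /dev/sdf /dev/sdg /dev/sdh
zpool status tank
```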
The comparison is between the default compression=on
(presumably LZ4) and compression=zstd.
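For reference, a minimal sketch of how the two settings could be applied and inspected; the dataset names are placeholders, not taken from this test:

```
# Hypothetical dataset names; compression can be set per dataset or per zvol.
zfs set compression=on tank/vm-lz4     # "on" typically resolves to lz4 on recent OpenZFS
zfs set compression=zstd tank/vm-zstd  # zstd requires OpenZFS 2.0 or later
zfs get compression tank/vm-lz4 tank/vm-zstd
```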
The test was simply making a tar archive of the /usr partition of an Ubuntu 20.04 Desktop installation and then making additional copies of it to fill about 100GB of space.
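Something along these lines, where the mount point and the number of copies are assumptions rather than the exact commands used:

```
# Archive /usr from the guest, then duplicate the archive until roughly 100GB is written.
tar -cf /mnt/test/usr.tar /usr
for i in $(seq 1 10); do
    cp /mnt/test/usr.tar "/mnt/test/usr-copy-$i.tar"
done
df -h /mnt/test
```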
This was done in a VM running on the Proxmox host, with virtual disks of different volblocksize mounted at different paths.
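A sketch of how one of these disks could be prepared; the size, names, filesystem, and guest device path are assumptions (in practice the disks were presumably created through Proxmox itself), and the same pattern would be repeated for each block size from 8k to 256k:

```
# On the host: a zvol with a specific volblocksize (the property is fixed at creation time).
zfs create -V 200G -o volblocksize=128k -o compression=zstd tank/vm-zstd-128k
# Inside the guest the zvol shows up as a virtual disk, e.g. /dev/vdb:
mkfs.ext4 /dev/vdb
mount /dev/vdb /mnt/zstd-128k
```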
There is a small discrepancy between the filesystems; it could be due to fstrim not cleaning up all the blocks.
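Trimming inside the guest pushes discards down to the zvol so that freed blocks are released on the host pool; a quick sketch (dataset name again a placeholder):

```
# Inside the guest: discard unused blocks on all mounted filesystems that support it.
fstrim -av
# On the host: check whether the zvol's used space dropped accordingly.
zfs list -o name,used,logicalused tank/vm-zstd-128k
```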
Here are the results with compression=on:
volblocksize | ratio | used | refer | lused | lrefer | actual savings |
--- | --- | --- | --- | --- | --- | --- |
8k | 1.28x | 168G | 168G | 94.6G | 94.6G | -1.77x |
16k | 1.54x | 102G | 102G | 95.1G | 95.1G | -1.07x |
32k | 1.75x | 65.9G | 65.9G | 95.9G | 95.9G | 1.46x |
64k | 1.89x | 54.9G | 54.9G | 96.2G | 96.2G | 1.75x |
128k | 1.96x | 49.4G | 49.4G | 96.5G | 96.5G | 1.95x |
256k | 2.01x | 46.0G | 46.0G | 96.8G | 96.8G | 2.10x |
The ratio is the compression ratio reported by the zfs list command. The actual savings is calculated based on the difference between used and lused.
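For illustration, these figures can be pulled straight from zfs list (lused and lrefer correspond to the logicalused and logicalreferenced properties) and the savings recomputed; the dataset name is a placeholder:

```
# Per-zvol ratio, used and logical sizes as reported by ZFS.
zfs list -o name,used,refer,logicalused,logicalreferenced,compressratio tank/vm-lz4-128k
# actual savings = lused / used, e.g. for the 128k row in the table above:
echo "scale=2; 96.5 / 49.4" | bc    # -> 1.95, matching the table
```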
Similarly, below are the results with compression=zstd:
volblocksize | ratio | used | refer | lused | lrefer | actual savings |
--- | --- | --- | --- | --- | --- | --- |
8k | 1.49x | 145G | 145G | 94.6G | 94.6G | -1.53x |
16k | 1.90x | 92.0G | 92.0G | 95.1G | 95.1G | 1.03x |
32k | 2.21x | 58.1G | 58.1G | 95.9G | 95.9G | 1.66x |
64k | 2.43x | 44.2G | 44.2G | 96.2G | 96.2G | 2.17x |
128k | 2.58x | 38.3G | 38.3G | 96.5G | 96.5G | 2.51x |
256k | 2.73x | 34.3G | 34.3G | 96.8G | 96.8G | 2.82x |
The results are quite interesting: at the default 8k volblocksize the zvol actually uses more space than the logical data (most likely because RAID-Z3 parity and padding overhead dominates at small block sizes), and 16k is a borderline case. It is not clear why the default volblocksize for volumes is 8k.
In general, zstd seems to provide better compression than the default, and the 128k and 256k block sizes give good results.
In some of my future articles I will have a table going up to 1024k volblocksize.