Thank you for reading, and for the critique! What you're describing is definitely a real problem, but I'd push back slightly: the outcome is usually the inverse of what you might expect.
One of the counterintuitive things here is that _having_ disk swap can actually _decrease_ disk I/O. In fact, on some of our storage tiers this effect is essential to how we operate. Now, that sounds like patent nonsense, but hear me out :-)
With a zram-only setup, once zram is full, there is nowhere for anonymous pages to go. The kernel can't evict them to disk because there is no disk swap, so when it needs to free memory it has no choice but to reclaim file cache instead. If you don't let the kernel choose which page is colder across both anonymous and file-backed memory, and instead force it to reclaim only file cache, it will inevitably evict file pages that needed to stay resident to avoid disk activity. Those reads and writes then hit the same slow DRAM-less SSD you were trying to protect.
In the article I mentioned that in some cases enabling zswap reduced disk writes by up to 25% compared to having no swap at all. The exact numbers will of course vary, but the direction holds for most workloads that accumulate cold anonymous pages over time, and we've seen it hold in environments from constrained BMCs to servers, desktops, and VR headsets.
So, counter-intuitively, with an appropriately sized swap device, zswap may well reduce disk I/O for your case rather than increase it. If it doesn't, that's exactly the kind of real-world data that helps us improve things on the mm side, and we'd love to hear about it :-)
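If you do want to measure it, here is a rough, purely illustrative sketch that samples /proc/vmstat over an interval; it assumes a reasonably recent kernel for the zswp* and workingset counters (older kernels simply won't export them, and the script skips missing ones):

```python
#!/usr/bin/env python3
"""Rough sketch: sample /proc/vmstat to see whether memory pressure is being
absorbed by (z)swap or is falling through to the disk as writeback and
file-cache refaults. Counters are cumulative since boot; pgpgin/pgpgout are
in KiB, the rest in pages. Older kernels may not export the zswp*/workingset
counters, in which case they are skipped."""

import time

FIELDS = ("pswpin", "pswpout",            # pages read from / written to swap
          "zswpin", "zswpout",            # pages decompressed from / stored into zswap
          "pgpgin", "pgpgout",            # data actually read from / written to block devices
          "workingset_refault_file")      # file pages refaulted after being reclaimed

def read_vmstat():
    stats = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in FIELDS:
                stats[key] = int(value)
    return stats

def main(interval=60):
    before = read_vmstat()
    time.sleep(interval)
    after = read_vmstat()
    for key in FIELDS:
        if key in before and key in after:
            print(f"{key:>24}: +{after[key] - before[key]}")

if __name__ == "__main__":
    main()
```

Roughly speaking, if swap-side activity rises while pgpgout and file refaults fall relative to your no-disk-swap setup, the cold anonymous pages are being absorbed before they can push needed file pages out to the SSD.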
1. Thanks for partially (in paragraph 4 but not paragraph 5) preempting the obvious objection. Distinguishing between disk reads and writes is very important for consumer SSDs, and you quoted exactly the right metric in paragraph 4: reduction of writes, almost regardless of the total I/O. Reads without writes are tolerable. Writes stall everything badly.
2. The comparison in paragraph 4 is between no-swap and zswap, and the results are plausible. But the relevant comparison here is a three-way one, between no-swap, zram, and zswap.
3. It's important to tune earlyoom "properly" when using zram as the only swap. Setting the "-m" argument too low causes earlyoom to miss obvious overloads that thrash the disk through the page cache and memory-mapped files. On the other hand, I could not find the right balance between unexpected OOM kills and missed brownouts, simply because the usage levels of RAM and zram-based swap are the only signals earlyoom has available for its decision. Perhaps systemd-oomd, which also watches memory pressure (PSI), will fare better; a rough sketch of watching that signal directly follows this list. The article does mention the need to tune the userspace OOM killer to an uncomfortable degree.
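For what it's worth, the extra signal systemd-oomd consults is pressure stall information, which you can also watch directly. A purely illustrative sketch (the thresholds are made up, and it assumes the kernel exposes /proc/pressure):

```python
#!/usr/bin/env python3
"""Illustrative sketch: watch memory and I/O pressure (PSI) instead of raw
RAM/zram usage levels. The 'full' line is the share of time during which all
runnable tasks were stalled on that resource; sustained non-zero values are
the brownouts that plain usage percentages miss. Thresholds below are
arbitrary examples, not recommendations."""

import time

def read_full_avg10(resource):
    """Return the 'full avg10' value (percent) from /proc/pressure/<resource>."""
    with open(f"/proc/pressure/{resource}") as f:
        for line in f:
            if line.startswith("full"):
                # e.g. "full avg10=12.34 avg60=5.67 avg300=1.23 total=4567890"
                return float(line.split()[1].split("=")[1])
    return 0.0

def main():
    while True:
        mem = read_full_avg10("memory")
        io = read_full_avg10("io")
        if mem > 20.0 or io > 40.0:   # arbitrary example thresholds
            print(f"brownout: memory full avg10={mem:.1f}%, io full avg10={io:.1f}%")
        time.sleep(5)

if __name__ == "__main__":
    main()
```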
I have already tried zswap with a swap file on a bad SSD, though admittedly not together with earlyoom. On an SSD that cannot sustain even 10 MB/s of synchronous writes, the system browns out, while zram + earlyoom can be tuned not to brown out (at the expense of OOM kills on a system that subjectively still performs perfectly well). I will try backing-store-less zswap when it's ready.
And I agree that, on an enterprise SSD like Micron 7450 PRO, zswap is the way to go - and I doubt that Meta uses consumer SSDs.
nextaccountic
It's very rare for disk reads to hang your UI (you would need to be running blocking operations in the UI thread).
But a swap with high latency will occasionally hang the interface, and with it any means of freeing memory manually.
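The mechanism is a major page fault: a UI thread that touches a swapped-out anonymous page blocks until the page comes back, no matter how carefully explicit disk reads are kept off that thread. A rough, illustrative way to check whether that is what is stalling a given process (it just reads the majflt field of /proc/<pid>/stat):

```python
#!/usr/bin/env python3
"""Rough sketch: watch a process's major-fault counter. Growth in majflt while
the UI is frozen suggests the stall is swap-in latency rather than ordinary
file reads. Field 12 of /proc/<pid>/stat is majflt (see proc(5))."""

import sys
import time

def majflt(pid):
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # comm (field 2) may contain spaces/parentheses, so split after the last ')'
    fields = data.rsplit(")", 1)[1].split()
    # after dropping pid and comm, majflt is the 10th remaining field (index 9)
    return int(fields[9])

def main(pid, interval=2):
    prev = majflt(pid)
    while True:
        time.sleep(interval)
        cur = majflt(pid)
        if cur > prev:
            print(f"pid {pid}: +{cur - prev} major faults in the last {interval}s")
        prev = cur

if __name__ == "__main__":
    main(int(sys.argv[1]))
```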