Hello there! In these days where “cloud” is more important than “sun” (what a terrible joke ;-), storage seems to be the big problem to solve. It’s easy to think about on-demand memory, CPU, network, and storage capacity. But capacity is just one dimension of storage, and a filesystem like ZFS handles it with just one command. But what about scaling the operations per second an application needs?
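Growing capacity really is one command. A minimal sketch, assuming a pool named “tank” and Solaris-style device names (both hypothetical here):

    # Grow the pool "tank" by adding another mirrored vdev;
    # the new space is available immediately, with no downtime.
    zpool add tank mirror c2t0d0 c2t1d0

    # Check the new size.
    zpool list tank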
Since we like to talk about ZFS ;-), let’s see what we have to work with in terms of performance:
1 – RAM
2 – Number of disks
3 – SLOGs
4 – L2ARC
Well, I think that is it… and, first of all: that is a lot of flexibility, and I don’t know any other filesystem with so many options. That said, I guess only one of these points can give us a quick response on performance, and that is the SLOG. If an application is writing 100MB/s through one SLOG, we can add another one, and ZFS scales without problems to 200MB/s. OK, I’m not talking about other problems we can have with this procedure (like the picket fencing problem), but it is something I can think of as IO on demand. The problem is that if our peak happens just on Mondays, we will have to live with that new SLOG for all eternity. So we need to create our pool with two SLOGs to use both on just one day per week. Or worse, we can have a workload that needs two SLOGs on one day (not a fixed one) in the whole month.
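Just to make the procedure concrete, a sketch with a hypothetical device name (and remember: on the pool versions I am talking about, there is no way to undo it):

    # Peak day: add a second log device (SLOG). ZFS stripes the
    # ZIL traffic over both, roughly doubling sync write throughput.
    zpool add tank log c3t0d0

    # There is no "zpool remove" for log devices here, so the
    # extra SLOG stays in the pool forever.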
[Link: Storm - Wikipedia]
For writes, I think the example above is the better choice, because scaling by the number of disks is even worse: you have to add a whole vdev, and a top-level vdev can never be removed from the pool. Another (bad) option would be to disable the ZIL to handle the storm, but…
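…you really should not. A sketch of why, using the Solaris-era global tunable (it affects every pool on the machine and needs a reboot):

    # /etc/system -- disable the ZIL globally.
    # All synchronous writes become asynchronous: fast during the
    # storm, but a crash can silently lose data the application
    # believes is already on stable storage.
    set zfs:zil_disable = 1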
For reads we have the L2ARC, which we can add and remove (see the sketch after the list below), but the effect is not as quick as I think “on demand” requires, because the cache needs to warm up. For RAM we need downtime, and it is expensive for not much help… So we are left with two options:
a) Pre-allocate resources (idle most of the time, and expensive);
b) Pray during the storm, and hope you are still alive for the next one.
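For reference, the L2ARC manipulation I mentioned, again with a hypothetical device name (cache devices, unlike log devices, can be added and removed at will):

    # Add an SSD as a cache device (L2ARC).
    zpool add tank cache c4t0d0

    # It can be removed at any time...
    zpool remove tank c4t0d0

    # ...but the cache starts cold and takes hours of reads to warm
    # up, so it does not behave like "on demand" IO during a storm.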
ZFS is a great filesystem, but how can we get IO on demand?