Hello there…
I think you will agree that storage's real problem is READ requests, which are synchronous by nature. And, as I have said many times before, I think the solution to all problems (the answer to all questions ;-) is cache. Many levels, many flavors.
I have read many times about the recommended redundancy for ZFS slog devices. In the early days of ZFS we had a serious availability problem if we lost a slog device, so mirroring was the way to go.
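As a minimal sketch of the two setups, assuming a pool named `tank` and hypothetical Solaris-style device names:

```shell
# Mirrored slog: survives the loss of one log device
# (device names here are hypothetical).
zpool add tank log mirror c4t0d0 c4t1d0

# Versus a single, non-redundant slog device:
# zpool add tank log c4t0d0
```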
But thinking about the ZIL concept, it takes two failures in a row for a mirrored slog to pay off: the slog device must die *and* the system must crash before the pending synchronous writes are committed to the main pool. And we lose many IOPS by mirroring…
On the other hand, the MTTR for a slog device is the best of all, compared with a regular vdev or an L2ARC.
Everything is fine the moment you replace the slog device (e.g., an SSD).
And the L2ARC? Here we need time to warm it up again… and believe me, it can be a long time.
Say we configure a 100GB SSD cache device, delivering a lot of IOPS and very good latencies… and crash! We lose it!
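For reference, adding such a cache device is a one-liner (again, `tank` and the device name are hypothetical):

```shell
# Add a single SSD as an L2ARC (cache) device.
zpool add tank cache c4t2d0

# Watch per-vdev activity to see how warm the cache is getting.
zpool iostat -v tank
```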
Do you think the applications will be happy falling back to SATA disk latencies? Will we have a performance problem, an availability problem, or no problem at all?
Well, as I said at the beginning of this post, no one thinks that the failure of a warmed L2ARC device is a big deal. I would like to agree, but I don't. And since I really like the ZFS architecture, I would expect the redundancy concept of a logical vdev to be independent of the physical vdev's role. So we could mirror the L2ARC… but no, we can't.
So, I may well be the only one thinking about mirroring a cache device, but the fact that we cannot create a mirror (logical vdev) on top of an arbitrary physical vdev seems like a ZFS bug.
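Concretely, the obvious command is rejected (sketch with hypothetical device names; the exact error text varies by release):

```shell
# This is what I would like to do -- but zpool refuses it:
zpool add tank cache mirror c4t2d0 c4t3d0
# zpool errors out: cache devices cannot be grouped into a mirror,
# they may only be added as individual top-level devices.
```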