Hacker News

Yeah, I am exaggerating the port cost some, but the big advantage of hardware RAID over software isn't the hardware calculation of parity. A CPU can calculate parity so much faster than you can write to a drive that it doesn't matter at all that special hardware can do so even faster still.
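For anyone who hasn't seen it spelled out: RAID-5 parity is just a running bytewise XOR across the data blocks in a stripe, which even naive code handles far faster than a single disk can accept writes. A minimal sketch (the block size and stripe width here are made up for illustration, not from any particular controller):

```python
import os

def raid5_parity(blocks):
    """Bytewise XOR of all data blocks in a stripe."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three 4 KiB data blocks -> one parity block
blocks = [os.urandom(4096) for _ in range(3)]
p = raid5_parity(blocks)

# The same XOR recovers a "lost" block from the survivors plus parity
recovered = raid5_parity([blocks[0], blocks[2], p])
assert recovered == blocks[1]
```

The XOR is the whole trick: losing any one block leaves an equation you can solve by XORing everything that's left, and real implementations do the same thing with SSE/AVX-wide registers instead of a byte loop.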

The advantage of the hardware raid card is the battery backed cache. If it doesn't have a BBU and a fair amount of cache, as far as I am concerned, you might as well be using MD.

Hardware RAID cards have improved quite a lot recently; some of the stuff now has reasonably sized caches, so perhaps I should revisit my assumptions in this area. Of course, I'm planning on using ZFS on my storage servers, so even if hardware raid cards are now a reasonably good deal, they won't do me a whole lot of good.



As much as I like ZFS, I do feel a little misled by the rhetoric about replacing "expensive" BBUs with slog SSDs... that are actually much more expensive.


Until quite recently, I would not have understood what you meant. The cost per gigabyte for even really fast SSDs is lower than the cost per gigabyte of RAID cache RAM, so I'd have said "what are you on about?"

But, I think I understand what you are on about now.

Most of us (well, speaking for myself, but I think this is true of most SysAdmins) have very strong experience telling us "more read cache is better" - I mean, more read cache, up until you can cache everything the server commonly reads, makes an absolutely huge difference in performance.

So we look for big caches.

The problem is that most of us don't have the same intuitive grasp of where the benefits of adding more write cache stop, as most of us don't have a whole lot of experience with large write-cache systems (outside of NetApp/EMC type boxes, and I personally attribute their superior performance in part to their gigabytes of RAM that can be safely used as write-back cache.)

So if write cache worked the same way as read cache? Yes, I would pay the premium for the fastest 32GiB SSD I could find, if I could use it all as write cache.

The thing is, I'm told, that after a few gigabytes, the returns to adding more write cache fall off sharply; and if that's true, then yeah, you are right, 'cause you are wasting most of the SSD.

I mean, the real question here is "how much write-cache do I need before I stop seeing significant benefit to adding more write-cache?" and if that number is much above what you can get in a RAID card, then the zfs/ssd setup starts looking pretty good.
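That number can at least be estimated. My understanding (an assumption here, not something from this thread) is that ZFS only keeps a couple of transaction groups in flight and flushes one roughly every 5 seconds by default, so the slog never holds more than a few seconds' worth of sync writes. Back of the envelope:

```python
# Rough upper bound on useful slog capacity. Assumptions (not from the
# thread): ~5 s transaction group interval, ~2 txgs in flight at once.

def slog_size_gib(write_mib_per_s, txg_interval_s=5, txgs_in_flight=2):
    """Sync-write bandwidth times the window ZFS can leave uncommitted."""
    return write_mib_per_s * txg_interval_s * txgs_in_flight / 1024

# Saturating two bonded 1 Gb/s links with sync writes (~240 MiB/s):
print(round(slog_size_gib(240), 2))  # ~2.34 GiB
```

If that arithmetic is right, a few gigabytes really is the ceiling for most boxes, and the rest of a 32GiB SSD is wasted — which is exactly the diminishing-returns point above.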



