
i can't find "zfs" mentioned once in this guy's doc so my first question is... why not?


Same reason he doesn’t mention BtrFS and a bunch of other file systems: Because he’s running OpenBSD which doesn’t support ZFS.


From the page:

> I decided it was finally time to build a file server to centralize my files and guard them against bit-rot. Although I would have preferred to use OpenBSD due to its straightforward configuration and sane defaults, I was surprised to find that none of the typical NAS filesystems were supported.

OpenBSD does not support ZFS.


Theo is, perhaps rightfully so, against importing what is effectively a paravirtualized Solaris kernel into the OpenBSD source code in order to run a file system.


Too bad, because the partitioning is why I don't use OpenBSD. With ZFS you could just create a dataset, set noexec, nosuid, etc. on it, and give it a quota; with FFS you can bet that sooner or later you'll run out of space in one of the partitions (Workstation, NOT Server).
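For comparison, the per-dataset policies described above look roughly like this in ZFS (the pool name `tank`, the dataset names, and the quota values are all made up for illustration; the commands need an existing pool and root privileges):

```shell
# Hypothetical dataset layout: a quota plus mount-time restrictions,
# set per dataset instead of per disklabel partition.
zfs create -o quota=20G -o setuid=off -o exec=off tank/home
zfs create -o quota=5G  -o setuid=off tank/logs
zfs get quota,setuid,exec tank/home   # inspect the applied properties
```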


You can use your own partitioning though? One for /, one for swap. Done.


Yes, you can, but you can't set things like nosuid etc. on /.


I doubt it. Even for ports you can still symlink /usr/ports to $HOME/ports, e.g. for ScummVM with --enable-all-engines, or for Eduke32 (Build/GPLv2 license clash, so it can't be shipped as a binary).

/usr/local is not small at all by default.


I think what they're trying to say is that you can just link stuff to $HOME if some filesystem runs out of space (not an endorsement of that view, just an explanation).


Because ZFS is not supported on OpenBSD. In fact, he mentions at the beginning of the article that he was surprised that none of the typical NAS filesystems are supported by OpenBSD.

On the other hand, ZFS is an overly complicated behemoth that wants direct access to the block device. Meanwhile, `muxfs` works on top of any existing filesystem (local or remote) and just provides the checksums. So the two serve different use-cases.
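The detection half of the muxfs idea, i.e. layering checksums over an existing filesystem, can be approximated by hand with stock tools. A minimal sketch, assuming GNU coreutils and a hypothetical manifest file named checksums.sha256:

```shell
# Record a SHA-256 checksum for every file under the current directory,
# then re-verify the manifest later to detect silent corruption.
find . -type f ! -name checksums.sha256 -exec sha256sum {} + > checksums.sha256
sha256sum --check --quiet checksums.sha256
```

muxfs itself does considerably more (mirroring across directories and restoring from a good replica), but bit-rot *detection* reduces to keeping and re-checking such a manifest.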


Non ZFS filesystems are overly simplified, ignoring the problems they ought to be solving.


ZFS is not the simplest solution to the problems that "ought to be solved", and the implementation can be rather annoying: it lacks support for hardware configuration changes, uses its own cache system that sidesteps the kernel's, and generally doesn't fit in with normal filesystem paradigms. And that's not even addressing that incremental sends, a huge feature, were (are?) broken by the hole_birth bug, making them unreliable.

btrfs kinda blew up, but it would be nice to have a good, simple, reliable filesystem that actually fits in with the others. ZFS is what we're stuck with till then.


Some of those problems are grounded in Linux politics. I have hope in bcachefs, but best not to rush; we've all seen btrfs.


ZFS was never any prettier on BSD either, so I wouldn't blame Linux for that.

But yes, bcachefs is somewhat interesting. Or maybe btrfs manages to clean up its act one day.


ZFS on FreeBSD was very pretty, till the Linux guys added ABD and other Linux-specific features.


In what way? Back when I used it it seemed just as misplaced. Its own mount semantics, its own caches, all that.


Blazingly fast, robust and rock-solid.

Miles better than all those geom_xxx RAIDs (I was the maintainer of geom_raid5, mind you), chipset RAIDs, and the FFS2 SU+J stuff, which is not completely stable even now: I got a ton of errors from a forced foreground fsck after a "normal" background fsck completed as recently as 2019 (I don't have a single R/W FFS2 filesystem since, and I'm happy!).

With all due respect to McKusick, all this modernization of FFS2 (SU, snapshots, SU+J) was always very fragile, and software RAIDs implemented as GEOMs are much worse than the ZFS VDEV layer.

The Linux-induced changes degrade ZFS performance a lot, though :-( The extra level of indirection in the ARC is a really big deal.


What's wrong with btrfs?


The parity-based RAID levels still are officially not safe for production, and overall many people don't quite trust it in more complex setups due to past bugs.


It depends on your use case. BTRFS still has deficiencies in how it does quotas (enabling them significantly slows down the filesystem). BTRFS RAID 5 has a write hole like traditional RAID 5. And there is a problem that sometimes occurs with individual extents when you use dedup; one of the dedup tools predicts when the issue will occur and skips deduplication on such extents.

There were RFC proposals on LWN this year to address both the write-hole and quota issues, with the write-hole fix already having draft patches (see "raid stripe tree").

BTRFS is fine if you are not doing things that can hit those edges.
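For context, the quota feature in question is opt-in qgroups; enabling the accounting is what carries the slowdown on busy filesystems. A sketch with hypothetical mountpoint and subvolume paths:

```shell
# Enable qgroup accounting for the whole filesystem, then cap the
# space one subvolume may use.
btrfs quota enable /mnt/data
btrfs qgroup limit 10G /mnt/data/backups
btrfs qgroup show /mnt/data    # per-qgroup usage and limits
```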


Here’s an excellent rundown that’s somewhat recent: https://arstechnica.com/gadgets/2021/09/examining-btrfs-linu...


as with all things GPL, victim of FUD campaigns by corporate america.


excuse me what


> ZFS is not the simplest solution to the problems that "ought to be solved"

What are simpler solutions to the problems that ought to be solved?


A CoW filesystem itself is not much more complicated than a plain filesystem. Or maybe a CoW softraid, with the filesystem existing at a different layer.

ZFS has countless bonus tunables, several types of caching distinct from the kernel VFS cache, its own write logs and special devices, multiple levels of topology (datasets in a pool consisting of vdevs consisting of drives), deduplication, compression, etc.

It is also not at all user-friendly. When set up right (and with no changing your mind about the setup later), and when fed enough resources, it does its job well, but "simple" or "elegant" cannot describe it.
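For a concrete taste of the CoW primitive itself, btrfs and XFS expose it at the file level via reflinks, and GNU cp can request one. With --reflink=auto, cp falls back to an ordinary copy on filesystems without reflink support, so this sketch runs anywhere:

```shell
# Make a file, then clone it.  On a reflink-capable filesystem the clone
# shares extents with the original until either copy is modified
# (copy-on-write); elsewhere cp silently falls back to a full copy.
dd if=/dev/urandom of=original.img bs=1M count=4 status=none
cp --reflink=auto original.img clone.img
cmp original.img clone.img && echo "contents identical"
```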


You didn't answer the question. You can't claim that there are simpler solutions and then not actually provide a simpler solution. Handwaving is not one.

> A CoW filesystem itself is not much more complicated than a plain filesystem.

And yet there isn't one out there, there's pretty much only ZFS and BTRFS, the latter having been in a state of almost-but-not-actually-working for over a decade now.

bcachefs is the only contender and it remains a single-developer effort with little mainlining progress in the last few years.


> You didn't answer the question. You can't claim that there are simpler solutions then not actually provide a simpler solution.

And what would be a reasonable response to you? Pasting a novel filesystem implementation as proof that the existing ones are overcomplicated? Opinions do not have a burden of evidence.

There is a handful of CoW filesystems out there, showing that it is certainly not an insurmountable issue to write one. Rather, the problem is stopping people from doing more at this point, keeping the design simple instead.

That we don't have something better yet is likely a result of writing filesystems in general being rather laborious to do right regardless of CoW, and being incredibly unrewarding: few care about filesystems unless one breaks.


There’s also HAMMER. I’ve not used that, though.

I think the complaints levelled against ZFS are a little unfair, though. I agree that there are more elegant ways ZFS could have been implemented, but what we already have works really damn well. And the comments about the CLI being hard to use are odd, because having used a hell of a lot of different filesystems over the years (including BtrFS), I’ve found ZFS to be remarkably easy.

ZFS has saved me from a number of hardware failures. If it really were as bad as the comments here make out, I’d have lost data several times over.


I completely agree with that assessment. The core ideas behind how the CoW works are elegant, but the implementation is anything but.


Depends what problems you're looking to solve; you don't need a CoW filesystem if you just want data integrity features for example, and you don't need data integrity features if you're just looking for something with quick and efficient snapshots.

ZFS tries to solve every filesystem problem and actually doesn't even do a terrible job at it, but it can be a bit of a beast due to its high complexity and that it doesn't integrate well with the rest of the system.


>On the other side, ZFS is an overly complicated behemoth, that wants direct access to the block device. Meanwhile `muxfs` works with any already existing file-system (local or remote) and just provides the checksums. So both serve different use-cases.

Nah... it's not overly complicated for what it is, but yes, it is a behemoth.

>that wants direct access to the block device.

Yes, for high-performance "enterprise" setups it is preferable, but absolutely not needed.

> Meanwhile `muxfs` works with any already existing file-system (local or remote) and just provides the checksums.

That, I think, is the winning point here: just add bit-rot protection to FFS.


> that wants direct access to the block device

You can set up a ZFS pool backed by files[1]. Probably not something you should do with data you really care about, but it's possible.

[1]: https://linux.die.net/man/8/zpool (Virtual Devices)
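A sketch of such a file-backed pool, with throwaway paths; it needs root and an installed ZFS, and as noted above it's for experiments rather than for real data:

```shell
# Two sparse 1 GiB backing files mirrored into a disposable pool.
truncate -s 1G /var/tmp/vdev0 /var/tmp/vdev1
zpool create testpool mirror /var/tmp/vdev0 /var/tmp/vdev1
zpool status testpool          # shows the file vdevs in the mirror
zpool destroy testpool         # tear it down when finished
rm /var/tmp/vdev0 /var/tmp/vdev1
```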



