> I decided it was finally time to build a file server to centralize my files and guard them against bit-rot. Although I would have preferred to use OpenBSD due to its straightforward configuration and sane defaults, I was surprised to find that none of the typical NAS filesystems were supported.
Theo is, perhaps rightfully so, against importing what is effectively a paravirtualized Solaris kernel into the OpenBSD source code in order to run a file system.
Too bad, because OpenBSD's partitioning is why I don't use it. With ZFS you could just create a dataset, throw noexec, nosuid, etc. on it, and give it a quota; with FFS you can bet that sooner or later you'll run out of space in one of the partitions (on a workstation, NOT a server).
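For anyone unfamiliar, that workflow looks roughly like this (a sketch only; the pool name `tank` is hypothetical, while `quota`, `setuid`, and `exec` are standard zfs(8) properties):

```shell
# Hypothetical pool "tank": one dataset per purpose,
# each with its own mount-style options and space cap.
zfs create tank/home
zfs set quota=50G tank/home     # dataset can never eat the whole pool
zfs set setuid=off tank/home    # the equivalent of mounting nosuid
zfs set exec=off tank/home      # optionally disallow execution too
```

Because the quota is per-dataset rather than per-partition, resizing it later is a one-line property change instead of a repartitioning exercise.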
I doubt it. Even for ports you can still symlink /usr/ports to $HOME/ports — e.g. for ScummVM built with --enable-all-engines, or for EDuke32 (the Build/GPLv2 license clash means it can't be distributed as a binary).
I think what they're trying to say is that you can just symlink stuff to $HOME if some filesystem runs out of space (not an endorsement of that view, just an explanation).
Because ZFS is not supported on OpenBSD. In fact, he does mention at the beginning of the article that he was surprised that none of the typical NAS filesystems are supported by OpenBSD.
On the other hand, ZFS is an overly complicated behemoth that wants direct access to the block device. Meanwhile `muxfs` works with any existing filesystem (local or remote) and just provides the checksums. So they serve different use-cases.
ZFS is not the simplest solution to the problems that "ought to be solved", and the implementation can be rather annoying: it lacks support for hardware configuration changes, uses its own cache system that sidesteps the one in the kernel, and generally doesn't fit in with normal filesystem paradigms. And that's not even addressing that incremental sends, a huge feature, were (are?) broken due to the hole_birth bug, making them unreliable.
btrfs kinda blew up, but it would be nice to have a good, simple, reliable filesystem that actually fits in with the others. ZFS is what we're stuck with till then.
Miles better than all those geom_xxx RAIDs (I was the maintainer of geom_raid5, mind you), chipset RAIDs, and the FFS2 SU+J stuff, which is not completely stable even now. I got a ton of errors from a forced foreground fsck after "normal" background fsck completion as recently as 2019 (I don't have a single R/W FFS2 filesystem after that, and I'm happy!)
With all due respect to McKusick, all this modernization of FFS2 (SU, snapshots, SU+J) has always been very fragile, and software RAID implemented as GEOM classes is much worse than the ZFS vdev layer.
Linux-induced changes downgrade ZFS performance a lot, though :-( Another level of indirection in the ARC is a really big deal.
The parity-based RAID levels are still officially not considered safe for production, and overall many people don't quite trust it in more complex setups due to past bugs.
It depends on your use case. BTRFS still has deficiencies in how it does quotas (enabling them significantly slows down the filesystem). BTRFS RAID 5 has a write hole like traditional RAID 5. And there is a problem that sometimes occurs with individual extents when you use dedup; one of the dedup tools predicts when the issue will occur and skips dedup on such extents.
There were RFCs proposing fixes for both the write hole and the quota issues on LWN this year, with the write-hole fix already having draft patches (see "raid stripe tree").
BTRFS is fine if you are not doing things that can hit those edges.
A CoW filesystem itself is not much more complicated than a plain filesystem. Or maybe a CoW softraid, with the filesystem existing at a different layer.
ZFS has countless bonus tunables, several types of caching distinct from the kernel VFS cache, its own write logs and special devices, multiple levels of topology (datasets in a pool consisting of vdevs consisting of drives), deduplication, compression, etc.
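To illustrate that layering, a sketch with hypothetical disk names: the pool is built from vdevs, and the tunables live on datasets stacked on top:

```shell
# Hypothetical disks da0..da3: a pool of two mirror vdevs.
zpool create tank mirror da0 da1 mirror da2 da3

# Tunables are per-dataset, not per-mount: compression,
# record size, and (RAM-hungry) deduplication all stack here.
zfs create -o compression=lz4 -o recordsize=1M tank/media
zfs set dedup=on tank/media
```

That's three layers of topology (drives, vdevs, pool) before a single dataset property is even set, which is part of the "behemoth" complaint above.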
It is also not at all user friendly. When set up right (and assuming you never change your mind about the setup), and when fed enough resources, it does its job well, but "simple" and "elegant" are not words that describe it.
You didn't answer the question. You can't claim there are simpler solutions and then not actually provide one; handwaving doesn't count.
> A CoW filesystem itself is not much more complicated than a plain filesystem.
And yet there isn't one out there, there's pretty much only ZFS and BTRFS, the latter having been in a state of almost-but-not-actually-working for over a decade now.
bcachefs is the only contender and it remains a single-developer effort with little mainlining progress in the last few years.
> You didn't answer the question. You can't claim there are simpler solutions and then not actually provide one.
And what would be a reasonable response to you? Pasting a novel filesystem implementation as proof that the existing ones are overcomplicated? Opinions do not carry a burden of evidence.
There are a handful of CoW filesystems out there, showing that writing one is certainly not an insurmountable problem. Rather, the hard part is stopping people from piling on more at that point and keeping the design simple.
That we don't have something better yet is likely because writing filesystems is laborious to do right regardless of CoW, and incredibly unrewarding — few people care about a filesystem unless it breaks.
I think the complaints levelled against ZFS are a little unfair, though. I agree there are more elegant ways ZFS could have been implemented, but what we have already works really damn well. And the comments about the CLI being hard to use are weird, because having used a hell of a lot of different filesystems over the years (including BtrFS), I've found ZFS to be remarkably easy.
ZFS has saved me from a number of hardware failures. If it really were as bad as the comments here make out, I'd have lost data several times over.
Depends what problems you're looking to solve; you don't need a CoW filesystem if you just want data integrity features for example, and you don't need data integrity features if you're just looking for something with quick and efficient snapshots.
ZFS tries to solve every filesystem problem and actually doesn't do a terrible job at it, but it can be a bit of a beast due to its high complexity and the fact that it doesn't integrate well with the rest of the system.
> On the other hand, ZFS is an overly complicated behemoth that wants direct access to the block device. Meanwhile `muxfs` works with any existing filesystem (local or remote) and just provides the checksums. So they serve different use-cases.
Nah... it's not overly complicated for what it is, but yes, it is a behemoth.
> that wants direct access to the block device.
Yes, for high-performance "enterprise" setups it is preferable, but it's absolutely not required.
> Meanwhile `muxfs` works with any already existing file-system (local or remote) and just provides the checksums.
That, I think, is the winning point here: just add bit-rot protection to FFS.
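The principle (not muxfs's actual mechanism, just a sketch of the idea; this uses GNU coreutils' sha256sum, whereas OpenBSD ships sha256(1) instead) is to record digests once and re-verify them later on top of whatever filesystem you already have:

```shell
# record_sums DIR: store a SHA-256 digest for every regular file.
# verify_sums DIR: re-check them; fails if any bytes changed.
record_sums() {
    find "$1" -type f ! -name .checksums \
        -exec sha256sum {} + > "$1/.checksums"
}

verify_sums() {
    sha256sum -c --quiet "$1/.checksums"
}
```

`verify_sums` exits non-zero if any file's contents changed since `record_sums` ran — which, for files you never modified, is exactly the silent-corruption case muxfs is after.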