I don't have anything recent, but back in 2004, the majority of the Linux kernel code was in its drivers: https://dwheeler.com/essays/linux-kernel-cost.html I expect that most of the current Linux kernel code is also for handling hardware (that is, drivers + the code to handle various architectures).
> You think you want a stable kernel interface, but you really do not, and you don't even know it. What you want is a stable running driver, and you get that only if your driver is in the main kernel tree. You also get lots of other good benefits if your driver is in the main kernel tree, all of which has made Linux into such a strong, stable, and mature operating system which is the reason you are using it in the first place.
Just because the driver's creator is a different person or organization from the kernel maintainers doesn't mean their code needs to be separate, if it makes more practical sense to bundle it all together.
Also, the "proof of work" required before you're allowed to add your driver is probably that you are designing and producing actual hardware, so it is hard to troll.
It just produces Conway's law in the other direction (which is smart). Now anyone wanting to use Linux effectively needs to integrate with its development process, which is a large part of why it is successful as a project. (However, GNU attempted something similar with their lack of an extension interface for GCC, and that arguably blew up in their face with LLVM, so it's far from a guaranteed success.)
I think it's monorepo vs. multi-repo, and since Linux maintainers update the drivers (I believe) when internal APIs change, and the internal APIs are not stable, a monorepo seems more practical.
"how the source code is stored" is what leads to "wow, the linux kernel is really big". Nobody's measuring kernel size by the number of resident pages it takes up.
The author is the same (Linus), so it makes sense that he would design a source control system that supported the monorepo he created over the prior 15-ish years.
The machines are different. Multicore 64-bit chips are now standard for consumer PCs. RAM and persistent storage are faster and much more abundant. The architecture of modern x86-64 is much more sophisticated than that of the 386 for which the earliest Linux was written. Vectorization, branch prediction, and asynchronous code are all front and center in the modern programmer's ecosystem.
In short, hardware is more capable, and so perhaps now we can afford to take more opportunities to trade a little bit of overhead for abstractions that are more modular and robust.
Being a microkernel doesn't automatically make you more robust. If you look at their docs, Hurd has only one feature that's different from Linux, and that's a sort of souped-up FUSE. But as they realized later, allowing arbitrary unprivileged programs to extend the filesystem like that doesn't mesh well with the UNIX security model. You can write a "translator" that redirects one part of the filesystem tree to another, so you get the same issues as with chroot or symlink attacks. Their proposed solution (never implemented?) is to have programs ignore translators by default if they are running as a different user. There are other cases where translators can DoS apps or do other undesirable things.
The basic problem is that the kernel boundary is a trust boundary. You assume that anything in the kernel is legit and won't try to attack you, which simplifies things a lot. In the microkernel world the original idea was that all the servers would be just ordinary programs you could swap in and out at will. But then the threat model becomes muddied and unclear. Is it OK to run a server that's untrusted? If so, to what extent?
The Hurd seems under-designed as a consequence. To make that vision work you'd need to be willing to depart from UNIX way more, which really means being a research OS.
> Being a microkernel doesn't automatically make you more robust.
No, of course it doesn't automatically do so. But it makes it a whole hell of a lot easier to write a reliable service if you don't have to deal with all the crap that a monolithic kernel does. In a very real sense, lines of code are a liability: fewer LoC generally translates into fewer bugs in the software.
I think you'd find Plan 9 interesting. It deals with many of the issues you're talking about in a rather head-on way. In fact, it takes things even further than Hurd might, and allows processes to migrate to different processors, which may be on completely different machines.
In a very real sense, the hardware has given us a realization of microservice ideas. My Wi-Fi card, the common and popular example, probably has more processing power than some of the early desktop computers back in the day. Certainly SSDs are getting much more complicated.
They aren't general purpose, but I presume microkernel "services" would also not be general purpose?
Often they are general-purpose. There's a blog post from maybe 10 years ago of someone booting Linux on a hard drive's controller (it refuses to fully boot because the controller doesn't have an MMU, but they get some familiar log output).
> In short, hardware is more capable, and so perhaps now we can afford to take more opportunities to trade a little bit of overhead for abstractions that are more modular and robust.
We could afford that before, it's just that Linus didn't see the value and put his foot down. Which is definitely a choice.
Do we have more hardware isolation? There was a pretty strong argument against microkernels: a driver running in userspace can still bork the whole machine if the hardware it controls can write to memory at arbitrary addresses.
On the other hand, datacenters have become so large that a 1% performance improvement can amount to millions of dollars in hardware and energy savings, so the extra cost of a microkernel might not be very welcome outside consumer devices.
We have not yet had a big, publicized hack of whole cloud infrastructure causing massive economic harm. When that happens (the buggy PC/Linux architecture running the world is ripe for it), more secure architectures will become commercially interesting despite the 1% performance penalty.
1. Read up on the seL4 microkernel. It pretty much destroyed the "microkernels are too slow" argument.
2. Modern-day hardware is mostly just ring buffers mapped to memory. There's a real convergence: hardware interfaces, virtio, and io_uring are all turning into this very similar-looking thing. With IOMMUs, moving drivers to userspace is pretty attractive. There's not much of a difference between a cloud VM getting access to a VFIO device from the hypervisor and a userspace device driver getting access to hardware from the microkernel. And there's a lot of money in making cloud VM networking & storage faster.
The drivers are not used in that way in NetBSD; they are either compiled into the kernel or built as modules, just like in Linux. The graphics drivers even reuse source code from Linux.
Anywhere. There are lots of examples on the topic; running NetBSD driver code inside NetBSD's userland is the simplest use case (https://www.netbsd.org/docs/rump/sptut.html), but that code is portable without modifications to just about any context.
Check out Antti Kantee's "The Design and Implementation of the Anykernel and Rump Kernels" if you want details on the architecture.
GNU Hurd, for example. I haven't used it so I'm not sure of the exact status, but it sounds like they are using it in some cases to make use of existing drivers in a microkernel style. There have been a few attempts to rumpify Linux, but it doesn't seem like any of them succeeded, and I'm not sure if anyone is still trying.
Yes, that is how it works in most sane operating systems, even more so nowadays where writing userspace drivers is preferable to dynamically loadable modules.