Hello Linux Gurus,
I am seeking divine inspiration.
I don’t understand the apparent lack of hypervisor-based kernel protections in desktop Linux. There seems to be a significant opportunity for improvement beyond the basics of KASLR, stack canaries, and shadow stacks, yet I see little work in this area on the Linux desktop. People far smarter than me develop for the kernel every day, and they have not seen fit to build the kind of advanced protections I describe below. Where is the gap in my understanding? Is this so difficult or costly that the open source community cannot afford it?
Windows PCs, recent Macs, iPhones, and some Android vendors such as Samsung run their kernels atop a hypervisor. This design permits introspection and enforcement of security invariants from outside, or underneath, the kernel. Common mitigations include protecting critical data structures such as page table entries, function pointers, or SELinux decisions, which raises the bar on injecting code into the kernel. Hypervisor-enforced kernel integrity appears to be a popular and at least somewhat effective mitigation on those platforms, yet it is rare on desktop Linux.
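To make the idea concrete, here is a minimal, self-contained sketch of the enforcement logic as I understand it. None of these names come from a real hypervisor's API; real implementations (Windows HVCI, Samsung RKP, etc.) hang this kind of check off EPT/stage-2 page-table write violations:

```c
/* Toy model of hypervisor-enforced kernel integrity.
 * All types and names here are illustrative, not any real API. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum prot { PROT_KERNEL_TEXT, PROT_PAGE_TABLE };

struct guest_page {
    uint64_t gpa;   /* guest-physical address of the page */
    enum prot prot; /* invariant the hypervisor enforces on it */
};

/* In a real design this table is built when the guest kernel hands
 * its immutable regions to the hypervisor, which then maps those
 * pages read-only in the second-level (EPT/stage-2) page tables. */
static const struct guest_page protected_pages[] = {
    { 0x1000, PROT_KERNEL_TEXT },
    { 0x2000, PROT_PAGE_TABLE  },
};

/* Called when the guest takes a write fault that traps into the
 * hypervisor: permit the write only if the page carries no invariant. */
static bool handle_write_fault(uint64_t gpa)
{
    for (size_t i = 0; i < sizeof protected_pages / sizeof *protected_pages; i++) {
        if (protected_pages[i].gpa == gpa) {
            printf("deny write to %#lx (protected)\n", (unsigned long)gpa);
            return false; /* inject a fault or kill the guest instead */
        }
    }
    return true; /* unprotected page: emulate or retry the write */
}

int main(void)
{
    handle_write_fault(0x1000); /* kernel text: denied */
    handle_write_fault(0x3000); /* ordinary data: allowed */
    return 0;
}
```

The point is that the policy lives below the kernel, so even code running with full kernel privilege cannot rewrite its own page tables or text without the hypervisor's consent.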
Meanwhile, in the desktop Linux world, users are lucky if a distribution even implements secure boot and ships signed kernels. Popular software often requires short-circuiting this mechanism so the user can build and install out-of-tree kernel modules, such as the NVidia and VirtualBox drivers. SELinux is uncommon on the desktop, so on most installations root access is more or less equivalent to kernel privilege, including the ability to load arbitrary code into the kernel. TPM-based disk encryption is officially supported only as an experimental feature on Ubuntu and is usually tied to secure boot; elsewhere users are largely on their own. Taken together, this feels like a missed opportunity for defense in depth.
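You can check where a given installation actually stands. A small sketch, assuming the usual sysfs paths (whether they exist varies by distro and kernel config):

```c
/* Probe a couple of kernel-integrity knobs on a Linux box.
 * The paths are the common ones but are not guaranteed everywhere. */
#include <stdio.h>

static void show(const char *label, const char *path)
{
    char buf[128];
    FILE *f = fopen(path, "r");
    if (!f) {
        printf("%-14s (not available: %s)\n", label, path);
        return;
    }
    if (fgets(buf, sizeof buf, f))
        printf("%-14s %s", label, buf);
    fclose(f);
}

int main(void)
{
    /* Active mode is shown in [brackets]: none, integrity, confidentiality */
    show("lockdown:", "/sys/kernel/security/lockdown");
    /* Y means unsigned modules are refused (module.sig_enforce) */
    show("sig_enforce:", "/sys/module/module/parameters/sig_enforce");
    return 0;
}
```

On the stock desktop installs I have seen, lockdown is "none" and sig_enforce is "N" unless secure boot is enabled end to end, which rather proves the point.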
It’s easy to put code in the kernel. I can write and load a "hello world" module in a couple of minutes. It’s really cool that I can do this, but is it a good idea? Shouldn’t somebody try to stop me?
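For anyone who hasn’t done it, this is essentially the whole exercise:

```c
/* hello.c — minimal loadable kernel module */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("hello world module");

static int __init hello_init(void)
{
    pr_info("hello: I am in your kernel\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: leaving\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

With a one-line kbuild Makefile (`obj-m += hello.o`), `make -C /lib/modules/$(uname -r)/build M=$PWD modules` followed by `sudo insmod hello.ko` puts it in the kernel, and on most desktop installs nothing signature-related stands in the way.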
Please insert your unsigned modules into my brain-kernel. What have I failed to understand, or why is this the design of the kernel today? Is it an intentional omission? Is it somehow contrary to the desktop Linux ethos?
PWA rant incoming.
The context of your question reminds me of why I had to leave app development: it's a race to the technological bottom. It's a real damn shame that PWAs work so well, because it shows that distribution and consumer reach, not infrastructure and code, are the real limiting factors in writing a great application. It shouldn't have to be this way, but it is, because we don't want to write a separate app for every platform. When we do this, though, we lose something: the OS developer's vision for how applications should operate and interact with the rest of the system. PWAs are a gap-filling technology that makes up for the lack of consistency between platforms, and that never sat well with me. They shouldn't need to exist, but they fill an important role that an ideal system would have designed out.
Rant over. I figured I'd label it as a rant at the top of the comment rather than waste readers' time.
We still need Android because at some point an app needs to interact with the real system, whether through a library or some kind of native plugin. Sure, we could accept a system that's proprietary all the way down, but that would be a dark world to live in, indeed. We could live with it, but we should care.