
sxan

@sxan@midwest.social

🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍

This profile is from a federated server and may be incomplete. View on the original instance

sxan ,
@sxan@midwest.social avatar

That is, unfortunately, our (USA) Reichstag fire. 🙁

sxan ,

I thought it was also disputed who actually started the Reichstag fire.

sxan , (Edited)

Anyway, this is only relevant if you're writing solely for yourself. It's your poor users who suffer for your expediency.

sxan ,

Also fake because zombie processes.

I once spent several angry hours researching zombie processes in a quest to kill them by any means necessary. I ended up rebooting, which was a sort of baby-with-the-bathwater solution.

Zombie processes still infuriate me. While I'm not a Rust developer, nor do I particularly care about the language, I'm eagerly watching Redox OS, as it looks like the microkernel OS with the best chance of making it to useful desktop status. A good microkernel would address so, so many of the worst aspects of Linux.

sxan ,

ORLY.

Do explain how you can have microkernel features on Linux. Explain, please, how I can kill the filesystem module and restart it when it bugs out, and how I can prevent hard kernel crashes when a bug in a kernel module causes a lock-up. I'm really interested in hearing how I can upgrade a kernel module with a patch without forcing a reboot; that'd really help on Arch, where minor, patch-level kernel updates force reboots multiple times a week (without locking me into an -lts kernel that isn't getting security patches).

I'd love to hear how monolithic kernels have solved these.

sxan ,

This particular issue could be solved in most cases in a monolithic kernel. That it isn't is by design. But it's a terrible design decision, because it can lead to situations where (for example) a zombie process locks a mount point and prevents unmounting, because the kernel insists the mount is still in use by the zombie process - a process the kernel provides no mechanism for terminating.

It's provable via experiment in Linux using fuse filesystems. Create a program that is guaranteed to become a zombie. Run it within a filesystem mounted by an in-kernel module, like a remote NFS mount. You now have a permanently mounted NFS mount point. Now mount something using fuse, say a remote WebDAV share. Run the same zombie process there. Again, the mount point is unmountable. Now, kill the fuse process itself. The mount point will be unmounted and disappear.
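In case anyone wants to try this at home, here's a minimal sketch of a guaranteed-zombie program (my own illustration; use whatever NFS or WebDAV mount you have handy):

```c
/* zombie.c - fork a child that exits immediately; the parent never
 * calls wait(), so the kernel keeps the child in the process table
 * as a zombie. Run it from a directory inside the mount under test. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0)
        _exit(0); /* child: die instantly; the parent never reaps it */
    printf("child %d is now a zombie; try: ps -o pid,stat,comm %d\n",
           (int)pid, (int)pid);
    for (;;)
        pause(); /* parent: sleep forever, never calling wait() */
}
```

The child shows up in ps with a Z (defunct) state, and no signal you send the child itself will clear it; only reaping by the parent - or killing the parent, so init can reap it - makes it go away.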

This is exactly how microkernels work. Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module. And in a well-designed microkernel, even processes using the module can in many cases continue functioning as if the restarted kernel module never changed.

Fuse is really close to the capabilities of microkernels, except it's only for filesystems. In a microkernel, nearly everything is like fuse. A Linux kernel compiled such that everything is a loadable module, and not hard-linked into the kernel, is close to a microkernel - except without the benefits of actually being a microkernel.
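To make the comparison concrete, here's a sketch of the userspace side: a single-file, read-only FUSE filesystem, patterned on the shape of libfuse's classic hello example (assumes libfuse 3; the names here are mine). The entire "filesystem driver" is an ordinary process you can kill, restart, or upgrade without rebooting - exactly the property a microkernel gives every driver.

```c
/* hellofs.c - a one-file read-only FUSE filesystem.
 * Build: gcc hellofs.c $(pkg-config fuse3 --cflags --libs) -o hellofs
 * Run:   ./hellofs -f /tmp/hello   (-f keeps it in the foreground) */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

static const char *msg = "served entirely from userspace\n";

/* Report "/" as a directory and "/hello" as a read-only file. */
static int fs_getattr(const char *path, struct stat *st,
                      struct fuse_file_info *fi) {
    (void)fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/hello") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(msg);
    } else {
        return -ENOENT;
    }
    return 0;
}

/* The root directory contains exactly one entry: "hello". */
static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                      off_t off, struct fuse_file_info *fi,
                      enum fuse_readdir_flags flags) {
    (void)off; (void)fi; (void)flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    fill(buf, ".", NULL, 0, 0);
    fill(buf, "..", NULL, 0, 0);
    fill(buf, "hello", NULL, 0, 0);
    return 0;
}

/* Serve reads out of the static message buffer. */
static int fs_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi) {
    (void)fi;
    size_t len = strlen(msg);
    if (strcmp(path, "/hello") != 0)
        return -ENOENT;
    if ((size_t)off >= len)
        return 0;
    if ((size_t)off + size > len)
        size = len - (size_t)off;
    memcpy(buf, msg + off, size);
    return (int)size;
}

static const struct fuse_operations ops = {
    .getattr = fs_getattr,
    .readdir = fs_readdir,
    .read    = fs_read,
};

int main(int argc, char *argv[]) {
    return fuse_main(argc, argv, &ops, NULL);
}
```

Mount it, `cat /tmp/hello/hello`, then kill the process: the mount vanishes and the rest of the system doesn't care.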

Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.

sxan ,

Zombies are usually tied to some resource use. In microkernels, you have more control over the resources.

sxan ,

> I thought the point of lts kernels is they still get patches despite being old.

Well, yeah, you're right. My shameful admission is that I'm not using LTS because I wanted to play with bcachefs and it's not in LTS. Maybe there's a package for LTS now that'd let me at it, but, still. It's a bad excuse, but there you go.

I think a lot of people also don't realize that most of the performance issues have been worked around, and if Redox OS is paying attention to advances in the microkernel field and is not trying to solve every problem in isolation, they could end up with close to monolithic-kernel performance. Certainly close to Windows performance, and that seems good enough for industry.

I don't think microkernels will ever compete in the HPC field, but I highly doubt anyone complaining about the performance penalty of microkernel architecture would actually notice a difference.

sxan ,

That's my point. If you're l33t gaming, what matters is your GPU anyway. If HPC, sure, use whatever architecture gets you the most bang for your buck, which is probably going to be a monolithic kernel (but, maybe not - nanokernels allow processes basically direct access to hardware, with minimal abstraction, like X11 DRI, and might allow even faster solutions to be programmed). For most people, the slight improvement in performance of a monolithic kernel over a modern, optimized microkernel design will probably not be noticeable.

I keep getting people telling me monolithic kernels are way faster, dude, but most are just parroting the state of things decades ago and ignoring many of the advancements microkernels like L4 have made in the intervening years. But I need to go find links and put together references before I counter-claim, and right now I have other things I'd rather be doing.

sxan , (Edited)

> As I said, we live in a post-Meltdown world. Microkernels are MUCH slower.

I've heard this from several people, but you're the lucky number: I'd now heard it enough times that I finally bothered to gather some references to refute it.

First, this is an argument that derives from first-generation microkernels, and in particular MINIX, which - as a teaching OS - never tried to play the benchmark game. It's been repeated, like dogma, through several iterations of microkernels which have, in the interim, largely erased those performance leads of monolithic kernels. One paper notes that, once the working code exceeds the L2 cache size, there is marginal advantage to the monolithic structure. A second paper running benchmarks on L4Linux vs Linux concluded that applications ran only about 5%-10% slower than on the monolithic Linux kernel.

This is not MUCH slower, and - indeed - unless you're doing HPC applications, is close enough to be unnoticeable.

Edit: I was originally going to omit this, as it's propaganda from a vested interest and includes no concrete numbers, but this blog entry from a product manager at QNX specifically mentions using microkernels in HPC problem spaces, which I thought was interesting, so I'm including it post-facto.

sxan ,

And I was agreeing with you! I was leaning on your sympathetic shoulder, while I suffered the slings and arrows of outrageously misinformed miscreants, and commiserating to your compatriotic ear.

sxan ,

Fun fact: Fuchsia, Google's in-development OS that's been floated as an eventual successor to Android's Linux kernel, is built on a microkernel (Zircon). So even Google acknowledges the superiority of microkernels.

sxan ,

MMME > CCCT

Chording Causes Carpal Tunnel

Modal Makes Magnificent Experience

sxan ,

You can't install FireDragon on any other Linux distribution?

sxan ,

I open source all of my projects. Most people I encounter are reasonably polite, but of course even my most popular project is used by a tiny fraction of the number of Gnome users. In any case, I long ago stopped feeling beholden to users. Often they're doing me favors, finding issues I haven't, and some even provide useful analysis that saves me work. A few provide contributions. But at the end of the day, I do what I do for me, and anyone else who benefits from it provides a small dose of dopamine from being useful.

I regularly fork projects and implement changes I want; I also file PRs, but when the upstream author has different opinions about it, requiring work I don't think is necessary, I just let it go and maintain my own fork.

This is not Ideal Open Software Development, with many people contributing to a common goal. It's fractured and selfish. But done the other way, it becomes work, and nobody's paying me for this, so I give no fucks.

My mental health improved drastically once I stopped emotionally caring about the opinions of my users. I still care about the technicalities, but only insofar as they affect me or I deem them a superior solution. Key to this is not engaging emotionally; if I'm not interested in working on something, I just say so: I have other priorities, but am happy to review and maybe accept PRs.

sxan ,

Yeah, that's fair. It's your work; you have no moral obligation to share it. Despite what the commies might say.

https://static1.thegamerimages.com/wordpress/wp-content/uploads/2024/04/fallout-walton-gogins.jpeg?q=49&fit=contain&w=480&h=300&dpr=2

sxan ,

The part about negotiation is a bit off-track.

On one end, in the kernel, there's a big array of pixels: the picture that gets drawn on your monitor (or monitors). On the other end are a bunch of programs that want to draw stuff, like pictures of your friends and web pages. In between is software that decides how the stuff those programs want to draw gets put into the pixel array. This is Wayland; it was written to replace Xorg, which is what did that job for decades prior to Wayland.
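If you want to see that big array of pixels for yourself, here's a sketch that writes into it directly through Linux's legacy fbdev interface (an assumption-laden illustration: it expects /dev/fb0 to exist in a 32-bits-per-pixel mode, and you should run it from a text console, since Xorg or a Wayland compositor normally owns the display):

```c
/* fbgray.c - paint the kernel's framebuffer (the pixel array) gray.
 * Build: gcc fbgray.c -o fbgray; run as root from a text VT. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        perror("ioctl");
        return 1;
    }
    if (var.bits_per_pixel != 32) {
        fprintf(stderr, "expected a 32-bpp mode, got %u\n",
                var.bits_per_pixel);
        return 1;
    }

    /* Map the kernel's pixel array into this process's address space. */
    size_t len = (size_t)fix.line_length * var.yres;
    uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Each row is line_length bytes; each pixel is 4 bytes (XRGB). */
    for (uint32_t y = 0; y < var.yres; y++)
        for (uint32_t x = 0; x < var.xres; x++)
            *(uint32_t *)(fb + (size_t)y * fix.line_length + (size_t)x * 4)
                = 0x00808080; /* mid-gray */

    munmap(fb, len);
    close(fd);
    return 0;
}
```

Xorg and Wayland compositors are doing a far fancier version of that last loop (through GPU drivers rather than fbdev), plus deciding whose pixels go where.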

If you understand the concepts of Xorg and window managers, Wayland + a compositor = Xorg + a window manager. Wayland delegates a lot of the work to compositors, making Wayland itself simpler and easier to maintain (and compositors more complex and harder). But together, they all do basically the same job. If one of the compositors implemented a network protocol, then you could declare equivalency.

sxan ,

Arch is only the larval stage. When a Linuxite consumes enough CLI, they metamorphose into one of two adult forms: a Void user, or a NixOS user. As these two adult forms are incompatible, this is a rare case of species divergence within a life cycle. Even more oddly, like the axolotl, many Arch users never leave the larval stage, and continue living comfortably in their ecological niche.

sxan ,

I discovered that EndeavourOS satisfied that for me, without my having to give up Arch. And snapper+btrfs-grub has eliminated any interest in messing about with the new line of immutable systems. The only tempting distro I might spend time in is Chimera Linux (link, b/c of an unfortunate naming conflict), which (a little hilariously) is an attempt to make a Linux distro that's purely GNU-free. Chimera also runs dinit instead of systemd, and that's interesting.

Anyway, there are a couple of options that let a user stay in Arch but make things less... fussy.

sxan ,

Eh, it's all heretics all the way down.

Pick a preference. Go on. Any preference at all. Coffee? Great! All the coffee snobs agree that Starbucks is shit coffee. Then the pour-over gals and the espresso makers go home and wash their hands. Then the 40/60 pour-over gals meet with the 30/70 pour-over guys and agree that the espresso makers suck; then THEY go home and wash their hands. Then the 30/70 Japanese-filter guys meet with the 30/70 German-filter guys and agree that the 40/60 gals stink, and so on, ad nauseam.

No group hates outsiders more than they hate heretics within their own group.
