What an astounding achievement. In 6 years, this person has written not only a very well-designed microkernel, but a build system, UEFI bootloader, graphical shell, UI framework, and a browser engine.
The story of 10x developers among us is not a myth... if anything, it's understated.
Didn’t expect to see my project on the main page today ‘^^
Right now the build is broken, so you can’t test the full OS, but you can run individual apps with:

```bash
./skift.sh run --release <app-name>
```

on Linux or macOS.

To see all available apps:

```bash
ls ./src/apps
```

And to run the browser:

```bash
./skift.sh run --release vaev-browser -- <url-or-file>
```

The HTTP stack is super barebones, so it only supports `http://` (no HTTPS). It works with my site, but results may vary elsewhere.

Most of my time so far has gone into the styling and layout engine rather than networking.
Impressive achievements, congrats! You said that your microkernel is "influenced by Zircon". Did you also study other architectures, e.g. seL4, MINIX, or OpenQNX? What do you consider the important design choices in your microkernel? Is there a document where you go into this? Have you done performance measurements, i.e. which other microkernel designs do you think your kernel is comparable to in terms of performance?
Thanks! Skift is basically a patchwork of all the OS ideas I like. The UI takes inspiration from SwiftUI/Flutter, the microkernel is influenced by Zircon, and there are some Plan 9 ideas where everything is a URL. A few bits are probably inspired by NT and Darwin too, though I don’t remember exactly which.
Maybe adding some Xerox PARC, Oberon, NeXTSTEP/NeWS-style, or PowerShell ideas could also be interesting: how the shell, UI, and dynamically loaded code (or OS IPC) make the whole OS customizable. Just throwing another set of ideas into your bucket.
Hi monax, I would like to hear how you started the project. I am also currently trying to implement my own microkernel, with hopes of doing something similar to SkiftOS in order to learn OS fundamentals, but I don't know how to start. What are the first things to tackle when taking on such a project?
I don’t know what I can tell you. I think where you start and how you start don’t really matter; the important thing is to keep going. These kinds of projects are a lot of work, and as long as you keep making progress, you’ll eventually get to what you want.
Thank you for the reply. One more thing: did you study established code bases and/or books to guide you through the architecture process and initial implementation? If so, how do you take advantage of these resources without falling into the trap of "borrowing" an implementation while trying to build your vision?
What you did here is really cool and inspiring :).
I always paste this book here when hobby OSes appear. I wrote my own GUI OS in the 90s and I couldn't have done it without this. Copies available on your usual shadow library I would imagine...

https://us.amazon.com/Developing-32-Bit-Operating-System-Cd-...
Wow, you did this yourself?! This is just wow. As a C/C++ developer I know what goes into creating an OS, but at most I could come up with the idea; actually writing all of this myself, I have no words.
What ideas do you employ around security? Do apps have full access to memory? To hardware? Is there a permissions system? Sorry I'm not that familiar with how microkernels work.
Apps don’t get full access to memory or hardware. The kernel only maps what they’re allowed to see. Drivers live in user space, and apps talk to them through capabilities (handles you can pass around). There’s no ambient authority: you only get access if you’ve been given the key.
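If it helps, here’s a toy sketch of the idea (illustrative only, hypothetical names, not the actual Skift interfaces): a task can only act on a resource it holds a handle to, and there’s no open-by-name fallback to sneak around that.

```cpp
// Toy model of capability-style access: handles are the only way in.
#include <cstdio>
#include <map>
#include <string>

struct Handle { int id; };                // opaque token, useless to forge

class Kernel {
    std::map<int, std::string> resources; // id -> backing resource
    int next = 1;
public:
    Handle grant(std::string resource) {  // the kernel hands out a capability
        resources[next] = std::move(resource);
        return Handle{next++};
    }
    bool write(Handle h, const std::string& data) {
        auto it = resources.find(h.id);   // no handle, no access; there is
        if (it == resources.end())        // no global open-by-path fallback
            return false;
        std::printf("write to %s: %s\n", it->second.c_str(), data.c_str());
        return true;
    }
};

int main() {
    Kernel k;
    Handle disk = k.grant("disk0");       // this app was given the key...
    k.write(disk, "hello");               // ...so this succeeds
    k.write(Handle{42}, "sneaky");        // forged handle: denied
}
```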
What about filesystem access rights? Does any application have full access to all the user's files, or only to files belonging to that particular application?
Hmm... what about wider hardware support? How difficult would it be to port/adapt libre drivers from other OSes (Linux comes to mind), considering SkiftOS is a microkernel? :)
Kudos to the owner of this project. Well done. It is really modern C++ (with modules) and improvements on top. I see that it introduced some kind of GC and other high-level quality-of-life improvements. I noticed stuff like `co_try` and `.unwrap()` and `async`. Was it inspired by Rust? What plans do you have with this project?
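For readers who haven’t seen that style: it looks roughly like the sketch below. This is my own reconstruction of the general Result-plus-unwrap pattern, leaving out the coroutine plumbing behind `co_try`/`co_await`, and not actual code from the repo.

```cpp
// Rust-flavored error handling in C++: a value-or-error type with unwrap().
#include <cstdio>
#include <cstdlib>
#include <string>
#include <variant>

template <typename T>
struct Res {
    std::variant<T, std::string> v;       // value or error message
    bool ok() const { return v.index() == 0; }
    T unwrap() const {                    // crash loudly if the error was ignored
        if (!ok()) {
            std::fprintf(stderr, "unwrap on error: %s\n", std::get<1>(v).c_str());
            std::abort();
        }
        return std::get<0>(v);
    }
};

Res<int> parsePort(const std::string& s) {
    int p = std::atoi(s.c_str());
    if (p <= 0 || p > 65535)
        return {std::string{"bad port"}}; // error path, no exceptions involved
    return {p};
}

int main() {
    std::printf("%d\n", parsePort("8080").unwrap()); // prints 8080
    // parsePort("nope").unwrap();                   // would abort with "bad port"
}
```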
Thank you! We need more GPOS options. We have been entrenched in the main 3. I think there's lots of room for making something better. [misaligned incentives?]
I dove deep into the code base. Found lib-sdl. Found impl-efi. Found co_return and co_await's. Found try's. Found composable classes. Found my codebase to be a mess compared to the elegance that is this. We are not worthy...
Every modern commercial OS is a hybrid architecture these days. Generally subsystems move out of the kernel when performance testing shows the cost isn't too high and there's time/money to do so. Very little moves back in, but it does happen sometimes (e.g. kernel TLS acceleration).
There's not much to say about it because there's never been an actual disagreement in philosophy. Every OS designer knows it's better for stability and development velocity to have code run in userspace and they always did. The word microkernel came from academia, a place where you can get papers published by finding an idea, giving it a name and then taking it to an extreme. So most microkernels trace their lineage back to Mach or similar, but the core ideas of using "servers" linked by some decent RPC system can be found in most every OS. It's only a question of how far you push the concept.
As hardware got faster, one of the ways OS designers used it was to move code out of the kernel. In the 90s Microsoft obtained a competitive advantage by having the GUI system run in the kernel; eventually they moved it out into a userland server. Apple nowadays has a lot of filing systems run in userspace, but not the core APFS that's used for most stuff, which is still in-kernel. Android moved a lot of stuff out of the kernel with time too. It has to be taken on a case-by-case basis.
Nintendo's 3DS OS and Switch 1+2 OS are bespoke and strictly microkernel-based (with the exception of DMA-330 CoreLink DMA handling on the 3DS, if you want to count it as such), and these have been deployed on hundreds of millions of commercially sold devices.
Can you explain why TTY-PTY functionality hasn't been moved from the Linux kernel to userspace? Plan 9 did so in the 1990s or earlier (i.e., when Plan 9 was created, they initially put the functionality in userspace and left it there.)
I don't understand that, and I also don't understand why users who enjoy text-only interaction with computers are still relying on very old designs incorporating things like "line discipline", ANSI control sequences and TERMINFO databases. A large chunk of cruft was introduced for performance reasons in the 1970s and even the 1960s, but the performance demands of writing a grid of text to a screen are very easily handled by modern hardware, and I don't understand why the cruft hasn't been replaced with something simpler.
In other words, why do users who enjoy text-only interaction with computers still emulate hardware (namely, dedicated terminals) designed in the 1960s and 1970s that mostly just displays a rectangular grid of monospaced text and consequently would be easy to implement afresh using modern techniques?
There's a bunch of complexity in every terminal emulator, for example for doing cursor addressing. Network speeds are fast enough these days (and RAM is cheap enough) that cursor addressing is unnecessary: every update can just re-send the entire grid of text to be shown to the user.
Also, I think the protocol used in communication between the terminal and the computer is stateful for no reason that remains valid nowadays.
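To put rough numbers on the bandwidth argument, here’s a trivial sketch of such a stateless scheme (hypothetical, purely to illustrate): the only message type is the full grid, so the receiver keeps no cursor state at all. An 80x24 frame is under 2 KB, and even a 300x100 grid at 60 updates per second is only about 1.8 MB/s.

```cpp
// Stateless "re-send the whole screen" update: one message = one full frame.
#include <array>
#include <cstdio>

constexpr int ROWS = 24, COLS = 80;
using Grid = std::array<std::array<char, COLS>, ROWS>;

// The receiver just paints whatever arrives; no escape codes, no cursor moves.
void render(const Grid& g) {
    for (const auto& row : g) {
        std::fwrite(row.data(), 1, COLS, stdout);
        std::fputc('\n', stdout);
    }
}

int main() {
    Grid g{};
    for (auto& row : g) row.fill(' ');
    const char msg[] = "hello";
    for (int i = 0; msg[i]; ++i) g[0][i] = msg[i];
    render(g);   // the sender simply emits this for every update
}
```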
The usual reason for all of this is that programmer time is expensive (even if you're a volunteer, you have limited hours available), and not many people want to volunteer to wade through tons of legacy tech debt. That's especially true when the outcome will be an OS that behaves identically to before. A lot of stuff stays in the kernel because it's just hard to move it out.
Bear in mind, moving stuff out of the kernel is only really worth it if you can come up with a reasonable specification for how to solve a bunch of new problems. If you don't solve them it's easy to screw up and end up with a slower system yet no benefit.
Consider what happens if you are overenthusiastic and try to move your core filesystem into userspace. What does the OS do if your filesystem process segfaults? Probably it can't do anything at that point beyond block everything and try to restart it? But every process then lost its connection to the FS server and so all the file handles are suddenly invalidated, meaning every process crashes. You might as well just panic and reboot, so, it might as well stay in the kernel.

And what about security? GNU Hurd jumped on the microkernel bandwagon but ended up opening up security vulnerabilities "by design" because they didn't think it through deeply enough (in fairness, these issues are subtle). Having stuff be in the kernel simplifies your architecture tremendously and can avoid bugs as well as create them. People like to claim microkernels are inherently more secure but it's not the case unless you are very careful. So it's good to start monolithic and spin stuff out only when you're ready for the complexity that comes with that.
Linux also has the unusual issue that the kernel and userspace are developed independently, which is an obvious problem if you want to move functionality between the two. Windows and macOS can make assumptions about userspace that Linux doesn't.
If you want to improve terminals then the wrong place to start is fiddling with moving code between kernel and user space. The right place to start is with a brand new protocol that encodes what you like about text-only interaction and then try to get apps to adopt it or bridge old apps with libc shims etc.
What else does it have besides a beautiful UI? Network support? Sound? What file systems does it support? What about multiple users? What about application isolation?
It would be nice to have such information displayed somewhere on the site.
It’s a microkernel-based operating system. Mostly just a learning/fun side project for me. It implements something akin to the NixOS /store. Hardware, networking, sound, and the file system are all very barebones. Most of the work so far has been put into the framework, some example apps, and the browser.
Apologies for the offtopic rant, but why can't they (PalmSource/whoever) just open-source the BeOS codebase? What possible gain do they get from holding onto a 20-year-old codebase out of licensing spite? Honestly, for all the do-gooder talk in the VC community, this is the easiest thing to achieve: have a funding clause that if your company dies, all rights to unfinished works go to the investors, and by charter open-source them for the benefit of other startups. We could have had greatness many times over. rant over!
Awesome! I’ve been waiting for this feature since 2020, and finally having it working is so cool. I haven’t migrated all the code yet, but it’s heading in the right direction.
Very impressive!
Wild.
https://serenityos.org/
As a Norwegian, the name of this system and those components sound Danish (Skift, Karm, Opstart) and Danish-inspired (Hjert). Am I right? :)
Overall it looks interesting, all the best.
contact: your e-mail
skills: project website
and you'd get hired in a ton of places.
I'm curious, how come the app I just compiled works on macOS?
Also, why do OS devs seem to have a thing for making browsers? Shouldn't browsers be mostly agnostic to the OS?
The UI looks nice :)
Looking forward to seeing it included in the next CCC CTF, like SerenityOS [0].
[0] https://2019.ctf.link/internal/challenge/1fef0346-a1de-4aa4-...
I am amazed that you also managed to write a browser engine!
The modules... :chefs-kiss:
Every *general-purpose* OS.
See https://en.m.wikipedia.org/wiki/ANSI_escape_code
I'm on macOS, and still no luck building the code. But anything which doesn't involve building a custom GCC easily gets my vote :)