It could be a fun exercise to add invisicaps to something like chibicc/slimcc.
It leaves room for experimentation with reference counting and variations on the invisible capability system which could provide memory savings at the expense of some extra indirection.
Fil-C is not memory safe under data races. The capability and pointer values tear under assignment, which means that if you get the wrong thread interleaving, you can access an object through a wrong pointer, causing arbitrary program misbehavior.
This limitation would be fine if Fil-C proponents (including its author) didn't try to shout down anyone pointing out this limitation.
Fil-C is one of the most underrated projects I've ever seen. All this "rewrite it in rust for safety" just sounds stupid when you can compile your C program completely memory safe.
So, a few things, some of which others have touched on:
1. Fil-C is slower and bigger. Noticeably so. If you were OK with slower and bigger, then the rewrite you should have considered wasn't to Rust in the last ten years but to Java or C# much earlier. That doesn't invalidate Fil-C's existence, but I want to point that out.
2. You're still writing C. If the program is finished or just occasionally doing a little bit of maintenance that's fine. I wrote C for most of my career, it's not a miserable language, and you are avoiding a rewrite. But if you're writing much new code Rust is just so much nicer. I stopped writing any C when I learned Rust.
3. This is runtime safety and you might need more. Rust gives you a bit more, often you can express at compile time things Fil-C would only have checked at runtime, but you might need everything and languages like WUFFS deliver that. WUFFS doesn't have runtime checks. It has proved to its satisfaction during compilation that your code is safe, so it can be executed at runtime in absolute safety. Your code might be wrong. Maybe your WUFFS GIF flipper actually makes frog GIFs purple instead of flipping them. But it can't crash, or execute x86 machine code hidden in the GIF, or whatever, that's the whole point.
Yes it's slower, but it works. It's being built by one single dad who focused on compatibility before speed.
I'm not convinced that tying the lifetimes into the type system is the correct way to do memory management. I've read too many articles about people being forced to refactor an entire codebase to implement a feature.
You initially said that rewriting in Rust "seems stupid" based on what Fil-C provides, and someone pointed out technical reasons why it still might be useful in some circumstances. It's great that a single dad is able to build Fil-C on his own, and you're certainly entitled to the opinion that you don't like the idea of lifetimes in the type system, but it's genuinely hard to tell if there's a specific technical point you're trying to make or you dislike Rust so much that you're interpreting someone disagreeing with you about whether Fil-C obviates it as somehow being a personal attack on the author.
> tying the lifetimes into the type system is the correct way to do memory management.
Type systems used to be THE sexy PL research topic for about twenty years or so, so all the programming languages innovation has been about doing everything with type systems.
I can tell you that it's not that he's setting aside speed -- the fact that it's as fast as it is is an achievement. But there is a degree of unavoidable overhead -- IIRC his goal is to get it down to 20-30% for most workloads, but beyond that you're running into the realities of runtime bounds checks, materializing the in-flight pointers, etc.
20% to 30% slower would be amazing for all the extra runtime work that is required in my limited understanding. This would be good enough for a whole lot of serious applications.
I think the core advantage of Fil-C (that Java and C# don't have) is that it moves the decision between security and performance to the user, not the programmer.
Imagine you're writing a library for, let's say, astronomical and orbital calculations. Writing it in Java means that it's always going to be slow. If you write it in C, NASA may decide to compile it with a normal compiler (because it won't ever be exposed to malicious inputs), while an astronomy website operator may use the Fil-C version for the extra security, at the cost of having to use slightly more computing resources, which are abundant on Earth.
This doesn't negate the advantages of Rust, which lets you get speed and performance at the same time.
It's not any slower or (proportionally) bigger compared to the experience you would have had 20 years ago running all sorts of utilities that happen to be the best candidates for Fil-C, and people got along just fine. How fast do ls and mkdir need to be?
By that logic (which I somewhat agree with), we should be rewriting all of these tools in C# or some similar natively compiled GC'd language. C and Rust both take on a ton of complexity to squeeze out the last 2x of speed, but if we no longer care about that, we should drop C in a heartbeat.
No, rewriting in any language is bad by itself. What Fil-C gives you is that you don’t need to rewrite old programs. You spend zero effort, and immediately your battle-tested program is memory safe and free of any potential future vulnerabilities. With Rust you spend man-years on a rewrite, and as a result you have a new untested program full of subtle bugs, which you need to spend another 10 years of real-world use to uncover.
I think the problem with this logic is that it views language performance on an absolute scale, whereas people actually care about it on a relative scale compared to how fast it could be.
If you tell your boss "We spent $1m on servers this month and that's as cheap as it's possible to be" he'll be like "ok fine". If you say "We spent $1m on servers this month but if we just disable this compiler security flag it could be $500k." ... you can guess what will happen.
(Counterpoint though: people use Python.)
But counter-counterpoint: Rust does so much more than preventing runtime memory errors. Even if Fil-C had no overhead (or I was using CHERI) I would still use Rust.
> than if you were using the equivalent tooling for Pascal, C, or Zig.
I think GP is talking about not-directly-related-to-safety things like sum types/pattern matching/traits/expressive type systems/etc. given the end of that paragraph. I don't think you can get "equivalent tooling" for such things the languages you list without raising interesting questions about what actually counts as Pascal/C/Zig.
I know what they were talking about. It was clearly intended to be a cheerfest for Rust.
> I don't think you can get "equivalent tooling" for such things the languages you list without raising interesting questions about what actually counts as Pascal/C/Zig.
I said builds. All of the languages I mentioned have "equivalent tooling" for that (i.e. compilers—to produce builds for the programs you choose to write in those languages).
> I said builds. All of the languages I mentioned have "equivalent tooling" for that (i.e. compilers—to produce builds for the programs you choose to write in those languages).
Oh, my mistake. Given the context I thought you were talking about some hypothetical tooling that gave you something approaching Rust's feature set.
None of these prevented Go from competing with Rust, and I'm guessing that Fil-C will be the better choice in some cases.
Rust has managed to establish itself as a player, but it's only the best choice for a limited set of projects, like some (but not all) browser code or kernel code. Go, C++, and C with Fil-C have solid advantages of their own.
To name two:
* idiomatic code is easier to write in any of these languages compared to Rust, because one can shortcut thinking about ownership. Idiomatic Rust code requires it.
* less effort needed to protect from supply-chain attacks
To handle supply chain attacks, you need to know where your code comes from. That is often not a given when working with languages where it is easier to copy and paste in code from random other projects.
I have seen so much stuff copied and pasted into projects in my life, it's not funny. Often it is undocumented where exactly the code comes from, which version it was taken from, how it was changed, and how to update it when something goes wrong.
When code is not copied and pasted, it is often rewritten from scratch (poorly).
Code sharing does have its benefits. So does making it obvious which exact code is shared and how to update it. Yes, you can overdo code sharing, but making code sharing hard at the tooling level does more to hide supply chain security issues than it does to prevent the problem.
I doubt that idiomatic code is easier to write in C once you switch to Fil-C. Your code will just get killed by the runtime whenever it does something it should not do (and the compiler does not catch). You need to think about that stuff when your runtime enforces it.
Indeed, though Rust solves two problems in one language: it offers the safety of managed languages without requiring any form of automatic resource management, even if reference-counted library types might additionally be used.
As my comment history reveals, I am more in the camp of having rewrites in Go (regardless of my opinion on its design), Java, C#, Haskell, OCaml, Lisp, Scheme,... Also following experiments like Cedar, Oberon, Singularity, Interlisp-D, StarLisp,....
However, you will never convince someone who opposes automatic resource management on ideological grounds.
Now, would someone like that embrace Fil-C, with its sandboxing and GC? Maybe not, unless pushed by a management-level decision.
They would probably rewrite in Rust, Zig, Odin,... if those are appealing to them, or be faced with OS vendors pushing hardware with SPARC ADI, CHERI, ARM MTE,... enabled.
> However, you will never convince someone who opposes automatic resource management on ideological grounds.
It's generally accepted that 'explicit is better than implicit' and what you want in the end is deterministic, machine checked resource management. Automatic resource management is a subset of machine checked resource management. There is a large, somewhat less explored space of possibility (for example seL4 lives in this space) where you have to manually write the resource declarations and either the compiler or some other static analysis checks your work.
The fundamental issue is that whether an address is "safe" is a runtime property. In some cases you can decide it at compile time, but not always. Forcing that decision during coding just handicaps yourself in order to be "safe", which you can do just as well in C (or mostly any language) if you want.
The issue with Fil-C is that it's runtime memory safety. You can still write memory-unsafe code, just now it is guaranteed to crash rather than being a potential vulnerability.
Guaranteed memory safety at compile time is clearly the better approach when you care about programs that are both functionally correct and memory safe. If I'm writing something that takes untrusted user input like a web API memory safety issues still end up as denial-of-service vulns. That's better, but it's still not great.
Not to disparage the Fil-C work, but the runtime approach has limitations.
I don't think that's a useful way of thinking about memory-safety - a C compiler that compiles any C program to `main { exit(-1); }` is completely memory-safe. It's easy to design a memory-safe language/compiler, the question is what compromises are being made to achieve it.
Other languages have runtime exceptions on out-of-bounds access, Fil-C has unrecoverable crashes. This makes it pretty unsuitable to a lot of use cases. In Go or Java (arbitrary examples) I can write a web service full of unsafe out-of-bounds array reads, any exception/panic raised is scoped to the specific malformed request and doesn't affect the overall process. A design that's impossible in Fil-C.
Then you run into the problem of infinite loops, which nothing can prevent (sans `main { exit(-1); }` or other forms of losing turing-completeness), and are worse than crashes - at least on crashes you can quickly restart the program (something something erlang).
try-catch isn't a particularly complete solution either if you have any code outside of it (at the very least, the catch arm) or if data can get preserved across iterations that can easily get messed up if left half-updated (say, caches, poisoned mutexes, stuck-borrowed refcells) so you'll likely want a full restart to work well too, and might even prefer it sometimes.
I don't think runtime error handling is impossible in Fil-C, at least in theory. But the use cases for that are fairly limited. Most errors like this are not anticipated, and if you did encounter them then there's little or nothing you can do useful in response. Furthermore, runtime handling to continue means code changes, thus coupling to the runtime environment. All of these things are bad. It is usually acceptable to fail fast and restart, or at least report the error.
I could have made Fil-C’s panic be a C++ exception if I had thought that it was a good idea. And then you could catch it if that’s what tickled your pickle
I just don’t like that design. It’s a matter of taste
That's actually not a bad idea, since apparently it can be used for whole operating systems. But certainly, if you started doing that, any code using the exception would need to be exclusive to Fil-C to benefit from that.
Why are you talking like this is black and white? Many things being compile time checkable is better than no things being compile time checkable. The existence of some thing in rust that can only be checked at runtime does not somehow make all the compile time checks that are possible irrelevant.
(Also, I think the commenter you're replying to just worded their comment inaccurately; code that crashes instead of violating memory safety is memory safe, a compilation error would just have been more useful than a runtime crash in most cases)
There are several ways to safely provide array bounds check hints to the Rust compiler; in fact there's a whole cookbook. But for many cases, yep, runtime check.
.get() will bounds check and the compiler will optimize that away if it can prove safety at compile time. That leaves you 3 options made available in Rust:
- Explicitly unsafe
- Runtime crash
- Runtime crash w/ compile time avoidance when possible
Reminds me of a commercial project I did for my old University department around 1994. The GUI was ambitious and written in Motif, which was a little buggy and leaked memory. So... I ended up catching any SEGVs, saving state, and restarting the process, with a short message in a popup telling the user to wait. Obviously not guaranteed, but surprisingly it mostly worked. With benefit of experience & hindsight, I should have just (considerably) simplified it: I had user-configurable dialogs creating widgets on the fly etc, none of which was really required.
It's an apple to non-existent-apple comparison. Fil-C can't handle it even with extra code because Fil-C provides no recovery mechanism.
I also don't think it's that niche a use case. It's one encountered by every web server or web client (scope exception to single connection/request). Or anything involving batch processing, something like "extract the text from these 10k PDFs on disk".
Sure, it's not implemented in Fil-C because it is very new and the point of it is to improve things without extensive rewrites.
Generally, I think one could want to recover from errors. But error recovery is something that needs to be designed in. You probably don't want to catch all errors, even in a loop handling requests for an application. If your application isn't designed to handle the same kinds of memory access issues as we're talking about here, the whole thing turns into non-existent-apples to non-existent-apples lol.
> All this "rewrite it in rust for safety" just sounds stupid when you can compile your C program completely memory safe.
All of the points about Rust were made in that context, and they've pushed back against it successfully enough that now you're trying to argue from the other side as if it disproves their point. No one here is saying that there's no point in having safer C code or that literally everything needs to get rewritten; they're just pointing out that yes, there is a concrete advantage that something in Rust has over something in C today even with Fil-C available.
There are many concrete disadvantages to writing things in Rust too, not to mention rewriting. But you are right, these are different solutions and they thus have different characteristics.
As for your "as if it disproves their point" claim: that's wrong. The fact is, the reply to a comment in a thread is not a reply to a different one. You are implicitly setting up a straw man like "See, you are saying there are NO advantages to using Rust over Fil-C" and I never said that at any point. I also didn't say that you said that there was no advantage to using Fil-C.
Rust also has run-time crash checks in the form of run-time array bounds checks that panic. So let us not pretend that Rust strictly catches everything at compile-time.
It’s true that, assuming all things equal, compile-time checks are better than run-time checks. But Rust is only practical for a subset of correct programs. Rust is terrible for things like games, where Rust simply cannot prove at compile time that usage is correct. And inability to prove correctness does NOT imply incorrectness.
I love Rust. I use it as much as I can. But it’s not the one true solution to all things.
Not trying to be a Rust advocate and I actually don't work in it personally.
But Rust provides both checked alternatives to indexed reads/writes (compile time safe returning Option<_>), and an exception recovery mechanism for out-of-bounds unsafe read/write. Fil-C only has one choice which is "crash immediately".
Same reason any turing complete language needs any constructs - to help the programmer and identify/block "unsafe" constructs.
Programming languages have always been more about what they don't let you do rather than what they do - and where that lies on the spectrum of blocking "Possibly Valid" constructs vs "Possibly Invalid".
Minor nitpick. Or confusion on my part. In the filc_malloc function the call to calloc doesn't seem to allocate enough memory to store an AllocationRecord for each location in visible_bytes. Should it be:
Regardless of what you consider a GC (let's not have that debate for the millionth time on the internet...), for the point I was trying to make, I was not including RC as a form of GC. And I don't think Fil-C relies solely on RC either.
Those using Managed C++, C++/CLI, Unreal C++, the group of WG21 folks that voted C++11 GC into the standard, or targeting WebAssembly (which runs on a managed runtime for all practical purposes).
Windows developers using COM, and WinRT, Apple developers using IO and Driver Kit.
Which only appears relevant if you disregard critical differences like this:
> The GCC garbage collector GGC is only invoked explicitly. In contrast with many other garbage collectors, it is not implicitly invoked by allocation routines when a lot of memory has been consumed. [1]
There are many C++ programmers and we are not the same!
My original foray into GCs was making real time ones, and the Fil-C GC is based on that work. I haven’t fully made it real time friendly (the few locks it has aren’t RT-friendly) but if I had more time I could make it give you hard guarantees.
It’s already fully concurrent and on the fly, so it won’t pause you.
Fil-C has two major downsides: it slows programs down and it doesn't interoperate with non-Fil-C code, not even libc. That second problem complicates using it on systems other than Linux (even BSDs and macOS) and integrating it with other safe languages.
I would never say it's impossible, and you've done some amazing work, but I do wonder if the second problem is feasibly surmountable. Setting aside cross-language interop, BYOlibc is not really tolerated on most systems. Linux is fairly unique here with its strongly compatible syscall ABI.
You're right that it's challenging. I don't think it's infeasible.
Here's why:
1. For the first year of Fil-C development, I was doing it on a Mac, and it worked fine. I had lots of stuff running. No GUI in that version, though.
2. You could give Fil-C an FFI to Yolo-C. It would look sort of like the FFIs that Java, Python, or Ruby do. So, it would be a bit annoying to bridge to native APIs, but not infeasible. I've chosen not to give Fil-C such an FFI (except a very limited FFI to assembly for constant time crypto) because I wanted to force myself to port the underlying libraries to Fil-C.
3. Apple could do a Fil-C build of their userland, and MS could do a Fil-C build of their userland. Not saying they will do it. But the feasibility of this is "just" a matter of certain humans making choices, not anything technical.
I think there's two main avenues for hardware acceleration: pointer provenance and garbage collection. The first dovetails with things like CHERI [1] but the second doesn't seem to be getting much hardware attention lately. It has been decades since Lisp Machines were made, and I'm not aware of too many other architectures with hardware-level GC support. There are more efficient ways to use the existing hardware for GC though, as e.g. Go has experimented with recently [2].
There are algorithms to align allocations and use metadata in unused pointer bits to encode object start addresses. That would allow Fil-C's shadow memory to be reduced to a tag bit per 8-byte word (like 32-bit CHERI), at the expense of more bit shuffling. But that shuffling could certainly be a candidate for hardware acceleration.
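The base-address recovery those algorithms rely on can be sketched with pure arithmetic. This is a toy construction of my own (not Fil-C's actual scheme), assuming a page-based size-class allocator where every object in a page has the same size:

```rust
// All objects in a page share one size class, so an object's start address
// can be recomputed from any interior pointer without per-pointer metadata.
const PAGE: usize = 4096;

fn object_base(addr: usize, size_class: usize) -> usize {
    let page_start = addr & !(PAGE - 1); // round down to the page boundary
    let offset = addr - page_start;
    // Round the in-page offset down to the start of its object.
    page_start + (offset / size_class) * size_class
}

fn main() {
    // Pretend a page at 0x10000 holds 48-byte objects; an interior pointer
    // at offset 130 falls inside the third object (which starts at offset 96).
    assert_eq!(object_base(0x10000 + 130, 48), 0x10000 + 96);
    // A pointer to an object's first byte maps to itself.
    assert_eq!(object_base(0x10000, 48), 0x10000);
}
```

With the base recoverable this way, the shadow metadata only has to record per-word tag bits rather than full object-start addresses, which is the saving the comment describes.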
There is a startup working on "Object Memory Addressing" (OMA) with tracing GC in hardware [1], and its model seems to map quite well to Fil-C's.
I have also seen a discussion on RISC-V's "sig-j" mailing list about possible hardware support for ZGC's pointer colours in upper pointer bits, so that it wouldn't have to occupy virtual memory bits — and space — for those.
However, I think that tagged pointers with reference counting GC could be a better choice for hardware acceleration than tracing GC.
The biggest performance bottleneck with RC in software are the many atomic counter updates, and I think those could instead be done transparently in parallel by a dedicated hardware unit.
Cycles would still have to be reclaimed by tracing but modern RC algorithms typically need to trace only small subsets of the object graph.
Fil-C is only good if one needs to recompile an existing C program and gain extra safety without caring much about the resulting performance. And even in that case I doubt it's useful, since existing code should already be well-tested and (almost) bug-free.
If new code needs to be written, there is no reason to use Fil-C, since better languages with built-in security mechanisms exist.
The performance hit comes from the extra work Fil-C does over regular C, so if your decompilation/recompilation process preserves semantics, that extra work (and thus the performance hit) will remain in the final product.
It makes more sense for new software to be written in Rust, rather than a full rewrite of existing C/C++ software to Rust in the same codebase.
Fil-C just does the job with existing software in C or C++ without an expensive and bug-riddled rewrite, and serves as a quick protection layer against the common memory corruption bugs found in those languages.
Even without Fil-C, I do not think it is clear that new software should be written in Rust. It seems to have a lot of fans, but IMHO it is overrated. If you need perfect memory safety, Rust has an advantage at the moment, but if you are not careful you trade this for much higher supply chain risks. And I believe the advantage in memory safety will disappear in the next few years due to improved tooling in C, simply by adding formal verification that proves safety automatically.
Memory safety is absolute table stakes when it comes to formal verification. You can't endow your code with meaningful semantics if you don't have some way of ensuring memory safety.
Indeed, and this is why people who care about this are also proving memory safety in C. The issue is that we do not have good open-source tooling that specifically focuses on formal verification of memory safety in C.
That's far from trivial in Rust because of 'unsafe' blocks. There are approaches to verifying unsafe code in Rust, but formally verifying a Rust program is still likely to be significantly more complex than formally verifying, say, a Java program.
(And before someone says it: no you don't only have to verify the small amount of code in 'unsafe' blocks. Memory safety errors can be caused by the interaction of safe and unsafe code.)
Fil-C is much slower; there is no free lunch. If you want a language to be fast and memory safe, you need to add restrictions that allow proper static analysis of the code.
This is yet another variant of the "fat pointers" technique, which has been implemented and rejected many times due to either insufficient security guarantees, inability to cross non-fat ABI boundaries, or the overhead it introduces.
> Upon freeing an unreachable AllocationRecord, call filc_free on it.
I think the intention was to say: before freeing an unreachable AR, free the memory pointed to by its visible_bytes and invisible_bytes fields.
Not some random dad, but a GC expert and former leader of the JavaScript VM team at Apple.
It sure does. Like making your build times slower (and bigger) than if you were using the equivalent tooling for Pascal, C, or Zig.
Fil-Qt: A Qt Base build with Fil-C experience (143 points, 3 months ago, 134 comments) https://news.ycombinator.com/item?id=46646080
Linux Sandboxes and Fil-C (343 points, 4 months ago, 156 comments) https://news.ycombinator.com/item?id=46259064
Ported freetype, fontconfig, harfbuzz, and graphite to Fil-C (67 points, 5 months ago, 56 comments) https://news.ycombinator.com/item?id=46090009
A Note on Fil-C (241 points, 5 months ago, 210 comments) https://news.ycombinator.com/item?id=45842494
Notes by djb on using Fil-C (365 points, 6 months ago, 246 comments) https://news.ycombinator.com/item?id=45788040
Fil-C: A memory-safe C implementation (283 points, 6 months ago, 135 comments) https://news.ycombinator.com/item?id=45735877
Fil's Unbelievable Garbage Collector (603 points, 7 months ago, 281 comments) https://news.ycombinator.com/item?id=45133938
Guaranteed memory safety at compile time is clearly the better approach when you care about programs that are both functionally correct and memory safe. If I'm writing something that takes untrusted user input, like a web API, memory safety issues still end up as denial-of-service vulns. That's better, but it's still not great.
Not to disparage the Fil-C work, but the runtime approach has limitations.
If it's guaranteed to crash, then it's memory-safe.
If you dislike that definition, then no mainstream language is memory-safe, since they all use crashes to handle out-of-bounds array accesses.
Other languages have runtime exceptions on out-of-bounds access, Fil-C has unrecoverable crashes. This makes it pretty unsuitable to a lot of use cases. In Go or Java (arbitrary examples) I can write a web service full of unsafe out-of-bounds array reads, any exception/panic raised is scoped to the specific malformed request and doesn't affect the overall process. A design that's impossible in Fil-C.
try-catch isn't a particularly complete solution either if you have any code outside of it (at the very least, the catch arm), or if data preserved across iterations can get messed up when left half-updated (say, caches, poisoned mutexes, stuck-borrowed RefCells). So you'll likely want full restarts to work well too, and might even prefer them sometimes.
I just don’t like that design. It’s a matter of taste
(Also, I think the commenter you're replying to just worded their comment inaccurately: code that crashes instead of violating memory safety is memory-safe; a compilation error would just have been more useful than a runtime crash in most cases.)
https://play.rust-lang.org/?version=stable&mode=debug&editio...
- Explicitly unsafe
- Runtime crash
- Runtime crash w/ compile-time avoidance when possible
Catch the panic & unwind, safe program execution continues. Fundamentally impossible in Fil-C.
I also don't think it's that niche a use case. It's one encountered by every web server or web client (scope exception to single connection/request). Or anything involving batch processing, something like "extract the text from these 10k PDFs on disk".
Generally, I think one could want to recover from errors. But error recovery is something that needs to be designed in. You probably don't want to catch all errors, even in a loop handling requests for an application. If your application isn't designed to handle the same kinds of memory access issues as we're talking about here, the whole thing turns into non-existent-apples to non-existent-apples lol.
> All this "rewrite it in rust for safety" just sounds stupid when you can compile your C program completely memory safe.
All of the points about Rust were made in that context, and they've pushed back against it successfully enough that now you're trying to argue from the other side as if it disproves their point. No one here is saying that there's no point in having safer C code or that literally everything needs to get rewritten; they're just pointing out that yes, there is a concrete advantage that something in Rust has over something in C today even with Fil-C available.
As for your "as if it disproves their point" claim: it's wrong. The fact is, the reply to one comment in a thread is not a reply to a different one. You are implicitly setting up a straw man, as if I had said "there are NO advantages to using Rust over Fil-C," and I never said that at any point. I also didn't say that you said there was no advantage to using Fil-C.
It’s true that, all else being equal, compile-time checks are better than run-time ones. I love Rust. But Rust is only practical for a subset of correct programs. Rust is terrible for things like games, where Rust simply cannot prove at compile time that usage is correct. And inability to prove correctness does NOT imply incorrectness.
I love Rust. I use it as much as I can. But it’s not the one true solution to all things.
But Rust provides both a checked alternative to indexed reads/writes (compile-time safe, returning Option<_>) and a recovery mechanism (catching the panic) for out-of-bounds indexed access. Fil-C only has one choice, which is "crash immediately".
Programming languages have always been more about what they don't let you do rather than what they do - and where that lies on the spectrum of blocking "Possibly Valid" constructs vs "Possibly Invalid".
And inability to prove incorrectness does NOT imply correctness. I think most Rust users don't understand either, because of the hype.
> "rewrite it in rust for safety" just sounds stupid
To be fair, Fil-C is quite a bit slower than Rust, and uses more memory.
On the other hand, Fil-C supports safe dynamic linking and is strictly safer than Rust.
It's a trade off, so do what you feel
I am the author of Fil-C
If you want to see my write-ups of how it works, start here: https://fil-c.org/how
When's the last time you told a C/C++ programmer you could add a garbage collector to their program, and saw their eyes light up?
And of course it's easy to think of lots of apps that heavily use those or another form of GC.
Windows developers using COM and WinRT; Apple developers using IOKit and DriverKit.
https://gcc.gnu.org/onlinedocs/gccint/Type-Information.html
The GCC garbage collector GGC is only invoked explicitly. In contrast with many other garbage collectors, it is not implicitly invoked by allocation routines when a lot of memory has been consumed. [1]
[1] https://gcc.gnu.org/onlinedocs/gccint/Invoking-the-garbage-c...
- Me. I'm a C++ programmer.
- Any C++ programmer who has added a GC to their C++ program. (Like the programmers who used the web browser you're using right now.)
- Folks who are already using Fil-C.
My original foray into GCs was making real time ones, and the Fil-C GC is based on that work. I haven’t fully made it real time friendly (the few locks it has aren’t RT-friendly) but if I had more time I could make it give you hard guarantees.
It’s already fully concurrent and on-the-fly, so it won’t pause you.
Here's why:
1. For the first year of Fil-C development, I was doing it on a Mac, and it worked fine. I had lots of stuff running. No GUI in that version, though.
2. You could give Fil-C an FFI to Yolo-C. It would look sort of like the FFIs that Java, Python, or Ruby do. So, it would be a bit annoying to bridge to native APIs, but not infeasible. I've chosen not to give Fil-C such an FFI (except a very limited FFI to assembly for constant time crypto) because I wanted to force myself to port the underlying libraries to Fil-C.
3. Apple could do a Fil-C build of their userland, and MS could do a Fil-C build of their userland. Not saying they will do it. But the feasibility of this is "just" a matter of certain humans making choices, not anything technical.
Interesting. How costly would hardware acceleration support for Fil-C code be?
[1]: https://en.wikipedia.org/wiki/Capability_Hardware_Enhanced_R...
[2]: https://go.dev/blog/greenteagc
There is a startup working on "Object Memory Addressing" (OMA) with tracing GC in hardware [1], and its model seems to map quite well to Fil-C's. I have also seen a discussion on RISC-V's "sig-j" mailing list about possible hardware support for ZGC's pointer colours in upper pointer bits, so that it wouldn't have to occupy virtual memory bits — and space — for those.
However, I think that tagged pointers with reference-counting GC could be a better choice for hardware acceleration than tracing GC. The biggest performance bottleneck with RC in software is the many atomic counter updates, and I think those could instead be done transparently in parallel by a dedicated hardware unit. Cycles would still have to be reclaimed by tracing, but modern RC algorithms typically need to trace only small subsets of the object graph.
[1]: "Two Paths to Memory Safety: CHERI and OMA" https://news.ycombinator.com/item?id=45566660
If new code needs to be written, there is no reason to use Fil-C, since better languages with built-in safety mechanisms exist.
Fil-C just does the job with existing software in C or C++ without an expensive and bug-riddled rewrite, and serves as a quick protection layer against the common memory corruption bugs found in those languages.
(And before someone says it: no you don't only have to verify the small amount of code in 'unsafe' blocks. Memory safety errors can be caused by the interaction of safe and unsafe code.)
"simply" and "formal verification" are usually oxymorons, never mind "automatically"
I love Fil-C. It's underrated. Not the same niche as Rust or Ada.
Also, Fil-C isn’t just fat pointers.