The C-style interface of wasm is pretty limiting when designing higher level interfaces, which is why wasm-bindgen is required in the first place.
Luckily, Firefox is pushing an early proposal to expose all the web APIs directly to wasm through a higher level interface based on the wasm component-model proposal. See https://hacks.mozilla.org/2026/02/making-webassembly-a-first...
Tbh, apart from the demonstrated performance improvement for string marshalling I really fail to see how integrating the WASM Component Model into the browser is a good thing. It's a lot of complexity for a single niche use case (less overhead for string conversion - but passing tons of strings across the boundary is really only needed for one use case: when doing a 1:1 mapping of the DOM API).
I really doubt that web APIs like WebGPU or WebGL would see similar performance improvements, and it's unclear how the much more critical performance problems for accessing WebGPU from WASM would be solved by the WASM Component Model (e.g. WebGPU maps WGPUBuffer content into separate JS ArrayBuffer objects which cannot be accessed directly from WASM without copying the data in and out of the WASM heap).
1. It’s not just one use case. I’m working on a product which makes heavy use of IndexedDB from JavaScript for offline access. We’d love to rewrite this code in Rust & WebAssembly, but performance might get worse if we did, because so many FFI calls would be made, marshalling strings from wasm -> js -> c++ (browser). Calling IndexedDB from wasm directly would be way more efficient for us, too!
2. It’s horrible needing so much JS glue code to do anything in wasm. I know most people don’t look at it, but JS glue code is a total waste of everyone’s time when you’re using wasm. It’s complex to generate. It can be buggy. It needs to be downloaded and parsed by the browser. And it’s slow. Like, it’s pure overhead. There’s simply no reason that this glue needs to exist at all. Wasm should be able to just talk to the browser directly.
I’d love to be able to have a <script src=foo.wasm> on my page and make websites like that. JS is a great language, but there’s no reason to make developers bridge everything through JS from other languages. Nobody should be required to learn and use JavaScript to make web software using webassembly.
> Wasm should be able to just talk to the browser directly.
Web APIs are designed for JavaScript, though, which makes this hard. For example, APIs that receive or return JS Typed Arrays, or objects with flags, etc. - wasm can't operate on those things.
You can add a complete new set of APIs which are lower-level, but that would be a lot of new surface area and a lot of new security risk. NaCl did this back in the day, and WASI is another option that would have similar concerns.
There might be a middle ground with some automatic conversion between JS objects and wasm. Say that when a Web API returns a Typed Array, it would be copied into wasm's linear memory. But that copy may make this actually slower than JS.
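A rough sketch of what that copy looks like in the JS glue today (the memory layout and `ptr` are made up for illustration; a real module would get `ptr` from its allocator export):

```javascript
// A Typed Array coming back from a Web API has to be copied into the
// module's linear memory before wasm code can touch it.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
const heap = new Uint8Array(memory.buffer);

// Pretend this came back from fetch()/WebGPU/IndexedDB etc.
const fromWebApi = Uint8Array.from([10, 20, 30]);

const ptr = 64; // hypothetical wasm-side allocation
heap.set(fromWebApi, ptr); // the copy under discussion

// wasm code would now read heap[ptr..ptr+3]; reading it back from JS:
console.log(heap[ptr], heap[ptr + 1], heap[ptr + 2]); // 10 20 30
```

Whether an automatic conversion could ever beat the JS path depends on exactly this copy being cheaper than the engine's existing fast paths.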
Another option is to give wasm a way to operate on JS objects without copying. Wasm has GC support now so that is possible! But it would not easily help non-GC languages like Rust and C++.
Anyhow, these are the sort of reasons that previous proposals here didn't pan out, like Wasm Interface Types and Wasm WebIDL bindings. But hopefully we can improve things here!
At least the DOM APIs are ostensibly designed to work in multiple languages, and are used by XML parsers in many languages.
Some of the newer Web APIs would be difficult to port. But the majority of APIs have quite straightforward equivalents in any language with a defined struct type (which you admittedly do have to define for WASM; whether that interface ends up being zero-copy depends on the language you are compiling to wasm)
There is no solution without tradeoffs here, but the only reason JS-glue-code is winning out is because the complexity is moved from browsers to each language or framework that wants to work with wasm
> There is no solution without tradeoffs here, but the only reason JS-glue-code is winning out is because the complexity is moved from browsers to each language or framework that wants to work with wasm
Correct, but this has been one of wasm's guiding principles since the start: move complexity from browsers to toolchains.
Wasm is simple to optimize in browsers, far simpler than JavaScript. It does require a lot more toolchain work! But that avoids browser exploits.
This is the reason we don't support the wasm text format in browsers, or wasm-ld, or wasm-opt. All those things would make toolchains easier to develop.
You are right that this sometimes causes duplicate effort among toolchains, each one needing to do the same thing, and that is annoying. But we could also share that effort, and we already do in things like LLVM, wasm-ld, wasm-opt, etc.
Maybe we could share the effort of making JS bindings as well. In fact there is a JS polyfill for the component model, which does exactly that.
> ... I really fail to see how integrating the WASM Component Model into the browser is a good thing.
One of the common (mis-)understandings about WASM when it was released was that people could write web applications "in any language" that could output WASM (LLVM-based languages, for example).
That was clear over-selling of WASM, as in reality people still needed to additionally learn JS/TS to make things work.
So for the many backend devs who completely abhor JS/TS (there are many), trying out WASM and then finding it was bullshit has not been positive.
If WASM is made a first class browser citizen, and the requirement for JS/TS truly goes away, then I'd expect a lot of positive web application development to happen by those experienced devs who abhor JS/TS.
That being said, that viewpoint is from prior to AI becoming reasonably capable. That may change the balance of things somewhat (tbd).
It's not just about string performance, it's about making wasm a first class experience on the web. That includes performance improvements - because you don't need to wake up the js engine - but it's a lot more than that. Including much better dev-ex, which is not great as you can see in the OP.
It would also enable combining different languages with high-level interfaces rather than having to drop down to c-style interfaces for everything.
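For flavor, the component model's WIT language is exactly such a high-level, language-neutral interface description; this fragment is hypothetical (names made up), just to show what both sides would generate bindings from instead of a flat C-style ABI:

```wit
// Hypothetical WIT definition: each language's toolchain compiles
// bindings from this contract rather than hand-rolling pointer/length
// conventions across the boundary.
interface kv-store {
    get: func(key: string) -> option<list<u8>>;
    set: func(key: string, value: list<u8>);
}
```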
> Including much better dev-ex, which is not great as you can see in the OP.
IMHO the developer experience should be provided by compiler toolchains like Emscripten or the Rust compiler, and by their (standard) libraries. I.e. keep the complexity out of the browser; the right place for binding-layer complexity is the toolchains, at compile time. The browser is already complex enough as it is and should be radically stripped down instead of throwing more stuff onto the pile.
Web APIs are designed from the ground up for Javascript, and no amount of 'hidden magic' can change that. The component model just moves the binding shim to a place inside the browser where it isn't accessible, so it will be even harder to investigate and fix performance problems.
The dev-ex issues largely occur at the boundaries between environments. In the browser, that's often a JS-Rust boundary or a JS-C++ boundary. On embedded runtimes, it could be a Go-Rust boundary, or a Zig-Python boundary. To bridge every possible X-Y boundary for N different environments, you need N^2 different glue systems.
You're probably already thinking "obviously we just need a hub-and-spoke architecture where there's a common intermediate representation for all these types". That kind of architecture means that each environment only has to worry about conversions to and from the common representation, then you can connect any environment to any other environment, and you only need 2N glue systems instead of N^2. Effectively, you'd be formalizing the prior system of bespoke glue code generation into a standardized interface for interoperation.
That's the component model.
How would a compiler toolchain ship a debugger for webassembly? It’s kind of impossible. The only place for a debugger is inside the browser. Just like we do now with dev tools, JavaScript, typescript and webassembly languages.
> The browser is already complex enough as it is and should be radically stripped down
I’d love this too, but I think this ship has sailed. I think the web’s cardinal sin is trying to be a document platform and an application platform at the same time. If I could wave a magic wand, I’d split those two use cases back out. Documents shouldn’t need JavaScript. They definitely don’t need wasm. And applications probably shouldn’t have the URL bar and back and forward buttons. Navigation should be up to the developers themselves. If apps were invented today, they should probably be done in pure wasm.
> Web APIs are designed from the ground up for Javascript
Web APIs are already almost all bridged into rust via websys. The APIs are more awkward than we’d like. But they all work today.
FWIW, it's possible to set up an IDE-like debugging environment with VSCode and a couple of plugins [1]. E.g. I can press F5 in VSCode, this starts the debuggee in Chrome and I can step-debug in VSCode exactly like debugging a native program, and it's even possible to seamlessly step into JS and back. And it actually starts a debug session faster than a native macOS UI program via lldb.
> FWIW I prefer `futures::lock::Mutex` on std, or `async_lock::Mutex` under no_std.
Async mutexes in Rust have so many footguns that I've started to consider them a code smell. See for example the one the Oxide project ran into [1]. IME there are relatively few cases where it makes sense to want to await a mutex asynchronously, and approximately none where it makes sense to hold a mutex over a yield point, which is why a lot of people turn to async mutexes despite advice to the contrary [2]. They are essentially incompatible with structured concurrency, but Rust async in general really wants to be structured in order to be able to play nicely with the borrow checker.
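To illustrate the commonly recommended alternative (a plain sync mutex whose guard is dropped before any yield point), here's a minimal sketch; the toy `block_on` exists only so this runs on plain rustc without any crates, and all names are made up:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Mutex;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy single-future executor, just enough to drive the example.
fn block_on<F: Future>(fut: F) -> F::Output {
    unsafe fn clone(_: *const ()) -> RawWaker { raw_waker() }
    unsafe fn noop(_: *const ()) {}
    fn raw_waker() -> RawWaker {
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// Stand-in for real async I/O.
async fn fetch_len(s: &str) -> usize {
    s.len()
}

// The shape the advice recommends: lock in a narrow scope, copy out
// what you need, and drop the guard *before* the yield point.
async fn update(shared: &Mutex<String>) -> usize {
    let snapshot = {
        let guard = shared.lock().unwrap();
        guard.clone()
    }; // guard dropped here, so no lock is held across the .await
    fetch_len(&snapshot).await
}

fn main() {
    let shared = Mutex::new(String::from("hello"));
    println!("{}", block_on(update(&shared)));
}
```

Because the guard never lives across an `.await`, no other task can be blocked on the lock while this one is suspended, which is exactly the deadlock class the async mutex invites.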
`shadow-rs` [3] bears mentioning as a prebuilt way to do some of the build info collection mentioned later in the post.
Author of the article here! I've actually come to agree with you since writing that article. I'm actually not a fan of mutexes in general and miss having things like TVars from my Haskell days. Just to shout out a deadlock freedom project that I'm not involved in and haven't put in production, but would like to see more exploration in this direction: https://crates.io/crates/happylock
I try to avoid tokio in its entirety. There are some embedded use cases with embassy that make sense to me, but I have never needed to write something that benefited from more threads than I had cores to give it. I don't deny those use cases exist, I just don't run into them. I typically spend more time computing than on i/o but so many solid libraries have abandoned their non-async branches I still have to use it more often than I'd like. I get this is a bit of a whine, I could fork those branches if I cared that much. But complaining is easier.
I think the dream is executor-independence. You shouldn't really need to care what executor you or your library consumer is using, and the Rust auto traits are designed so that you can in theory be generic over it. There are a few speed bumps that still make that harder than it really should be though.
I'm not sure what you mean by ‘more threads than I had cores’, though. Unless you tell it otherwise, Tokio will default to one thread per core on the machine.
When you are compute bound, threads are just better. Async shines when you are i/o bound and need to wait on a lot of i/o concurrently. I'm usually compute bound, and I've never needed to wait on more i/o connections than I could handle with threads. Typically all the output and input IP addresses are known in advance and in the Helm chart. And countable on one hand.
I agree. Async makes sense for Embassy and WASM. I'm skeptical that it really ever makes sense for performance, even if it is technically faster in some extreme cases.
This reads like the same problems as using Emscripten's embind for automatically generating C++ <=> JS bindings, and my advice would be "just don't do it".
It adds an incredible amount of complexity and bloat compared to writing a proper hybrid C++/JS application, where non-trivial work happens in handwritten JS functions instead of hopping across the JS/WASM boundary for every little setter/getter. It needs experience though to find just the right balance between what code should go on either side of the boundary.
Alternatively tunnel through a properly designed C API instead of trying to map C++ or Rust types directly to JS (e.g. don't attempt to pass complex C++/Rust objects across the boundary, there's simply too little overlap between the C++/Rust and JS type systems).
The automatic bindings approach makes much more sense for a C API than for a native API of a language with a huge 'semantic surface' like C++ or Rust.
Tangential: I decided to write a Wasm parser (more precisely, a decoder for the Wasm binary format) from scratch, as a means to learn both Wasm and Rust[0].
It was the first time I was writing this sort of thing, but I found the spec very clear and well-written.
Fun fact: I was surprised when the tests for my toy parser surfaced a real regression in version 3 of the spec[1], released roughly 4 months earlier.
I'm writing a new Wasm GC language because I'm so unsatisfied with the current options. I'm three months in, and so far it's mostly good.
WASM takes care of all the GC bits, but in turn you have to use its ref, struct, and array types for everything. Making vtables is straightforward. Fat pointers for interfaces require an object though, since you can't do your own pointer layout.
In web contexts GC should be great because browser GCs can work across DOM, JS, and WASM, so you can hold a reference to a node that you get via JS, and if you remove the node from the DOM and discard the reference, it'll be collected just like in JS.
The big downside is that data transfer over the Wasm boundary requires a lot more copying with GC. Byte arrays like (array i8) are completely opaque to the outside, so you need to vend individual byte access functions to read data. On the WASI side, it only lowers to linear memory, so you need to allocate some scratch space, then copy into GC structs and arrays. Strings are doubly awful because you also have to deal with encoding.
GC also doesn't support multi-threading yet.
Some of this is supposed to be fixed by things like https://github.com/WebAssembly/design/issues/1569 and WASI lowering to GC types. stringref would have been great, but that appears to be dead for now.
So I'm on board and optimistic these things will get fixed, but GC is still a bit of a second-class citizen for now.
If you are interested in multi-byte access to GCed arrays, something similar can already be accomplished. You can hold the array data in Wasm linear memory, then have an object that wraps a pointer to it, and using JS code, associate a finalizer with that object that frees the related section of linear memory. It's essentially how FFI memory is handled in most native GCed languages.
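A rough sketch of that pattern in JS (the bump allocator here is a toy stand-in for the module's real `malloc`/`free` exports):

```javascript
// Keep the bytes in wasm linear memory, hand out a wrapper object,
// and let a FinalizationRegistry free the slice when the wrapper is
// garbage collected - the same shape as FFI memory in native GCed
// languages.
const memory = new WebAssembly.Memory({ initial: 1 });
const heap = new Uint8Array(memory.buffer);

let next = 16; // toy bump allocator standing in for the wasm allocator
const malloc = (n) => { const p = next; next += n; return p; };
const free = (ptr) => { /* the real module would reclaim ptr here */ };

const registry = new FinalizationRegistry((ptr) => free(ptr));

function makeBuffer(bytes) {
  const ptr = malloc(bytes.length);
  heap.set(bytes, ptr);
  const wrapper = {
    read: (i) => heap[ptr + i],
    length: bytes.length,
  };
  registry.register(wrapper, ptr); // free the slice when wrapper dies
  return wrapper;
}

const buf = makeBuffer(Uint8Array.from([1, 2, 3]));
console.log(buf.read(1)); // 2
```

Note that finalizers run at the engine's discretion, so this keeps memory bounded eventually rather than promptly, which is the usual caveat with this pattern.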
I considered that, but it requires more pointer indirection and I'm supporting WASI as a first class target, so won't always have access to JS finalizers. I'm going with copying as simpler for now, praying that multi-byte access lands by the time my language is viable.
I think most GC languages that compile to WASM don't use the built-in GC support in WASM, but just compile the GC to WASM also. Languages like Go, Java, C# have support for WASM.
Yes, wasm is basically a rust thing at this point unfortunately. And if you want to write a WASI 0.2 (preview 2) component, you need a bunch of rust tooling to do it properly
Only partially correct IMHO, the *WASM Component Model* is mostly a Rust thing (at least it's very apparent that it has been designed by Rust people), thankfully WASM itself is independent from that overengineered boondoggle.
Not at all. It's much more efficient to implement a GC on x86 or ARM than it is on Wasm 1.0/2.0, because you control the stack layout, and you don't have an impenetrable security boundary with the JS runtime that your GC needs to interop with.
Not to mention the issue that bundling a GC implementation as part of your web page can be prohibitive in terms of download size.
WASM is not nearly as capable as either architecture.
But.... they would certainly be much more useful architectures and devices if they chose to cater more to actual needs rather than performance under C/C++
I really hate wasm, not because of the idea or concept but because it gets bloated.
The first idea was computation heavy algorithms to be written in a language like C/C++/Rust and to be compiled to wasm.
Now it gets marketed as something to write sandboxed code/compontents for every language to be consumed by a wasm runtime.
Then there is the problem with wasm's types. While it was pitched as something to run on the web/browser, its types are much closer to Rust's. For example, strings in JS are fundamentally UTF-16 while wasm/Rust strings are UTF-8, so we need to constantly convert between them. I always hoped that wasm would simply allow for faster code on the web, not "here is my program, completely sandboxed from the outside world, you can't interact with other programs on the same machine".
Didn't it start with asm.js, a subset of javascript serving as a way to compile C code to be run in a browser? Then browser makers figured that it'd be better to have a dedicated target for this. So while it can be used to achieve performance in specific scenarios, it's largely designed with the goal in mind to be able to run non-js code in a browser. The wasm toolchain Emscripten encompasses this notion quite clearly as it emulates things like filesystems etc. If the main goal was faster execution, they would probably have gone a different route. Probably even gone for a new language altogether.
I'd like a toolchain better targeted for the pure acceleration use case though. Emscripten adds a lot of bloat and edges just to serve out of the box posix compatibility. Which is nice for quick demos of "look I can run Doom in the browser"-kind. But less useful for advanced web app usage, where you anyways will want to keep control of such behavior and interact with the browser apis more directly.
It started even much earlier. At first Emscripten compiled to a plain Javascript subset; after this demonstrated 'usefulness' this JS subset was properly specified as 'asm.js', which browsers could specifically target and optimize for. The next evolutionary step was WASM (which didn't immediately bring any performance improvements over asm.js, but allowed further improvements without having to 'compromise' Javascript with features that are only useful for a compilation target).
WASM is "just" another virtual ISA, everything else is just marketing. If you manage expectations (in the sense of "it's just another ISA / bytecode VM") then WASM can be incredibly useful, e.g. all my C/C++ projects are running automatically in browsers thanks to WASM (for instance these home computer emulators: https://floooh.github.io/tiny8bit/).
The even bigger issue is that this will remain a niche thing. Niche things often sooner or later just die, and nobody will even notice. I don't understand the wasm committee. Why design something that is bound to fail because barely anyone uses it?
I think you overestimate how niche it really is. In the browser it's an essential part for many creative/productivity tools, e.g. Figma or Miro. On the backend it's used quite regularly as sandboxing mechanism or plugin system, e.g. Istio, Helm or OPA (so generally a high prominence in the CNCF ecosystem).
There are a lot more niche web standards with a lot less usage that stuck around for a long time (e.g. the recent debate around removal of XSLT)
I made a couple 2D browser games a few years ago in various WASM languages. I used C++, Zig, Odin, and Rust.
(Including a game with online multiplayer! Though only the client, I did the server in TS ;)
Now a disclaimer, I experienced most of these languages for the first time during the jams, which biases me towards confusion and suffering. That being said, there was still a big gradient, and it does give us useful data on "how easy is it to get started with the basics and achieve basic tasks" (my games were like, 1970s level complexity).
Spent the last day of the C++ one dealing with a weird Emscripten bug. 12 hours left in the jam and suddenly the whole thing refused to compile, but only with Emscripten.
Spent most of the Zig jam trying to find up to date libraries and documentation. (Stuff changes fast apparently. This was 2024 though so maybe different now).
The Rust one was the most painful because I kept running into "I know what I want to do, and it's correct, and it works in every single other programming language, but rust is an authoritarian nanny state and won't let me."
(That's not WASM specific, I had similar problems when making native games in Rust. But WASM does make that aspect of the Rust experience worse, unless, I'm told, you go for one of the well supported WASM games libraries, in which case it should be relatively smooth.)
I found this too bad because overall Rust should have been the winner, in terms of cool language features. It's just a bad match for "yes I know what I'm doing and this over-cautious compiler case genuinely doesn't apply to me and there's no way to configure that so it actually lets you do your job". Also probably not ideal for game jams where dev speed and flexibility matter more than correctness.
(To beat a dead horse, my favorite part about the Rust jam was going into discord for help, getting even more absurd workarounds than GPT gave me, and then being called a bad person for not prioritizing memory safety on a game jam xD)
Zig was similarly suboptimal for game jams (though I imagine it would be fine for longer projects). Where I had 6 hours left and it was forcing me to think about what kind of division operator I wanted to use xD
Odin, I found surprisingly pleasant, even in the context of a very short jam. Very pleasant syntax. Nicest I've seen by far. (Well, I hear Jai is nice too, being Odin inspired, but no invite from Jon yet ;)
Odin's also very nice for game dev, being batteries included. For wasm there wasn't native support for game libraries at the time, but I found a GitHub repo that let you do it with a C wrapper.
Overall a fun and educational experience (and I definitely want to give Rust another go, in a less time sensitive context — it's the one language you can't just hope to wing it ;).
I did, I must admit, switch back to JS/TS and just use that for the last few games I made, because the truth is you're going to be writing it anyway for web games (unless you're using an engine or heavyweight library / toolchain), and the interop gave me more headaches than the nicer languages solved.
Depending on your scale or timeline, (or deep seated feelings about JavaScript!) WASM may be worth it for you though :)
> yes I know what I'm doing and this over-cautious compiler case genuinely doesn't apply to me and there's no way to configure that so it actually lets you do your job
I'm curious why you didn't use `unsafe`?
In general people are really bad at knowing when the strict safety rules are actually being too strict, but if you're confident they are then using `unsafe` seems like a valid path to explore.
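For what it's worth, a classic example of the escape hatch: two disjoint mutable borrows of one slice, which safe Rust rejects in its naive form but a small, well-scoped `unsafe` block handles (this is essentially how the standard library's `split_at_mut` works; the function name here is made up):

```rust
// Splitting a slice into its head and tail as two simultaneous
// mutable borrows. The borrow checker can't prove the regions are
// disjoint, so we assert it ourselves in a contained unsafe block.
fn split_first_rest(v: &mut [i32]) -> (&mut i32, &mut [i32]) {
    let ptr = v.as_mut_ptr();
    let len = v.len();
    assert!(len >= 1);
    unsafe {
        // SAFETY: the regions [0] and [1..len] never overlap.
        (&mut *ptr, std::slice::from_raw_parts_mut(ptr.add(1), len - 1))
    }
}

fn main() {
    let mut data = [1, 2, 3];
    let (head, rest) = split_first_rest(&mut data);
    *head += 10;
    rest[0] += 100;
    println!("{:?}", data); // [11, 102, 3]
}
```

The point is that `unsafe` doesn't turn the checks off globally; it just lets you vouch for one invariant the compiler can't see, behind a safe interface.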
I guess it's mostly because cpython has a fairly good C API allowing PyO3 to just write "safe" wrappers on top of cpython APIs and provide macros to generate boilerplate whereas wasm-bindgen has to generate both Rust and JS sides and deal with the painful linear memory intermediate.
Well ... that kind of explains why nobody uses wasm. Initially I fell for the hype too. Now it seems the number of people using wasm is mega-low. It's like the tiniest ever fraction of javascript-knowing folks. And this will probably never change either.
I guess it is time to conclude:
- wasm will remain a small niche. At least for the, say, close future. But probably forever.
Wasm remains niche among JavaScript developers because the costs are front loaded: you need different compile targets and toolchains like wasm-pack and wasm-bindgen; you must design around explicit linear memory and ABI shapes; JS/Wasm boundary calls are expensive, so tiny functions suffer; binaries get fat unless you strip and optimize; and browsers force JS glue for DOM access.
If you want it to pay off: use Wasm for CPU-bound modules; batch calls and pass pointers into linear memory to minimize boundary overhead; use a tiny allocator like wee_alloc; build with `cargo build --release --target wasm32-unknown-unknown` and then run `wasm-opt -Oz`; and run server or plugin workloads on wasmtime or wasmer with wasm32-wasi instead of shoehorning it into DOM-heavy front ends.
Does anyone know if there's any reasonable timescale likely for this to happen? Last time I looked into the topic it seemed to be completely stalled, but I might well have been looking in the wrong places.
To a very casual observer (me) it seems like it ought to be simple, but I expect there are good reasons why it isn't.
The JS shim is still there, but it's hidden away from the programmer.
For a more direct approach which entirely avoids going through a JS shim:
Mozilla is starting to experiment with integrating the WASM Component Model into the browser. Personally I'm not a fan of this because apart from string conversion the JS shim is not the performance bottleneck that people think it is, but at the least it will finally shut up all the 'direct DOM access whining' that shows up in each and every HN thread about WASM from people who never actually used WASM ;)
> You make it sound like the shim layer is actively desirable - why is that?
More flexibility. For instance, even though the 'official' WebGPU shim has the upside of being compatible with the webgpu.h C API header, it buys that compatibility with some pretty serious performance compromises which can be avoided with a non-standard JS shim (for instance reading/writing data from/to mapped WebGPU buffers which live in their own ArrayBuffer object - I don't think the WASM Component Model has a solution for such scenarios). The WASM Component Model basically has to deal with the exact same problems, but the WebGPU C API will essentially be baked into the browser. I expect it will still be possible to write your own specialized JS shim though, so it's not too big of an issue. I would prefer the WASM folks first focus on solving other problems which provide more bang for the buck.
> It's like the tiniest ever fraction of javascript-knowing folks.
“Javascript-knowing folks” aren’t likely to have much overlap with people who have a need for WASM. It’s a bit confusing because of the history of WASM, but the two are pretty separate at this point.
We use WASM to deploy machine learning models in the browser, for example. That’s not something we would have ever considered doing with Javascript.
I will admit to being pretty ignorant. But wasm seems like a pretty big swing and a miss. Far too many edges cases and gotchas. It definitely does not “just work”. :(
It misled me too - though I'm interested in the article anyway. I've emailed the mod team so hopefully the submission here will get a less context-dependent title soon.
The c-style interface of wasm is pretty limiting when designing higher level interfaces, which is why wasn-bindgen is required in the first place.
Luckily, Firefox is pushing an early proposal to expose all the web apis directly to wasm through a higher level interface based on the wasm component-model proposal.
See https://hacks.mozilla.org/2026/02/making-webassembly-a-first...
I really doubt that web APIs like WebGPU or WebGL would see similar performance improvements, and it's unclear how the much more critical performance problems for accessing WebGPU from WASM would be solved by the WASM Component Model (e.g. WebGPU maps WGPUBuffer content into separate JS ArrayBuffer objects which cannot be accessed directly from WASM without copying the data in and out of the WASM heap).
2. It’s horrible needing so much JS glue code to do anything in wasm. I know most people don’t look at it, but JS glue code is a total waste of everyone’s time when you’re using wasm. It’s complex to generate. It can be buggy. It needs to be downloaded and parsed by the browser. And it’s slow. Like, it’s pure overhead. There’s simply no reason that this glue needs to exist at all. Wasm should be able to just talk to the browser directly.
I’d love to be able to have a <script src=foo.wasm> on my page and make websites like that. JS is a great language, but there’s no reason to make developers bridge everything through JS from other languages. Nobody should be required to learn and use JavaScript to make web software using webassembly.
Web APIs are designed for JavaScript, though, which makes this hard. For example, APIs that receive or return JS Typed Arrays, or objects with flags, etc. - wasm can't operate on those things.
You can add a complete new set of APIs which are lower-level, but that would be a lot of new surface area and a lot of new security risk. NaCl did this back in the day, and WASI is another option that would have similar concerns.
There might be a middle ground with some automatic conversion between JS objects and wasm. Say that when a Web API returns a Typed Array, it would be copied into wasm's linear memory. But that copy may make this actually slower than JS.
Another option is to give wasm a way to operate on JS objects without copying. Wasm has GC support now so that is possible! But it would not easily help non-GC languages like Rust and C++.
Anyhow, these are the sort of reasons that previous proposals here didn't pan out, like Wasm Interface Types and Wasm WebIDL bindings. But hopefully we can improve things here!
Some of the newer Web APIs would be difficult to port. But the majority of APIs have quite straight forward equivalents in any language with a defined struct type (which you admittedly do have to define for WASM, and whether that interface would end up being zero-copy would change depending on the language you are compiling to wasm)
There is no solution without tradeoffs here, but the only reason JS-glue-code is winning out is because the complexity is moved from browsers to each language or framework that wants to work with wasm
Correct, but this is has been one of wasm's guiding principles since the start: move complexity from browsers to toolchains.
Wasm is simple to optimize in browsers, far simpler than JavaScript. It does require a lot more toolchain work! But that avoids browser exploits.
This is the reason we don't support the wasm text format in browsers, or wasm-ld, or wasm-opt. All those things would make toolchains easier to develop.
You are right that this sometimes causes duplicate effort among toolchains, each one needing to do the same thing, and that is annoying. But we could also share that effort, and we already do in things like LLVM, wasm-ld, wasm-opt, etc.
Maybe we could share the effort of making JS bindings as well. In fact there is a JS polyfill for the component model, which does exactly that.
One of the common (mis-)understandings about WASM when it was released, was people could write web applications "in any language" that could output WASM. (LLVM based things as an example)
That was clear over-selling of WASM, as in reality people still needed to additionally learn JS/TS to make things work.
So for the many backend devs who completely abhor JS/TS (there are many), trying out WASM and then finding it was bullshit has not been positive.
If WASM is made a first class browser citizen, and the requirement for JS/TS truly goes away, then I'd expect a lot of positive web application development to happen by those experienced devs who abhor JS/TS.
That being said, that viewpoint is from before AI became reasonably capable. That may change the balance of things somewhat (tbd).
It would also enable combining different languages with high-level interfaces rather than having to drop down to c-style interfaces for everything.
IMHO the developer experience should be provided by compiler toolchains like Emscripten or the Rust compiler, and by their (standard) libraries. E.g. keep the complexity out of the browser; the right place for binding-layer complexity is the toolchains, at compile time. The browser is already complex enough as it is and should be radically stripped down instead of throwing more stuff onto the pile.
Web APIs are designed from the ground up for Javascript, and no amount of 'hidden magic' can change that. The component model just moves the binding shim to a place inside the browser where it isn't accessible, so it will be even harder to investigate and fix performance problems.
You're probably already thinking "obviously we just need a hub-and-spoke architecture where there's a common intermediate representation for all these types". That kind of architecture means that each environment only has to worry about conversions to and from the common representation, then you can connect any environment to any other environment, and you only need 2N glue systems instead of N^2. Effectively, you'd be formalizing the prior system of bespoke glue code generation into a standardized interface for interoperation.
That's the component model.
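The 2N-vs-N^2 argument can be sketched in a few lines of Rust, with a hypothetical common `Value` type standing in for the component model's intermediate representation (all names here are made up for illustration — each environment only implements conversions to and from the hub):

```rust
// Hypothetical common intermediate representation ("the hub").
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Str(String),
    Int(i64),
}

// Each "environment" implements only two conversions ("the spokes").
trait ToValue { fn to_value(&self) -> Value; }
trait FromValue: Sized { fn from_value(v: &Value) -> Option<Self>; }

// Environment A: a JS-like string type.
struct JsString(String);
impl ToValue for JsString {
    fn to_value(&self) -> Value { Value::Str(self.0.clone()) }
}
impl FromValue for JsString {
    fn from_value(v: &Value) -> Option<Self> {
        match v { Value::Str(s) => Some(JsString(s.clone())), _ => None }
    }
}

// Environment B: a Rust-side string type.
struct RustString(String);
impl ToValue for RustString {
    fn to_value(&self) -> Value { Value::Str(self.0.clone()) }
}
impl FromValue for RustString {
    fn from_value(v: &Value) -> Option<Self> {
        match v { Value::Str(s) => Some(RustString(s.clone())), _ => None }
    }
}

// Any environment can talk to any other via the hub, with no
// bespoke A-to-B converter: N environments need 2N impls, not N^2.
fn convert<A: ToValue, B: FromValue>(a: &A) -> Option<B> {
    B::from_value(&a.to_value())
}

fn main() {
    let js = JsString("hello".to_string());
    let rs: RustString = convert(&js).unwrap();
    assert_eq!(rs.0, "hello");
    println!("ok");
}
```

Adding a third environment means writing two more impls, not patching every existing pair — that's the whole pitch.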
> The browser is already complex enough as it is and should be radically stripped down
I’d love this too, but I think this ship has sailed. I think the web’s cardinal sin is trying to be a document platform and an application platform at the same time. If I could wave a magic wand, I’d split those two use cases back out. Documents shouldn’t need JavaScript. They definitely don’t need wasm. And applications probably shouldn’t have the URL bar and back and forward buttons. Navigation should be up to the developers themselves. If apps were invented today, they should probably be done in pure wasm.
> Web APIs are designed from the ground up for Javascript
Web APIs are already almost all bridged into Rust via web-sys. The APIs are more awkward than we’d like, but they all work today.
[1] https://floooh.github.io/2023/11/11/emscripten-ide.html
You can integrate external debuggers, like Uno documents here:
https://platform.uno/docs/articles/debugging-wasm.html
I assume that uses some browser extension, but I didn't look into the details.
You can also use an extension to provide additional debugging capability in the browser:
https://developer.chrome.com/docs/devtools/wasm
Async mutexes in Rust have so many footguns that I've started to consider them a code smell. See for example the one the Oxide project ran into [1]. IME there are relatively few cases where it makes sense to want to await a mutex asynchronously, and approximately none where it makes sense to hold a mutex over a yield point, which is why a lot of people turn to async mutexes despite advice to the contrary [2]. They are essentially incompatible with structured concurrency, but Rust async in general really wants to be structured in order to be able to play nicely with the borrow checker.
`shadow-rs` [3] bears mentioning as a prebuilt way to do some of the build info collection mentioned later in the post.
[1]: https://rfd.shared.oxide.computer/rfd/0609 [2]: https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#wh... [3]: https://docs.rs/shadow-rs/latest/shadow_rs/
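The "don't hold a mutex over a yield point" advice above can be sketched with plain `std` types (a sketch of the pattern, not tied to Tokio's API): scope the lock so the guard drops before the slow or yielding work, which usually removes the need for an async mutex entirely.

```rust
use std::sync::Mutex;

// Shared state behind a plain (sync) mutex.
struct Counter { value: u64 }

// The footgun in async code is locking and then .await-ing while the
// guard is still alive: other tasks block, and cancellation gets hairy.
// The fix is the same in sync and async code: take the lock, copy out
// what you need, drop the guard, *then* do the slow work.
fn bump_and_snapshot(shared: &Mutex<Counter>) -> u64 {
    let snapshot = {
        let mut guard = shared.lock().unwrap();
        guard.value += 1;
        guard.value
        // guard is dropped at the end of this block,
        // before any slow/yielding work happens
    };
    // Slow work runs without the lock held; in async code this is
    // where the .await would go.
    snapshot
}

fn main() {
    let shared = Mutex::new(Counter { value: 0 });
    assert_eq!(bump_and_snapshot(&shared), 1);
    assert_eq!(bump_and_snapshot(&shared), 2);
    println!("ok");
}
```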
I'm not sure what you mean by ‘more threads than I had cores’, though. Unless you tell it otherwise, Tokio will default to one thread per core on the machine.
It adds an incredible amount of complexity and bloat compared to writing a proper hybrid C++/JS application where non-trivial work happens in handwritten JS functions instead of hopping across the JS/WASM boundary for every little setter/getter. It needs experience though to find just the right balance between what code should go on which side of the boundary.
Alternatively tunnel through a properly designed C API instead of trying to map C++ or Rust types directly to JS (e.g. don't attempt to pass complex C++/Rust objects across the boundary, there's simply too little overlap between the C++/Rust and JS type systems).
The automatic bindings approach makes much more sense for a C API than for a native API of a language with a huge 'semantic surface' like C++ or Rust.
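A minimal sketch of what "tunnel through a C API" can look like on the Rust side: keep the rich type behind an opaque handle and export a handful of C-style functions over primitives, instead of trying to pass the type itself across the boundary (all names here are hypothetical):

```rust
// The rich Rust type never crosses the boundary directly.
pub struct Blob { data: Vec<u8> }

// Create: returns an opaque pointer the other side treats as a handle.
#[no_mangle]
pub extern "C" fn blob_new() -> *mut Blob {
    Box::into_raw(Box::new(Blob { data: Vec::new() }))
}

// Mutate and query through primitives only.
#[no_mangle]
pub extern "C" fn blob_push(blob: *mut Blob, byte: u8) {
    let blob = unsafe { &mut *blob };
    blob.data.push(byte);
}

#[no_mangle]
pub extern "C" fn blob_len(blob: *const Blob) -> usize {
    unsafe { &*blob }.data.len()
}

// Destroy: the only place the rich type is reconstructed and dropped.
#[no_mangle]
pub extern "C" fn blob_free(blob: *mut Blob) {
    if !blob.is_null() {
        drop(unsafe { Box::from_raw(blob) });
    }
}

fn main() {
    let b = blob_new();
    blob_push(b, 42);
    assert_eq!(blob_len(b), 1);
    blob_free(b);
    println!("ok");
}
```

The JS side only ever sees integers (the handle, bytes, lengths), which is exactly the small overlap the two type systems actually share.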
Very handy to update a serde data structure and see all the TypeScript errors appear after recompiling.
[0]: https://github.com/madonoharu/tsify
It was the first time I was writing this sort of thing, but I found the spec very clear and well-written.
Fun fact: I was surprised when the test from a toy parser surfaced a real regression in version 3 of the spec[1], released roughly 4 months before.
[0] https://github.com/agis/wadec
[1] https://github.com/WebAssembly/spec/issues/2066
WASM takes care of all the GC bits, but in turn you have to use its ref, struct, and array types for everything. Making vtables is straightforward. Fat pointers for interfaces require an object though, since you can't do your own pointer layout.
In web contexts GC should be great because browser GCs can work across DOM, JS, and WASM, so you can hold a reference to a node that you get via JS, and if you remove the node from the DOM and discard the reference, it'll be collected just like in JS.
The big downside is that data transfer over the Wasm boundary requires a lot more copying with GC. Byte arrays like (array i8) are completely opaque to the outside, so you need to vend individual byte-access functions to read data. On the WASI side, it only lowers to linear memory, so you need to allocate some scratch space, then copy into GC structs and arrays. Strings are doubly awful because on top of this you also have to deal with encoding.
GC also doesn't support multi-threading yet.
Some of this is supposed to be fixed by things like https://github.com/WebAssembly/design/issues/1569 and WASI lowering to GC types. stringref would have been great, but that appears to be dead for now.
So I'm on board and optimistic these things will get fixed, but GC is still a bit of a second-class citizen for now.
Outside the browser, the GCs available on JVM and CLR runtimes are much more advanced than WASM GC will ever be.
This was one of the things that put me off, and ergonomics are still an open item on the Rust Roadmap 2026.
https://github.com/modelcontextprotocol/modelcontextprotocol...
https://github.com/modelcontextprotocol/rust-sdk/pull/183
A tool like wasm2c (or my wasm2go) shows this: there is no huge runtime to carry, just a fairly direct translation of Wasm byte code to C (or Go).
wasm2c: https://github.com/WebAssembly/wabt/blob/main/wasm2c/README....
wasm2go: https://github.com/ncruces/wasm2go
Java Applets were amazing technology IMO. The Windows Java launcher ruined it (as I understand, that was the main security issue).
ps. Java 8 still works :P
I can't tell if this is a jab at WASM itself or whether it should be taken at face value lmao, WASM is the definition of overengineered boondoggle.
It's a great target for a simple language (unless you insist on someone else doing your GC for you, which mandates their design on your language).
And it's also fairly easy to build a Wasm interpreter, or an AOT compiler.
It's also largely useless outside of targeting C/C++ and derivatives. Most code we write cannot target wasm without severe drawbacks.
Not to mention the issue that bundling a GC implementation as part of your web page can be prohibitive in terms of download size.
But.... they would certainly be much more useful architectures and devices if they chose to cater more to actual needs rather than performance under C/C++
The first idea was for computation-heavy algorithms to be written in a language like C/C++/Rust and compiled to wasm.
Now it gets marketed as a way to write sandboxed code/components in every language, to be consumed by a wasm runtime.
Then there is the problem with wasm's types. While it was meant to run on the web/in the browser, its types are much more similar to Rust's. For example, strings in JS are fundamentally UTF-16 while wasm/Rust uses UTF-8.
We need to constantly convert between them. I always hoped that wasm would simply allow for faster code on the web, not "here is my program, completely sandboxed from the outside world, you can't interact with other programs on the same machine".
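The conversion cost is easy to see with std alone — every string crossing the boundary gets re-encoded, an O(n) copy in each direction:

```rust
fn main() {
    // Rust/wasm side: strings are UTF-8 bytes.
    let rust_str = "héllo";
    assert_eq!(rust_str.len(), 6); // 6 bytes in UTF-8 ('é' takes 2)

    // JS side: strings are (roughly) sequences of UTF-16 code units.
    let utf16: Vec<u16> = rust_str.encode_utf16().collect();
    assert_eq!(utf16.len(), 5); // 5 code units in UTF-16

    // Crossing the boundary means re-encoding the whole string.
    let back = String::from_utf16(&utf16).unwrap();
    assert_eq!(back, rust_str);
    println!("ok");
}
```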
I'd like a toolchain better targeted for the pure acceleration use case though. Emscripten adds a lot of bloat and edges just to serve out of the box posix compatibility. Which is nice for quick demos of "look I can run Doom in the browser"-kind. But less useful for advanced web app usage, where you anyways will want to keep control of such behavior and interact with the browser apis more directly.
It started even much earlier. At first Emscripten compiled to a plain JavaScript subset; after this demonstrated 'usefulness', this JS subset was properly specified as 'asm.js', which browsers could specifically target and optimize for. The next evolutionary step was WASM (which didn't immediately bring any performance improvements over asm.js, but allowed further improvements without having to 'compromise' JavaScript with features that are only useful for a compilation target).
Mozilla, not wanting to jump into Google's boat, came up with asm.js.
This naturally ignores all the other plugins.
Plus, having to watch talks selling the idea as if no one had ever done it before.
This even ignores operating systems where the whole userspace was bytecode based.
WASM is "just" another virtual ISA, everything else is just marketing. If you manage expectations (in the sense of "it's just another ISA / bytecode VM") then WASM can be incredibly useful, e.g. all my C/C++ projects are running automatically in browsers thanks to WASM (for instance these home computer emulators: https://floooh.github.io/tiny8bit/).
Things that are a niche will often sooner or later just die - and nobody will even notice. I don't understand the wasm committee. Why design something that is bound to fail due to barely anyone using it?
There are a lot of more niche web standards with far less usage that stuck around for a long time (e.g. the recent debate around removal of XSLT).
(Including a game with online multiplayer! Though only the client, I did the server in TS ;)
Now a disclaimer, I experienced most of these languages for the first time during the jams, which biases me towards confusion and suffering. That being said, there was still a big gradient, and it does give us useful data on "how easy is it to get started with the basics and achieve basic tasks" (my games were like, 1970s level complexity).
Spent the last day of the C++ one dealing with a weird Emscripten bug. 12 hours left in the jam and suddenly the whole thing refused to compile, but only with Emscripten.
Spent most of the Zig jam trying to find up to date libraries and documentation. (Stuff changes fast apparently. This was 2024 though so maybe different now).
The Rust one was the most painful because I kept running into "I know what I want to do, and it's correct, and it works in every single other programming language, but rust is an authoritarian nanny state and won't let me."
(That's not WASM specific, I had similar problems when making native games in Rust. But WASM does make that aspect of the Rust experience worse, unless, I'm told, you go for one of the well supported WASM games libraries, in which case it should be relatively smooth.)
I found this too bad because overall Rust should have been the winner in terms of cool language features. It's just a bad match for "yes, I know what I'm doing, and this over-cautious compiler check genuinely doesn't apply to me, and there's no way to configure it so it actually lets you do your job". Also probably not ideal for game jams, where dev speed and flexibility matter more than correctness.
(To beat a dead horse, my favorite part about the Rust jam was going into discord for help, getting even more absurd workarounds than GPT gave me, and then being called a bad person for not prioritizing memory safety on a game jam xD)
Zig was similarly suboptimal for game jams (though I imagine it would be fine for longer projects). Where I had 6 hours left and it was forcing me to think about what kind of division operator I wanted to use xD
Odin, I found surprisingly pleasant, even in the context of a very short jam. Very pleasant syntax. Nicest I've seen by far. (Well, I hear Jai is nice too, being Odin inspired, but no invite from Jon yet ;)
Odin's also very nice for game dev, being batteries included. For wasm there wasn't native support for game libraries at the time, but I found a GitHub repo that let you do it with a C wrapper.
Overall a fun and educational experience (and I definitely want to give Rust another go, in a less time sensitive context — it's the one language you can't just hope to wing it ;).
I did, I must admit, switch back to JS/TS and just use that for the last few games I made, because the truth is you're going to be writing it anyway for web games (unless you're using an engine or heavyweight library / toolchain), and the interop gave me more headaches than the nicer languages solved.
Depending on your scale or timeline, (or deep seated feelings about JavaScript!) WASM may be worth it for you though :)
I'm curious why you didn't use `unsafe`?
In general people are really bad at knowing when the strict safety rules are actually being too strict, but if you're confident they are then using `unsafe` seems like a valid path to explore.
Initially I fell for the hype too. Now it seems the number of people using wasm is mega-low. It's like the tiniest ever fraction of JavaScript-knowing folks. And this will probably never change either.
I guess it is time to conclude:
- wasm will remain a small niche. At the least for the, say, close future. But probably forever.
If you want it to pay off, use Wasm for CPU-bound modules. Batch calls and pass pointers into linear memory to minimize boundary overhead, use a tiny allocator like wee_alloc, build with `cargo build --release --target wasm32-unknown-unknown`, then run `wasm-opt -Oz`. And run server or plugin workloads on wasmtime or wasmer with wasm32-wasi instead of shoehorning it into DOM-heavy front ends.
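The "batch calls and pass pointers into linear memory" advice boils down to exporting functions that take a pointer + length and process many items per call, instead of one boundary crossing per item (a sketch; the function name is illustrative):

```rust
// One exported call processes a whole batch of samples sitting in
// linear memory, instead of one boundary crossing per sample.
#[no_mangle]
pub extern "C" fn sum_batch(ptr: *const f32, len: usize) -> f32 {
    // On the wasm side this slice is a view into linear memory that
    // the host wrote into; no per-element calls, no extra copies.
    let samples = unsafe { std::slice::from_raw_parts(ptr, len) };
    samples.iter().sum()
}

fn main() {
    let data = [1.0_f32, 2.0, 3.0, 4.0];
    let total = sum_batch(data.as_ptr(), data.len());
    assert!((total - 10.0).abs() < f32::EPSILON);
    println!("ok");
}
```

The host writes the batch into linear memory once, makes one call, and reads one result back — the per-call FFI overhead is amortized over the whole buffer.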
And if using a GC language, JS/TS isn't really that bad.
I’m still waiting for being able to access the DOM from WebAssembly, so it’s possible to do something useful with it without JavaScript glue code.
To a very casual observer (me) it seems like it ought to be simple, but I expect there are good reasons why it isn't.
The JS shim is still there, but it's hidden away from the programmer.
For a more direct approach which entirely avoids going through a JS shim:
Mozilla is starting to experiment with integrating the WASM Component Model into the browser. Personally I'm not a fan of this because apart from string conversion the JS shim is not the performance bottleneck that people think it is, but at the least it will finally shut up all the 'direct DOM access whining' that shows up in each and every HN thread about WASM from people who never actually used WASM ;)
https://hacks.mozilla.org/2026/02/making-webassembly-a-first...
Fair if you mean me - I've at most toyed with it, but the shim stuff was off-putting.
I'm a backender though, I don't pretend my opinion is of any consequence here.
> Personally I'm not a fan of this
You make it sound like the shim layer is actively desirable - why is that?
More flexibility. For instance even though the 'official' WebGPU shim has the upside that it is compatible with the webgpu.h C API header, it buys that compatibility with some pretty serious performance compromises which can be avoided when using a non-standard JS shim (for instance reading/writing data from/to mapped WebGPU buffers which live in their own ArrayBuffer object, I don't think the WASM Component Model has a solution for such scenarios). The WASM Component Model solution basically has to deal with the exact same problems, but the WebGPU C API will essentially be baked into the browser. I expect though that it will still be possible to write your own specialized JS shim, so it's not a too big of an issue. I would prefer if the WASM peeps first focus on solving other problems which provide more bang for the buck though.
A benchmark in the article you linked shows a 2× slowdown.
What are you basing this on? Got a source?
“Javascript-knowing folks” aren’t likely to have much overlap with people who have a need for WASM. It’s a bit confusing because of the history of WASM, but the two are pretty separate at this point.
We use WASM to deploy machine learning models in the browser, for example. That’s not something we would have ever considered doing with Javascript.