I very much dislike such features in a runtime or app.
The "proper" place to solve this, is in the OS. Where it has been solved, including all the inevitable corner cases, already.
Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem dies it solve that hasn't been solved by other people?
For years, I heard it's better to use cron, because the problem was already solved the right way(tm). My experience with cron has been about a dozen difficult fixes in production of cron not running / not with the right permission / errors lost without being logged / ... Changing / upgrading OSes became a problem. I since switched to a small node script with a basic scheduler in it, I had ZERO issues in 7 years. My devs happily add entries in the scheduler without bothering me. We even added consistency checks, asserts, scheduled one time execution tasks, ... and now multi server scheduling.
Deployments that need to configure OSes in a particular way are difficult (the existence of docker, kubernetes, snap are symptoms of this difficulty). It requires a high level of privilege to do so. Upgrades and rollbacks are challenging, if ever done. OSes sometimes don't provide solution when we go beyond one hardware.
If "npm start" can restrain the permissions to what it should be for the given version of the code, I will use it and I'll be happy.
If cron is broken for you, than the logic solution would be to replace it with something that does work for you. But do so at the right place and abstraction. That's hardly ever the runtime or in the app.
Do One Thing (and do it well).
A special domain specific scheduler microservice? One of the many Cron replacements? One of the many "SaaS cron"? Systemd?
This problem has been solved. Corner cases ironed out. Free to use.
Same for ENV var as configurations (as opposed to inventing yet another config solution), file permissions, monitoring, networking, sandboxing, chrooting etc. the amount of broken, insecure or just inefficient DIY versions of stuff handled in an OS I've had to work around is mind boggling. Causing a trice a loss: the time taken to build it. That time not spent on the business domain, and the time to them maintain and debug it for the next fifteen years.
This is a nice idea, but what do you do when the OS tooling is not that good? macOS is a good example, they have OS level sandboxing [0], but the docs are practically nonexistent and the only way to figure it out is to read a bunch of blog posts by people who struggled with it before you. Baking it into Node means that at least theoretically you get the same thing out of the box on every OS.
Except, the OS hasn’t actually solved it. Any program you can run can access arbitrary files of yours and it’s quite difficult to actually control that access even if you want to limit the blast radius of your own software. Seriously - what software works you use? Go write eBPF to act as a mini adhoc hypervisor to enforce difficult to write policies via seLinux? That only even works if you’re the admin of the machine which isn’t necessarily the same person writing the software they want to code defensively.
Also modern software security is really taking a look at strengthening software against supply chain vulnerabilities. That looks less like traditional OS and more like a capabilities model where you start with a set of limited permissions and even within the same address space it’s difficult to obtain a new permission unless your explicitly given a handle to it (arguably that’s how all permissions should work top to bottom).
This is what process' mount namespace is for. Various container implementations use it. With modern Linux you don't even need a third-party container manager, systemd-nspawn comes with the system and should be able to do that.
The problem with the "solutions" s.a. the one in Node.js is that Node.js doesn't get to decide how eg. domain names are resolved. So, it's easy to fool it to allow or to deny access to something the author didn't intend for it.
Historically, we (the computer users) decided that operating system is responsible for domain name resolution. It's possible that today it does that poorly, but, in principle we want the world to be such that OS takes care of DNS, not individual programs. From administrator perspective, it spares the administrator the need to learn the capabilities, the limitations and the syntax of every program that wants to do something like that.
It's actually very similar thing with logs. From administrator perspective, logs should always go to stderr. Programs that try to circumvent this rule and put them in separate files / send them into sockets etc. are a real sore spot of any administrator who'd spent some times doing his/her job.
Same thing with namespacing. Just let Linux do its job. No need for this duplication in individual programs / runtimes.
How would you do this in a native fashion? I mean I believe you (chroot jail I think it was?), but not everyone runs on *nix systems, and perhaps more importantly, not all Node developers know or want to know much about the underlying operating system. Which is to their detriment, of course, but a lot of people are "stuck" in their ecosystem. This is arguably even worse in the Java ecosystem, but it's considered a selling point (write once run anywhere on the JVM, etc).
I dunno how GP would do it, but I run a service (web app written in Go) under a specific user and lock-down what that user can read and write on the FS.
Just locally, that seems like a huge pain in the ass... At least you can suggest containers which has an easier interface around it generally speaking.
This is my thought on using dotenv libraries. The app shouldn’t have to load environment variables, only read them. Using a dotenv function/plugin like in omz is far more preferable.
The argument often heard though is 'but windows'. Though if windows lacks env (or Cron, or chroot, etc) the solution would be to either move to an env that does support it, or introduce some tooling only for the windows users.
Not build a complex, hierarchical directory scanner that finds and merges all sorts of .env .env.local and whatnots.
On dev I often do use .ENV files, but use zenv or a loadenv tool or script outside of the projects codebase to then load these files into the env.
In a team setting, it can be extremely helpful to have env/config loading logic built into the repo itself. It does not mean it has be loaded by the application process, but it can be part of the surrounding tooling that is part of your codebase.
Yes, that's indeed the right place, IMO: ephemeral tooling that leverages, or simplifies OS features.
Tooling such as xenv, a tiny bash script, a makefile etc. that devs can then replace with their own if they wish (A windows user may need something different from my zsh built-in). That isn't present at all in prod, or when running in k8s or docker compose locally.
A few years ago, I surfaced a security bug in an integrated .env loader that partly leveraged a lib and partly was DIY/NIH code. A dev built something that would traverse up and down file hierarchies to search for .env.* files and merge them runtime and reload the app if it found a new or changed one. Useful for dev. But on prod, uploading a .env.png would end up in in a temp dir that this homebuilt monstrosity would then pick up. Yes, any internet user could inject most configuration into our production app.
Because a developer built a solution to a problem that was long solved, if only he had researched the problem a bit longer.
We "fixed" it by ripping out thousands of LOCs, a dependency (with dependencies) and putting one line back in the READMe: use an env loader like ....
Turned out that not only was it a security issue, it was an inotify hogger, memory hog, and io bottleneck on boot. We could downsize some production infra afterwards.
Yes, the dev built bad software. But, again, the problem wasn't that quality, but the fact it was considered to be built in the first place.
> What problem does it solve that hasn't been solved by other people?
nothing. Except for "portability" arguments perhaps.
Java has had security managers and access restrictions built in but it never worked very well (and is quite cumbersome to use in practice). And there's been lots of bypasses over the years, and patch work fixes etc.
Tbh, the OS is the only real security you could trust, as it's as low a level as any application would typically go (unless you end up in driver/kernal space, like those anti-virus/anti-cheat/crowdstrike apps).
But platform vendors always want to NIH and make their platform slightly easier and still present the similar level of security.
How would you solve this at the OS level across Linux, macOS and Windows?
I've been trying to figure out a good way to do this for my Python projects for a couple of years now. I don't yet trust any of the solutions I've come up with - they are inconsistent with each other and feel very ironed to me making mistakes due to their inherent complexity and lack of documentation that I trust.
If something is solved at the OS level it probably needs to vary by OS. Just like how an application layer solution to parsing data must vary slightly between nodeJS and java.
For a solution to be truly generic to OS, it's likely better done at the network level. Like by putting your traffic through a proxy that only allows traffic to certain whitelisted / blacklisted destinations.
The proxy thing solved for betroth access but not for filesystem access.
With proxies the challenge becomes how to ensure the untrusted code in the programming language only accesses the network via the proxy. Outside of containers and iptables I haven't seen a way to do that.
I guess my point was that we have different OS's precisely because people want to do things in different ways. So we can't have generic ways to do them.
OS generic filesystem permissions would be like a OS generic UI framework, it's inherently very difficult and ultimately limited.
Separately, I totally sympathise with you that the OS solutions to networking and filesystem permissions are painful to work with. Even though I'm reasonably comfortable with rwx permissions, I'd never allow untrusted code on a machine which also had sensitive files on it. But I think we should fix this by coming up with better OS tooling, not by moving the problem to the app layer.
But you are asking the developer to make these restrictions... Node.js is the user-space program, controlled by developers. Ops shouldn't (need to) deal with it.
> Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem dies it solve that hasn't been solved by other people?
Whilst this is (effectively) an Argument From Authority, what makes you assume the Node team haven't considered this? They're famously conservative about implementing anything that adds indirection or layers. And they're very *nix focused.
I am pretty sure they've considered "I could just run this script under a different user"
(I would assume it's there because the Permissions API covers many resources and side effects, some of which would be difficult to reproduce across OSes, but I don't have the original proposal to look at and verify)
OS level checks will inevitably work differently on different OSes and different versions. Having a check like this in the app binary itself means you can have a standard implementation regardless of the OS running the app.
I often hear similar arguments for or against database level security rules. Row level security, for example, is a really powerful feature and in my opinion is worth using when you can. Using RLS doesn't mean you skip checking authorization rules at the API level though, you check on author in your business logic _and_ in the database.
OK, I'll bite. Do you think Node.js implementation is aware of DNS search path? (My guess would be that it's unaware with 90% certainty).
If you don't know what DNS search path is, here's my informal explanation: your application may request to connect to foo.bar.com or to bar.com, and if your /etc/resolv.conf contains "search foo", then these two requests are the same request.
This is an important feature of corporate networks because it allows macro administrative actions, temporary failover solutions etc. But, if a program is configured with Node.js without understanding this feature, none of these operations will be possible.
From my perspective, as someone who has to perform ops / administrative tasks, I would hate it if someone used these Node.js features. They would get in the way and cause problems because they are toys, not a real thing. Application cannot deal with DNS in a non-toy way. It's the task for the system.
Oh I'd be very surprised if Node's implementation would handle such situations.
I also wouldn't really expect it to though, that depends heavily on the environment the app is run in, and if the deployment environment intentionally includes resolv.conf or similar I'd expect the developer(s) to either use a more elegant solution or configure Node to expect those resolutions.
Putting network restrictions in the application layer also causes awkward issues for the org structures of many enterprises.
For example, the problem of "one micro service won't connect to another" was traditionally an ops / environments / SRE problem. But now the app development team has to get involved, just in case someone's used one of these new restrictions. Or those other teams need to learn about node.
This is non consensual devops being forced upon us, where everyone has to learn everything.
My experience with DevOps has been they know a lot about deploying and securing Java, or Kotlin, or Python but they know scant about node js and its tooling and often refuse to learn the ecosystem
This leads to the node js teams to have to learn DevOps anyway because the DevOps teams do a subpar job with it otherwise.
Same with doing frontend builds and such. In other languages I’ve noticed (particularly Java / Kotlin) DevOps teams maintain the build tools and configurations around it for the most part. The same has not been true for the node ecosystem, whether it’s backend or Frontend
Genuine question, as I've not invested much into understanding this. What features of the OS would enable these kinds of network restrictions? Basic googling/asking AI points me in the direction of things that seem a lot more difficult in general, unless using something like AppArmor, at which point it seems like you're not quite in OS land anymore.
How many apps do you think has properly set user and access rights only to what they need? In production? If even that percentage was high, how about developers machines, people that run some node scripts which might import whoever knows what? It is possible to have it running safely, but I doubt it's a high percentage of people. Feature like this can increase that percentage
Wouldn't "simplifying" or even awareness of the existence of such OS features be a much better solution than re-building it in a runtime?
If an existing feature is used too little, then I'm not sure if rebuilding it elsewhere is the proper solution. Unless the existing feature is in a fundamentally wrong place. Which this isn't: the OS is probably the only right place for access permissions.
An obvious solution would be education. Teach people how to use docker mounts right. How to use chroot. How Linux' chmod and chown work. Or provide modern and usable alternatives to those.
Your point about OS caring about this stuff is solid, but saying a solution is education seems a little bit naive. How are you going to teach people? Or who is going to do that? If node runtime makes its use safer by implementing this, that helps a lot of people. To say people need to learn themselves helps noone.
It's similar to the refrain "they shouldn't add that feature to language X, people should just use language Y instead" ("just" when said by software developers is normally a red flag IME)
Nope, they don't add. They confuse. From administrator perspective, it sucks when the same conceptual configuration can be performed in many different places using different configuration languages, governed by different upgrade policies, owned by unintended users, logged into unintended places.
Also, I'd bet my monthly salary on that Node.js implementation of this feature doesn't take into account multiple possible corner cases and configurations that are possible on the system level. In particular, I'd be concerned about DNS search path, which I think would be hard to get right in userspace application. Also, what happens with /etc/hosts?
From administrator perspective I don't want applications to add another (broken) level of manipulating of discovery protocol. It usually very time consuming and labor intensive task to figure out why two applications which are meant to connect aren't. If you keep randomly adding more variables to this problem, you are guaranteed to have a bad time.
Oh, so we are launching attacks on personality now? Well. To start with: you aren't an admin at all, and you don't even understand the work admins do. Why are you getting into an argument that is clearly above your abilities?
And, a side note: you also don't understand English all that well. "Confusion" is present in any situation that needs analysis. What's different is the degree to which it's present. Increasing confusion makes analysis more costly in terms of resources and potential for error. The "solution" offered by Node.js offers to increase confusion, but offers nothing in return. I.e. it creates waste. Or, put differently, is useless, and, by extension, harmful, because you cannot take resources and do nothing and still be neutral: if you waste resources while produce nothing of value, you limit resources to other actors who could potentially make a better use of them.
Not necessarily, in selinux for example you would configure a domain for the "main process" which can transition into a lower permission domain for "app" code.
I wouldn't trust it to be done right. It's like a bank trusting that all their customers will do the right thing. If you want MAC (as opposed to DAC), do it in the kernel like it's supposed to be; use apparmor or selinux. And both of those methods will allow you to control way more than just which files you can read / write.
Yeah but you see, this requires to be deployed along side the application somehow with the help of the ops team. While changing the command line is under control of the application developer.
So security theatre is the best option? I'm not saying this to be cheeky, but it just seems to be an overly shallow option that is trivially easy to end run.
I don't understand this sort of complaint. Would you prefer that they didn't worked on this support ever? Exactly what's your point? Airing trust issues?
Node allows native addons in packages via the N-API so any native module aren't restricted by those permissions. Deno deals with this via --allow-ffi but these experimental Node permissions have nothing to disable the N-API, they just restrict the Node standard library.
> Node allows native addons in packages via the N-API so any native module aren't restricted by those permissions. (...) Node permissions (...) just restrict the Node standard library.
So what? That's clearly laid out in Node's documentation.
> What is the point of a permissions system that can be trivially bypassed?
You seem to be confused. The system is not bypassed. The only argument you can make is that the system covers calls to node:fs, whereas some modules might not use node:fs to access the file system. You control what dependencies you run in your system, and how you design your software. If you choose to design your system in such a way that you absolutely need your Node.js app to have unrestricted access to the file systems, you have the tools to do that. If instead you want to lock down file system access, just use node:fs and flip a switch.
Path restrictions look simple, but they're very difficult to implement correctly.
PHP used to have (actually, still has) an "open_basedir" setting to restrict where a script could read or write, but people found out a number of ways to bypass that using symlinks and other shenanigans. It took a while for the devs to fix the known loopholes. Looks like node has been going through a similar process in the last couple of years.
Similarly, I won't be surprised if someone can use DNS tricks to bypass --allow-net restrictions in some way. Probably not worth a vulnerability in its own right, but it could be used as one of the steps in a targeted attack. So don't trust it too much, and always practice defense in depth!
Last time a major runtime tried implementing such restrictions on VM level, it was .NET - and it took that idea from Java, which did it only 5 years earlier.
In both Java and .NET VMs today, this entire facility is deprecated because they couldn't make it secure enough.
Even that doesn't protect you from bind mounts. The rationale seems to be that only root can create bind mounts. But guess what, unprivileged users can also create all sorts of crazy mounts with fuse.
The whole idea of a hierarchical directory structure is an illusion. There can be all sorts of cross-links and even circular references.
The killer upgrade here isn’t ESM. It’s Node baking fetch + AbortController into core. Dropping axios/node-fetch trimmed my Lambda bundle and shaved about 100 ms off cold-start latency. If you’re still npm i axios out of habit, 2025 Node is your cue to drop the training wheels.
You can get download progress with fetch. You can't get upload progress.
Edit: Actually, you can even get upload progress, but the implementation seems fraught due to scant documentation. You may be better off using XMLHttpRequest for that. I'm going to try a simple implementation now. This has piqued my curiosity.
It took me a couple hours, but I got it working for both uploads and downloads with a nice progress bar. My uploadFile method is about 40 lines of formatted code, and my downloadFile method is about 28 lines. It's pretty simple once you figure it out!
Note that a key detail is that your server (and any intermediate servers, such as a reverse-proxy) must support HTTP/2 or QUIC. I spent much more time on that than the frontend code. In 2025, this isn't a problem for any modern client and hasn't been for a few years. However, that may not be true for your backend depending on how mature your codebase is. For example, Express doesn't support http/2 without another dependency. After fussing with it for a bit I threw it out and just used Fastify instead (built-in http/2 and high-level streaming). So I understand any apprehension/reservations there.
Overall, I'm pretty satisfied knowing that fetch has wide support for easy progress tracking.
const supportsRequestStreams = (() => {
let duplexAccessed = false;
const hasContentType = new Request('http://localhost', {
body: new ReadableStream(),
method: 'POST',
get duplex() {
duplexAccessed = true;
return 'half';
},
}).headers.has('Content-Type');
return duplexAccessed && !hasContentType;
})();
Safari doesn't appear to support the duplex option (the duplex getter is never triggered), and Firefox can't even handle a stream being used as the body of a Request object, and ends up converting the body to a string, and then setting the content type header to 'text/plain'.
Oops. Chrome only! I stand very much corrected. Perhaps I should do less late night development.
It seems my original statement that download, but not upload, is well supported was unfortunately correct after all. I had thought that readable/transform streams were all that was needed, but as you noted it seems I've overlooked the important lack of duplex option support in Safari/Firefox[0][1]. This is definitely not wide support! I had way too much coffee.
Thank you for bringing this to my attention! After further investigation, I encountered the same problem as you did as well. Firefox failed for me exactly as you noted. Interestingly, Safari fails silently if you use a transformStream with file.stream().pipeThrough([your transform stream here]) but it fails with a message noting lack of support if you specifically use a writable transform stream with file.stream().pipeTo([writable transform stream here]).
I came across the article you referenced but of course didn't completely read it. It's disappointing that it's from 2020 and no progress has been made on this. Poking around caniuse, it looks like Safari and Firefox have patchy support for similar behavior in web workers, either via partial support or behind flags. So I suppose there's hope, but I'm sorry if I got anyone's hope too far up :(
The thing I'm unsure about is if the streams approach is the same as the xhr one. I've no idea how the xhr one was accomplished or if it was even standards based in terms of impl - so my question is:
Does xhr track if the packet made it to the destination, or only that it was queued to be sent by the OS?
Tangential, but thought I'd share since validation and API calls go hand-in-hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of all the compile + runtime zod/json schema-based validation sets of libraries out there. It lets you plug in whatever HTTP client you want (personally, I use bun, or fastify in a node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type safety correctness to compile time.
Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.
Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715
I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.
I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.
ts-rest doesn't see a lot of support these days. It's lack of adoption of modern tanstack query integration patterns finally drove us look for alternatives.
Luckily, oRPC had progressed enough to be viable now. I cannot recommend it over ts-rest enough. It's essentially tRPC but with support for ts-rest style contracts that enable standard OpenAPI REST endpoints.
If you're happy with tRPC and don't need proper REST functionality it might not be worth it.
However, if you want to lean that direction where it is a helpful addition they recently added some tRPC integrations that actually let you add oRPC alongside an existing tRPC setup so you can do so or support a longer term migration.
Do you need an LLM for this? I've made my own in-house fork of a Java library without any LLM help. I needed apache.poi's excel handler to stream, which poi only supports in one direction. Someone had written a poi-compatible library that streamed in the other direction, but it had dependencies incompatible with mine. So I made my own fork with dependencies that worked for me. That got me out of mvn dependency hell.
Of course I'd rather not maintain my own fork of something that always should have been part of poi, but this was better than maintaining an impossible mix of dependencies.
For forking and changing a few things here and there, I could see how there might be less of a need for LLMs, especially if you know what you're doing. But in my case I didn't actually fork `ts-rest`, I built a much smaller custom abstraction from the ground-up and I don't consider myself to be a top-tier dev. In this case it felt like LLMs provided a lot more value, not necessarily because the problem was overly difficult but moreso because of the time saved. Had LLMs not existed, I probably would have never considered doing this as the opportunity cost would have felt too high (i.e. DX work vs critical user-facing work). I estimate it would have taken me ~2 weeks or more to finish the task without LLMs, whereas with LLMs it only took a few days.
I do feel we're heading in a direction where building in-house will become more common than defaulting to 3rd party dependencies—strictly because the opportunity costs have decreased so much. I also wonder how code sharing and open source libraries will change in the future. I can see a world where instead of uploading packages for others to plug into their projects, maintainers will instead upload detailed guides on how to build and customize the library yourself. This approach feels very LLM friendly to me. I think a great example of this is with `lucia-auth`[0] where the maintainer deprecated their library in favour of creating a guide. Their decision didn't have anything to do with LLMs, but I would personally much rather use a guide like this alongside AI (and I have!) rather than relying on a 3rd party dependency whose future is uncertain.
I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.
I've been impressed with Hono's zod Validator [1] and the type-safe "RPC" clients [2] you can get from it. Most of my usage of Hono has been in Deno projects, but it seems like it has good support on Node and Bun, too.
Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?
I always try to throw schema validation of some kind in API calls for any codebase I really need to be reliable.
For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.
For procudtion I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
I found that keeping the frontend & backend in sync was a challenge so I wrote a script that reads the schemas from the backend and generated an API file in the frontend.
I personally I prefer #3 for its explicitness - you can actually review the code it generates for a new/changed endpoint. It does come w/ downside of more code + as codebase gets larger you start to need a cache to not regenerate the entire API every little change.
Overall, I find the explicit approach to be worth it, because, in my experience, it saves days/weeks of eng hours later on in large production codebases in terms of not chasing down server/client validation quirks.
I'll almost always lean on separate packages for any shared logic like that (at least if I can use the same language on both ends).
For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.
It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpiration.
Write them both in TypeScript and have both the request and response shapes defined as schemas for each API endpoint.
The server validates request bodies and produces responses that match the type signature of the response schema.
The client code has an API where it takes the request body as its input shape. And the client can even validate the server responses to ensure they match the contract.
It’s pretty beautiful in practice as you make one change to the API to say rename a field, and you immediately get all the points of use flagged as type errors.
Effect provides a pretty good engine for compile-time schema validation that can be composed with various fetching and processing pipelines, with sensible error handling for cases when external data fails to comply with the schema or when network request fails.
The schema definition is more efficient than writing input validation from scratch anyway so it’s completely win/win unless you want to throw caution to the wind and not do any validation
Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.
I migrated from ts-rest to Effect/HttpApi. It's an incredible ecosystem, and Effect/Schema has over taken my domain layer. Definitely a learning curve though.
While true, in practice you'd only write this code once as a utility function; compare two extra bits of code in your own utility function vs loading 36 kB worth of JS.
Yeah, that's the classic bundle size vs DX trade-off. Fetch definitely requires more boilerplate. The manual response.ok check and double await is annoying. For Lambda where I'm optimizing for cold starts, I'll deal with it, but for regular app dev where bundle size matters less, axios's cleaner API probably wins for me.
Agreed, but I think that in every project I've done I've put at least a minimal wrapper function around axios or fetch - so adding a teeny bit more to make fetch nicer feels like tomayto-tomahto to me.
You’re shooting yourself in the foot if you put naked fetch calls all over the place in your own client SDK though. Or at least going to extra trouble for no benefit
Depends on your definition of clean, I consider this to be "clever" code, which is harder to read at a glance.
You'd probably put the code that runs the request in a utility function, so the call site would be `await myFetchFunction(params)`, as simple as it gets. Since it's hidden, there's no need for the implementation of myFetchFunction to be super clever or compact; prefer readability and don't be afraid of code length.
Except you might want different error handling for different error codes. For example, our validation errors return a JSON object as well but with 422.
So treating "get a response" and "get data from a response" separately works out well for us.
The first `await` is waiting for the response-headers to arrive, so you know the status code and can decide what to do next. The second `await` is waiting for the full body to arrive (and get parsed as JSON).
It's designed that way to support doing things other than buffering the whole body; you might choose to stream it, close the connection early etc. But it comes at the cost of awkward double-awaiting for the common case (always load the whole body and then decide what happens next).
let r = await fetch(...);
if(!r.ok) ...
let len = response.headers.get("Content-Length");
if(!len || new Number(len) > 1000 * 1000)
throw new Error("Eek!");
IMU because you don't necessarily want the response body. The first promise resolves after the headers are received, the .json() promise resolves only after the full body is received (and JSON.parse'd, but that's sync anyway).
Honestly it feels like yak shaving at this point; few people would write low-level code like this very often. If you connect with one API, chances are all responses are JSON so you'd have a utility function for all requests to that API.
Code doesn't need to be concise, it needs to be clear. Especially back-end code where code size isn't as important as on the web. It's still somewhat important if you run things on a serverless platform, but it's more important then to manage your dependencies than your own LOC count.
There has to be something wrong with a tech stack (Node + Lambda) that adds 100ms latency for some requests, just to gain the capability [1] to send out HTTP requests within an environment that almost entirely communicates via HTTP requests.
[1] convenient capability - otherwise you'd use XMLHttpRequest
1. This is not 100ms latency for requests. It's 100ms latency for the init of a process that loads this code. And this was specifically in the context of a Lambda function that may only have 128MB RAM and like 0.25vCPU. A hello world app written in Java that has zero imports and just prints to stdout would have higher init latency than this.
2. You don't need to use axios. The main value was that it provides a unified API that could be used across runtimes and has many convenient abstractions. There were plenty of other lightweight HTTP libs that were more convenient than the stdlib 'http' module.
Totally get that! I think it depends on your context. For Lambda where every KB and millisecond counts, native fetch wins, but for a full app where you need robust HTTP handling, the axios plugin ecosystem was honestly pretty nice. The fragmentation with fetch libraries is real. You end up evaluating 5 different retry packages instead of just grabbing axios-retry.
I think that's the sweet spot. Native fetch performance with axios-style conveniences. Some libraries are moving in that direction, but nothing's really nailed it yet. The challenge is probably keeping it lightweight while still solving the evaluating 5 retry packages problem.
Ky is definitely one of the libraries moving in that direction. Good adoption based on those download numbers, but I think the ecosystem is still a bit fragmented. You've got ky, ofetch, wretch, etc. all solving similar problems. But yeah, ky is probably the strongest contender right now, in my opinion.
I'm actually not a big fan of the async .json from fetch, because when it fails (because "not json"), then you can't peak at the text instead. Of course, you can clone the response, apparently, and then read text from the clone... and if you're wrapping for some other handling, it isn't too bad.
I've had Claude decide to replace my existing fetch-based API calls with Axios (not installed or present at all in the project), apropos of nothing during an unrelated change.
Right?! I think a lot of devs got stuck in the axios habit from before Node 18 when fetch wasn't built-in. Plus axios has that batteries included feel with interceptors, auto-JSON parsing, etc. But for most use cases, native fetch + a few lines of wrapper code beats dragging in a whole dependency.
This is all very good news. I just got an alert about a vulnerability in a dependency of axios (it's an older project). Getting rid of these dependencies is a much more attractive solution than merely upgrading them.
No idea how much compatibility breakage there is, but it's probably going to have to happen at some point, and reducing dependencies sounds worth it to me.
Interceptors (and extensions in general) are the killer feature for axios still. Fetch is great for scripts, but I wouldn't build an application on it entirely; you'll be rewriting a lot or piecing together other libs.
As a library author it's the opposite, while fetch() is amazing, ESM has been a painful but definitely worth upgrade. It has all the things the author describes.
Interesting to get a library author's perspective. To be fair, you guys had to deal with the whole ecosystem shift: dual package hazards, CJS/ESM compatibility hell, tooling changes, etc so I can see how ESM would be the bigger story from your perspective.
I'm a small-ish time author, but it was really painful for a while since we were all dual-publishing in CJS and ESM, which was a mess. At some point some prominent authors decided to go full-ESM, and basically many of us followed suit.
The fetch() change has been big only for the libraries that did need HTTP requests, otherwise it hasn't been such a huge change. Even in those it's been mostly removing some dependencies, which in a couple of cases resulted in me reducing the library size by 90%, but this is still Node.js where that isn't such a huge deal as it'd have been on the frontend.
Now there's an unresolved one, which is the Node.js streams vs WebStreams, and that is currently a HUGE mess. It's a complex topic on its own, but it's made a lot more complex by having two different streaming standards that are hard to match.
What a dual-publishing nightmare. Someone had to break the stalemate first. 90% size reduction is solid even if Node bundle size isn't as critical. The streams thing sounds messy, though. Two incompatible streaming standards in the same runtime is bound to create headaches.
I maintain a library also, and the shift to ESM was incredibly painful, because you still have to ship CJS, only now you have work out how to write the code in a way that can be bundled either way, can be tested, etc etc.
It was a pain, but rollup can export both if you write the source in esm. The part I find most annoying is exporting the typescript types. There's no tree-shaking for that!
For simple projects you needed now to add rollup or other build system that didn't have or need it before. For complex systems (with non-trivial exports), now you have a mess since it wouldn't work straight away.
Now with ESM if you write plain JS it works again. If you use Bun, it also works with TS straight away.
This is where I actually appreciated Deno's start with a clean break from npm, and later in pushing jsr. I'm mixed on how much of Node has come into Deno, however.
The fact that CJS/ESM compatibility issues are going away indicates it was always a design choice and never a technical limitation (most CJS format code can consume ESM and vice versa). So much lost time to this problem.
It was neither a design choice nor a technical limitation. It was a big complicated thing which necessarily involved fiddly internal work and coordination between relatively isolated groups. It got done when someone (Joyee Cheung) actually made the fairly heroic effort to push through all of that.
Node.js made many decisions that have massive impact on ESM adoption. From forcing extensions and dropping index.js to loaders and complicated package.json "exports". In addition to node.js steamrolling everyone, tc39 keep making are idiotic changes to spec like `deffered import` and `with` syntax changes.
Requiring file extensions and not supporting automatic "index" imports was a requirement from Browsers where you can't just scan a file system and people would be rightfully upset if their browser modules sent 4-10 HEAD requests to find the file it was looking for.
"exports" controls in package.json was something package/library authors had been asking for for a long time even under CJS regimes. ESM gets a lot of blame for the complexity of "exports", because ESM packages were required to use it but CJS was allowed to be optional and grandfathered, but most of the complexity in the format was entirely due to CJS complexity and Node trying to support all the "exports" options already in the wild in CJS packages. Because "barrel" modules (modules full of just `export thing from './thing.js'`) are so much easier to write in ESM I've yet to see an ESM-only project with a complicated "exports". ("exports" is allowed to be as simple as the old main field, just an "index.js", which can just be an easily written "barrel" module).
> tc39 keep making are idiotic changes to spec like `deffered import` and `with` syntax changes
I'm holding judgment on deferred imports until I figure out what use cases it solves, but `with` has been a great addition to `import`. I remember the bad old days of crazy string syntaxes embedded in module names in AMD loaders and Webpack (like the bang delimited nonsense of `json!embed!some-file.json` and `postcss!style-loader!css!sass!some-file.scss`) and how hard it was to debug them at times and how much they tied you to very specific file loaders (clogging your AMD config forever, or locking you to specific versions of Webpack for fear of an upgrade breaking your loader stack). Something like `import someJson from 'some-file.json' with { type: 'json', webpackEmbed: true }` is such a huge improvement over that alone. The fact that it is also a single syntax that looks mostly like normal JS objects for other very useful metadata attribute tools like bringing integrity checks to ESM imports without an importmap is also great.
You're right. It wasn't a design choice or technical limitation, but a troubling third thing: certain contributors consistently spreading misinformation about ESM being inherently async (when it's only conditionally async), and creating a hostile environment that “drove contributors away” from ESM work - as the implementer themselves described.
Today, no one will defend ERR_REQUIRE_ESM as good design, but it persisted for 5 years despite working solutions since 2019. The systematic misinformation in docs and discussions combined with the chilling of conversations suggests coordinated resistance (“offline conversations”). I suspect the real reason for why “things do and don’t happen” is competition from Bun/Deno.
There were some legitimate technical decisions, that said, imho, Node should have just stayed compatible with Babel's implementation and there would have been significantly less friction along the way. It was definitely a choice not to do so, for better and worse.
It's interesting to see how many ideas are being taken from Deno's implementations as Deno increases Node interoperability. I still like Deno more for most things.
Those... are not mutually exclusive as killer upgrade. No longer having to use a nonsense CJS syntax is absolutely also a huge deal.
Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.
With node:fetch you're going to have to write a wrapper for error handling/logging/retries etc. in any app/service of size. After a while, we ended up with something axios/got-like anyway that we had to fix a bunch of bugs in.
It has always astonished me that platforms did not have first class, native "http client" support. Pretty much every project in the past 20 years has needed such a thing.
Also, "fetch" is lousy naming considering most API calls are POST.
That's a category error. Fetch is just refers to making a request. POST is the method or the HTTP verb used when making the request. If you're really keen, you could roll your own
I read this as OP commenting on the double meaning of the category. In English, “fetch” is a synonym of “GET”, so it’s silly that “fetch” as a category is independent of the HTTP method
Node was created with first-class native http server and client support. Wrapper libraries can smooth out some rough edges with the underlying api as well as make server-side js (Node) look/work similar to client-side js (Browser).
Undici is solid. Being the engine behind Node's fetch is huge. The performance gains are real and having it baked into core means no more dependency debates. Plus, it's got some great advanced features (connection pooling, streams) if you need to drop down from the fetch API. Best of both worlds.
It's into core but not exposed to users directly. you still need to install the npm module if you want to use it, which is required if you need for example to go through an outgoing proxy in your production environment
It kills me that I keep seeing axios being used instead of fetch, it is like people don't care, copy-paste existing projects as starting point and that is it.
There is no cleverness involved. The escape sequences are decades old and universally supported as a de facto standard. In this case the escape sequences are assigned to variables that are attached to other strings. This is as clever as using an operator. These escape sequences are even supported by Chromes dev tools console directly in the browser.
The real issue is invented here syndrome. People irrationally defer to libraries to cure their emotional fear of uncertainty. For really large problems, like complete terminal emulation, I understand that. However, when taken to an extreme, like the left pad debacle, it’s clear people are loading up on dependencies for irrational reasons.
I despise these microlibraries as much as anyone, but your solution will also print escape codes when they're not needed (such as when piping output to e.g. grep). If it's something that makes sense only in interactive mode, then fine, but I've seen enough broken programs that clearly weren't designed to be run as a part of a UNIX shell, even when it makes a lot of sense.
It's easy to solve though, simply assign empty strings to escape code variables when the output is not an interactive shell.
Yes, the tree view of dependencies in pnpm breaks my terminal environment when I attempt to pipe it through |less. Several JS related tools seem to have this undesirable behavior. I assume most users never view dependencies at that depth or use a more elaborate tool to do so. I found this symptomatic of the state of the JS ecosystem.
That is not a string output problem. That is a terminal emulator problem. It is not the job of an application to know the modes and behaviors of the invoking terminal/shell. This applies exactly the same for all other applications that write to stdout. There is no cleverness here. But, if you really really want to avoid the ANSI descriptors for other reasons, maybe you just don't liked colored output, my applications features a no-color option that replaces the ANSI control string values with empty strings.
The terminal emulator is not involved in piping to other processes. Grep will search over escape codes if you don’t suppress them in non-interactive environments, so this most definitely is a string output problem.
If you want to do it yourself, do it right—or defer to one of the battle-tested libraries that handle this for you, and additional edge cases you didn’t think of (such as NO_COLOR).
Typically JS developers define that as assuming something must be safe if enough people use it. That is a huge rift between the typical JS developer and organizations that actually take security more seriously. There is no safety rating for most software on NPM and more and more highly consumed packages are being identified as malicious or compromised.
If you do it yourself and get it wrong there is still a good chance you are in a safer place than completely throwing dependency management to the wind or making wild guesses upon vetted by the community.
I mean we’re talking about libraries to format strings here, not rendering engines. If you doubt the quality of such a lib, go read the source on GitHub. That’s what I usually do before deciding if I install something or implement it myself.
It is 100% the job of the application. This is why a lot of programs have options like --color=auto so it infers the best output mode based on the output FD type e.g. use colours for terminals but no colours for pipes.
It depends on the audience / environment where your app is used. Public, a library is better. Internal / defined company environment, you don't need extra dependencies (but only when it comes to such simple solutions, that could be replaced easy with a lib).
that's needlessly pedantic. the GP is noting that it's built into node's standard library, which might discourage you from installing a library or copying a table of ansi escapes.
I have a "ascii.txt" file ready to copy/paste the "book emoji" block chars to prepend my logs. It makes logs less noisy. HN can't display them, so I'll have to link to page w/ them: https://www.piliapp.com/emojis/books/
Caz it has more than those book emojis. It makes writing geometric code docstrings easier. Here's the rest of it (HN doesn't format it good, try copy/paste it).
cjk→⋰⋱| | ← cjk space btw | |
thinsp | |
deg°
⋯ …
‾⎻⎼⎽ lines
_ light lines
⏤ wide lines
↕
∧∨
┌────┬────┐
│ │ ⋱ ⎸ ← left bar, right bar: ⎹
└────┴────┘
⊃⊂ ⊐≣⊏
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯====›‥‥‥‥
I tried node:test and I feel this is very useful for tiny projects and library authors who need to cut down on 3rd party dependencies, but it's just too barebones for larger apps and node:assert is a bit of a toy, so at a minimum you want to pull in a more full-fledged assertion library. vitest "just works", however, and paves over a lot of TypeScript config malarkey. Jest collapsed under its own weight.
As someone who eschewed jest and others for years for the simplicity of mocha, I still appreciate the design decision of mocha to keep the assertions library separate from the test harness. Which is to point out that chai [1] is still a great assertions library and only an assertions library.
(I haven't had much problem with TypeScript config in node:test projects, but partly because "type": "module" and using various versions of "erasableSyntaxOnly" and its strict-flag and linter predecessors, some of which were good ideas in ancient mocha testing, too.)
Eh, the Node test stuff is pretty crappy, and the Node people aren't interested in improving it. Try it for a few weeks before diving headfirst into it, and you'll see what I mean (and then if you go to file about those issues, you'll see the Node team not care).
I just looked at the documentation and it seems there's some pretty robust mocking and even custom test reporters. Definitely sounds like a great addition. As you suggest, I'll temper my enthusiasm until I actually try it out.
On a separate note, I’ve found that LLMs have been writing me really shitty tests. Which is really sad cause that was the first use case I had for copilot, and I was absolutely smitten.
They’re great at mocking to kingdom come for the sake of hitting 90% coverage. But past that it seems like they just test implementation enough to pass.
Like I’ve found that if the implementation is broken (even broken in hilariously obvious ways, like if (dontReturnPancake) return pancake; ), they’ll usually just write tests to pass the bad code instead of saying “hey I think you messed up on line 55…”
I've heard it recommended; other than speed, what does it have to offer? I'm not too worried about shaving off half-a-second off of my personal projects' 5-second test run :P
I don’t think it’s actually faster than Jest in every circumstance. The main selling point, IMO, is that Vitest uses Vite’s configuration and tooling to transform before running tests. This avoids having to do things like mapping module resolution behaviour to match your bundler. Not having to bother with ts-jest or babel-jest is also a plus.
It has native TS and JSX support, excellent spy, module, and DOM mocking, benchmarking, works with vite configs, and parallelises tests to be really fast.
Nice post! There's a lot of stuff here that I had no idea was in built-in already.
I tried making a standalone executable with the command provided, but it produced a .blob which I believe still requires the Node runtime to run. I was able to make a true executable with postject per the Node docs[1], but a simple Hello World resulted in a 110 MB binary. This is probably a drawback worth mentioning.
Also, seeing those arbitrary timeout limits I can't help but think of the guy in Antarctica who had major headaches about hardcoded timeouts.[2]
I have a blog post[1] and accompanying repo[2] that shows how to use SEA to build a binary (and compares it to bun and deno) and strip it down to 67mb (for me, depends on the size of your local node binary).
It's not insane at all. Any binary that gets packed with the entire runtime will be in MBs. But that's the point, the end user downloads a standalone fragment and doesn't need to give a flying fuck about what kind of garbage has to be preinstalled for the damn binary to work. You think people care if a binary is 5MB or 50MB in 2025? It's more insane that you think it's insane than it is actually insane. Reminds me of all the Membros and Perfbros crying about Electron apps and meanwhile these things going brrrrrrr with 100MB+ binaries and 1GB+ eaten memory on untold millions of average computers
The fact that it’s normalized to use obscene amounts of memory for tiny apps should not be celebrated.
I assure you, at scale this belief makes infra fall apart, and I’ve seen it happen so, so many times. Web devs who have never thought about performance merrily chuck huge JSON blobs or serialized app models into the DB, keep clicking scale up when it gets awful, and then when that finally doesn’t work, someone who _does_ care gets hired to fix it. Except that person or team now has to not only fix years of accumulated cruft, but also has to change a deeply embedded culture, and fight for dev time against Product.
Yeah, many people here are saying this is AI written. Possibly entirely.
It says: "You can now bundle your Node.js application into a single executable file", but doesn't actually provide the command to create the binary. Something like:
The LLM made this sound so epic: "The node: prefix is more than just a convention—it’s a clear signal to both developers and tools that you’re importing Node.js built-ins rather than npm packages. This prevents potential conflicts and makes your code more explicit about its dependencies."
Agreed. It's surprising to see this sort of slop on the front page, but perhaps it's still worthwhile as a way to stimulate conversation in the comments here?
Same, but, I'm struggling with the idea that even if I learn things I haven't before, at the limit, it'd be annoying if we gave writing like this a free pass continuously - I'd argue filtered might not be the right word - I'd be fine with net reduction. Theres something bad about adding fluff (how many game changers were there?)
An alternative framing I've been thinking about is, there's clearly something bad when you leave in the bits that obviously lower signal to noise ratio for all readers.
Then throw in the account being new, and, well, I hope it's not a harbinger.*
You can critique the writing without calling into question how it was written. Speculation on the tools used to write it serves no purpose beyond making a, possibly unfounded, value judgement against the writer.
I think this is both valuable, and yet, it is also the key to why the forest will become dark.
I'm not speculating - I have to work with these things so darn much that the tells are blindingly obvious - and the tells are well-known, ex. there's a gent who benchmarks "it's not just x - it's y" shibboleths for different models.
However, in a rigorous sense I am speculating: I cannot possibly know an LLM was used.
Thus, when an LLM is used, I am seeing an increasing fraction of conversation litigating whether is appropriate, whether it matters, if LLMs are good, and since anyone pointing it out could be speculating, now, the reaction hinges on how you initially frame this observation.
Ex. here, I went out of my way to make a neutral-ish comment given an experience I had last week (see other comment by me somewhere down stream)
Lets say I never say LLM, instead, frame it as "Doesn't that just mean it's a convention?" and "How are there so many game-changers?", which is obvious to audience is a consequence of using an LLM, and yet, also looks like you're picking on someone (are either of those bad writing? I only had one teacher who ever would take umbrage at somewhat subtle fluff like this)
Anyways this is all a bunch of belly-aching to an extent, you're right, and its the way to respond. There's a framing where the only real difficulty here is critiquing the writing without looking like you're picking on someone.
EDIT: Well, except for one more thing: what worries me the most when I see someone using the LLM and incapable of noticing tells and incapable of at least noticing the tells are weakening writing is...well, what else did they miss? What else did the LLM write that I have to evaluate for myself? So it's not so much as somewhat-bad writing, 90%+ still, that bothers me: its that idk what's real, and it feels like a waste of time even being offered it to read if I have to check everything.
Critique of the output is fine in my eyes. If you don't enjoy the style, format, choice of words, etc I think that's fair game even if it's superficial/subjective. It often is with art.
Placing a value judgement on someone for how the art was produced is gatekeeping. What if the person is disabled and uses an LLM for accessibility reasons as one does with so many other tools? I dunno, that seems problematic to me but I understand the aversion to the output.
For example maybe it's like criticising Hawking for not changing his monotone voice vs using the talker all together. Perhaps not the best analogy.
The author can still use LLMs to adjust the style according to criticism of the output if they so choose.
Cheers, it's gotta be the "I see this every day for hours" thing - I have a hard time mentioning it because there's a bunch of people who would like to think they have similar experience and yet don't see the same tells. But for real, I've been on these 8+ hours a day for 2 years now.
And it sounds like you have the same surreal experience as me...it's so blindingly. obvious. that the only odd thing is people not mentioning it.
And the tells are so tough, like, I wanted to a bang a drum over and over again 6 weeks ago about the "It's not X-it's Y" thing, I thought it was a GPT-4.1 tell.
Then I found this under-publicized gent doing God's work: ton of benchmarks, one of them being "Not X, but Y" slop and it turned out there was 40+ models ahead of it, including Gemini (expected, crap machine IMHO), and Claude, and I never would have guessed the Claudes. https://x.com/sam_paech/status/1950343925270794323
Here, I'd hazard that 15% of front page posts in July couldn't pass a "avoids well-known LLM shibboleths" check.
Yesterday night, about 30% of my TikTok for you page was racist and/or homophobic videos generated by Veo 3.
Last year I thought it'd be beaten back by social convention. (i.e. if you could showed it was LLM output, it'd make people look stupid, so there was a disincentive to do this)
The latest round of releases was smart enough, and has diffused enough, that seemingly we have reached a moment where most people don't know the latest round of "tells" and it passes their Turing test., so there's not enough shame attached to prevent it from becoming a substantial portion of content.
I commented something similar re: slop last week, but made the mistake of including a side thing about Markdown-formatting. Got downvoted through the floor and a mod spanking, because people bumrushed to say that was mean, they're a new user so we should be nicer, also the Markdown syntax on HN is hard, also it seems like English is their second language.
And the second half of the article was composed of entirely 4 item lists.
There's just so many tells in this one though and they aren'tn new ones. Like a dozen+, besides just the entire writing style being one, permeating through every word.
I'm also pretty shocked how HNers don't seem to notice or care, IMO it makes it unreadable.
I'd write an article about this but all it'd do is make people avoid just those tells and I'm not sure if that's an improvement.
Matteo Collina says that the node fetch under the hood is the fetch from the undici node client [0]; and that also, because it needs to generate WHATWG web streams, it is inherently slower than the alternative — undici request [1].
I did some testing on an M3 Max Macbook Pro a couple of weeks ago. I compared the local server benchmark they have against a benchmark over the network. Undici appeared to perform best for local purposes, but Axios had better performance over the network.
I am not sure why that was exactly, but I have been using Undici with great success for the last year and a half regardless. It is certainly production ready, but often requires some thought about your use case if you're trying to squeeze out every drop of performance, as is usual.
I really wish ESM was easier to adopt. But we're halfway through 2025 and there are still compatibility issues with it. And it just gets even worse now that so many packages are going ESM only. You get stuck having to choose what to cut out. I write my code in TS using ESM syntax, but still compile down to CJS as the build target for my sanity.
In many ways, this debacle is reminiscent of the Python 2 to 3 cutover. I wish we had started with bidirectional import interop and dual module publications with graceful transitions instead of this cold turkey "new versions will only publish ESM" approach.
Don’t use enums. They are problematic for a few reasons, but the ability to run TS code without enums without a build step should be more than enough of a reason to just use a const object instead.
In Node 22.7 and above you can enable features like enums and parameter properties with the --experimental-transform-types CLI option (not to be confused with the old --experimental-strip-types option).
It's still not ready for use. I don't care Enum. But you can not import local files without extensions. You can not define class properties in constructor.
Enums and parameter properties can be enabled with the --experimental-transform-types CLI option.
Not being able to import TypeScript files without including the ts extension is definitely annoying. The rewriteRelativeImportExtensions tsconfig option added in TS 5.7 made it much more bearable though. When you enable that option not only does the TS compiler stop complaining when you specify the '.ts' extension in import statements (just like the allowImportingTsExtensions option has always allowed), but it also rewrites the paths if you compile the files, so that the build artifacts have the correct js extension: https://www.typescriptlang.org/docs/handbook/release-notes/t...
Importing without extensions is not a TypeScript thing at all. Node introduced it at the beginning and then stopped when implementing ESM. Being strict is a feature.
What's true is that they "support TS" but require .ts extensions, which was never even allowed until Node added "TS support". That part is insane.
TS only ever accepted .js and officially rejected support for .ts appearing in imports. Then came Node and strong-armed them into it.
Anyone else find they discover these sorts of things by accident. I never know when a feature was added but vague ideas of "thats modern". Feels different to when I only did C# and you'd read the new language features and get all excited. In a polyglot world and just the rate even individual languages evolve its hard to keep up! I usually learn through osmosis or a blog post like this (but that is random learning).
Maybe there's an idea in here, a website that shows you all release notes since the last time you've used something, removing those that have been superseeded by later ones, ranked by importance.
I think slowly Node is shaping up to offer strong competition to Bun.js, Deno, etc. such that there is little reason to switch. The mutual competition is good for the continued development of JS runtimes
Slowly, yes, definitely welcome changes. I'm still missing Bun's `$` shell functions though. It's very convenient to use JS as a scripting language and don't really want to run 2 runtimes on my server.
Starting a new project, I went with Deno after some research. The NPM ecosystem looked like a mess; and if Node's creator considers Deno the future and says it addresses design mistakes in Node, I saw no reason to doubt him.
I was really trying to use deno but it still is not there yet. Node might not be the cool kid, but it works and if you get stuck the whole internet is here to help (or at least stack overflow).
We once reported an issue and it got fixed really quickly. But then we had troubles connecting via TLS (mysql on google cloud platform) and after a long time debugging found out the issue is actually not in deno, but in RustTLS, which is used by deno. Even a known issue in RustTLS - still hard to find out if you don't already know what you are searching for.
It was then quicker to switch to nodejs with a TS runner.
Well I don't think there is much choice. They write the core in rust. Plus openssl is a bit old and bloated. So it's not wrong to pick RustTLS if you want to be hip.
I've been away from the node ecosystem for quite some time. A lot of really neat stuff in here.
Hard to imagine that this wasn't due to competition in the space. With Deno and Bun trying to eat up some of the Node market in the past several years, seems like the Node dev got kicked into high gear.
Most people (including the author apparently) don't know they can chain errors with cause option in-built way in node and in browser. It is not just arbitrary extending and it is relatively a new thing. https://nodejs.org/api/errors.html#errorcause
Most importantly, Node has Typescript support even in LTS (starting with v22.18).
I highly recommend the `erasableSyntaxOnly` option in tsconfig because TS is most useful as a linter and smarter Intellisense that doesn't influence runtime code:
Something's missing in the "Modern Event Handling with AsyncIterators" section.
The demonstration code emits events, but nothing receives them. Hopefully some copy-paste error, and not more AI generated crap filling up the internet.
It's definitely ai slop. See also the nonsensical attempt to conditionally load SQLite twice, in the dynamic imports example.
The list of features is nice, I suppose, for those who aren't keeping up with new releases, but IMO, if you're working with node and js professionally, you should know about most, if not all of these features.
Between browsers and Electron, even those of us who hate this ecosystem are forced to deal with it, and if one does, at least one can do it with slightly more comfort using the newer tooling.
Be honest. Did you write this comment with an LLM?
Why should it matter beyond correctness of the content, which you and the author need to evaluate either way.
Personally, I'm exhausted with this sentiment. There's no value in questioning how something gets written, only the output matters. Otherwise we'd be asking the same about pencils, typewriters, dictionaries and spellcheck in some pointless persuit of purity.
What, surely you’re not implying that bangers like the following are GPT artifacts!? “The changes aren’t just cosmetic; they represent a fundamental shift in how we approach server-side JavaScript development.”
True! But writing an egregiously erroneous blogpost used to take actual effort. The existence of a blogpost was a form of proof of work that a human at least thought they knew enough about a topic to write it down and share it.
Now the existence of this blogpost is only evidence that the author has sufficient AI credits they are able to throw some release notes at Claude and generate some markdown, which is not really differentiating.
I'm not sure - a lot of the top comments are saying that this article is great and they learned a lot of new things. Which is great, as long as the things they learned are true things.
This might seem fine at a glance, but a big grip I have with node/js async/promise helper functions is that you can't differ which promise returned/threw an exception.
In this example, if you wanted to handle the `config.json` file not existing, you would need to somehow know what kind of error the `readFile` function can throw, and somehow manage to inspect it in the 'error' variable.
This gets even worse when trying to use something like `Promise.race` to handle promises as they are completed, like:
const result = Promise.race([op1, op2, op3]);
You need to somehow embed the information about what each promise represents inside the promise result, which usually is done through a wrapper that injects the promise value inside its own response... which is really ugly.
You are probably looking for `Promise.allSettled`[1]. Which, to be fair, becomes quite convulated with destructuring (note that the try-catch is not necessary anymore, since allSettled doesn't "throw"):
// Parallel execution of independent operations
const [
{ value: config, reason: configError },
{ value: userData, reason: userDataError },
] = await Promise.allSettled([
readFile('config.json', 'utf8'),
fetch('/api/user').then(r => r.json())
]);
if (configError) {
// Error with config
}
if (userDataError) {
// Error with userData
}
When dealing with multiple parallel tasks that I care about their errors individually, I prefer to start the promises first and then await for their results after all of them are started, that way I can use try catch or be more explicit about resources:
IMO when you do control-flow in catch blocks, you're fighting against the language. You lose Typescripts type-safety, and the whole "if e instanceof ... else throw e"-dance creates too much boilerplate.
If the config file not existing is a handleable case, then write a "loadConfig" function that returns undefined.
About time! The whole dragging the feet on ESM adoption is insane. The npm are still stuck on commonjs is quite a lot. In some way glad jsr came along.
For a more modern approach to .env files that includes built-in validation and type-safety, check out https://varlock.dev
Instead of a .env.example (which quickly gets out of date), it uses a .env.schema - which contains extra metadata as decorator comments. It also introduces a new function call syntax, to securely load values from external sources.
Am I the only one believing common js was super ok and don't like esm? Or put differently I didn't see the necessity of having esm at all in Node. Let alone the browser, imagine loading tons of modules over the wire instead of bundle them
The fact that ESM exports are static - and imports are mostly static - allows for a lot more dead code removal during bundling. That alone is a big win IMHO.
`import * as` is still treeshakeable and Browsers will do it in-memory as the imports form weak proxies.
Bundlers can treeshake it, it is just harder to do, and so it hasn't always been a priority feature. esbuild especially in the last few years has done a lot of work to treeshake `import * as` in many more scenarios than before. Sure it isn't treeshaking "all" scenarios yet, but it's still getting better.
Hi, regarding streams interoperability I've documented how to handle file streams a while ago, after experimenting with Next.js old system (Node.js based) and new system (web based) : https://www.ericburel.tech/blog/nextjs-stream-files#2024-upd....
It sums up as "const stream = fileHandle.readableWebStream()" to produce a web stream using Node.js fs, rather than creating a Node.js stream.
I love Node's built-in testing and how it integrates with VSCode's test runner. But I still miss Jest matchers. The Vitest team ported Jest matchers for their own use. I wish there were a similar compatibility between Jest matchers and Node testing as well.
Currently for very small projects I use the built in NodeJS test tooling.
But for larger and more complex projects, I tend to use Vitest these days. At 40MBs down, and most of the dependency weight falling to Vite (33MBs and something I likely already have installed directly), it's not too heavy of a dependency.
It is based on vite and a bundler has no place in my backend. Vite is based on roll-up, roll-up uses some other things such as swc. I want to use typescript projects and npm workspaces which vite doesn't seem to care about.
Good to see Node is catching up although Bun seems to have more developer effort behind it so I'll typically default to Bun unless I need it to run in an environment where node is better for compatibility.
Some good stuff in here. I had no idea about AsyncIterators before this article, but I've done similar things with generators in the past.
A couple of things seem borrowed from Bun (unless I didn't know about them before?). This seems to be the silver lining from the constant churn in the Javascript ecosystem
As a primary backend developer, I want to add my two cents:
> Top-Level Await: Simplifying Initialization
This feels absolutely horrible to me. There is no excuse for not having a proper entry-point function that gives full control to the developer to execute everything that is needed before anything else happens. Such as creating database connections, starting services and connecting to APIs, warming up caches and so on. All those things should be run (potentially concurrent).
Until this is possible, even with top-level await, I personally have to consider node.js to be broken.
> Modern Testing with Node.js Built-in Test Runner
Sorry, but please do one thing and do it well.
> Async/Await with Enhanced Error Handling
I wish had JVM-like logging and stack traces (including cause-nesting) in node.js...
> 6. Worker Threads: True Parallelism for CPU-Intensive Tasks
This is the biggest issue. There should be really an alternative that has builtin support for parallelism that doesn't force me to de/serialize things by hand.
---
Otherwise a lot of nice progress. But the above ones are bummers.
Javascript is missing some feature that will take it to the next level, and I'm not sure what it is.
Maybe it needs a compile-time macro system so we have go full Java and have magical dependency injection annotations, Aspect-Oriented-Programming, and JavascriptBeans (you know you want it!).
Or maybe it needs to go the Ruby/Python/SmallTalk direction and add proper metaprogramming, so we can finally have Javascript on Rails, or maybe uh... Djsango?
Perhaps the technology that you are using is loaded with hundreds of foot-guns if you have to spend time on enforcing these patterns.
Rather than taking the logical focus on making money, it is wasting time on shuffling around code and being an architecture astronaut with the main focus on details rather than shipping.
One of the biggest errors one can make is still using Node.js and Javascript on the server in 2025.
JS on the backend was arguably an even bigger mistake when the JS ecosystem was less sophisticated. The levels of duct tape are dizzying. Although we might go back even further and ask if JS was also a mistake when it was added to the browser.
I often wonder about a what-if, alternate history scenario where Java had been rolled out to the browser in a more thoughtful way. Poor sandboxing, the Netscape plugin paradigm and perhaps Sun's licensing needs vs. Microsoft's practices ruined it.
> JS on the backend was arguably an even bigger mistake when the JS ecosystem was less sophisticated.
I see it being used since over 25 years for the Austrian national broadcaster. Based at least originally on rhino, so it's also mixed with the Java you love. Fail to see the big issue as it's working just fine for such a long time.
Node.js is a runtime, not a language. It is quite capable, but as per usual, it depends on what you need/have/know, ASP.NET Core is a very good choice too.
In my experience ASP.NET 9 is vastly more productive and capable than Node.js. It has a nicer developer experience, it is faster to compile, faster to deploy, faster to start, serves responses faster, it has more "batteries included", etc, etc...
It has terrible half-completed versions of everything, all of which are subtly incompatible with everything else.
I regularly see popular packages that are developed by essentially one person, or a tiny volunteer team that has priorities other than things working.
Something else I noticed is that NPM packages have little to no "foresight" or planning ahead... because they're simply an itch that someone needed to scratch. There's no cohesive vision or corporate plan as a driving force, so you get a random mish-mash of support, compatibility, lifecycle, support, etc...
That's fun, I suppose, if you enjoy a combinatorial explosion of choice and tinkering with compatibility shims all day instead of delivering boring stuff like "business value".
I used to agree but when you have libraries like Mediatr, mass transit and moq going/looking to go paid I’m not confident that the wider ecosystem is in a much better spot.
It's still single-threaded, it still uses millions of tiny files (making startup very slow), it still has wildly inconsistent basic management because it doesn't have "batteries included", etc...
This is the first I'm hearing of this, and a quick Google search found me a bunch of conflicting "methods" just within the NestJS ecosystem, and no clear indication of which one actually works.
nest build --webpack
nest build --builder=webpack
... and of course I get errors with both of those that I don't get with a plain "nest build". (The error also helpfully specifies only the directory in the source, not the filename! Wtf?)
Is this because NestJS is a "squishy scripting system" designed for hobbyists that edit API controller scripts live on the production server, and this is the first time that it has been actually built, or... is it because webpack has some obscure compatibility issue with a package?
... or is it because I have the "wrong" hieroglyphics in some Typescript config file?
Who knows!
> There's this thing called worker_threads.
Which are not even remotely the same as the .NET runtime and ASP.NET, which have a symmetric threading model where requests are handled on a thread pool by default. Node.js allows "special" computations to be offloaded to workers, but not HTTP requests. These worker threads can only communicate with the main thread through byte buffers!
In .NET land I can simply use a concurrent dictionary or any similar shared data structure... and it just works. Heck, I can process a single IEnumerable, list, or array using parallel workers trivially.
If you read my comment I said there are downsides:
"But yes there are downsides. But the biggest ones you brought up are not true."
My point is.. what you said is NOT true. And even after you're reply, it's still not true. You brought up some downsides in your subsequent reply... but again, your initial reply wasn't true.
That's all. I acknowledge the downsides, but my point remains the same.
No. Because C#, while far from perfect, is still a drastically better language than JS (or even TS), and .NET stdlib comes with a lot of batteries included. Also because the JS package ecosystem is, to put it bluntly, insane; everything breaks all the time. The probability of successfully running a random Node.js project that hasn't been maintained for a few years is rather low.
Unless it changed how NodeJS handles this you shouldn't use Promise.all(). Because if more than one promise rejects then the second rejection will emit a unhandledRejection event and per default that crashes your server. Use Promise.allSettled() instead.
Promise.all() itself doesn't inherently cause unhandledRejection events. Any rejected promise that is left unhandled will throw an unhandledRejection, allSettled just collects all rejections, as well as fulfillments for you. There are still legitimate use cases for Promise.all, as there are ones for Promise.allSettled, Promise.race, Promise.any, etc. They each serve a different need.
Try it for yourself:
> node
> Promise.all([Promise.reject()])
> Promise.reject()
> Promise.allSettled([Promise.reject()])
Promise.allSettled never results in an unhandledRejection, because it never rejects under any circumstance.
I definitely had a crash like that a long time ago, and you can find multiple articles describing that behavior. It was existing for quite a time, so I didn't think that is something they would fix so I didn't keep track of it.
Weird. Also just tried it with v8 and it doesn't behave like I remember and also not like certain descriptions that I can find online. I remember it because I was so extremely flabbergasted when I found out about this behavior and it made me go over all of my code replacing any Promise.all() with Promise.allSettled(). And it's not just me, this blog post talks about that behavior:
Maybe my bug was something else back then and I found a source claiming that behavior, so I changed my code and as a side effect my bug happened to go away coincidentally?
I am being sincere and a little self deprecating when I say: because I prefer Gen X-coded projects (Node, and Deno for that matter) to Gen Z-coded projects (Bun).
Bun being VC-backed allows me to fig-leaf that emotional preference with a rational facade.
I think I kind of get you, there's something I find off putting about Bun like it's a trendy ORM or front end framework where Node and Deno are trying to be the boring infrastructure a runtime should be.
Not to say Deno doesn't try, some of their marketing feels very "how do you do fellow kids" like they're trying to play the JS hype game but don't know how to.
Yes, that's it. I don't want a cute runtime, I want a boring and reliable one.
Deno has a cute mascot, but everything else about it says "trust me, I'm not exciting". Ryan Dahl himself also brings an "I've done his before" pedigree.
Because Bun is still far less mature (and the stack on which it builds is even less so - Zig isn't even 1.0).
Because its Node.js compat isn't perfect, and so if you're running on Node in prod for whatever reason (e.g. because it's an Electron app), you might want to use the same thing in dev to avoid "why doesn't it work??" head scratches.
Because Bun doesn't have as good IDE integration as Node does.
I haven't used it for a few months but in my experience, its package/monorepo management features suck compared to pnpm (dependencies leak between monorepo packages, the command line is buggy, etc), bun --bun is stupid, build scripts for packages routinely blow up since they use node so i end up needing to have both node and bun present for installs to work, packages routinely crash because they're not bun-compatible, most of the useful optimizations are making it into Node anyway, and installing ramda or whatever takes 2 seconds and I trust it so all of Bun's random helper libraries are of marginal utility.
We’ve made a lot of progress on bun install over the last few months:
- isolated, pnpm-style symlink installs for node_modules
- catalogs
- yarn.lock support (later today)
- bun audit
- bun update —interactive
- bun why <pkg> helps find why a package is installed
- bun info <pkg>
- bun pm pkg get
- bun pm version (for bumping)
We will also support pnpm lockfile migration next week. To do that, we’re writing a YAML parser. This will also unlock importing YAML files in JavaScript at runtime.
because bun is written in a language that isn't even stable (zig) and uses webkit. None of the developer niceties will cover that up. I also don't know if they'll be able to monetize, which means it might die if funding dries up.
I could never get into node but i've recently been dabbling with bun which is super nice. I still don't think i'll give node a chance but maybe i'm missing out.
online writing before 2022 is the low-background steel of the information age. now these models will all be training on their own output. what will the consequences be of this?
Not really, from everything I can see, authors are basically forced to ship both, so it’s just another schism. Libraries that stopped shipping CJS we just never adopted, because we’re not dropping mature tech for pointless junior attitudes like this.
No idea why you think otherwise, I’m over here actually shipping.
Architecture astronaut is a term I hadn't heard but can appreciate. However I fail to see that here. It's a fair overview of newish Node features... Haven't touched Node in a few years so kinda useful.
It's a good one with some history and growing public knowledge now. I'd encourage a deep dive, it goes all the way back to at least CPP and small talk.
While I can see some arguments for "we need good tools like Node so that we can more easily write actual applications that solve actual business problems", this seems to me to be the opposite.
All I should ever have to do to import a bunch of functions from a file is
"import * from './path'"
anything more than that is a solution in search of a problem
Did you read the article? Your comments feel entirely disconnected from its contents - mostly low level piece or things that can replace libraries you probably used anyway
One of the core things Node.js got right was streams. (Anyone remember substack’s presentation “Thinking in streams”?) It’s good to see them continue to push that forward.
I think there are several reasons. First, the abstraction of a stream of data is useful when a program does more than process a single realtime loop. For example, adding a timeout to a stream of data, switching from one stream processor to another, splitting a stream into two streams or joining two streams into one, and generally all of the patterns that one finds in the Observable pattern, in unix pipes, and more generally event based systems, are modelled better in push and pull based streams than they are in a real time tight loop. Second, for the same reason that looping through an array using map or forEach methods is often favored over a for loop and for loops are often favored over while loops and while loops are favored over goto statements. Which is that it reduces the amount of human managed control flow bookkeeping, which is precisely where humans tend to introduce logic errors. And lastly, because it almost always takes less human effort to write and maintain stream processing code than it does to write and maintain a real time loop against a buffer.
Streams have backpressure, making it possible for downstream to tell upstream to throttle their streaming. This avoids many issues related to queuing theory.
That also happens automatically, it is abstracted away from the users of streams.
A stream is not necessarily always better than an array, of course it depends on the situation. They are different things. But if you find yourself with a flow of data that you don't want to buffer entirely in memory before you process it and send it elsewhere, a stream-like abstraction can be very helpful.
Why is an array better than pointer arithmetic and manually managing memory? Because it's a higher level abstraction that frees you from the low level plumbing and gives you new ways to think and code.
Streams can be piped, split, joined etc. You can do all these things with arrays but you'll be doing a lot of bookkeeping yourself. Also streams have backpressure signalling
Backpressure signaling can be handled with your own "event loop" and array syntax.
Manually managing memory is in fact almost always better than what we are given in node and java and so on. We succeed as a society in spite of this, not because of this.
There is some diminishing point of returns, say like, the difference between virtual and physical memory addressing, but even then it is extremely valuable to know what is happening, so that when your magical astronaut code doesn't work on an SGI, now we know why.
what? This is an overview of modern features provided in a programming language runtime. Are you saying the author shouldn’t be wasting their time writing about them and should be writing for loops instead? Or are you saying the core devs of a language runtime shouldn’t be focused on architecture and should instead be writing for loops?
The "proper" place to solve this, is in the OS. Where it has been solved, including all the inevitable corner cases, already.
Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem dies it solve that hasn't been solved by other people?
Deployments that need to configure OSes in a particular way are difficult (the existence of docker, kubernetes, snap are symptoms of this difficulty). It requires a high level of privilege to do so. Upgrades and rollbacks are challenging, if ever done. OSes sometimes don't provide solution when we go beyond one hardware.
If "npm start" can restrain the permissions to what it should be for the given version of the code, I will use it and I'll be happy.
Do One Thing (and do it well).
A special domain specific scheduler microservice? One of the many Cron replacements? One of the many "SaaS cron"? Systemd?
This problem has been solved. Corner cases ironed out. Free to use.
Same for ENV var as configurations (as opposed to inventing yet another config solution), file permissions, monitoring, networking, sandboxing, chrooting etc. the amount of broken, insecure or just inefficient DIY versions of stuff handled in an OS I've had to work around is mind boggling. Causing a trice a loss: the time taken to build it. That time not spent on the business domain, and the time to them maintain and debug it for the next fifteen years.
[0] https://www.karltarvas.com/macos-app-sandboxing-via-sandbox-...
Also modern software security is really taking a look at strengthening software against supply chain vulnerabilities. That looks less like traditional OS and more like a capabilities model where you start with a set of limited permissions and even within the same address space it’s difficult to obtain a new permission unless your explicitly given a handle to it (arguably that’s how all permissions should work top to bottom).
The problem with the "solutions" s.a. the one in Node.js is that Node.js doesn't get to decide how eg. domain names are resolved. So, it's easy to fool it to allow or to deny access to something the author didn't intend for it.
Historically, we (the computer users) decided that operating system is responsible for domain name resolution. It's possible that today it does that poorly, but, in principle we want the world to be such that OS takes care of DNS, not individual programs. From administrator perspective, it spares the administrator the need to learn the capabilities, the limitations and the syntax of every program that wants to do something like that.
It's actually very similar thing with logs. From administrator perspective, logs should always go to stderr. Programs that try to circumvent this rule and put them in separate files / send them into sockets etc. are a real sore spot of any administrator who'd spent some times doing his/her job.
Same thing with namespacing. Just let Linux do its job. No need for this duplication in individual programs / runtimes.
I dunno how GP would do it, but I run a service (web app written in Go) under a specific user and lock-down what that user can read and write on the FS.
For networking, though, that's a different issue.
Meaning Windows? It also has file system permissons on an OS level that are well-tested and reliable.
> not all Node developers know or want to know much about the underlying operating system
Thing is, they are likely to not feel up for understanding this feature either, nor write their code to play well with it.
And if they at some point do want to take system permissions seriously, they'll find it infinitely easier to work with the OS.
Just locally, that seems like a huge pain in the ass... At least you can suggest containers which has an easier interface around it generally speaking.
https://man.openbsd.org/pledge.2
This is my thought on using dotenv libraries. The app shouldn’t have to load environment variables, only read them. Using a dotenv function/plugin like in omz is far more preferable.
The argument often heard though is 'but windows'. Though if windows lacks env (or Cron, or chroot, etc) the solution would be to either move to an env that does support it, or introduce some tooling only for the windows users.
Not build a complex, hierarchical directory scanner that finds and merges all sorts of .env .env.local and whatnots.
On dev I often do use .ENV files, but use zenv or a loadenv tool or script outside of the projects codebase to then load these files into the env.
Tooling such as xenv, a tiny bash script, a makefile etc. that devs can then replace with their own if they wish (A windows user may need something different from my zsh built-in). That isn't present at all in prod, or when running in k8s or docker compose locally.
A few years ago, I surfaced a security bug in an integrated .env loader that partly leveraged a lib and partly was DIY/NIH code. A dev built something that would traverse up and down file hierarchies to search for .env.* files and merge them runtime and reload the app if it found a new or changed one. Useful for dev. But on prod, uploading a .env.png would end up in in a temp dir that this homebuilt monstrosity would then pick up. Yes, any internet user could inject most configuration into our production app.
Because a developer built a solution to a problem that was long solved, if only he had researched the problem a bit longer.
We "fixed" it by ripping out thousands of LOCs, a dependency (with dependencies) and putting one line back in the READMe: use an env loader like .... Turned out that not only was it a security issue, it was an inotify hogger, memory hog, and io bottleneck on boot. We could downsize some production infra afterwards.
Yes, the dev built bad software. But, again, the problem wasn't that quality, but the fact it was considered to be built in the first place.
nothing. Except for "portability" arguments perhaps.
Java has had security managers and access restrictions built in but it never worked very well (and is quite cumbersome to use in practice). And there's been lots of bypasses over the years, and patch work fixes etc.
Tbh, the OS is the only real security you could trust, as it's as low a level as any application would typically go (unless you end up in driver/kernal space, like those anti-virus/anti-cheat/crowdstrike apps).
But platform vendors always want to NIH and make their platform slightly easier and still present the similar level of security.
I've been trying to figure out a good way to do this for my Python projects for a couple of years now. I don't yet trust any of the solutions I've come up with - they are inconsistent with each other and feel very ironed to me making mistakes due to their inherent complexity and lack of documentation that I trust.
For a solution to be truly generic to OS, it's likely better done at the network level. Like by putting your traffic through a proxy that only allows traffic to certain whitelisted / blacklisted destinations.
With proxies the challenge becomes how to ensure the untrusted code in the programming language only accesses the network via the proxy. Outside of containers and iptables I haven't seen a way to do that.
OS generic filesystem permissions would be like a OS generic UI framework, it's inherently very difficult and ultimately limited.
Separately, I totally sympathise with you that the OS solutions to networking and filesystem permissions are painful to work with. Even though I'm reasonably comfortable with rwx permissions, I'd never allow untrusted code on a machine which also had sensitive files on it. But I think we should fix this by coming up with better OS tooling, not by moving the problem to the app layer.
Whilst this is (effectively) an Argument From Authority, what makes you assume the Node team haven't considered this? They're famously conservative about implementing anything that adds indirection or layers. And they're very *nix focused.
I am pretty sure they've considered "I could just run this script under a different user"
(I would assume it's there because the Permissions API covers many resources and side effects, some of which would be difficult to reproduce across OSes, but I don't have the original proposal to look at and verify)
I often hear similar arguments for or against database level security rules. Row level security, for example, is a really powerful feature and in my opinion is worth using when you can. Using RLS doesn't mean you skip checking authorization rules at the API level though, you check on author in your business logic _and_ in the database.
If you don't know what DNS search path is, here's my informal explanation: your application may request to connect to foo.bar.com or to bar.com, and if your /etc/resolv.conf contains "search foo", then these two requests are the same request.
This is an important feature of corporate networks because it allows macro administrative actions, temporary failover solutions etc. But, if a program is configured with Node.js without understanding this feature, none of these operations will be possible.
From my perspective, as someone who has to perform ops / administrative tasks, I would hate it if someone used these Node.js features. They would get in the way and cause problems because they are toys, not a real thing. Application cannot deal with DNS in a non-toy way. It's the task for the system.
I also wouldn't really expect it to though, that depends heavily on the environment the app is run in, and if the deployment environment intentionally includes resolv.conf or similar I'd expect the developer(s) to either use a more elegant solution or configure Node to expect those resolutions.
In other words: Node.js doesn't do anything better, but, actually does some things worse. No advantages, only disadvantages... then why use it?
For example, the problem of "one micro service won't connect to another" was traditionally an ops / environments / SRE problem. But now the app development team has to get involved, just in case someone's used one of these new restrictions. Or those other teams need to learn about node.
This is non consensual devops being forced upon us, where everyone has to learn everything.
This leads to the node js teams to have to learn DevOps anyway because the DevOps teams do a subpar job with it otherwise.
Same with doing frontend builds and such. In other languages I’ve noticed (particularly Java / Kotlin) DevOps teams maintain the build tools and configurations around it for the most part. The same has not been true for the node ecosystem, whether it’s backend or Frontend
If an existing feature is used too little, then I'm not sure if rebuilding it elsewhere is the proper solution. Unless the existing feature is in a fundamentally wrong place. Which this isn't: the OS is probably the only right place for access permissions.
An obvious solution would be education. Teach people how to use docker mounts right. How to use chroot. How Linux' chmod and chown work. Or provide modern and usable alternatives to those.
Also, I'd bet my monthly salary on that Node.js implementation of this feature doesn't take into account multiple possible corner cases and configurations that are possible on the system level. In particular, I'd be concerned about DNS search path, which I think would be hard to get right in userspace application. Also, what happens with /etc/hosts?
From administrator perspective I don't want applications to add another (broken) level of manipulating of discovery protocol. It usually very time consuming and labor intensive task to figure out why two applications which are meant to connect aren't. If you keep randomly adding more variables to this problem, you are guaranteed to have a bad time.
And, a side note: you also don't understand English all that well. "Confusion" is present in any situation that needs analysis. What's different is the degree to which it's present. Increasing confusion makes analysis more costly in terms of resources and potential for error. The "solution" offered by Node.js offers to increase confusion, but offers nothing in return. I.e. it creates waste. Or, put differently, is useless, and, by extension, harmful, because you cannot take resources and do nothing and still be neutral: if you waste resources while produce nothing of value, you limit resources to other actors who could potentially make a better use of them.
That's a cool feature. Using jlink for creating custom JVMs does something similar.
That's a good feature. What you are saying is still true though, using the OS for that is the way to go.
How can we offer a solution that is as low or lower friction and does the right thing instead of security theater.
At least we could consider this part of a defense in depth.
We; humans; always reach for instant gratification. The path of low resistance is the one that wins.
I don't understand this sort of complaint. Would you prefer that they didn't worked on this support ever? Exactly what's your point? Airing trust issues?
So what? That's clearly laid out in Node's documentation.
https://nodejs.org/api/permissions.html#file-system-permissi...
What point do you think you're making?
You seem to be confused. The system is not bypassed. The only argument you can make is that the system covers calls to node:fs, whereas some modules might not use node:fs to access the file system. You control what dependencies you run in your system, and how you design your software. If you choose to design your system in such a way that you absolutely need your Node.js app to have unrestricted access to the file systems, you have the tools to do that. If instead you want to lock down file system access, just use node:fs and flip a switch.
> need to demonstrate security compliance.
PHP used to have (actually, still has) an "open_basedir" setting to restrict where a script could read or write, but people found out a number of ways to bypass that using symlinks and other shenanigans. It took a while for the devs to fix the known loopholes. Looks like node has been going through a similar process in the last couple of years.
Similarly, I won't be surprised if someone can use DNS tricks to bypass --allow-net restrictions in some way. Probably not worth a vulnerability in its own right, but it could be used as one of the steps in a targeted attack. So don't trust it too much, and always practice defense in depth!
In both Java and .NET VMs today, this entire facility is deprecated because they couldn't make it secure enough.
e.x. https://go.dev/blog/osroot
The whole idea of a hierarchical directory structure is an illusion. There can be all sorts of cross-links and even circular references.
Edit: Actually, you can even get upload progress, but the implementation seems fraught due to scant documentation. You may be better off using XMLHttpRequest for that. I'm going to try a simple implementation now. This has piqued my curiosity.
Note that a key detail is that your server (and any intermediate servers, such as a reverse-proxy) must support HTTP/2 or QUIC. I spent much more time on that than the frontend code. In 2025, this isn't a problem for any modern client and hasn't been for a few years. However, that may not be true for your backend depending on how mature your codebase is. For example, Express doesn't support http/2 without another dependency. After fussing with it for a bit I threw it out and just used Fastify instead (built-in http/2 and high-level streaming). So I understand any apprehension/reservations there.
Overall, I'm pretty satisfied knowing that fetch has wide support for easy progress tracking.
It seems my original statement that download, but not upload, is well supported was unfortunately correct after all. I had thought that readable/transform streams were all that was needed, but as you noted it seems I've overlooked the important lack of duplex option support in Safari/Firefox[0][1]. This is definitely not wide support! I had way too much coffee.
Thank you for bringing this to my attention! After further investigation, I encountered the same problem as you did as well. Firefox failed for me exactly as you noted. Interestingly, Safari fails silently if you use a transformStream with file.stream().pipeThrough([your transform stream here]) but it fails with a message noting lack of support if you specifically use a writable transform stream with file.stream().pipeTo([writable transform stream here]).
I came across the article you referenced but of course didn't completely read it. It's disappointing that it's from 2020 and no progress has been made on this. Poking around caniuse, it looks like Safari and Firefox have patchy support for similar behavior in web workers, either via partial support or behind flags. So I suppose there's hope, but I'm sorry if I got anyone's hope too far up :(
[0] https://caniuse.com/mdn-api_fetch_init_duplex_parameter [1] https://caniuse.com/mdn-api_request_duplex
Does xhr track if the packet made it to the destination, or only that it was queued to be sent by the OS?
Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.
I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.
I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.
Luckily, oRPC had progressed enough to be viable now. I cannot recommend it over ts-rest enough. It's essentially tRPC but with support for ts-rest style contracts that enable standard OpenAPI REST endpoints.
- https://orpc.unnoq.com/
- https://github.com/unnoq/orpc
However, if you want to lean that direction where it is a helpful addition they recently added some tRPC integrations that actually let you add oRPC alongside an existing tRPC setup so you can do so or support a longer term migration.
- https://orpc.unnoq.com/docs/openapi/integrations/trpc
Of course I'd rather not maintain my own fork of something that always should have been part of poi, but this was better than maintaining an impossible mix of dependencies.
I do feel we're heading in a direction where building in-house will become more common than defaulting to 3rd party dependencies—strictly because the opportunity costs have decreased so much. I also wonder how code sharing and open source libraries will change in the future. I can see a world where instead of uploading packages for others to plug into their projects, maintainers will instead upload detailed guides on how to build and customize the library yourself. This approach feels very LLM friendly to me. I think a great example of this is with `lucia-auth`[0] where the maintainer deprecated their library in favour of creating a guide. Their decision didn't have anything to do with LLMs, but I would personally much rather use a guide like this alongside AI (and I have!) rather than relying on a 3rd party dependency whose future is uncertain.
[0] https://lucia-auth.com/
I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.
[1] https://hono.dev/docs/guides/validation#zod-validator-middle...
[2] https://hono.dev/docs/guides/rpc#client
For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.
For procudtion I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
I found that keeping the frontend & backend in sync was a challenge so I wrote a script that reads the schemas from the backend and generated an API file in the frontend.
1. Shared TypeScript types
2. tRPC/ts-rest style: Automagic client w/ compile+runtime type safety
3. RTK (redux toolkit) query style: codegen'd frontend client
I personally I prefer #3 for its explicitness - you can actually review the code it generates for a new/changed endpoint. It does come w/ downside of more code + as codebase gets larger you start to need a cache to not regenerate the entire API every little change.
Overall, I find the explicit approach to be worth it, because, in my experience, it saves days/weeks of eng hours later on in large production codebases in terms of not chasing down server/client validation quirks.
For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.
It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpiration.
The server validates request bodies and produces responses that match the type signature of the response schema.
The client code has an API where it takes the request body as its input shape. And the client can even validate the server responses to ensure they match the contract.
It’s pretty beautiful in practice as you make one change to the API to say rename a field, and you immediately get all the points of use flagged as type errors.
{ throwNotOk, parseJson }
they know that's 99% of fetch calls, i do t see why it can't be baked in.
The following seems cleaner than either of your examples. But I'm sure I've missed the point.
I share this at the risk of embarrassing myself in the hope of being educated.You'd probably put the code that runs the request in a utility function, so the call site would be `await myFetchFunction(params)`, as simple as it gets. Since it's hidden, there's no need for the implementation of myFetchFunction to be super clever or compact; prefer readability and don't be afraid of code length.
So treating "get a response" and "get data from a response" separately works out well for us.
It's designed that way to support doing things other than buffering the whole body; you might choose to stream it, close the connection early etc. But it comes at the cost of awkward double-awaiting for the common case (always load the whole body and then decide what happens next).
Code doesn't need to be concise, it needs to be clear. Especially back-end code where code size isn't as important as on the web. It's still somewhat important if you run things on a serverless platform, but it's more important then to manage your dependencies than your own LOC count.
[1] convenient capability - otherwise you'd use XMLHttpRequest
2. You don't need to use axios. The main value was that it provides a unified API that could be used across runtimes and has many convenient abstractions. There were plenty of other lightweight HTTP libs that were more convenient than the stdlib 'http' module.
You can obviously do that with fetch but it is more fragmented and more boilerplate
I haven't used it but the weekly download count seems robust.
The fetch() change has been big only for the libraries that did need HTTP requests, otherwise it hasn't been such a huge change. Even in those it's been mostly removing some dependencies, which in a couple of cases resulted in me reducing the library size by 90%, but this is still Node.js where that isn't such a huge deal as it'd have been on the frontend.
Now there's an unresolved one, which is the Node.js streams vs WebStreams, and that is currently a HUGE mess. It's a complex topic on its own, but it's made a lot more complex by having two different streaming standards that are hard to match.
Now with ESM if you write plain JS it works again. If you use Bun, it also works with TS straight away.
Joyee has a nice post going into details. Reading this gives a much more accurate picture of why things do and don't happen in big projects like Node: https://joyeecheung.github.io/blog/2024/03/18/require-esm-in...
"exports" controls in package.json was something package/library authors had been asking for for a long time even under CJS regimes. ESM gets a lot of blame for the complexity of "exports", because ESM packages were required to use it but CJS was allowed to be optional and grandfathered, but most of the complexity in the format was entirely due to CJS complexity and Node trying to support all the "exports" options already in the wild in CJS packages. Because "barrel" modules (modules full of just `export thing from './thing.js'`) are so much easier to write in ESM I've yet to see an ESM-only project with a complicated "exports". ("exports" is allowed to be as simple as the old main field, just an "index.js", which can just be an easily written "barrel" module).
> tc39 keep making are idiotic changes to spec like `deffered import` and `with` syntax changes
I'm holding judgment on deferred imports until I figure out what use cases it solves, but `with` has been a great addition to `import`. I remember the bad old days of crazy string syntaxes embedded in module names in AMD loaders and Webpack (like the bang delimited nonsense of `json!embed!some-file.json` and `postcss!style-loader!css!sass!some-file.scss`) and how hard it was to debug them at times and how much they tied you to very specific file loaders (clogging your AMD config forever, or locking you to specific versions of Webpack for fear of an upgrade breaking your loader stack). Something like `import someJson from 'some-file.json' with { type: 'json', webpackEmbed: true }` is such a huge improvement over that alone. The fact that it is also a single syntax that looks mostly like normal JS objects for other very useful metadata attribute tools like bringing integrity checks to ESM imports without an importmap is also great.
Today, no one will defend ERR_REQUIRE_ESM as good design, but it persisted for 5 years despite working solutions since 2019. The systematic misinformation in docs and discussions combined with the chilling of conversations suggests coordinated resistance (“offline conversations”). I suspect the real reason for why “things do and don’t happen” is competition from Bun/Deno.
It's interesting to see how many ideas are being taken from Deno's implementations as Deno increases Node interoperability. I still like Deno more for most things.
Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.
Also, "fetch" is lousy naming considering most API calls are POST.
That's said there are npm packages that are ridiculously obsolete and overused.
`const { styleText } = require('node:util');`
Docs: https://nodejs.org/api/util.html#utilstyletextformat-text-op...
Using a library which handles that (an a thousand other quirks) makes much more sense
The real issue is invented here syndrome. People irrationally defer to libraries to cure their emotional fear of uncertainty. For really large problems, like complete terminal emulation, I understand that. However, when taken to an extreme, like the left pad debacle, it’s clear people are loading up on dependencies for irrational reasons.
It's easy to solve though, simply assign empty strings to escape code variables when the output is not an interactive shell.
If you want to do it yourself, do it right—or defer to one of the battle-tested libraries that handle this for you, and additional edge cases you didn’t think of (such as NO_COLOR).
Typically JS developers define that as assuming something must be safe if enough people use it. That is a huge rift between the typical JS developer and organizations that actually take security more seriously. There is no safety rating for most software on NPM and more and more highly consumed packages are being identified as malicious or compromised.
If you do it yourself and get it wrong there is still a good chance you are in a safer place than completely throwing dependency management to the wind or making wild guesses upon vetted by the community.
Also, I'm guessing if I pipe your logs to a file you'll still write escapes into it? Why not just make life easier?
1. Node has built in test support now: looks like I can drop jest!
2. Node has built in watch support now: looks like I can drop nodemon!
(I haven't had much problem with TypeScript config in node:test projects, but partly because "type": "module" and using various versions of "erasableSyntaxOnly" and its strict-flag and linter predecessors, some of which were good ideas in ancient mocha testing, too.)
[1] https://www.chaijs.com/
At the end it's just tests, the syntax might be more verbose but Llms write it anyway ;-)
They’re great at mocking to kingdom come for the sake of hitting 90% coverage. But past that it seems like they just test implementation enough to pass.
Like I’ve found that if the implementation is broken (even broken in hilariously obvious ways, like if (dontReturnPancake) return pancake; ), they’ll usually just write tests to pass the bad code instead of saying “hey I think you messed up on line 55…”
The problem isn't in the writing, but the reading!
I’m sure there’s a package for jest to do that (idk maybe that’s what jest extended is?) but the vitest experience is really nice and complete
I tried making a standalone executable with the command provided, but it produced a .blob which I believe still requires the Node runtime to run. I was able to make a true executable with postject per the Node docs[1], but a simple Hello World resulted in a 110 MB binary. This is probably a drawback worth mentioning.
Also, seeing those arbitrary timeout limits I can't help but think of the guy in Antarctica who had major headaches about hardcoded timeouts.[2]
[1]: https://nodejs.org/api/single-executable-applications.html
[2]: https://brr.fyi/posts/engineering-for-slow-internet
[1]: https://notes.billmill.org/programming/javascript/Making_a_s...
[2]: https://github.com/llimllib/node-esbuild-executable#making-a...
I hope you can appreciate how utterly insane this sounds to anyone outside of the JS world. Good on you for reducing the size, but my god…
I assure you, at scale this belief makes infra fall apart, and I’ve seen it happen so, so many times. Web devs who have never thought about performance merrily chuck huge JSON blobs or serialized app models into the DB, keep clicking scale up when it gets awful, and then when that finally doesn’t work, someone who _does_ care gets hired to fix it. Except that person or team now has to not only fix years of accumulated cruft, but also has to change a deeply embedded culture, and fight for dev time against Product.
Not that I find it particularly egregious, but my rust (web-server) apps not even optimized are under 10mb easily.
Go binaries weight 20mb for example.
It says: "You can now bundle your Node.js application into a single executable file", but doesn't actually provide the command to create the binary. Something like:
An alternative framing I've been thinking about is, there's clearly something bad when you leave in the bits that obviously lower signal to noise ratio for all readers.
Then throw in the account being new, and, well, I hope it's not a harbinger.*
* It is and it's too late.
https://hbr.org/2025/08/research-the-hidden-penalty-of-using...
I'm not speculating - I have to work with these things so darn much that the tells are blindingly obvious - and the tells are well-known, ex. there's a gent who benchmarks "it's not just x - it's y" shibboleths for different models.
However, in a rigorous sense I am speculating: I cannot possibly know an LLM was used.
Thus, when an LLM is used, I am seeing an increasing fraction of conversation litigating whether is appropriate, whether it matters, if LLMs are good, and since anyone pointing it out could be speculating, now, the reaction hinges on how you initially frame this observation.
Ex. here, I went out of my way to make a neutral-ish comment given an experience I had last week (see other comment by me somewhere down stream)
Lets say I never say LLM, instead, frame it as "Doesn't that just mean it's a convention?" and "How are there so many game-changers?", which is obvious to audience is a consequence of using an LLM, and yet, also looks like you're picking on someone (are either of those bad writing? I only had one teacher who ever would take umbrage at somewhat subtle fluff like this)
Anyways this is all a bunch of belly-aching to an extent, you're right, and its the way to respond. There's a framing where the only real difficulty here is critiquing the writing without looking like you're picking on someone.
EDIT: Well, except for one more thing: what worries me the most when I see someone using the LLM and incapable of noticing tells and incapable of at least noticing the tells are weakening writing is...well, what else did they miss? What else did the LLM write that I have to evaluate for myself? So it's not so much as somewhat-bad writing, 90%+ still, that bothers me: its that idk what's real, and it feels like a waste of time even being offered it to read if I have to check everything.
Placing a value judgement on someone for how the art was produced is gatekeeping. What if the person is disabled and uses an LLM for accessibility reasons as one does with so many other tools? I dunno, that seems problematic to me but I understand the aversion to the output.
For example maybe it's like criticising Hawking for not changing his monotone voice vs using the talker all together. Perhaps not the best analogy.
The author can still use LLMs to adjust the style according to criticism of the output if they so choose.
It does tell you that if even 95% of HN can't tell, then 99% of the public can't tell. Which is pretty incredible.
And it sounds like you have the same surreal experience as me...it's so blindingly. obvious. that the only odd thing is people not mentioning it.
And the tells are so tough, like, I wanted to a bang a drum over and over again 6 weeks ago about the "It's not X-it's Y" thing, I thought it was a GPT-4.1 tell.
Then I found this under-publicized gent doing God's work: ton of benchmarks, one of them being "Not X, but Y" slop and it turned out there was 40+ models ahead of it, including Gemini (expected, crap machine IMHO), and Claude, and I never would have guessed the Claudes. https://x.com/sam_paech/status/1950343925270794323
The forest is darkening, and quickly.
Here, I'd hazard that 15% of front page posts in July couldn't pass a "avoids well-known LLM shibboleths" check.
Yesterday night, about 30% of my TikTok for you page was racist and/or homophobic videos generated by Veo 3.
Last year I thought it'd be beaten back by social convention. (i.e. if you could showed it was LLM output, it'd make people look stupid, so there was a disincentive to do this)
The latest round of releases was smart enough, and has diffused enough, that seemingly we have reached a moment where most people don't know the latest round of "tells" and it passes their Turing test., so there's not enough shame attached to prevent it from becoming a substantial portion of content.
I commented something similar re: slop last week, but made the mistake of including a side thing about Markdown-formatting. Got downvoted through the floor and a mod spanking, because people bumrushed to say that was mean, they're a new user so we should be nicer, also the Markdown syntax on HN is hard, also it seems like English is their second language.
And the second half of the article was composed of entirely 4 item lists.
I'm also pretty shocked how HNers don't seem to notice or care, IMO it makes it unreadable.
I'd write an article about this but all it'd do is make people avoid just those tells and I'm not sure if that's an improvement.
[0] - https://www.youtube.com/watch?v=cIyiDDts0lo
[1] - https://blog.platformatic.dev/http-fundamentals-understandin...
I did some testing on an M3 Max Macbook Pro a couple of weeks ago. I compared the local server benchmark they have against a benchmark over the network. Undici appeared to perform best for local purposes, but Axios had better performance over the network.
I am not sure why that was exactly, but I have been using Undici with great success for the last year and a half regardless. It is certainly production ready, but often requires some thought about your use case if you're trying to squeeze out every drop of performance, as is usual.
In many ways, this debacle is reminiscent of the Python 2 to 3 cutover. I wish we had started with bidirectional import interop and dual module publications with graceful transitions instead of this cold turkey "new versions will only publish ESM" approach.
Hoisting/import order especially when trying to mock tests.
Whether or not to include extensions, and which extension to use, .js vs .ts.
Things like TS enums will not work.
Not being able to import TypeScript files without including the ts extension is definitely annoying. The rewriteRelativeImportExtensions tsconfig option added in TS 5.7 made it much more bearable though. When you enable that option not only does the TS compiler stop complaining when you specify the '.ts' extension in import statements (just like the allowImportingTsExtensions option has always allowed), but it also rewrites the paths if you compile the files, so that the build artifacts have the correct js extension: https://www.typescriptlang.org/docs/handbook/release-notes/t...
What's true is that they "support TS" but require .ts extensions, which was never even allowed until Node added "TS support". That part is insane.
TS only ever accepted .js and officially rejected support for .ts appearing in imports. Then came Node and strong-armed them into it.
Sometimes I also read the proposals, https://github.com/tc39/proposals
I really want the pipeline operator to be included.
[0] https://nodeweekly.com
https://github.com/sindresorhus/execa/blob/main/docs/bash.md
We once reported an issue and it got fixed really quickly. But then we had troubles connecting via TLS (mysql on google cloud platform) and after a long time debugging found out the issue is actually not in deno, but in RustTLS, which is used by deno. Even a known issue in RustTLS - still hard to find out if you don't already know what you are searching for.
It was then quicker to switch to nodejs with a TS runner.
Hard to imagine that this wasn't due to competition in the space. With Deno and Bun trying to eat up some of the Node market in the past several years, seems like the Node dev got kicked into high gear.
new Error("something bad happened", {cause:innerException})
I highly recommend the `erasableSyntaxOnly` option in tsconfig because TS is most useful as a linter and smarter Intellisense that doesn't influence runtime code:
https://www.typescriptlang.org/tsconfig/#erasableSyntaxOnly
The demonstration code emits events, but nothing receives them. Hopefully some copy-paste error, and not more AI generated crap filling up the internet.
They've also been around for years as another poster mentioned.
The list of features is nice, I suppose, for those who aren't keeping up with new releases, but IMO, if you're working with node and js professionally, you should know about most, if not all of these features.
It's definitely awesome but doesn't seem newsworthy. The experimental stuff seems more along the lines of newsworthy.
Yes. It's been around and relatively stable in V8/Node.js for years now.
Also hadn't caught up with the the `node:` namespace.
1. new technologies
2. vanity layers for capabilities already present
It’s interesting to watch where people place their priorities given those two segments
Such as?
Why should it matter beyond correctness of the content, which you and the author need to evaluate either way.
Personally, I'm exhausted with this sentiment. There's no value in questioning how something gets written, only the output matters. Otherwise we'd be asking the same about pencils, typewriters, dictionaries and spellcheck in some pointless persuit of purity.
Now the existence of this blogpost is only evidence that the author has sufficient AI credits they are able to throw some release notes at Claude and generate some markdown, which is not really differentiating.
In this example, if you wanted to handle the `config.json` file not existing, you would need to somehow know what kind of error the `readFile` function can throw, and somehow manage to inspect it in the 'error' variable.
This gets even worse when trying to use something like `Promise.race` to handle promises as they are completed, like:
You need to somehow embed the information about what each promise represents inside the promise result, which usually is done through a wrapper that injects the promise value inside its own response... which is really ugly.[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
If the config file not existing is a handleable case, then write a "loadConfig" function that returns undefined.
probably 70 to 80% of JS users have barely any idea of the difference because their tooling just makes it work.
Instead of a .env.example (which quickly gets out of date), it uses a .env.schema - which contains extra metadata as decorator comments. It also introduces a new function call syntax, to securely load values from external sources.
Sure, but Bun's implementation is a confusing mess a lot of times. I prefer them separate.
Note: This is no shade toward Bun. I'm a fan of Bun and the innovative spirit of the team behind it.
Also CommonJS does not support tree shaking.
Edit: the proof my point reside in the many libraries which have an open issue because, even if ESM, they don't support tree shaking
Bundlers can treeshake it, it is just harder to do, and so it hasn't always been a priority feature. esbuild especially in the last few years has done a lot of work to treeshake `import * as` in many more scenarios than before. Sure it isn't treeshaking "all" scenarios yet, but it's still getting better.
But for larger and more complex projects, I tend to use Vitest these days. At 40MBs down, and most of the dependency weight falling to Vite (33MBs and something I likely already have installed directly), it's not too heavy of a dependency.
Node test also I dont think is great, because in isomorphic apps you’ll have 2 syntax for testing.
I think the permissions are the core thing we should do, even if we run the apps in docker/dev containers.
Aliases is nice, node:fetch but I guess will break all isomorphic code.
https://caniuse.com/?search=top%20level%20await
A couple of things seem borrowed from Bun (unless I didn't know about them before?). This seems to be the silver lining from the constant churn in the Javascript ecosystem
> Top-Level Await: Simplifying Initialization
This feels absolutely horrible to me. There is no excuse for not having a proper entry-point function that gives full control to the developer to execute everything that is needed before anything else happens. Such as creating database connections, starting services and connecting to APIs, warming up caches and so on. All those things should be run (potentially concurrent).
Until this is possible, even with top-level await, I personally have to consider node.js to be broken.
> Modern Testing with Node.js Built-in Test Runner
Sorry, but please do one thing and do it well.
> Async/Await with Enhanced Error Handling
I wish had JVM-like logging and stack traces (including cause-nesting) in node.js...
> 6. Worker Threads: True Parallelism for CPU-Intensive Tasks
This is the biggest issue. There should be really an alternative that has builtin support for parallelism that doesn't force me to de/serialize things by hand.
---
Otherwise a lot of nice progress. But the above ones are bummers.
Maybe it needs a compile-time macro system so we have go full Java and have magical dependency injection annotations, Aspect-Oriented-Programming, and JavascriptBeans (you know you want it!).
Or maybe it needs to go the Ruby/Python/SmallTalk direction and add proper metaprogramming, so we can finally have Javascript on Rails, or maybe uh... Djsango?
Rather than taking the logical focus on making money, it is wasting time on shuffling around code and being an architecture astronaut with the main focus on details rather than shipping.
One of the biggest errors one can make is still using Node.js and Javascript on the server in 2025.
I often wonder about a what-if, alternate history scenario where Java had been rolled out to the browser in a more thoughtful way. Poor sandboxing, the Netscape plugin paradigm and perhaps Sun's licensing needs vs. Microsoft's practices ruined it.
I see it being used since over 25 years for the Austrian national broadcaster. Based at least originally on rhino, so it's also mixed with the Java you love. Fail to see the big issue as it's working just fine for such a long time.
> Perhaps the technology that you are using is loaded with hundreds of foot-guns
"Modern features" in Node.js means nothing given the entire ecosystem and its language is extremely easy to shoot yourself in the foot.
I have found this to not be true.
In my experience ASP.NET 9 is vastly more productive and capable than Node.js. It has a nicer developer experience, it is faster to compile, faster to deploy, faster to start, serves responses faster, it has more "batteries included", etc, etc...
What's the downside?
The breadth of npm packages is a good reason to use node. It has basically everything.
I regularly see popular packages that are developed by essentially one person, or a tiny volunteer team that has priorities other than things working.
Something else I noticed is that NPM packages have little to no "foresight" or planning ahead... because they're simply an itch that someone needed to scratch. There's no cohesive vision or corporate plan as a driving force, so you get a random mish-mash of support, compatibility, lifecycle, support, etc...
That's fun, I suppose, if you enjoy a combinatorial explosion of choice and tinkering with compatibility shims all day instead of delivering boring stuff like "business value".
I used to agree but when you have libraries like Mediatr, mass transit and moq going/looking to go paid I’m not confident that the wider ecosystem is in a much better spot.
It's still single-threaded, it still uses millions of tiny files (making startup very slow), it still has wildly inconsistent basic management because it doesn't have "batteries included", etc...
But yes there are downsides. But the biggest ones you brought up are not true.
This is the first I'm hearing of this, and a quick Google search found me a bunch of conflicting "methods" just within the NestJS ecosystem, and no clear indication of which one actually works.
... and of course I get errors with both of those that I don't get with a plain "nest build". (The error also helpfully specifies only the directory in the source, not the filename! Wtf?)Is this because NestJS is a "squishy scripting system" designed for hobbyists that edit API controller scripts live on the production server, and this is the first time that it has been actually built, or... is it because webpack has some obscure compatibility issue with a package?
... or is it because I have the "wrong" hieroglyphics in some Typescript config file?
Who knows!
> There's this thing called worker_threads.
Which are not even remotely the same as the .NET runtime and ASP.NET, which have a symmetric threading model where requests are handled on a thread pool by default. Node.js allows "special" computations to be offloaded to workers, but not HTTP requests. These worker threads can only communicate with the main thread through byte buffers!
In .NET land I can simply use a concurrent dictionary or any similar shared data structure... and it just works. Heck, I can process a single IEnumerable, list, or array using parallel workers trivially.
"But yes there are downsides. But the biggest ones you brought up are not true."
My point is.. what you said is NOT true. And even after you're reply, it's still not true. You brought up some downsides in your subsequent reply... but again, your initial reply wasn't true.
That's all. I acknowledge the downsides, but my point remains the same.
Try it for yourself:
> node
> Promise.all([Promise.reject()])
> Promise.reject()
> Promise.allSettled([Promise.reject()])
Promise.allSettled never results in an unhandledRejection, because it never rejects under any circumstance.
I definitely had a crash like that a long time ago, and you can find multiple articles describing that behavior. It was existing for quite a time, so I didn't think that is something they would fix so I didn't keep track of it.
Maybe a bug in userspace promises like Bluebird? Or an older Node where promises were still experimental?
I love a good mystery!
https://chrysanthos.xyz/article/dont-ever-use-promise-all/
Maybe my bug was something else back then and I found a source claiming that behavior, so I changed my code and as a side effect my bug happened to go away coincidentally?
If you did something like:
It's possible that the heuristic didn't trigger?Bun being VC-backed allows me to fig-leaf that emotional preference with a rational facade.
Not to say Deno doesn't try, some of their marketing feels very "how do you do fellow kids" like they're trying to play the JS hype game but don't know how to.
Deno has a cute mascot, but everything else about it says "trust me, I'm not exciting". Ryan Dahl himself also brings an "I've done his before" pedigree.
Because its Node.js compat isn't perfect, and so if you're running on Node in prod for whatever reason (e.g. because it's an Electron app), you might want to use the same thing in dev to avoid "why doesn't it work??" head scratches.
Because Bun doesn't have as good IDE integration as Node does.
- isolated, pnpm-style symlink installs for node_modules
- catalogs
- yarn.lock support (later today)
- bun audit
- bun update —interactive
- bun why <pkg> helps find why a package is installed
- bun info <pkg>
- bun pm pkg get
- bun pm version (for bumping)
We will also support pnpm lockfile migration next week. To do that, we’re writing a YAML parser. This will also unlock importing YAML files in JavaScript at runtime.
(closing the circle)
online writing before 2022 is the low-background steel of the information age. now these models will all be training on their own output. what will the consequences be of this?
Just because a new feature can't always easily be slipped into old codebases doesn't make it a bad feature.
Yes, it’s 100% junior, amateur mentality. I guess you like pointless toil and not getting things done.
No idea why you think otherwise, I’m over here actually shipping.
Feels unrelated to the article though.
While I can see some arguments for "we need good tools like Node so that we can more easily write actual applications that solve actual business problems", this seems to me to be the opposite.
All I should ever have to do to import a bunch of functions from a file is
"import * from './path'"
anything more than that is a solution in search of a problem
Hopefully this helps! :D
That also happens automatically, it is abstracted away from the users of streams.
Streams can be piped, split, joined etc. You can do all these things with arrays but you'll be doing a lot of bookkeeping yourself. Also streams have backpressure signalling
Manually managing memory is in fact almost always better than what we are given in node and java and so on. We succeed as a society in spite of this, not because of this.
There is some diminishing point of returns, say like, the difference between virtual and physical memory addressing, but even then it is extremely valuable to know what is happening, so that when your magical astronaut code doesn't work on an SGI, now we know why.