Am I missing something? I'm genuinely surprised it was not deployed on a dedicated server from the start. Don't you do a cost analysis before deploying? And if the cost analysis was OK at the initial deploy, why wait for such a big cost difference before migrating? How much money gets wasted in situations like this?
I moved two servers, one from Linode and the other from DO to Hetzner a few months ago, with similar savings. The best part was that the two servers had tens of different sites running, implemented in different languages, with obsolete libraries, MySQL and Redis instances. A total mess. Well: Claude Code migrated it all, sometimes rewriting parts when the libraries were no longer available. Today complex migrations are much simpler to perform, which, I believe, will increase mobility across providers a lot.
What do you mean? All I said was that I recommended everyone watch the excellent Common People episode of the Netflix show Black Mirror available in most regions.
The migration sharing is admirable and useful teaching, thank you!
I see the DigitalOcean vs Hetzner comparison as a tradeoff that we make in different domains all day long, similar to opening DoorDash or UberEats instead of making your own dinner (and the cost ratio is similar too).
I work in all 3 major clouds, on-prem, the works. I still head to the DigitalOcean console for bits and pieces type work or proof of concept testing. Sometimes you just want to click a button and the server or bucket or whatever is ready and here's the access info and it has sane defaults and if I need backups or whatnot it's just a checkbox. Your time is worth money too.
One is about all the steps of zero downtime migration. It's widely applicable.
The other is the decision to replace a cloud instance with bare metal. It saves a lot in costs, but also the loss of fast failover and data backups is priced in.
If I were doing this, I would run a hot spare for an extra $200 and switch the primary every few days, to guarantee that both copies work well and that the switchover is easy. It would be a relatively low price for a massive reduction in the risk of a catastrophic failure.
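That rotation idea can be sketched in a few lines (the host names and the three-day period here are made up; in practice the actual switch would be a DNS update or load-balancer re-point driven by this decision):

```python
from datetime import date

ROTATION_DAYS = 3  # assumed rotation period
HOSTS = ["primary.example.com", "spare.example.com"]  # placeholder names

def active_host(today: date, epoch: date = date(2024, 1, 1)) -> str:
    """Return which of the two boxes should serve traffic today; the
    role flips every ROTATION_DAYS so a broken spare is noticed fast."""
    periods = (today - epoch).days // ROTATION_DAYS
    return HOSTS[periods % 2]

assert active_host(date(2024, 1, 1)) == "primary.example.com"
assert active_host(date(2024, 1, 4)) == "spare.example.com"
```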
> Sometimes you just want to click a button and the server or bucket or whatever is ready and here's the access info and it has sane defaults and if I need backups or whatnot it's just a checkbox. Your time is worth money too.
You're describing Hetzner Cloud, which has been like this for many years. At least 6.
Hetzner also offers Hetzner Cloud API, which allows us to not have to click any button and just have everything in IaC.
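For anyone curious what that looks like, here is a minimal sketch of a create-server call against the Hetzner Cloud API (the field names follow the public API docs, but the server type, image, and location defaults below are only examples and may not match current offerings):

```python
import json
import urllib.request

# Hetzner Cloud API endpoint for servers (see docs.hetzner.cloud)
API = "https://api.hetzner.cloud/v1/servers"

def server_payload(name: str, server_type: str = "cx22",
                   image: str = "debian-12", location: str = "fsn1") -> dict:
    """Build the JSON body for a create-server request."""
    return {"name": name, "server_type": server_type,
            "image": image, "location": location}

def create_server(payload: dict, token: str) -> dict:
    """POST the payload with a Bearer token (needs network + a real token)."""
    req = urllib.request.Request(
        API, data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = server_payload("web-1")
# create_server(payload, token)  # real network call, left commented out
```

In practice you would wrap this in Terraform or the official hcloud tooling rather than raw HTTP, but the API surface is this small.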
What are you doing for DB backups? Do you have a replica/standby? Or is it just hourly or something like that?
Because with a single-server setup like this, I'd imagine that hardware (e.g. SSD) failure brings down your app, and in the case of SSD failure, you then have hours or days downtime while you set everything up again.
Hetzner normally advertises their hardware servers as 2x 1 TB SSD because it's strongly recommended to run them in software RAID 1, for 1 TB net. (Their image installer will default to that.)
Once the first SSD fails after some years and your monitoring catches that, you can either migrate to a new box, find another intermediate solution/replica, or let them hot-swap it while the other drive carries the load.
Of course, going to physical servers loses the redundancy of the cloud, but that's something you need to price in when looking at the savings and deciding your risk model.
And yes, running this without at least daily snapshots/backups to remote storage is insane. That applies to cloud as well, albeit it's easier to set up there.
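The pruning half of such a daily-backup setup can be sketched like this (a grandfather-father-son style policy; the retention numbers are arbitrary placeholders, and shipping the dumps off-box with rsync/rclone/borg is the other half):

```python
from datetime import date, timedelta

def backups_to_keep(all_days, today, daily=7, weekly=4):
    """Keep the last `daily` daily backups, plus one per week (Sundays)
    for `weekly` weeks; everything else is eligible for pruning."""
    keep = set()
    for d in all_days:
        age = (today - d).days
        if 0 <= age < daily:
            keep.add(d)                       # recent daily backup
        elif d.weekday() == 6 and age < 7 * weekly:
            keep.add(d)                       # older weekly (Sunday) backup
    return keep

# Example: 30 days of nightly dumps, pruned on 2024-02-01
days = [date(2024, 2, 1) - timedelta(n) for n in range(30)]
kept = backups_to_keep(days, date(2024, 2, 1))
assert date(2024, 1, 31) in kept      # recent daily, kept
assert date(2024, 1, 14) in kept      # older Sunday, kept weekly
assert date(2024, 1, 15) not in kept  # older Monday, pruned
```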
For over a decade I ran a small scale dedicated and virtual hosting business (hundreds of machines) and the sort of setup you describe works very well. Software RAID across 2 devices, redundant power supplies, backups. We never had a significant data loss event that I recall (significant = beyond user accidentally removing files).
For quite a while we ran single power supplies because they were pretty high quality, but then Supermicro went through a ~6 month period where basically every power supply in machines we got during that time failed within a year, and replacements were hard to come by (because of high demand, because of failures), so we switched to redundant. These were all cost-savings trade-offs. When running single power supplies, we had in-rack automatic transfer switches, so that the single power supplies could survive an A- or B-side power failure.
But, and this is important, we were monitoring the systems for drive failures and replacing them within 24 hours. Ditto for power supplies. If you don't monitor your hardware for failure, redundancy doesn't mean anything.
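The drive-failure half of that monitoring can be as simple as watching /proc/mdstat for a missing RAID member. A sketch (the sample text mimics the mdstat format; verify against your kernel's actual output, and pair it with smartd for pre-failure warnings):

```python
import re

def degraded_arrays(mdstat_text: str) -> list:
    """Return names of md arrays whose status line shows a missing
    member, e.g. [U_] instead of [UU], as seen in /proc/mdstat."""
    degraded, current = [], None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)          # remember which array we're in
        status = re.search(r"\[([U_]+)\]\s*$", line)
        if status and current and "_" in status.group(1):
            degraded.append(current)      # an underscore marks a lost drive
    return degraded

SAMPLE = """\
md0 : active raid1 sda1[0] sdb1[1]
      974400 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sdb2[1]
      104792064 blocks super 1.2 [2/1] [U_]
"""
assert degraded_arrays(SAMPLE) == ["md1"]  # md1 lost a member
```

In production you'd read /proc/mdstat directly on a timer and page on any non-empty result.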
> Because with a single-server setup like this, I'd imagine that hardware ...
Yeah. This blog post reads like it was written by someone who didn't think things through and just focused on hyper-aggressive cost-cutting.
I bet their DigitalOcean VM did live migrations and supported snapshots.
You can get that at Hetzner but only in their cloud product.
You absolutely will not get that with Hetzner bare metal. If your HD or other component dies, it dies. Hetzner will replace the HD, but it's up to you to restore from scratch. Hetzner is very clear about this in multiple places.
Can you elaborate? I'm coming up with similar designs recently (static site plus redundant servers) but my designs so far assume no database and ephemeral interactions. (Realtime multiplayer arcade games.)
Curious what the delta to pain-in-ass would be if I want to deal with storing data. (And not just backups / migrations, but also GDPR, age verification etc.)
It's possible no one will care much if it's down even for that long. I couldn't care less if my HOA mobile app was down even for a week for example. We don't need constant uptime for everything.
Don’t forget that integrity matters as much as availability in many applications. You might not mind if your HOA takes time to bring a server back up but you’d care a lot more if they lost the financial records or weren’t able to recover from a ransomware attack.
If that's the tradeoff they're willing to make, who are you to say that they're doing it wrong?
Not every app needs 24/7 availability. The vast majority of websites out there will not suffer any serious consequences from a few hours of downtime (scheduled or otherwise) every now and then. If the cost savings outweigh the risk, it can be a perfectly reasonable business decision.
A more interesting question would be what kind of backup and recovery strategy they have, and which aspects of it (if any) they had to change when they moved to Hetzner.
This is something we've[0] done a number of times for customers coming from various cloud providers. In our case we move customers onto a multi-server (sometimes multi-AZ) deployment in Hetzner, using Kubernetes to distribute workloads across servers and provide HA. Kubernetes is likely a lot for a single node deployment such as the OP, but it makes a lot more sense as soon as multiple nodes are involved.
For backups we use both Velero and application-level backup for critical workloads (i.e. Postgres WAL backups for PITR). We also ensure all state is on at least two nodes for HA.
We also find bare metal to be a lot more performant in general. Compared to AWS we typically see service response times halve. It's not that virtualisation inherently has that much overhead; rather, it's everything else. E.g., bare metal offers:
- Reduced disk latency (NVMe vs network block storage)
- Reduced network latency (we run dedicated fibre, so inter-az is about 1/10th the latency)
- Less cache contention, etc [1]
Anyway, if you want to chat about this sometime just ping me an email: adam@ company domain.
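The Postgres WAL backups for PITR mentioned above boil down to two pieces: an archive_command that copies each finished WAL segment off-box, and recovery settings for the restore. A hedged sketch of the restore side (setting names follow the PostgreSQL docs; the archive path and timestamp are placeholders):

```python
def pitr_settings(archive_dir: str, target_time: str) -> dict:
    """Recovery settings for point-in-time recovery from archived WAL."""
    return {
        # pull each needed WAL segment back from the archive
        "restore_command": f"cp {archive_dir}/%f %p",
        # replay WAL up to this moment, then stop
        "recovery_target_time": target_time,
        # come up as a writable primary once the target is reached
        "recovery_target_action": "promote",
    }

settings = pitr_settings("/backups/wal", "2026-02-01 12:00:00")
assert settings["restore_command"] == "cp /backups/wal/%f %p"
```

Tools like pgBackRest or WAL-G manage both halves for you and are usually the safer choice than hand-rolled cp commands.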
Given the premise that zero day exploits are going to be frequent going forward, I feel like there is a new standard for secure deployment.
Namely, all remote access (including serving HTTP) must be managed by a major player big enough to be part of private disclosure (e.g. Project Glasswing).
That doesn't mean we have to use AWS et al for everything, but some sort of zero trust solution actively maintained by one of them seems like the right path. For example, I've started running on Hetzner with Cloudflare Tunnels.
That's a trend which is more and more common nowadays.
I wish the industry would adopt more zero-knowledge methods in this regard. They exist and are mathematically proven, but it seems there is no real adoption.
- OpenAI wants my passport when topping up 100 USD
- Bolt recently wanted my passport number to use their service
- Anthropic seems to want passports for new users too
- Soon, age restrictions in the OS or on websites
I wish there were a law (in Europe and/or the US) to minimize or forbid this kind of identity verification.
I want to support companies in not allowing misuse of their platforms; at the same time, my full passport photo is not their concern, especially in B2B business, in my opinion.
They have to operate within the laws of the countries they’re physically located in. Those countries want to know that they’re not hosting illegal content, providing services to crime rings, Russia or North Korea, etc.
If Hetzner allows you to host something and you use it for illegal acts, they aren’t going to jail to shield you for €10/month.
I don't think it's fair to call AWS a scam. It's complicated and powerful, and it charges a lot for many services compared to a DIY approach. But you can see the prices transparently on its site, it provides a free tier to try most services out, it is fairly good about long-term support for services and how it handles forced upgrades when they become necessary, and generally it has an OK reputation for customer support even if something unexpected and very bad happens. You're certainly paying a price for the convenience and the brand, but I don't think that's a scam if you're making an informed choice. If you want to save money, you can replace RDS with Postgres running on VMs, but the trade-off is that you then have to manage your database infrastructure yourself.
Each has their trade offs. AWS absolutely has a high premium but Hetzner has some quirks.
Recently we had several of our VMs offline: they were apparently upgrading the large volume-storage pools they run, and disks suddenly died in two large pools. It took them 3 days to resolve.
Hetzner has no integrated option to back up volumes; it's roll-your-own :/ You also can't control volume distribution across their storage nodes for redundancy.
I've had excellent experiences with Percona xtrabackup for MySQL migration and backups in general. It runs live with almost no performance penalty on the source. It works so well that I always wait for them to release a new matching version before upgrade to a new MySQL version.
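A sketch of how an xtrabackup invocation for that workflow can be assembled (flag names follow the Percona XtraBackup docs; the auth details here are placeholders and real setups usually use a login-path or defaults file):

```python
def backup_cmd(target_dir: str, user: str = "root",
               stream: bool = False) -> list:
    """Assemble a live, non-blocking backup invocation."""
    cmd = ["xtrabackup", "--backup",
           f"--user={user}", f"--target-dir={target_dir}"]
    if stream:
        # stream to stdout so it can be piped over ssh during a migration
        cmd.append("--stream=xbstream")
    return cmd

def prepare_cmd(target_dir: str) -> list:
    """Second phase: make the copied files consistent before restore."""
    return ["xtrabackup", "--prepare", f"--target-dir={target_dir}"]

assert backup_cmd("/backups/full")[:2] == ["xtrabackup", "--backup"]
```

The backup/prepare split is what lets it run live: the backup copies files while the server serves traffic, and prepare replays the InnoDB log afterwards.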
I did the same this year. I really liked DigitalOcean though, compared to more complex cloud offerings like AWS. AWS feels like spending more for the same complexity. At least DO feels like it does save time and mental bandwidth. Still, the performance of cloud VPS is abysmal for the price. I'm now on Hetzner + k3s + Flux CD, with Cloudflare for file storage (R2) and caching. I run Postgres on the same machine with frequent dump backups. If I ever need realtime read replicas, I'll likely just migrate the DB to Neon or something and keep Hetzner with snapshots for running app containers.
I wish we had something like Hetzner dedicated near us-east-1.
They do offer VPS in the US and the value is great. I was seriously looking at moving our academic lab over from AWS but server availability was bad enough to scare me off. They didn't have the instances we needed reliably. Really hoping that calms down.
I had my fair share of Hyperscaler -> $something_else migrations during the past year. I agree, especially with rented hardware the price-difference is kind of ridiculous.
The issue, though, is that you lose the managed part of the whole cloud promise. For ephemeral services this is not a big deal, but for persistent stuff like databases, where you would like to have your data safe, it is kind of an issue, because it shifts additional effort (and therefore cost) onto your operations team.
For smaller setups (attention shameless self-promotion incoming) I am currently working on https://pellepelster.github.io/solidblocks/cloud/index.html which allows to deploy managed services to the Hetzner Cloud from a Docker-Compose like definition. E.g. a PostgreSQL database with automatic backup and disaster recovery.
The problem with actually owning hardware is that you need a lot of it, and need to be prepared to manage things like upgrading firmware. You need to keep on top of the advisories for your network card, the power unit, the enterprise management card, etc. etc. If something goes wrong someone might need to drive in and plug in a keyboard.
Eventually we admitted to ourselves we didn't want those problems.
“Your own server in a colo” means going to the colo to swap RAM or an SSD when something goes wrong. You rent a server and the benefit is that the provider has spare parts on hand and staff to swap parts out.
You have to deal with a lot more stuff. You have to order and pay for a server (capex), mount it somewhere, wire up lights-out management and recovery, and do a few more tasks that the provider has already done.
Then, say if the motherboard gives up, you have to do quite a bit of work to get it replaced, you might be down for hours or maybe days.
For a single server I don't think it makes sense. For 8 servers, maybe. Depends on the opportunity cost.
Have you done this yourself? If you haven't I think you'd discover server hardware is actually shockingly reliable. You could go years without needing to physically touch anything on a single machine. I find that people who are used to cloud assume stuff is breaking all the time. That's true at scale, but when you have a handful of machines you can go a very long time between failures.
If you have some kind of failover redundancy for services across your systems to mitigate this, then great. With a proper setup, no worries. I guess it depends how much you want to take on vs hand off.
When some component in OP's dedicated server fails, they will find out what that extra DO money was going toward. The DO droplet will live migrate to a healthy server. OP gets to take an extended outage while they file a Hetzner service ticket and wait for a human to perform the hardware replacement. Do some online research and see how long this often takes. I don't believe this Hetzner dedicated server model even has redundant PSUs.
Anyone who thinks DO and Hetzner dedicated servers are fungible products is making a mistake. These aren't the same service at all. There are savings to be had but this isn't a direct "unplug DO, plug in Hetzner" situation.
I moved from Hetzner to DO because my Hetzner IPs kept getting spoofed and then Hetzner would shut down my servers for "abuse". This hasn't happened once on DO, and I'm happy to pay a little more.
The comparison is somewhat skewed, since they went from an (expensive) virtual server to a cheaper dedicated server (hardware).
One of the new risks is that if anything critical happens with the hardware, network, switch, etc., then everything is down until someone at Hetzner goes and fixes it.
With a virtual server, it'll just get started on a different host straight away. Hypervisors also usually have 2 or more network connections, etc.
And hopefully they also have some backups set up.
It's still a huge amount of savings and I'd probably do the same if I were in their shoes, but there are tradeoffs when going from virtual to dedicated hardware.
> We need more competition across the board. These savings are insane and DO should be sweating, right?
As the other person already said here, this blog post comparison is skewed.
BUT
EU cloud providers are much better value for money than the US providers.
The US providers will happily sit there nickel-and-diming you, often with deliberately obscure price sheets (hello AWS ;).
EU cloud provider pricing is much clearer and generally you get a lot more bang for your buck than you would with a US provider. Often EU providers will give you stuff for free that US providers would charge you for (e.g. various S3 API calls).
Therefore even if this blog post is skewed and incorrect, the overall argument still stands that you should be seriously looking at Hetzner or Upcloud or Exoscale or Scaleway or any of the other EU providers.
In addition there is the major benefit of not being subject to the US CLOUD and PATRIOT acts. Which despite what the sales-droids will tell you, still applies to the fake-EU provided by the US providers.
And DigitalOcean customer support is non-existent. I had a mail server down and they cut service instead of trying to contact me in any other way. But worse, when they do that, they immediately destroy your data without any possibility to restore. Or at least that's what they told me with their bog standard, garbage support replies. I was a customer for nearly a decade. After it happened, I realized that never would have happened on GCP, AWS, etc. Because they take billing seriously with multiple contact info, a recovery period, etc. All the things a company would be expected to do to maintain good relationships with customers during a billing issue that lasts a few weeks. That was a couple of years ago, so maybe they fixed some stuff. But the complete lack of support and unprofessional B2B practices was an eye opener.
DigitalOcean is just absolutely not an enterprise solution. Don't trust it with your data.
Oh, and did I mention I had been paying the upcharge for backups the entire time?
It's tough to work with these publicly traded companies. They need to boost prices to show revenue growth. At some point, they become a bad deal. I've already migrated from DO. Not because of service or quality, but solely because of price.
As such, I doubt the noted price reduction is reproducible. Combine this with Hetzner's sudden deletions of user accounts and services without warning, and it's a bad proposition. Search r/hetzner, and r/vps for Hetzner, with: banned, deleted, terminated; there are many reports. What should stun you even more is that Hetzner could ostensibly be closely spying on user data and workloads, even offline workloads, without which they wouldn't even know whom to ban.
The only thing that Hetzner might potentially be good for is to add to an expendable distributed compute pool, one that you can afford to lose, but then you might as well also use other bottom-of-the-barrel untrustworthy providers for it, e.g. OVH.
It's a nice chunk of change, which you could use for other purposes. It might not make or break the company, but it could pay for something that actually generates business.
If you only have Rs. 100 in your pocket, you will think hard before spending Rs. 10. If you have Rs. 1000 in your pocket, you will not mind spending Rs. 10. That said, even if you are financially sound, why in the world would you want to pay $14k extra for a similar service that is available cheaper? That money could be better utilised elsewhere.
I suspect with that money you could get a full-time customer support person for your business. Now think about it: what creates more value for your customers, having your infra on DigitalOcean or having better customer support?
How deep does this go?
https://www.netflix.com/gb/title/70264888?s=a&trkid=13747225...
So it's a Claude ad inside a Hetzner ad inside a decent grammar ad.
https://docs.hetzner.cloud/
They could, but they didn't; instead they wrote that blog post, which, even being generous, is still kind of hard to avoid describing as misleading.
I would not have written the post I did if they had presented a multi-node bare-metal cluster or whatever more realistic config.
What do you feel was misleading?
I agree with the other poster: this is fine for a toy site or two, but low-quality manual DR isn't good for production.
Recently, I did it in PostgreSQL using pg_auto_failover. I have 1 monitor node, 1 primary, and 1 replica.
Surprisingly, once you get the hang of PostgreSQL configuration and its gotchas, it’s also very easy to replicate.
I’m guessing MySQL is even easier than PostgreSQL for this.
I also achieved zero downtime migration.
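For reference, the three-node pg_auto_failover bootstrap described above is only a handful of commands. A sketch of assembling them (subcommand names follow the pg_autoctl docs; the hostnames and paths are placeholders):

```python
def monitor_cmd(pgdata: str, hostname: str) -> list:
    """pg_autoctl invocation for the monitor node."""
    return ["pg_autoctl", "create", "monitor",
            "--pgdata", pgdata, "--hostname", hostname]

def node_cmd(pgdata: str, hostname: str, monitor_uri: str) -> list:
    """pg_autoctl invocation for a data node; the first node to register
    with the monitor becomes primary, the second comes up as a replica."""
    return ["pg_autoctl", "create", "postgres",
            "--pgdata", pgdata, "--hostname", hostname,
            "--monitor", monitor_uri]

# one monitor + two data nodes = the 3-node setup described above
cmds = [
    monitor_cmd("/data/monitor", "mon.example.com"),
    node_cmd("/data/pg", "db1.example.com",
             "postgres://autoctl_node@mon.example.com/pg_auto_failover"),
    node_cmd("/data/pg", "db2.example.com",
             "postgres://autoctl_node@mon.example.com/pg_auto_failover"),
]
```

The monitor then handles health checks, promotion, and pg_hba/replication wiring for you.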
[0] https://lithus.eu
[1] I wrote more on this 6 months ago: https://news.ycombinator.com/item?id=45615867
Anyone else doing something similar?
How much latency does this add?
Absolutely no to this. Reason enough to go with AWS or alternatives. And why are people willingly giving it to a hosting provider?
Unnecessarily exposing yourself to identity theft if they get compromised.
Sure, it cost me £6/mo to serve ONE lambda on AWS (and perhaps 500 requests per month). Sure it was awesome and "proper". But crazy expensive.
I host it now (and 5 similar things) for free on Cloudflare.
But if you need what AWS provides, you'll get that. And that means sometimes it's not the most cost-effective place.
It's worse than Oracle and they don't even use lawyery contracts.
The technology itself is the tendrils.
https://slitherworld.com
My foray into multiplayer games.
Asking the obvious question: why not your own server in a colo?
At one point in the early 2000s, my brother was soldering new capacitors onto Dell RAID cards. (I like to call that full-stack ops.)
Have you seen what the LLM crowd have done to server prices?
Sounds like from the requirement to live migrate you can't really afford planned downtime, so why are you risking unplanned downtime?
Full of scanners, script kiddies and maybe worse.
Moving away from the US also felt great.
> $1,432 to $233
with a 5/6 difference in price, even a 40% price increase on the cheaper side would not materially change the decision to move between providers
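The figures quoted in the thread, checked quickly:

```python
# Monthly figures quoted in the thread
do_monthly, hetzner_monthly = 1432, 233

savings_pct = 100 * (do_monthly - hetzner_monthly) / do_monthly
annual_savings = (do_monthly - hetzner_monthly) * 12
assert round(savings_pct) == 84  # roughly 84% cheaper, i.e. about 5/6
assert annual_savings == 14388   # ~$14.4k/year

# even a hypothetical 40% Hetzner price hike barely dents the gap
hiked = hetzner_monthly * 1.4
assert round(100 * (do_monthly - hiked) / do_monthly) == 77
```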
Cloud is ludicrously marked up.
Not everyone likes wasting money.