The sweet spot thing is the real insight here and nobody seems to be talking about it.
Frontier models get hyped for their maximum task horizon, but that's also where they're 10-30x more expensive per hour than their optimal range. You're paying a massive premium for the hardest tasks and still failing half the time.
Honestly the practical takeaway is pretty boring: just break your work into smaller chunks. Not because the models can't handle longer tasks, but because the economics at shorter task lengths are just way better. The labs are racing to push the horizon out; the smart move for anyone actually paying the bills is to stay near the sweet spot and orchestrate from there.
Model specialization is in all likelihood going to be the way forward, both for cost and quality of output. Smaller, cheaper models specialized in their task domains. Many of the current model vendors are already attempting to do this under the hood.
Generalist models have similar problems as generalist humans. The proverbial "Jack of all trades, master of none."
The crazy part about this is that if you compare it not to US wages but to European ones, for instance the UK, where the median software engineering wage is somewhere around $35-40 an hour, then humans are already cheaper than the best models.
That's what I'm telling my non-tech friends when they ask, "Given how fast AI and robotics are progressing, will a robot soon take my job as an electrician?"

I reply:

"My job as a software engineer will be replaced sooner than yours, because for your job the robot will be much more expensive than minimum wage, and you don't need to buy a human."
I wouldn't discount the worries outside of tech however. A cheap human laborer that leans on AI to provide their checklist and described actions for tasks is definitely in scope to replace hard won hands on knowledge from experience in industry professionals. You no longer have to watch 30 YouTube videos to learn and distill a task as a layperson in a field involving manual labor.
I rebuilt my house from the studs, did my own electrical and plumbing, etc. This took a significant amount of training and research back in the day. I worked under my father for a decade before making this attempt. My father is a journeyman electrician and carpenter. I think any able bodied human could now forgo much of that and simply get a breakdown of actions to perform in a particular order and get similar results.
I tried to use GPT for various handy work. While it does help, I don't think it can adequately substitute for hard-won hands. Maybe next gen, if you provide a video stream and the LLM can view the exact situation. Even then, though, I wouldn't discount the difficulty of learning dexterity when you've been a coddled white-collar worker your whole life.
I wasn't suggesting white collar workers attempt blue collar work. I'm merely saying that cheap day laborers with basic experience won't have to lean on their industry mentorship model (journeyman etc) as much and can complete jobs on their own. On the cheap.
Today's models are insufficient for someone with 0 hands on experience, especially when limited to text modalities. However, I don't doubt the future ones you describe are coming though, if they're not already here.
Humans are not cheaper than AI models. Let's go with $35 an hour.
24 × 365 = 8,760 hours
8,760 × $35 = $306,600
Yeah, a human working non stop will run $300k.
Now, you said the "best" models. I personally reckon that 80-90% of most work doesn't need the best models. It needs a good model, and good models are super cheap; e.g., the tiny gemma4 or qwen3.6 models will be sufficient for most of that work.
AI cloud usage cost goes up near-linearly, but local cost doesn't. Say someone built an under-$10k system, with perhaps dual RTX 5090s. That same system could easily run 20 parallel requests, with electricity as the only running cost, 24/7. Twenty humans working those same hours would run ~$6 million for the year, and they also carry overhead of electricity, real estate, and other things that far exceed the cost of electricity for just the AI.
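For what it's worth, the arithmetic behind that comparison looks roughly like this. The wage, rig price, amortization period, and power draw (~1.5 kW at $0.15/kWh) are all made-up round numbers, not measurements:

```python
# Back-of-envelope: 20 always-on human-equivalents vs. a local inference rig.
HOURS_PER_YEAR = 24 * 365            # 8,760

# Humans: 20 of them at $35/hr, round the clock (hypothetical wage)
human_cost = 35 * 20 * HOURS_PER_YEAR

# Local rig: $10k of hardware amortized over, say, 3 years, plus electricity
# (~1.5 kW continuous at $0.15/kWh -- assumed figures)
rig_capex_per_year = 10_000 / 3
rig_power_cost = 1.5 * 0.15 * HOURS_PER_YEAR
rig_cost = rig_capex_per_year + rig_power_cost

print(f"humans: ${human_cost:,.0f}/yr")   # humans: $6,132,000/yr
print(f"rig:    ${rig_cost:,.0f}/yr")
```

Even if the rig's real costs are several times these guesses, the gap is three orders of magnitude.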
The thing AI agents are lacking is agency and autonomy. As they get closer and closer, the majority of humans competing in the same sort of tasks will have no chance.
I'm going by the graph in the original article, not stating a point of my own. I mean, if their cost lines on the graph are to be believed, then the number I quoted is cheaper.
I have a lot of AI written software, and it doesn’t cost me anywhere close to what I’ve been quoted for other software projects in the past. I’ve had a guy spend over six months, full-time, on a CRUD application for permits. He didn’t even finish. I made a working prototype in Django, which was tossed to re-implement in PHP for some reason.
My understanding is that this is normalized to the "best human" for the tasks.
An AI only doing a task correctly 50% of the time may in fact be better than your N% chance of hiring a highly capable human for that task, especially for contracting a human for a 1-2 hour task.
But your successful use of AI is still predicated on a human who can judge output and break the work into smaller tasks that fit the skill ceiling of the AI, which is currently no more than tasks that take a skilled human 2 hours.
Once a model is stable and good enough, for example Sonnet 4.6 or GPT 5.4 (or something else in the future), it can be burned into hardware like a Taalas chip, reducing the cost many times over and increasing the speed. At some point we can rely on an older model while staying productive with it.
No, burning models into hardware won't make them faster or reduce the cost. It will cost way more for similar performance as what you would get with a gpu. I am not telling you why, you can go figure that out on your own.
With some research, that chip appears like it would cost about $300-$400 to manufacture, die only.
For an 8B parameter model.
Opus is estimated at 500B-2T parameters. At that scale you’re past reticle limits and need HBM and multi-die packaging, which means you’ve essentially built an inference ASIC (like Groq or Etched) rather than something categorically cheaper than GPUs. The “burned into silicon” advantage mostly evaporates at frontier scale.
The cutting edge, max size models will likely stay in the GPU space for a long time.
But these models are not needed for most general requests.
With a fine tuned 30B quantisized model you can serve a large portion of requests with around 32GB of RAM.
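The 32GB figure checks out on a napkin, assuming a 4-bit quant and ~20% runtime overhead (both assumptions, not specs of any particular model):

```python
# Rough memory footprint of a quantized 30B-parameter model (assumed figures).
params = 30e9
bits_per_weight = 4          # Q4-style quantization
overhead = 1.2               # ~20% for KV cache, activations, runtime (guess)

weights_gb = params * bits_per_weight / 8 / 1e9   # bits -> bytes -> GB
total_gb = weights_gb * overhead

print(f"weights: {weights_gb:.1f} GB, with overhead: {total_gb:.1f} GB")
# weights: 15.0 GB, with overhead: 18.0 GB -- comfortably inside 32 GB.
# An 8-bit quant (~30 GB before overhead) would not fit.
```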
Free users will likely only get these kinds of models.
At some point we will get these models in hardware and the cost per token will be minimal.
Does the cost scale linearly or superlinearly? What does the $300-$400 price data point tell us in relation to parameter density?
No gotchas here. I genuinely don't know that 8B parameters is in a zone with significant decreasing marginal returns -- too far out of my knowledge area but genuinely curious.
Die size increases cost steeply: a larger die means fewer chips per wafer, and yield falls roughly exponentially with die area.
I expect that this kind of burned-in model is also very difficult to verify (how do you know if some of the weights are off), and not amenable to partial disablement to increase yield. For CPUs, you just laser-disable bad cores. You can't forgo part of a neural net.
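The yield point can be illustrated with the classic Poisson defect model (yield ≈ e^(-defect density × die area)). Defect density and wafer cost here are illustrative, not real process data:

```python
import math

# Poisson yield model: probability a die has zero defects = exp(-D * A).
DEFECTS_PER_CM2 = 0.1                    # illustrative defect density
WAFER_AREA_CM2 = math.pi * (15 ** 2)     # 300 mm wafer ~ 706 cm^2

def cost_per_good_die(die_area_cm2, wafer_cost=10_000):
    dies_per_wafer = WAFER_AREA_CM2 / die_area_cm2   # ignores edge loss
    yield_frac = math.exp(-DEFECTS_PER_CM2 * die_area_cm2)
    return wafer_cost / (dies_per_wafer * yield_frac)

# Doubling die area more than doubles cost: fewer dies AND lower yield,
# and unlike CPUs you can't fuse off the bad parts of a burned-in net.
for area in (1, 2, 4, 8):
    print(f"{area} cm^2 -> ${cost_per_good_die(area):.0f} per good die")
```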
While I understand why they used the METR data, a cleaner look would be against the current cost-optimal frontier of open models (e.g. GLM-5.1 and MiniMax-M2.7). That paints a very different picture. Comparing just the frontier models at the time of the METR report invariably leads to looking at providers who are pushing the limits of cost at the time of the report.
GPT-5 was shown as being on the costly end, surpassed by o3 at over $100/hr. I can't directly compare to METR's metrics, but a good proxy is the cost of the Artificial Analysis suite. GLM-5.1 is less than half the cost to complete the suite of GPT-5 and is dramatically more capable than both GPT-5 and o3.
So while their analysis is interesting, it points towards the frontier continuing to test the limits of acceptable pricing (as Mythos is clearly reinforcing) and the lagging 6-12 months of distillation and refinement continuing to bring the cost of comparable capabilities to much more reasonable levels.
Calculating hourly costs for these models makes me think that the decision of when to hire an SWE vs. increase use of AI may follow a similar pattern to the decision to use cloud compute vs. on-premises. I don’t cost $120/hr (incl. fringe), but my employer pays my salary all year long, no matter if I am working or on vacation. Whereas if they use an AI model to do the same work, they may be happy to pay $120/hr or more, since they may only use the model for a small fraction of 2080 hours per year, so they’d still save money, and not have a messy human to deal with.
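The cloud-vs-on-prem analogy reduces to a utilization break-even. All figures below are hypothetical, chosen only to mirror the $120/hr example:

```python
# A salaried human costs the same all year; a metered model costs only when
# used. Below some utilization, a higher hourly AI rate still comes out ahead.
human_hourly_equiv = 120              # fully-loaded cost over 2080 hr/yr
human_cost_per_year = human_hourly_equiv * 2080

ai_rate = 200                         # suppose the metered rate is higher
break_even_hours = human_cost_per_year / ai_rate

print(f"break-even at {break_even_hours:.0f} hr/yr "
      f"({break_even_hours / 2080:.0%} utilization)")
# break-even at 1248 hr/yr (60% utilization): under ~1248 hours of actual
# model use per year, paying $200/hr beats carrying the salary.
```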
I remain convinced that time-based estimates won't stay the primary cost measure in software engineering, and this transition will happen rapidly. We're going to shift to a capex/token-spend model for project estimates, where the business will say "ok, I do want that feature for $1000 in tokens".
I agree with you directionally that project estimates are/will be affected by this, but I don't see a scenario in which time is completely removed from the equation with respect to projects and the estimates to execute on them. We're all constrained by time, a finite resource. It's always a factor in business.
> On many task lengths (including those near their plateau) they cost 10 to 100 times as much per hour. For instance, Grok 4 is at $0.40 per hour at its sweet spot, but $13 per hour at the start of its final plateau. GPT-5 is about $13 per hour for tasks that take about 45 minutes, but $120 per hour for tasks that take 2 hours. And o3 actually costs $350 per hour (more than the human price) to achieve tasks at its full 1.5 hour task horizon. This is a lot of money to pay for an agent that fails at the task you’ve just paid for 50% of the time — especially in cases where failure is much worse than not having tried at all.
Ord's frontier-cost argument is right as far as it goes, but the piece doesn't engage with the counter-trend: inference cost for a fixed capability level has been falling faster than Moore's law. Pushing the frontier will likely keep getting more expensive and concentrated among a few players, while the intelligence needed for more mundane tasks keeps getting cheaper.
That raises a question: if practical-tier inference commoditizes, how does any company justify the ever-larger capex to push the frontier?
OpenAI's pitch is that their business model should "scale with the value intelligence delivers." Concretely, that means moving beyond API fees into licensing and outcome-based pricing in high-value R&D sectors like drug discovery and materials science, where a single breakthrough dwarfs compute cost. That's one possible answer, though it's unclear whether the mechanism will work in practice.
This effect is likely even larger when you consider that the raw cost per inferred token grows linearly with context, rather than being constant. So longer tasks performed with higher-context models will cost quadratically more. The computational cost also grows super-linearly with model parameter size: a 20B-active model is more than four times the cost of a 5B-active model.
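The quadratic claim follows from summing a linearly growing per-token cost; a toy check (the unit cost is a made-up constant, only the ratio matters):

```python
# If the marginal cost of generating token t scales with the context length t,
# the total cost of an n-token task is 1 + 2 + ... + n ~ n^2 / 2: quadratic.
def task_cost(n_tokens, unit_cost=1e-9):   # unit_cost: arbitrary constant
    return sum(t * unit_cost for t in range(1, n_tokens + 1))

cost_1x = task_cost(10_000)
cost_2x = task_cost(20_000)
print(f"2x longer task costs {cost_2x / cost_1x:.1f}x more")   # ~4.0x
```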
Context caching is really storing the KV-cache for reuse. It saves running prefill for that part of the context, but tokens referencing that KV-cache will still cost more.
Where are you getting hourly costs for private models? The rate limits are pretty arbitrary. If you maxed out on API tokens it would be something like $10k/hour.
No, but the AI labs would love to frame it this way so they can continue to nerf models and increase prices while they use the cheap, highly performant, highly powerful models internally to replace all of your businesses.
I'm an AI engineer with a computer science background and some actual AI background. I am trying to get Claude to write good motivation letters for applying to jobs. It currently scores a 6 out of 10; I'm still much better. And it has access to all the relevant parts of my psychology degree and data about writing good motivation letters.
All I can say is: the motivation letters don't look like they're written by AI anymore.
Pretty much every major American inference provider claims to make a profit on API-based inference. Consumer plans might be subsidized overall, but it's hard to say, since they're a black box and some consumers don't fully use their plans.
Selling inference is not fundamentally different from selling compute - you amortize the lifetime cost of owning and operating the GPUs and then turn that into a per-token price. The risk of loss would be if there is low demand (and thus your facilities run underutilized), but I doubt inference providers are suffering from this.
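The amortization described above is a one-liner once you pick numbers. Everything here (capex, lifetime, opex, utilization, throughput) is a hypothetical round number, not any provider's actual economics:

```python
# Turning GPU ownership into a break-even per-token price.
gpu_capex = 30_000                 # one accelerator (assumed)
lifetime_years = 4                 # assumed useful life
opex_per_year = 5_000              # power, cooling, hosting share (assumed)
utilization = 0.6                  # fraction of time serving paid traffic
tokens_per_second = 5_000          # batched throughput (assumed)

seconds_serving = lifetime_years * 365 * 24 * 3600 * utilization
lifetime_tokens = tokens_per_second * seconds_serving
lifetime_cost = gpu_capex + opex_per_year * lifetime_years

breakeven_per_million = lifetime_cost / lifetime_tokens * 1e6
print(f"break-even: ${breakeven_per_million:.3f} per million tokens")
# Anything priced above this clears a margin; the real risk is the
# utilization assumption, exactly as the comment above says.
```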
Where the long-term payoff still seems speculative is for companies doing training rather than just inference.
There’s a lot of debate over what the useful lifespan of the hardware is though. A number that seems very vibes based determines if these datacenters are a good investment or disastrous.
I specifically remember this debate coming up when the H100 was the only player on the table and AMD came out with a card that was almost as fast, at least in benchmarks, but about half the cost. I haven't seen a follow-up with real-world use, though. As a home labber, I know that in the last few weeks AMD support has gotten impressively useful, even covering CUDA if you enjoy pain and suffering.
What I'm curious about is the other stuff out there, such as the ARM and tensor chips.
If they were, they would show evidence, because they'd pull in more investment. I don't believe their claim that they make profits on inference, especially not with reports like this coming out.
All of them. It's simply impossible to sell tokens by usage at a loss now. You'll be arbitraged to death in a few days. It only makes sense to subsidize cost if you're selling a subscription.
Interesting read. I don't know if I quite buy the evidence, but it's definitely enough to warrant further investigation. It also matches up with my personal experience, which is that tools like Claude Code are burning through more and more tokens as we push them to do bigger and bigger work. But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.
So: I buy that the cost of frontier performance is going up exponentially, but that doesn't mean there is a fundamental link. We also know that benchmark performance of much smaller/cheaper models has been increasing (as far as I know METR only looks at frontier models), so that makes me wonder if the exponential cost/time horizon relationship is only for the frontier models.
> But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.
Do we? Because elsewhere in the thread there's people claiming they are profitable in API billing and might be at least close to break even on subscription, given that many people don't use all of their allowance.
Until there is some drastic new hardware, we are going to see a situation similar to proof of work, where a small group hoards the hardware and can collude on prices.
Difference is that the current prices have a lot of subsidies from OPM
Once the narrative changes to something more realistic, I can see prices increasing across the board. Forget $200/month for Codex Pro; expect $1,000/month or something similar.
So it's a race between new supply of hardware, with new paradigm shifts that can hit the market, vs. the tide going out in the financial markets.
For inference, there is already a 10x improvement possible over a setup based on NVIDIA server GPUs, but volume production, etc... will take a while to catch up.
During inference the model weights are static, so they can be stored in High Bandwidth Flash (HBF) instead of High Bandwidth Memory (HBM). Flash chips are being made with over 300 layers and they use a fraction of the power compared to DRAM.
NVIDIA GPUs are general purpose. Sure, they have "tensor cores", but that's a fraction of the die area. Google's TPUs are much more efficient for inference because they're mostly tensor cores by area, which is why Gemini's pricing is undercutting everybody else despite being a frontier model.
New silicon process nodes are coming from TSMC, Intel, and Samsung that should roughly double the transistor density.
There's also algorithmic improvements like the recently announced Google TurboQuant.
Not to mention that pure inference doesn't need the crazy fast networking that training does, or the storage, or pretty much anything other than the tensor units and a relatively small host server that can send a bit of text back and forth.
> Flash chips are being made with over 300 layers and they use a fraction of the power compared to DRAM.
Isn't reading from flash significantly more power intensive than reading DRAM? Anyway, the overhead of keeping weights in memory becomes negligible at scale, because you're running large batches and sharding a single model over large numbers of GPUs. (And that needs the crazy fast networking to make it work; you get too much latency otherwise.)
For a given capacity of memory, Flash uses far less power than DRAM, especially when used mostly for reads.
> becomes negligible at scale
Nothing is negligible at scale! Both the cost and power draw of the HBMs is a limiting factor for the hyperscalers, to the point that Sam Altman (famously!) cornered the market and locked in something like 40% of global RAM production, driving up prices for everyone.
> sharding a single model over large amounts of GPUs
A single host server typically has 4-16 GPUs directly connected to the motherboard.
A part of the reason for sharding models between multiple GPUs is because their weights don't fit into the memory of any one card! HBF could be used to give each GPU/TPU well over a terabyte of capacity for weights.
Last but not least, the context cache needs to be stored somewhere "close" to the GPUs. Across millions of users, that's a lot of unique data with a high churn rate. HBF would allow the GPUs to keep that "warm" and ready to go for the next prompt at a much lower cost than keeping it around in DRAM and having to constantly refresh it.
> For a given capacity of memory, Flash uses far less power than DRAM, especially when used mostly for reads.
Flash has no idle power, being non-volatile (whereas DRAM has refresh), but active power for reading a constant-sized block is significantly larger for flash. You can still use flash profitably, but only for rather sparse and/or low-intensity reads. That probably fits things like MoE layers, if the MoE is sparse enough.
Also, you can't really use flash memory (especially soldered-in HBF) for ephemeral data like the KV context for a single inference, it wears out way too quickly.
Modern flash memory, with multi-bit cells, indeed requires more power for reading than DRAM, for the same amount of data.
However, for old-style 1-bit per cell flash memory I do not see any reason for differences in power consumption for reading.
Different array designs and sense amplifier designs and CMOS fabrication processes can result in different power consumptions, but similar techniques can be applied to both kinds of memories for reducing the power consumption.
Of course, storing only 1 bit per cell instead of 3 or 4 reduces a lot the density and cost advantages of flash memory, but what remains may still be enough for what inference needs.
The basic physics of reading from Flash vs. DRAM are broadly similar, and it's true that reading from SLC flash is a bit cheaper, but you'll still need way higher voltages and reading times to read from flash compared to DRAM. It's not really the same.
Doubtful, local models are the competitive future that will keep prices down.
128GB is all you need.
A few more generations of hardware and open models will find people pretty happy doing whatever they need to on their laptop locally with big SOTA models left for special purposes. There will be a pretty big bubble burst when there aren't enough customers for $1000/month per seat needed to sustain the enormous datacenter models.
Apple will win this battle and nvidia will be second when their goals shift to workstations instead of servers.
Weird how you're leaving stuff like Strix Halo out. Also weird that you think 128GB is the future, with all the research out now targeting something around 12GB. I assume we'll end up with less general-purpose models and more specific small ones, swapped out for whatever work you are asking to do.
Batch inference is much more efficient. Using the hardware round the clock is much more efficient. Cloud can absolutely pay more for hardware and still make money off you.
Cloud can pay more for RAM until all the RAM producers withdraw from the consumer market, then prices will go back down.
End users will still get access to RAM. The cloud terminal they purchase from Apple, Google, Samsung, or HP will have all the RAM it will ever need directly soldered onto it.
The next step, I think, will be a "cash for clunkers" program to permit people to trade in old computer hardware to the government—especially since operating systems that do not collect KYC data on their users will soon be illegal to operate.
Doesn't Apple place RAM directly into the SoC package? We aren't even talking about soldering it to motherboards anymore; it comes in the package with the CPU, like it would with a GPU.
More like RAM producers are providing supplies to the highest bidder, no? If this doesn't peter out supply will normalize at a higher but less insane price eventually.
AI feels more like a gamble.
People like gambling.
From casinos (win-lose) to lootboxes (uncertainty) or even extramarital sex (whose baby is it?).
This way - AI work is like a slot machine - will this work or not?
Either way - casino gets paid and casino always wins.
Nevertheless - if the idea or product is very good (filling high market pain) and not that difficult to build - it can enable non-coders to "gamble" for the outcome with AI for $.
Sadly - from my experience hiring devs - hiring people is also a gamble...
This is the weirdest example of "gambling" I have seen in my life.
If you'd written "unprotected sex" I'd see the gambling part, but "extramarital sex" covers so much more than the tiny subset of "whose baby is it" (how many people are there having sex to gamble on who will be the father of a baby? 10?).
There used to be forums without voting. It was discovered that forums with voting attract more engagement because of the emotions produced by the voting.
It also used to be that reddit comments were the epitome of quality in their time, much closer to current HN if not better. I attributed that to the voting mechanism; clearly I was mistaken.
Exactly. AI is not really replacing people, but it's definitely allowing them to do more and more interesting things. You should offset the cost of having an AI do something against the cost of doing it manually. Your mileage may vary, of course. But I am definitely getting things done that I wouldn't even have started without AI assistance. And that stuff is valuable to me. Although you could argue that anything AI can do is actually deflating in value as well. The economics here will get pretty interesting. But all things considered, I'm not spending an unreasonable amount on all this AI stuff. Probably around $60-100/month currently. It varies a bit.
If you still need that thing done, the value is basically however you value your time. Would you pay extra for having someone or something do that for you instead?
My expectation: demand going up, prices will rise, supply will saturate to the point of ubiquitous "utility" status, and prices will drop, probably a bell curve shape with sine-wave undulations along the way.
> Generalist models have similar problems as generalist humans. The proverbial "Jack of all trades, master of none."
That said, I've made my career as a generalist :)
> That raises a question: if practical-tier inference commoditizes, how does any company justify the ever-larger capex to push the frontier?
AGI. [waves hands at the infinite money machine]
I think you're overestimating, or oversimplifying. Maybe both.
Assuming you used o3, that would cost $58,800 per week. That's an expensive bet for only 50% odds in your favor.
Of course the agents are only that good on benchmarks, in reality your odds are worse. Maybe roulette instead?
> I think you're overestimating, or oversimplifying
Yeah, if you only read comments on HN and not the actual linked article, you will get an oversimplified conclusion. Like, duh?
Curiously, for most submissions it's the opposite - comments are much more useful and nuanced than the source being discussed.
> All I can say is: the motivation letters don't look like they're written by AI anymore.
Writing maintainable code that scales.
> Do we? Because elsewhere in the thread there's people claiming they are profitable in API billing and might be at least close to break even on subscription, given that many people don't use all of their allowance.
Step 1) Bubble callers will be proven wrong in 2026 if not already (no excess capacity)
Step 2) "Models are not profitable" claims are proven wrong (when Anthropic files their S-1)
Step 3) FOMO and actual bubble (say around 2028/29)
I have no data to support this, but I think they just about break even on API usage and take overall loss on subscriptions/free plans.
Difference is that the current prices have a lot of subsidies from OPM
Once the narrative changes to something more realistic, I can see prices increase across the board, I mean forget $200/month for codex pro, expect $1000/month or something similar.
So its a race between new supply of hardware with new paradigm shifts that can hit market vs tide going out in the financial markets.
For inference, there is already a 10x improvement possible over a setup based on NVIDIA server GPUs, but volume production, etc... will take a while to catch up.
During inference the model weights are static, so they can be stored in High Bandwidth Flash (HBF) instead of High Bandwidth Memory (HBM). Flash chips are being made with over 300 layers and they use a fraction of the power compared to DRAM.
NVIDIA GPUs are general purpose. Sure, they have "tensor cores", but that's a fraction of the die area. Google's TPUs are much more efficient for inference because they're mostly tensor cores by area, which is why Gemini's pricing is undercutting everybody else despite being a frontier model.
New silicon process nodes are coming from TSMC, Intel, and Samsung that should roughly double the transistor density.
There's also algorithmic improvements like the recently announced Google TurboQuant.
Not to mention that pure inference doesn't need the crazy fast networking that training does, or the storage, or pretty much anything other than the tensor units and a relatively small host server that can send a bit of text back and forth.
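The bandwidth argument above can be put into rough numbers: a memory-bandwidth-bound decode step has to stream every active weight once per token, so throughput is roughly bandwidth divided by weight bytes. A minimal sketch, with all figures illustrative round numbers rather than vendor specs:

```python
# Back-of-envelope: decode throughput when inference is memory-bandwidth bound.
# All numbers below are illustrative assumptions, not measured vendor specs.

def decode_tokens_per_sec(active_params_billions: float,
                          bytes_per_param: float,
                          mem_bandwidth_gbs: float) -> float:
    """Each generated token must stream every active weight once, so
    throughput ~= memory bandwidth / bytes of weights read per token."""
    bytes_per_token = active_params_billions * 1e9 * bytes_per_param
    return mem_bandwidth_gbs * 1e9 / bytes_per_token

# A hypothetical 70B dense model at 8-bit weights:
hbm = decode_tokens_per_sec(70, 1.0, 3000)    # ~3 TB/s, HBM-class stack
flash = decode_tokens_per_sec(70, 1.0, 300)   # assumed ~10x slower flash tier
print(f"HBM-class:   {hbm:.1f} tok/s per replica")
print(f"Flash-class: {flash:.1f} tok/s per replica")
```

This is single-stream throughput; batching amortizes the weight reads across many users, which is why real deployments do far better per chip, and why a slower-but-denser weight store is not automatically a deal-breaker.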
Isn't reading from flash significantly more power intensive than reading DRAM? Anyway, the overhead of keeping weights in memory becomes negligible at scale, because you're running large batches and sharding a single model over large numbers of GPUs. (And that needs the crazy-fast networking to work; you get too much latency otherwise.)
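The amortization claim is easy to sketch: one sweep over the weights serves the whole batch, so per-token weight traffic falls as 1/batch_size until per-sequence KV-cache reads (which don't amortize) dominate. A toy model, with both constants assumed for illustration:

```python
# Toy model of batch amortization: one pass over the weights serves every
# sequence in the batch, so the weight share of memory traffic shrinks as
# 1/batch_size. Both constants are illustrative assumptions.

WEIGHT_BYTES = 70e9        # hypothetical 70B model at 1 byte/param
KV_BYTES_PER_TOKEN = 2e6   # assumed KV-cache read per token per sequence

def bytes_per_token(batch_size: int) -> float:
    weight_share = WEIGHT_BYTES / batch_size   # amortized across the batch
    return weight_share + KV_BYTES_PER_TOKEN   # KV reads don't amortize

for b in (1, 8, 64, 512):
    print(f"batch {b:4d}: {bytes_per_token(b) / 1e9:8.3f} GB/token")
```

At small batch sizes the weights dominate; at large ones the non-amortizing KV term takes over, which is the point at which "negligible at scale" starts to hold for the weights themselves.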
> becomes negligible at scale
Nothing is negligible at scale! Both the cost and the power draw of HBM are limiting factors for the hyperscalers, to the point that Sam Altman (famously!) cornered the market and locked in something like 40% of global RAM production, driving up prices for everyone.
> sharding a single model over large amounts of GPUs
A single host server typically has 4-16 GPUs directly connected to the motherboard.
Part of the reason for sharding models across multiple GPUs is that their weights don't fit into the memory of any one card! HBF could give each GPU/TPU well over a terabyte of capacity for weights.
Last but not least, the context cache needs to be stored somewhere "close" to the GPUs. Across millions of users, that's a lot of unique data with a high churn rate. HBF would allow the GPUs to keep that "warm" and ready to go for the next prompt at a much lower cost than keeping it around in DRAM and having to constantly refresh it.
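For scale, the per-user context cache the parent describes can be sized with the standard KV-cache formula. The model dimensions below are hypothetical (a GQA-style model with 8 KV heads); real deployments vary:

```python
# KV-cache size per sequence: 2 (K and V) * layers * kv_heads * head_dim
# * context_length * bytes_per_value. All model dims are assumptions.

def kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                   context=128_000, dtype_bytes=2) -> int:
    return 2 * layers * kv_heads * head_dim * context * dtype_bytes

gb = kv_cache_bytes() / 1e9
print(f"~{gb:.0f} GB of KV cache for one 128k-token conversation")
```

Multiply tens of gigabytes per long conversation by millions of users with high churn, and a warm tier cheaper than DRAM starts to look attractive.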
Flash has no idle power since it's non-volatile (whereas DRAM needs refresh), but the active power for reading a constant-sized block is significantly higher for flash. You can still use flash profitably, but only for rather sparse and/or low-intensity reads. That probably fits something like MoE layers, if the MoE is sparse enough.
Also, you can't really use flash memory (especially soldered-in HBF) for ephemeral data like the KV context for a single inference, it wears out way too quickly.
However, for old-style 1-bit-per-cell flash memory I do not see any reason for a difference in read power consumption.
Different array designs, sense-amplifier designs, and CMOS fabrication processes can result in different power consumption, but similar power-reduction techniques can be applied to both kinds of memory.
Of course, storing only 1 bit per cell instead of 3 or 4 greatly reduces the density and cost advantages of flash memory, but what remains may still be enough for what inference needs.
128GB is all you need.
A few more generations of hardware and open models, and people will be pretty happy doing whatever they need locally on their laptops, with big SOTA models reserved for special purposes. There will be a pretty big bubble burst when there aren't enough customers paying the $1000/month per seat needed to sustain the enormous datacenter models.
Apple will win this battle, and NVIDIA will be second once their goals shift to workstations instead of servers.
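A quick sanity check on the 128 GB claim, assuming 4-bit quantized weights and reserving headroom for KV cache, activations, and the OS (all numbers are assumptions):

```python
# What fits in 128 GB of unified memory? Assuming 4-bit quantized weights
# (0.5 bytes/param) and some reserved headroom. Purely illustrative.

TOTAL_GB = 128
HEADROOM_GB = 24        # assumed for KV cache, activations, OS, apps
BYTES_PER_PARAM = 0.5   # 4-bit quantization

max_params_b = (TOTAL_GB - HEADROOM_GB) / BYTES_PER_PARAM
print(f"~{max_params_b:.0f}B-parameter model fits at 4-bit, with headroom")
```

So on these assumptions a roughly 200B-parameter dense model (or a larger MoE with fewer active params) is in reach of a 128 GB machine, which is the regime where "local for everyday work, cloud for special purposes" becomes plausible.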
My guy, look around.
They are coming for personal compute.
Where are you going to get these 128GBs? Aquaman? [0]
The ones who make RAM are inexplicably attaching their fate to a future that is all LLMs, everywhere.
[0] https://www.youtube.com/watch?v=0-w-pdqwiBw
End users will still get access to RAM. The cloud terminal they purchase from Apple, Google, Samsung, or HP will have all the RAM it will ever need directly soldered onto it.
In this way, AI work is like a slot machine: will this work or not? Either way, the casino gets paid, and the casino always wins.
Nevertheless, if the idea or product is very good (addressing a high-pain market need) and not that difficult to build, it can let non-coders "gamble" on the outcome with AI for $.
Sadly, from my experience hiring devs, hiring people is also a gamble...
This is the weirdest example of "gambling" I have seen in my life. If you'd written "unprotected sex" I'd see the gambling part, but "extramarital sex" covers so much more than the tiny subset of "whose baby is it" (how many people are having sex to gamble on who will be the father of a baby? 10?).
This made my day.
Happy to run it on your repos for a free report: hi@repogauge.org
If they can do a task that takes 1 unit of computation for 1 dollar, they will cost 100 dollars for a 10-unit task and 10,000 dollars for a 100-unit task.
Project costs from Claude Code bear this out in the real world.
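The quoted figures imply cost growing with the square of task size; a minimal sketch of that fit:

```python
# The figures above ($1 for 1 unit, $100 for 10 units, $10,000 for 100 units)
# fit cost = units**2 exactly, i.e. quadratic growth in task size.

def cost_dollars(units: float) -> float:
    return units ** 2

for u in (1, 10, 100):
    print(f"{u:3d} units -> ${cost_dollars(u):,.0f}")
```

Whether real agent workloads are exactly quadratic is an empirical question, but the shape matters: superlinear cost is exactly why staying near the "sweet spot" task length and decomposing work beats throwing one giant task at a frontier model.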
The first model to outcompete its competitors while using less compute would be purchased more than anything else.
That depends on the ability to produce supply at a saturation rate.
It did work for internet backhaul links, a la those dark fibres. However, I reckon those fibres are easier to manufacture than silicon chips.
I wonder if saturation is possible for AI-capable chips.