> Instead of expecting it to understand my requests, I almost always build tooling first to give us a shared language to discuss the project.
This is probably the key. I’ve found this to be true in general. Building simple tools that the model can use helps frame the problem in a very useful way.
>I still occasionally hand write code in NeoVim on the bits I care the most about (CSS, design and early architecture like API patterns)
I find it amazing how people's opinions differ here. This is the first stuff I'd trust to Claude and co. because it is very much in-distribution for training data. Now if I had sensitive backend code or a framework/language/library that is pretty new or updated frequently, I'd be much more cautious about trusting LLMs, or at least I would want to understand every bit of the code.
I think OP nailed it with 'the bits I care the most about'—if you like those things a certain way, then you'll want to make sure they are that way, not accept whatever Claude does. If you don't care, you just want something done, then you'll have Claude do it while you work on what you do care more about.
I've been having pretty good success with Unity as a 3D LLM tool. In addition to the isometric views, I've included a perspective mode that can focus on a list of game object IDs with a custom camera origin. The agent is required to send instructions along with the VLM request each time in order to condition how the view is interpreted. E.g.: "How does ambient occlusion look in A vs B?".
The VLM is invoked as a nested operation within a tool call, not as part of the same user-level context. This provides the ability to analyze a very large number of images without blowing token budgets.
I've observed that GPT5.4 can iteratively position the perspective camera and stop once it reaches subjectively interesting arrangements. I don't know how to quantify this, but it does seem to have some sense of world space.
I think much of it comes down to conditioning the vision model to "see" correctly, and willingness to iterate many times.
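The nested-invocation pattern described above can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual code: every name here (`render_view`, `ask_vlm`, `inspect_view`) is made up, and the real engine capture and VLM API calls are stubbed out. The point is the shape of the control flow: the screenshot is analyzed inside the tool call, and only a short text verdict returns to the agent's context.

```python
# Sketch of the nested-VLM pattern: image analysis happens inside the tool
# call, so image tokens never enter the agent's own context. All names are
# hypothetical stand-ins.

def render_view(object_ids, camera_origin):
    """Render a perspective screenshot focused on the given objects."""
    return b"...png bytes..."  # stand-in for the real engine capture

def ask_vlm(image_png, instruction):
    """Nested call to a vision model; runs in its own, separate context."""
    return f"VLM verdict for: {instruction}"  # stand-in for the real API call

def inspect_view(object_ids, camera_origin, instruction):
    """The tool the agent calls. The agent must supply an instruction
    (e.g. 'How does ambient occlusion look in A vs B?') so the VLM
    knows how to interpret the view."""
    image = render_view(object_ids, camera_origin)
    # Only this short string flows back into the agent's context; the
    # image itself never consumes agent-level tokens.
    return ask_vlm(image, instruction)

print(inspect_view(["A", "B"], (0, 2, -5), "How does ambient occlusion look in A vs B?"))
```

Because the image stays inside the tool call, the agent can request hundreds of views in a session without the per-image token cost compounding.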
Just yesterday I used Claude to great effect in FreeCAD to model a church tower. The tower has a square base and an octagonal top, but connecting the two by creating a loft using the GUI in FreeCAD results in a wrong and ugly abomination.
Claude understood the problem and produced elegant Python code that worked perfectly the first time.
So I continued and described the other features of the tower to Claude, who coded them.
It's sometimes difficult to properly describe what you want in English, and Claude does a lot of thinking and sometimes goes deep down a wrong path it won't easily come out of; but in the end the result is almost perfect.
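The commenter's actual FreeCAD script isn't shown, but the geometry behind a square-to-octagon loft is easy to sketch: compute the two profile rings that the loft connects. This is a minimal, assumed illustration (the dimensions and the 22.5° rotation that aligns the octagon to the square are mine, not from the original tower); in FreeCAD, each vertex list would become a closed wire via `Part.makePolygon` and the pair would be passed to `Part.makeLoft`.

```python
import math

# Two profile rings for a square-to-octagon loft. Dimensions are
# illustrative: a 4 m square base and a 2 m-radius octagon 6 m up.

def square_ring(side, z=0.0):
    """Vertices of a square profile centered on the z axis."""
    h = side / 2.0
    return [(-h, -h, z), (h, -h, z), (h, h, z), (-h, h, z)]

def octagon_ring(radius, z):
    """Regular octagon, rotated 22.5 degrees so its flats face the
    square's sides rather than its corners."""
    pts = []
    for k in range(8):
        a = math.radians(45 * k + 22.5)
        pts.append((radius * math.cos(a), radius * math.sin(a), z))
    return pts

base = square_ring(4.0, 0.0)
top = octagon_ring(2.0, 6.0)
# In FreeCAD: Part.makeLoft([Part.makePolygon(base + base[:1]),
#                            Part.makePolygon(top + top[:1])], True)
```

Getting that profile alignment right in code is exactly the kind of thing the GUI loft tool guesses at, and guesses badly.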
Great article. I've been trying to achieve something similar with Revit. It's an old CAD application for Windows, which means there are a few additional hurdles in exposing a CLI interface that allows the LLM to drive it. However, once that is done, the loop of "write code, take a screenshot, repeat" works pretty well.
Gemini's best ability is its 3D spatial reasoning. It's downright terrible at a lot of things (tool calling is an absolute nightmare), but it consistently wins at stuff like 3D modeling, reasoning through 3D problems, and even 2D layout and animation tasks like the infamous pelican-riding-a-bicycle benchmark.
Honestly, understanding and applying 3D transformations should be a new LLM benchmark. Three.js, OpenSCAD, even Nano Banana prompts. The moment you add that extra dimension, any semblance of ‘intelligence’ goes right out the window. Every model out there seems to spin itself in circles trying to logic through it with no success.
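One concrete reason the extra dimension bites: unlike 2D rotations, 3D rotations don't commute, so "rotate about X, then Y" and "rotate about Y, then X" land a point in different places. A tiny sketch with hand-rolled rotation matrices (standard right-hand-rule formulas, nothing from the thread) makes the failure mode visible:

```python
import math

# Rotate a point about the X or Y axis by the given angle in degrees,
# using the standard right-handed rotation formulas.

def rot_x(p, deg):
    a = math.radians(deg); x, y, z = p
    return (x, y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a))

def rot_y(p, deg):
    a = math.radians(deg); x, y, z = p
    return (x * math.cos(a) + z * math.sin(a), y, -x * math.sin(a) + z * math.cos(a))

p = (1.0, 0.0, 0.0)
xy = rot_y(rot_x(p, 90), 90)   # X first, then Y: ends up near (0, 0, -1)
yx = rot_x(rot_y(p, 90), 90)   # Y first, then X: ends up near (0, 1, 0)
```

A model that reasons about 3D scenes has to track exactly this kind of order dependence, which is plausibly why chained-transform tasks expose it so reliably.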
As soon as LLMs are doing serious junior level 3D modeling and mechanical CAD design, that's going to lead to some wild iteration loops with rapid prototyping. Very exciting.
Claude is terrible. I've been using Codex for a few months and decided to give Opus a try and see how it is.
After asking it to review a single file in a simple platformer game, it goes:
> Coyote jump fires in the wrong direction (falling UP with inverted gravity)
    var fallVelocity: float = body.velocity.y * body.up_direction.y
I'm like ok, suggest a fix
> I owe you a correction: after re-analyzing the math more carefully, lines 217–223 are actually correct — my original review point was wrong. Let me walk through why.
Oh boy. It's had several other gaffes like this, and the UI/UX is still crap (fonts don't get applied, it doesn't catch up with the updated working state after editing files, etc.). Codex helped me save time but Claude is just wasting my time. Can I get a refund?
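For what it's worth, the expression the model flip-flopped on (`body.velocity.y * body.up_direction.y`) looks like a standard sign-normalization trick, which may be why it is, as the retraction admits, correct. This is a hedged reconstruction, not the game's actual code: assuming Godot 2D conventions (y axis points down, default `up_direction` is `(0, -1)`), the product is the velocity component along "up", so a negative value means "falling" under either gravity orientation:

```python
# Sign-normalized fall velocity: the product of vertical velocity and the
# up direction's y component has the same meaning under normal and
# inverted gravity. Conventions assumed: y-down world, up_y = -1 normally
# and +1 when gravity is inverted.

def fall_velocity(velocity_y, up_y):
    # Negative result = moving opposite the up direction, i.e. falling.
    return velocity_y * up_y

# Normal gravity: up_y = -1; falling means velocity_y > 0 (y points down).
assert fall_velocity(+300.0, -1.0) < 0

# Inverted gravity: up_y = +1; "falling" now means velocity_y < 0.
assert fall_velocity(-300.0, +1.0) < 0
```

One comparison against that value then covers both gravity states, which is presumably what the coyote-jump check relies on.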
If you set up your skeleton in a way that is familiar to you, reviewing new features afterwards is easier.
If you let the LLM start with the skeleton, it may use different patterns, and in the long run it's harder to keep track of them.
"Bad" is the word you're looking for, not "different".
Engineers are an opinionated bunch, safe to say at least a small chunk of us will disagree with what goes into the training pile.
For me, it's preferring Deno-style pinned imports over traditional require() or even non-versioned ECMAScript import syntax.
All I wanted was some opinions on whether my bad idea would work, but it instead wrote me files for making my own Sony earphones in three-ish parts.
And when I sewed it together, it worked!
That said, it did have full access to a mini CAD app, but I think it wrote all its own calculations inline.
I am currently using Claude as I find it to be better than the others at the free tier.