- What applications/industries have you found them to be the most useful in?
- What tools do you use for orchestration? LangGraph/CrewAI/built your own/etc?
- What models? Centralized or locally deployed?
- I've built two harnesses: one called Claudine, an older sister of Claude Code, which I now use for teaching harness engineering; and Golem XIV, which adds context/knowledge-graph management and self-directed metacognitive research.
- In practice, mostly Anthropic models, judging by the quality of their metacognitive reasoning and self-improvement loops.
I use a tiny model (currently OpenAI, soon self-hosted Qwen) to extract values from raw text.
This helps me maintain a growing collection of guides about German bureaucracy. I monitor about a hundred values, and I aim to track as many facts as possible that way.
Models used: Gemini 2.5 Flash, ChatGPT 5.
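The extract-and-monitor setup described above can be sketched roughly like this. All names here are hypothetical, and `extract_values` is a stub standing in for the actual small-model call (OpenAI today, self-hosted Qwen later); the point is the diff step that flags which watched values changed between runs.

```python
# Hypothetical sketch of the monitoring loop: a small model extracts
# named values from raw page text, and we diff them against the last
# known values to flag changes.

def extract_values(raw_text: str, wanted: list[str]) -> dict:
    """Stub for the LLM extraction step. A real version would prompt a
    small model to return JSON with one entry per wanted value; here we
    fake it with a trivial line-based parse for demonstration."""
    found = {}
    for line in raw_text.splitlines():
        for key in wanted:
            if line.lower().startswith(key.lower() + ":"):
                found[key] = line.split(":", 1)[1].strip()
    return found

def detect_changes(previous: dict, current: dict) -> dict:
    """Return only the values that changed since the last run."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

page = "Anmeldung fee: 12 EUR\nProcessing time: 6 weeks"
watched = ["Anmeldung fee", "Processing time"]

old = {"Anmeldung fee": "10 EUR", "Processing time": "6 weeks"}
new = extract_values(page, watched)
print(detect_changes(old, new))  # only the fee changed
```

Keeping extraction and change detection separate means the LLM only ever does the messy text-to-structure step, while the comparison stays deterministic and cheap.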
It is provided with curated research, and its goal is to produce a compelling sales deck someone can present. Current feedback is that it's better than our old version.
There's a master agent that has 8 sub-agents it can run as tools, up to 65 times in total. It's been really fun playing with agents as tools.
I was extremely excited when I saw the critic agent provide a critique: critic -> main agent called -> story written for slide X -> creative writer for the slide -> then the critic again to verify.
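The agents-as-tools pattern with a call budget and a critic loop could be sketched like this. Everything here (the agent stubs, class and function names) is hypothetical and greatly simplified; real sub-agents would each be LLM calls, and the critic's verdict would come from a model rather than a string check.

```python
from typing import Callable

# Stub sub-agents; in reality each would be an LLM call.
def writer(task: str) -> str:
    return f"draft for {task}"

def critic(draft: str) -> bool:
    # Approve anything that has been revised at least once.
    return "revised" in draft

def reviser(draft: str) -> str:
    return "revised " + draft

class MasterAgent:
    """Master agent that runs sub-agents as tools under a call budget."""

    def __init__(self, budget: int = 65):
        self.budget = budget
        self.calls = 0

    def call(self, agent: Callable, arg):
        if self.calls >= self.budget:
            raise RuntimeError("sub-agent call budget exhausted")
        self.calls += 1
        return agent(arg)

    def make_slide(self, task: str) -> str:
        draft = self.call(writer, task)
        # Critic loop: revise until the critic approves; the shared
        # budget bounds the loop so it cannot run forever.
        while not self.call(critic, draft):
            draft = self.call(reviser, draft)
        return draft

master = MasterAgent(budget=65)
print(master.make_slide("slide 3"))  # prints "revised draft for slide 3"
```

The key design point is that the critic is just another tool the master can call, so critique-and-revise cycles count against the same 65-call budget as everything else.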
My personal opinion: LLMs give you the power of language. Until now we could define rules based on structured data, but we couldn't process unstructured data that well. Now we can use LLMs to take any kind of input and either respond to it or transform it into structured data. That is a huge leap forward. But there are also a million cases where it's not necessary.
On the side, I'm working for an NGO focused on sustainable finance. They have a manually gathered database and lots of resources, but most users couldn't be bothered to actually click through everything. So offering a chatbot to make that data available seemed reasonable. It works quite well, and still most requests are so trivial you could have just blocked them.
In my paid job, I'm working for a German radio/TV broadcaster that's trying to use AI to solve simple internal user issues. It seems to work quite well. We've built a RAG system based on Qdrant and LlamaIndex, and it surfaces all available information in a format users couldn't find before, because the underlying systems were chaotic and complicated. So in my book, that's a good use case: users in a very complicated environment with lots of information.
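The retrieval step at the heart of such a RAG system can be illustrated with a toy sketch. This deliberately does not use the actual Qdrant or LlamaIndex APIs: it ranks in-memory vectors by cosine similarity, with hand-written placeholder vectors standing in for a real embedding model and vector store.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Placeholder document embeddings; in reality these come from an
# embedding model and live in a vector store like Qdrant.
docs = {
    "vpn-setup":   [0.9, 0.1, 0.0],
    "printer-faq": [0.1, 0.8, 0.2],
    "hr-leave":    [0.0, 0.2, 0.9],
}

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

# Placeholder embedding for a query like "How do I connect to the VPN?"
print(retrieve([0.85, 0.15, 0.05]))  # ['vpn-setup', 'printer-faq']
```

The retrieved chunks would then be stuffed into the LLM prompt as context, which is what lets the chatbot answer from internal documentation the users couldn't navigate themselves.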
I've worked with the OpenAI API, Anthropic API, Azure Foundry, local models, IONOS Model Hub, etc. One thing that keeps coming up is privacy and (in Europe) GDPR compliance: use the capabilities of LLMs without sacrificing data that should not end up in the next training round.
Anyway, I think LLMs offer a lot of possibilities, but many people tackle them from the wrong side - "what could we do with this?" instead of "what problems do we need to solve?".