I'm curious -- let's say we have Claude Code hooked up to MCPs for Jaeger, Grafana, and the usual git/gh CLIs it can use out of the box, and we let Claude's planner work through investigations with whatever help we give it. Would TraceRoot do anything clever wrt the AI that such a setup wouldn't/couldn't?
(I'm asking b/c we're planning a setup that's basically that, so real question.)
Adding model provider abstraction would significantly improve adoption, especially for organizations with specific LLM preferences or air-gapped environments that can't use OpenAI.
It's been 2.5 years since ChatGPT came out, and so many projects still don't allow easy switching of OPENAI_BASE_URL or the related parameters.
There are so many inference servers that expose an OpenAI-compatible API that any new project locked in to OpenAI only is a big red flag for me.
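To make the point concrete: with the official OpenAI Python SDK (v1+), provider portability is mostly a base-URL swap. Here's a minimal sketch; the SDK already honors the OPENAI_BASE_URL and OPENAI_API_KEY env vars, and the MODEL_NAME variable is just my illustration, not anything TraceRoot defines:

```python
import os
from openai import OpenAI  # official SDK, v1+; pip install openai

# The v1 SDK reads OPENAI_BASE_URL / OPENAI_API_KEY from the environment
# on its own; spelling it out shows where the provider swap happens.
client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    api_key=os.environ.get("OPENAI_API_KEY", "EMPTY"),  # local servers often ignore the key
)

resp = client.chat.completions.create(
    model=os.environ.get("MODEL_NAME", "gpt-4o-mini"),  # MODEL_NAME is illustrative
    messages=[{"role": "user", "content": "Summarize this trace for me."}],
)
print(resp.choices[0].message.content)
```

Point OPENAI_BASE_URL at vLLM, Ollama, a LiteLLM proxy, or whatever else, and the same code runs unchanged. That's how low the bar is.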
Thanks for the feedback! Totally hear you on the tight OpenAI coupling - we're aware and already working to make BYOM (bring your own model) easier. Just to echo what Zecheng said earlier: broader model flexibility is definitely on the roadmap.
Appreciate you calling it out — helps us stay honest about the gaps.
Yes, there is a roadmap to support more models. For now there is an in-progress PR to support Anthropic models: https://github.com/traceroot-ai/traceroot/pull/21 (contributed by some active open-source contributors). Feel free to let us know which (open-source) model or framework (vLLM etc.) you want to use :)
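If you want to experiment with a local setup in the meantime, here's a rough sketch against vLLM's OpenAI-compatible server; the model name and port are just examples, not something we ship:

```python
# Serve a model with vLLM's OpenAI-compatible server first, e.g.:
#   vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",  # must match the model you served
    messages=[{"role": "user", "content": "Hello from a local model."}],
)
print(resp.choices[0].message.content)
```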