20 comments

  • momentmaker 12 hours ago
    Installed it just now. The most surprising line for me was "Conversation $225 / 496 turns". Basically half my turn count this month was chat, not building. Had no idea that ratio was so off.

    Cache hit rate is another metric I wouldn't have looked at otherwise. 98.2% on Opus 4.6 here. Apparently that's the difference between a $2k month and something much worse.
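Rough back-of-envelope of why the cache hit rate dominates the bill – the per-million-token prices below are placeholders, not actual Opus rates:

```python
def blended_input_cost(input_tokens, cache_hit_rate, fresh_price, cache_read_price):
    """Blended input-token cost in dollars. Prices are per million
    tokens and are placeholder assumptions, not real Anthropic rates."""
    cached = input_tokens * cache_hit_rate
    fresh = input_tokens - cached
    return (fresh * fresh_price + cached * cache_read_price) / 1e6

# 500M input tokens in a month, assuming $15/M fresh vs $1.50/M cache reads
high_hits = blended_input_cost(500e6, 0.982, 15.0, 1.50)  # ~98% cache hits
low_hits = blended_input_cost(500e6, 0.50, 15.0, 1.50)    # 50% cache hits
print(high_hits, low_hits)
```

With those assumed prices, dropping from ~98% to 50% hits multiplies the input bill by nearly 5x.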

    Activity classification is the actually useful feature though. Most token trackers just tell you total spend. This tells you what kind of work the spend went to.

    Nice work!

  • yashjadhav2102 1 day ago
    That’s really helpful – token usage is something everybody is vaguely aware of yet can’t quite pin down. The split between no-tool conversation and actual coding is rather shocking – it suggests a ton of inefficiency in how we interact with these services. I also like that it doesn’t require an LLM to run – a nice, simple design.
  • R00mi 1 day ago
    The 56% conversation vs 21% coding split is a really interesting finding — it lines up with trajectory studies on SWE-bench where ~38% of an agent's actions are pure exploration (grep, find, file reads). The remaining "no-tool" turns are likely the agent digesting what it read and planning its next move. These two costs are linked: the less efficiently the agent localizes, the more thinking turns it needs to piece things together. PatchPilot (ICML 2025) quantified this — localization capability accounts for ~47% of an agent's total improvement.

    One thing that would be really interesting in your tool: separating exploration turns (grep/find/read) from pure thinking turns, and seeing how the ratio scales with project size. On large monorepos, exploration should blow up non-linearly.
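A minimal sketch of how that exploration/thinking split could be pulled out of a transcript – the turn shape and tool names here are hypothetical, not the actual Claude Code schema:

```python
# Hypothetical turn shape: {"tool_calls": [{"name": ...}, ...]}
EXPLORATION_TOOLS = {"Grep", "Glob", "Read", "LS"}

def classify_turn(turn):
    tools = {call["name"] for call in turn.get("tool_calls", [])}
    if not tools:
        return "thinking"       # no-tool turn: digesting context, planning
    if tools <= EXPLORATION_TOOLS:
        return "exploration"    # pure localization: grep/find/file reads
    return "action"             # edits, shell commands, anything mutating

turns = [
    {"tool_calls": []},
    {"tool_calls": [{"name": "Grep"}, {"name": "Read"}]},
    {"tool_calls": [{"name": "Edit"}, {"name": "Read"}]},
]
print([classify_turn(t) for t in turns])  # ['thinking', 'exploration', 'action']
```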
  • Normal_gaussian 1 day ago
    This is cool.

    I do find the activities a little suspect - it has 1 turn of planning for me in the last 30 days. I have claude write plans first before every coding session, often using one agent session to plan and then output a plan file, and then others to execute on it. I also have several repos dedicated to 'planning' in the sense of what should I do next based on what emails/tickets/bugs etc. I have. In other words - I do all kinds of planning!

  • zzy824 16 hours ago
    The 56% on conversation turns with no tool usage matches what I've seen too. A lot of that is Claude thinking out loud, context management, and planning before it touches anything. It feels wasteful until you realize the alternative is it immediately starts editing files without understanding the codebase first.

    The JSONL transcript parsing is clever. I've been reading those same files for a different purpose (rendering conversation history in a menu bar app) and the format is more reliable than I expected. Each tool call has enough metadata to reconstruct what happened without needing to re-parse the full conversation.

    One thing that would be interesting to see: cost broken down by session when you're running multiple sessions in parallel. Right now I have no idea which of my 4 running sessions is burning the most tokens.
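A per-session rollup along those lines could be sketched like this – the JSONL field names are assumptions about the transcript shape, not a documented schema:

```python
import json
from collections import defaultdict
from pathlib import Path

def tokens_by_session(transcript_dir):
    """Sum token usage per session transcript file. The field names
    ('message', 'usage', 'input_tokens', 'output_tokens') are guesses
    at the JSONL shape, not a documented schema."""
    totals = defaultdict(int)
    for path in Path(transcript_dir).glob("*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            usage = json.loads(line).get("message", {}).get("usage", {})
            totals[path.stem] += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
    return dict(totals)
```

Sorting that dict by value would show at a glance which of several parallel sessions is burning the most tokens.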

    • agentseal 35 minutes ago
      Noted! I'll create an issue for this and let you know here as soon as a fix ships.
  • stanvojtko 4 hours ago
    Isn't this Anthropic's release from yesterday?
  • giancarlostoro 1 day ago
    > The interface is an interactive terminal UI built with Ink (React for terminals)

    Just like Claude Code btw.

    I'm working on a custom harness because I don't like or trust some of the ones out there, so I'm going to build one purely for myself and my own needs to see just how they work, and figure out some of what you've learned by looking at how Claude works, so I might add your project to my list of tooling to look at.

    • agentseal 1 day ago
      Yeah, Ink – it was the fastest path to something that felt native next to Claude Code itself.
  • Isolated_Routes 1 day ago
    I like this a lot. An interesting next iteration would be functionality that evaluates a user's work for inefficiencies and suggests where they can cut costs. Might be outside the scope of your project, but it could be interesting.
  • hmokiguess 1 day ago
    Very cool! I saw a similar product recently that I liked, but I much prefer your approach to theirs [1].

    [1] https://github.com/cordwainersmith/Claudoscope

  • ieie3366 1 day ago
    "Built this after realizing I was spending ~$1400/week on Claude Code with almost no visibility into what was actually consuming tokens."

    holy slop. the $200/month plan has NEVER hit rate limits for me and I often run 5+ tabs of concurrent agents in a large 300k LoC codebase

    • ethan_smith 1 day ago
      The $200/month plan throttles you when you hit limits - you just wait in a queue. API usage at $1400/week means unthrottled, parallel execution with no waiting. These are very different use cases, and for teams or heavy automation workflows the API cost can make sense if the time savings justify it.
      • weird-eye-issue 1 day ago
        That's not how it works

        You've never actually hit the limit have you? If you have you would know it's a hard limit.

        • cududa 1 day ago
          You’ve never used the API version versus the $200 plan and set the two at the exact same task, have you?
          • weird-eye-issue 1 day ago
            I used the API version for quite a while before using a subscription, which I now have used extensively for many months.

            So, is your claim that they just slow down and queue the subscription version, or are you accusing them of using nerfed models, or is it something else? The only time I ever get some slowness has to do with the models being overloaded and has nothing to do with limits. Those are two separate concepts you seem to be confusing. And luckily, this is pretty rare for me since I don't work during US time zones.

    • agentseal 1 day ago
      Not $1,400 out of pocket – that's the API-equivalent cost of the tokens. I'm on the $200/month Max plan :D

      In my case I mostly consume every bit of the weekly subscription quota.

  • jimbokun 1 day ago
    So how much did the tokens to build this cost?
  • halostatue 1 day ago
    Doesn't seem to work with Cursor Agent (which may store its data in ~/.cursor).
    • agentseal 1 day ago
      You're right – that's cursor-agent (the CLI), not the Cursor IDE. CodeBurn only parses the IDE's state.vscdb right now. cursor-agent keeps transcripts under ~/.cursor/projects/*/agent-transcripts/, which we don't read yet.

      Filed an issue to add it: https://github.com/AgentSeal/codeburn/issues/55

      Cursor support only landed yesterday, so the CLI is next. Thanks for catching it.
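For anyone wanting to poke at those files themselves, a minimal sketch of discovering them (the path comes from the comment above; everything beyond that layout is a guess):

```python
from pathlib import Path

def find_cursor_agent_transcripts(home=None):
    """Glob the location cursor-agent reportedly uses:
    ~/.cursor/projects/<project>/agent-transcripts/.
    File names and formats inside that directory are unknown."""
    base = Path(home) if home else Path.home()
    return sorted((base / ".cursor" / "projects").glob("*/agent-transcripts/*"))
```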

      • halostatue 1 day ago
        Cursor Agent itself suggests that this probably won't be easy as some of the data is missing.
  • dovelome 22 hours ago
    Love it
  • coatol5 1 day ago
    Made something similar a while back: https://www.clauderank.com/ – completely open source.