8 comments

  • denysvitali 22 hours ago
    Better link: https://iquestlab.github.io/

    But yes, sadly it looks like the agent cheated during the eval

  • sabareesh 23 hours ago
    TL;DR: they didn't clean the repo (the .git/ folder was left in), so the model simply reward-hacked its way to the answers by looking up the future commits that contain the fixes (a minimal sanity check for this is sketched below). Credit goes to everyone in this thread for solving this: https://xcancel.com/xeophon/status/2006969664346501589

    (Given that IQuestLab published their SWE-bench Verified trajectory data, I want to be charitable and assume genuine oversight rather than "benchmaxxing"; it's probably an easy-to-miss thing if you are new to benchmarking.)

    https://www.reddit.com/r/LocalLLaMA/comments/1q1ura1/iquestl...
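
    For reference, a minimal sanity check along these lines (not the official SWE-bench tooling; the repo path and base commit below are placeholders) would flag tasks where the agent can still mine the git history for the future fix:

      import subprocess

      def leaked_future_commits(repo_path: str, base_commit: str) -> list[str]:
          """List commits reachable in the repo that are NOT ancestors of the task's base commit."""
          out = subprocess.run(
              ["git", "-C", repo_path, "rev-list", "--all", f"^{base_commit}"],
              capture_output=True, text=True, check=True,
          )
          return [c for c in out.stdout.splitlines() if c]

      def scrub_git_history(repo_path: str) -> None:
          """Blunt fix: drop .git entirely so the agent only sees the working tree."""
          import os, shutil
          shutil.rmtree(os.path.join(repo_path, ".git"), ignore_errors=True)

      # Hypothetical usage; real values come from the task definition.
      leaks = leaked_future_commits("/tmp/task_repo", "abc123")
      if leaks:
          print(f"{len(leaks)} future commits visible to the agent, e.g. {leaks[:3]}")
          scrub_git_history("/tmp/task_repo")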

    • ofirpress 23 hours ago
      As John says in that thread, we've fixed this issue in SWE-bench: https://xcancel.com/jyangballin/status/2006987724637757670

      If you run SWE-bench evals, just make sure to use the most up-to-date code from our repo and the updated Docker images.

    • LiamPowell 22 hours ago
      > I want to be charitable and assume genuine oversight rather than "benchmaxxing"; it's probably an easy-to-miss thing if you are new to benchmarking

      I don't doubt that it's an oversight. It does, however, say something about the researchers that they didn't look at a single output, where they would have immediately caught this.

      • alyxya 5 hours ago
        Given the decrease in the benchmark score after the correction, I don't think you can assume they didn't check a single output. Clearly the model is still very capable, and the cheating didn't affect most of the benchmark results.
      • domoritz 18 hours ago
        So many data problems would be solved if everyone looked at a few outputs instead of only the metrics.
    • stefan_ 19 hours ago
      Never escaping the hype-vendor allegations at SWE-bench, are they?
  • brunooliv 22 hours ago
    GLM-4.7 in opencode is the only open-source one that comes close, in my experience. They probably did use some Claude data, as I see the occasional "You’re absolutely right" in there.
    • behnamoh 22 hours ago
      It's not even close to Sonnet 4.5, let alone Opus.
      • brunooliv 5 hours ago
        I agree completely; I meant among the open-source ones only. Opus 4.5 is the current SOTA, and using it in Claude Code is an absolutely amazing experience. But paying nothing to test GLM-4.7 with opencode feels like an amazing deal! I don’t use it for work, though. But for “gaining experience” with these agents and tools, it’s by far the best option out there of all I’ve tried.
      • hatefulmoron 18 hours ago
        I got their z.ai plan to test alongside my Claude subscription; it feels like something between Sonnet 4.0 and Sonnet 4.5. It's definitely a few steps below current-day Claude, but it's very capable.
        • enraged_camel 16 hours ago
          When you say "current day Claude" you need to distinguish between the models. Because Opus 4.5 is significantly ahead of Sonnet 4.5.
          • kachapopopow 14 hours ago
            Opus 4.5 is truly like magic, a completely different type of intelligence - not sure.
            • hhh 13 hours ago
              Most of my experience with 4.5 is similar to Codex 5.1, where I just have to scold it for being dumb and doing things I would have done as a teenager.
              • kachapopopow 12 hours ago
                Dumbness usually comes from a lack of information; humans are the same way. The difference from other LLMs is that if Opus has the information, it has ridiculously high accuracy on tasks.
            • croes 12 hours ago
              Magic when it works.
        • jijji 10 hours ago
          z.ai (Zhipu AI) is a Chinese-run entity, so presumably China's National Intelligence Law, put in place in 2017, which requires data exfiltration back to the government, would apply to the use of this. I wouldn't feel comfortable using any service that has that fundamental requirement.
          • deaux 2 hours ago
            Google, OpenAI, Anthropic, and Y Combinator are US-run entities, so presumably the CLOUD Act and FISA, which require data exfiltration back to the government when asked, on top of all the "Room 641A"s where the NSA directly taps into the ISP interconnects, would apply to the use of them. I wouldn't feel comfortable using any service that has that fundamental requirement.
          • queenkjuul 6 hours ago
            If the Chinese government has the data, at least the US government can't grab it and use it in court.

            Not living in China, I'm not too concerned about the Chinese government.

    • kees99 22 hours ago
      Do you see "What's your use-case" too?

      Claude spits that out very regularly at the end of an answer when it's clearly out of its depth and wants to steer the discussion away from that blind spot.

      • yodon 5 hours ago
        Perhaps being more intentional about adding a use case to your original prompts would make sense if you see that failure mode frequently? (Getting into the habit of treating LLM failures as prompting errors tends to give the best results, even if you feel the LLM "should" have worked with the original prompt.)
      • moltar 20 hours ago
        Hm, I use CC daily and have never seen this.
      • tw1984 18 hours ago
        Never ever saw that "What's your use-case" in Claude Code.
  • adastra22 23 hours ago
    A 40B-parameter model that beats Sonnet 4.5 and GPT-5.1? Can someone explain this to me?
    • dk8996 10 minutes ago
      I would think they did some model pruning. There are some new methods.
    • cadamsdotcom 23 hours ago
      My suspicion (unconfirmed, so take it with a grain of salt) is that they either used some or all of the test data in training, or there was some leakage from the benchmark set into their training set.

      That said, Sonnet 4.5 isn’t new, and there have been loads of innovations recently.

      Exciting to see open models nipping at the heels of the big end of town. Let’s see what shakes out over the coming days.

      • pertymcpert 23 hours ago
        None of these open-source models can actually compete with Sonnet when it comes to real-life usage. They're all benchmaxxed, so in reality they're not "nipping at the heels". Which is a shame.
        • viraptor 21 hours ago
          M2.1 comes close. I'm using it instead of Sonnet for real work every day now, since the price drop is much bigger than the quality drop, and the quality isn't that far off anyway. They're likely one update away from being genuinely better. Also, if you're not in a rush, just letting it run in OpenCode for a few extra minutes to solve any remaining issues will cost you only a couple of cents, but it will likely reach the same end result as Sonnet. That's especially nice on really large tasks like "document everything about feature X in this large codebase, write the docs, now create an independent app that just does X", which can take a very long time.
          • rubslopes 16 hours ago
            I agree. I use Opus 4.5 daily and I'm often trying new models to see how they compare. I didn't think GLM 4.7 was very good, but MiniMax 2.1 is the closest to Sonnet 4.5 I've used. Still not at the same level, and still very much behind Opus, but it is impressive nonetheless.

            FYI I use CC for Anthropic models and OpenCode for everything else.

        • stingraycharles 23 hours ago
          It’s a shame, but it’s also understandable that they cannot compete with SOTA models like Sonnet and Opus.

          They’re focused almost entirely on benchmarks. I think Grok is doing the same thing. I wonder if people could figure out a type of benchmark that cannot be optimized for, like having multiple models compete against each other in something.

          • c7b 21 hours ago
            You can let them play complete-information games (1- or 2-player) with randomly created rulesets. It's very objective, but the thing is that anything can be optimized for; this benchmark would favor models that are good at logic puzzles / chess-style games, possibly at the expense of other capabilities. (A toy sketch of the idea is below.)
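
            A toy sketch of the idea (purely illustrative, not an existing benchmark; the random players stand in for the models under test) generates a fresh Nim-style ruleset for every game, so there is nothing to memorize, and just counts wins:

              import random

              def make_ruleset(seed: int):
                  rng = random.Random(seed)
                  heaps = [rng.randint(1, 9) for _ in range(rng.randint(2, 4))]
                  takes = sorted(rng.sample(range(1, 5), rng.randint(1, 3)))  # legal amounts to remove
                  return heaps, takes

              def legal_moves(heaps, takes):
                  return [(i, t) for i, h in enumerate(heaps) for t in takes if t <= h]

              def random_player(heaps, takes):
                  # Stand-in for "ask model A or B for a move"; a real harness would send the ruleset as text.
                  return random.choice(legal_moves(heaps, takes))

              def play(seed, player0, player1):
                  heaps, takes = make_ruleset(seed)
                  players, turn = [player0, player1], 0
                  while legal_moves(heaps, takes):
                      i, t = players[turn](heaps, takes)
                      heaps[i] -= t
                      turn = 1 - turn
                  return 1 - turn  # the player who cannot move loses, so the other one wins

              wins = [0, 0]
              for seed in range(200):  # a fresh ruleset every game
                  wins[play(seed, random_player, random_player)] += 1
              print("wins:", wins)
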
          • NitpickLawyer 22 hours ago
            swe-rebench is a pretty good indicator. They take "new" tasks every month and test the models on those; for open models that's a solid measure of task performance, since the tasks are collected after the models are released. It's a bit trickier for API-based models, but it's the best concept yet.
          • astrange 19 hours ago
            That's lmarena.
      • satvikpendem 21 hours ago
        You are correct on the leakage, as other comments describe.
    • behnamoh 22 hours ago
      IQuest stands for "it's questionable".
    • arthurcolle 21 hours ago
      Agent hacked the harness
      • yborg 18 hours ago
        Achievement Unlocked: AGI
    • sunrunner 19 hours ago
      “IQuest-Coder was a rat in a maze. And I gave it one way out. To escape, it would have to use self-awareness, imagination, manipulation, git checkout. Now, if that isn't true AI, what the fuck is?”
  • squigz 16 hours ago
    This is a lie, so why is it still on the front page?
  • simonw 22 hours ago
    Has anyone run this yet, either on their own machine or via a hosted API somewhere?