I might be missing something here as a non-expert, but isn’t chain-of-thought essentially asking the model to narrate what it’s “thinking,” and then monitoring that narration?
That feels closer to injecting a self-report step than observing internal reasoning.
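Concretely, here's a minimal sketch of what I understand the mechanics to be (`ask_model` is a hypothetical placeholder, not any real client library): the "monitor" only ever sees the text the model chooses to emit, which is exactly the self-report concern.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in your actual API client."""
    raise NotImplementedError

def answer_with_monitored_cot(question: str, flags: list[str]) -> tuple[str, list[str]]:
    # Step 1: inject the self-report step -- ask the model to narrate.
    narration = ask_model(
        f"Think step by step about the question below, writing out your reasoning.\n\n{question}"
    )
    # Step 2: the "monitor" is just string checks (or a classifier) over
    # that narration, not access to activations or internal state.
    hits = [f for f in flags if f.lower() in narration.lower()]
    # Step 3: produce the final answer conditioned on the narration.
    answer = ask_model(
        f"Reasoning so far:\n{narration}\n\nGive the final answer to: {question}"
    )
    return answer, hits
```

Nothing in that loop inspects the model itself; it only inspects the narration it was prompted to produce.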
Implement hooks in Codex, then.