With Claude Code I created an agent that spawns 5 copies of itself, branching git worktrees from the main branch using subagents so no context leaks into their instructions. Every 60 seconds, the agent analyzes the performance of each copy (they run for about 40 minutes), answering the question "what would you do differently?". After they finish the task, the parent updates the .claude/ files, enhancing itself: reverting if the copies performed worse, keeping the changes if they performed better. Then it creates 5 copies of itself, branching git worktrees from the main branch ..........
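The outer loop of a setup like this can be sketched in a few lines. This is a minimal sketch, not the actual agent: the worktree naming, the scoring, and the `should_keep` decision rule are all my own assumptions, and the Claude-specific evaluation step is elided.

```python
import statistics
import subprocess
from pathlib import Path

def spawn_worktrees(n: int, base: str = "main") -> list[Path]:
    """Create n fresh worktrees off main so each copy starts with no leaked context.
    (Hypothetical layout: sibling directories named copy-0 .. copy-n.)"""
    paths = []
    for i in range(n):
        path = Path(f"../copy-{i}")
        subprocess.run(
            ["git", "worktree", "add", "-b", f"copy-{i}", str(path), base],
            check=True,
        )
        paths.append(path)
    return paths

def should_keep(baseline: float, copy_scores: list[float]) -> bool:
    """Keep the updated .claude/ files only if the copies beat the old baseline;
    otherwise the parent reverts. A mean comparison is an assumed, simplistic rule."""
    return statistics.mean(copy_scores) > baseline
```

The pure decision function is the part worth keeping honest: whatever "performance" means, the revert-or-keep step reduces to a comparison like this.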
After 43 iterations, it can turn any website using any transport (WebSocket, GraphQL, gRPC-Web, SSE, JSON API (XHR), Encoded API (base64, protobuf, msgpack, binary), Embedded JSON, SSR, HLS/Media, Hybrid) into a typed JSON API in about 10 - 30 minutes.
Next I'm going to set it loose on a 263 GB database of every stock quote and options trade from the past 4 years. I bet it finds successful trading strategies.
Let us perform a thought experiment. You do this. Many others, enthusiastic about both LLMs and stocks/options, have similar ideas. Do these trading strategies interfere with each other? Does the group of people leveraging Claude for trading end up doing better in the market than those who don't? What are your benchmarks for success, say, a year into it? Do you have a specific edge in mind that you can leverage and others cannot?
you can have it build an execution engine that interfaces with any broker with minimal effort.
how do you have it build a "trading strategy"? it's like asking it to draw you the "best picture".
it will ask you so many questions you end up building the thing yourself.
if you do get something, given that you didn't write it and might not understand how to interpret the data it's using - how will you know whether it's trading alpha or trading risk?
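One common (and by itself insufficient) first check for the alpha-vs-risk question is a t-statistic on daily strategy returns; a strategy can pass it and still just be harvesting a risk premium. A stdlib-only sketch, with the 252-trading-day annualization as the usual assumption:

```python
import math
import statistics

def sharpe_and_tstat(daily_returns: list[float]) -> tuple[float, float]:
    """Annualized Sharpe ratio and t-statistic of the mean daily return.
    High values do not distinguish alpha from compensated risk; they only
    say the mean return is unlikely to be zero in-sample."""
    n = len(daily_returns)
    mu = statistics.mean(daily_returns)
    sd = statistics.stdev(daily_returns)
    sharpe = mu / sd * math.sqrt(252)   # 252 trading days per year (assumed)
    tstat = mu / (sd / math.sqrt(n))
    return sharpe, tstat
```

Out-of-sample and walk-forward testing, plus regressing returns against known risk factors, is where the real answer to "alpha or risk?" starts.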
I couldn't care less about scraping and web automation, and I will likely never use that application.
I am interested in solving a certain class of problems, and getting Claude to build a proxy API for any website is very similar to getting Claude to find alpha. That loop starts with Claude finding academic research, recreating it, doing statistical analysis, refining, updating itself, and iterating.
Claude building a proxy JSON API for any website and Claude building trading strategies are the same problem with the same class of bugs.
I use TimescaleDB, which is fast with compression enabled. People say there are better options, but I don't think I could fit another year of data on my disk drive anyway.
> Next I'm going to set it loose on 263 GB database of every stock quote and options trade in the past 4 years.
Options quotes alone for US equities (or things that trade as such, like ADS/ADRs) represent 40 Gbit per second during options trading hours. There are more than 60 million trades (not quotes, only trades) per day. As the stock market is open approx 250 days per year (a bit more), that's more than 60 billion actual options trades in 4 years. If we're talking about options quotes, you can add several orders of magnitude to these numbers.
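The arithmetic here checks out; for reference, using the figures quoted above (the 6.5-hour session length is my assumption for regular options trading hours):

```python
# >60M options trades/day, ~250 trading days/year, 4 years:
trades_per_day = 60_000_000
total_trades = trades_per_day * 250 * 4
assert total_trades == 60_000_000_000   # 60 billion trades

# Quote traffic at 40 Gbit/s over an assumed 6.5-hour session:
seconds_per_session = int(6.5 * 3600)           # 23,400 s
bytes_per_day = 40e9 / 8 * seconds_per_session  # bits -> bytes
print(f"{bytes_per_day / 1e12:.0f} TB of raw quote traffic per day")
```

At roughly a hundred terabytes of raw quote traffic per day, full quote history is clearly out of reach for a 263 GB database.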
And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!?
I see: I said "stock quote" when I meant "minute aggregates". You are correct that the full data set is much larger, and at ~1.5 TB a year [0] I did not download 6 TB of data onto my laptop.
To quote The Godfather II, "This is the business we have chosen."
The most popular and important command-line tools for developers don't have the consistency that Claude Code's CLI does. One reason Claude Code became so popular is that it worked in the terminal, where many developers spend most of their time. And using tools like Claude Code's CLI is a daily occurrence for many developers. Some IDEs can be just as difficult to use.
For people who don’t use the terminal, Claude Code is available in the Claude desktop app, web browsers and mobile phones. There are trade-offs, but to Anthropic’s credit, they provide these options.
Not really; mostly it's self-explanatory, and its power-user features are discoverable within a few minutes of reading the help. Weirdly, the cheat sheet is actually missing things you can find inside Claude's own help, like /keybinds.
I use Claude Code daily but kept forgetting commands, so I had Claude research every feature from the docs and GitHub, then generate a printable A4 landscape HTML page covering keyboard shortcuts, slash commands, workflows, skills system, memory/CLAUDE.md, MCP setup, CLI flags, and config files.
It's a single HTML file - Claude wrote it and I iterated on the layout. A daily cron job checks the changelog and updates the sheet automatically, tagging new features with a "NEW" badge.
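The changelog-diffing half of a cron job like that can be quite small. A sketch, assuming the changelog lives in the public claude-code GitHub repo and that new features show up as new markdown bullets (the function names here are my own, not from the actual site):

```python
import re
import urllib.request

CHANGELOG_URL = (
    "https://raw.githubusercontent.com/anthropics/claude-code/main/CHANGELOG.md"
)

def bullets(markdown: str) -> set[str]:
    """Extract '- item' bullet lines from a markdown changelog."""
    return {m.group(1).strip() for m in re.finditer(r"^- (.+)$", markdown, re.M)}

def new_entries(old: str, new: str) -> set[str]:
    """Bullets present in the new changelog but not the old one
    are the candidates for a 'NEW' badge."""
    return bullets(new) - bullets(old)

def fetch_changelog() -> str:
    """Network call; this is the part the daily cron would run."""
    with urllib.request.urlopen(CHANGELOG_URL) as resp:
        return resp.read().decode()
```

The cron job would fetch, diff against the last saved copy, and regenerate the HTML only when `new_entries` is non-empty.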
Auto-detects Mac/Windows for the right shortcuts. Shows the current Claude Code version and a dismissible changelog of recent changes at the top.
The link to the changelog on the page got me wondering what the change history looks like (as best we can see).
Thanks for putting this together! It's really nice to have a quick reference of all the features at a glance — especially since new features are being added all the time. Saves a lot of digging through docs.
Claude is actually hilariously bad at knowing about itself. But if you have the secret knowledge that there is a skill on how to use Claude baked into Claude Code, you can invoke it. Then it's really pretty decent.
Similar to prompting hacks to produce better results. If the machine we built to take dumb input and transform it into an answer needs special structuring around that input, then it's not doing a good job of taking dumb input.
Reminds me of Vercel's Rauch talking about his aggressive "any UX mistake is our fault, never the user's" model for evaluating UX.
(It is/was Guillermo who says that, right?)
This should be all of Information Technology's take. Your computers get hacked - IT's fault. Users complain about how hard your software is to use or that it breaks all the time - IT's fault.
The fact users deal with almost everything being objectively not very good if not outright bad is a testament to people adapting to bad circumstances more than anything.
Yeah, I think it is. It's printable if you want a hard copy, and it's up to you when to check for a new version. Since it's auto-updated (ideally), no matter when you visit the site you'll get the most up-to-date version as of that day. The issue (which I don't think this suffers from) would be if formatting it nicely for printing made it less accurate, or if updating it regularly made it worse for printing. These feel like two problems you can generally solve with one fix; they aren't opposed.
It’s not as if you need to know every keystroke and command to use the tool. Nor are all the config files and options not a thing in a GUI. There’s lots of inline help and tips in the CLI interface, and you can learn new features as you go.
Claude Code will be the first to AGI.
Now what is important is developing techniques for detecting patterns, as this can be applied to research, science, and medicine.
Where'd you get the data itself? You can sense, I suppose, everyone's skepticism here.
I don't understand your question. Are you saying the source of the data I linked to is corrupt or lying? Should I be concerned they are selling me false data?
[0] https://massive.com/docs/flat-files/stocks/quotes
I think this would be pretty straightforward for Parquet with ZSTD compression and some smart ordering/partitioning strategies.
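A rough illustration of why sorted, columnar tick data compresses so well. This uses zlib as a stand-in for ZSTD and synthetic one-second epoch-nanosecond timestamps as a stand-in for real quote times; the point is that a sorted column delta-encodes into near-constant values:

```python
import struct
import zlib

# Synthetic monotonically increasing epoch-ns timestamps, 1 s apart.
ts = [1_600_000_000_000_000_000 + i * 1_000_000_000 for i in range(100_000)]
raw = struct.pack(f"<{len(ts)}q", *ts)  # 800,000 bytes of int64

# Delta encoding: a sorted column becomes runs of identical small values,
# which is roughly what Parquet's encodings exploit before ZSTD even runs.
deltas = [ts[0]] + [b - a for a, b in zip(ts, ts[1:])]
delta_raw = struct.pack(f"<{len(deltas)}q", *deltas)

raw_z = zlib.compress(raw, 9)
delta_z = zlib.compress(delta_raw, 9)
print(len(raw), len(raw_z), len(delta_z))  # delta-encoded shrinks dramatically
```

With smart partitioning (by date/ticker) every column within a row group stays sorted or low-cardinality, which is how minute aggregates for years of data can plausibly land in the hundreds-of-GB range.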
It will always be lightweight, free, no signup required: https://cc.storyfox.cz
Ctrl+P to print. Works on mobile too.
There’s something funny about this statement on a description of a key bind cheat sheet. I can’t seem to find ctrl on my phone and I think it may be cmd+p on mac.
I asked chatgpt to chart the number of new bullet points in the CHANGELOG.md file committed by day. I did nothing to verify accuracy, but a cursory glance doesn't disagree:
https://imgur.com/a/tky9Pkz
edit: removed obnoxious list in favor of the link that @thehamkercat shared below.
My favorite is IS_DEMO=1 to remove a little bit of the unnecessary welcome banner.
On Mac it's the same as Windows: Ctrl+V.
You use Cmd+V to paste text.
It's almost as if the thing is not intelligent at all, just another abstraction on top of what we already had.
> Ctrl-F "h"
> 0 results found
Interesting set of shortcuts and slash commands.
This is a bit intense.