When I was developing my little Mac app with Claude Code, I took many screenshots and shared them with the agent as feedback. You know the drill: press the keyboard shortcut to capture a screenshot directly to the clipboard, then paste it into Claude Code’s message composer. Sometimes I need to capture several screenshots at once, but there is no built-in feature for saving multiple screenshots as separate items and then pasting them individually into the message composer. I have to pick out the images I need in Finder and copy and paste them one by one.1 When things aren’t going well, that process becomes especially painful.
About ten days ago, I saw a post on X from Thorsten Ball, an engineer at Amp. He said:
I didn’t quite understand what he actually did at the time, because I missed his follow-up post.2 But it sounded really compelling. A few days ago, he mentioned the feedback loop again on X. This time, I did find the Amp thread he shared, but I thought the --capture flag came from a custom command or an MCP. I couldn’t figure it out. Still, I assumed he would talk about it on a live stream someday.
And then it came: Thorsten Ball and his colleague Ryan Carson did a live stream on X. In one segment, Thorsten Ball demonstrated how the agent used the feedback loop to get his terminal emulator to display colors correctly (around 22:43). And I finally understood, in broad strokes, what the --capture mode actually is and how the feedback loop works. As someone who isn’t an engineer, I found this genuinely eye-opening.
What Thorsten Ball did was ask Amp to build a feedback loop for itself, so it could see what was actually rendered on the GPU. The result is not an external script or an MCP; it is built into the codebase itself, the terminal emulator Thorsten Ball was developing. It is a feature of the terminal emulator meant for the agent to use, not for the user.
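To make the idea concrete, here is a minimal sketch, in Go, of what such a --capture flag could look like. Everything in it (the binary’s structure, the cell grid, the hard-coded colors) is my own hypothetical illustration, not Thorsten Ball’s actual code; his emulator captures what the GPU really rendered, whereas this sketch just paints a grid of cells by hand.

```go
// capture.go: a hypothetical sketch of a --capture flag that renders one
// frame to a PNG file and exits, so an agent can run the binary and look
// at the actual pixels. Names and structure here are illustrative only.
package main

import (
	"flag"
	"image"
	"image/color"
	"image/png"
	"log"
	"os"
)

// cell stands in for one terminal grid cell; here we only track its background color.
type cell struct{ bg color.RGBA }

func main() {
	capturePath := flag.String("capture", "", "render one frame to this PNG file and exit")
	flag.Parse()

	// Pretend this is the terminal's current screen state: an 80x24 grid.
	const cols, rows, cellW, cellH = 80, 24, 8, 16
	grid := make([]cell, cols*rows)
	for i := range grid {
		// Fill with a simple color gradient standing in for real content.
		grid[i].bg = color.RGBA{R: uint8(i % 256), G: 64, B: 128, A: 255}
	}

	if *capturePath != "" {
		// Paint every cell into an offscreen image instead of onto the screen.
		img := image.NewRGBA(image.Rect(0, 0, cols*cellW, rows*cellH))
		for r := 0; r < rows; r++ {
			for c := 0; c < cols; c++ {
				bg := grid[r*cols+c].bg
				for y := r * cellH; y < (r+1)*cellH; y++ {
					for x := c * cellW; x < (c+1)*cellW; x++ {
						img.Set(x, y, bg)
					}
				}
			}
		}
		f, err := os.Create(*capturePath)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		if err := png.Encode(f, img); err != nil {
			log.Fatal(err)
		}
		return // no interactive session; the agent only needs the image
	}

	// ... the normal interactive terminal loop would live here ...
}
```

With something like this in place, the agent can rebuild the program, run it with --capture frame.png after each change, open the resulting image, and judge its own work from the pixels instead of guessing.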
Ryan Carson asked him, “How long did it take you to build the --capture feedback loop? Was that like a day?” “No, this was 20 minutes,” Thorsten Ball answered.3
Did I say that I found it eye-opening?
You need to watch the demo yourself to feel it: the agent not only wrote the code but also took screenshots to verify that its implementation worked as expected.
Thorsten Ball also ran another fun demo, which they called a “prompt shootout” (around 44:47). He opened two terminal windows side by side, each running Amp, and gave each agent a different prompt asking it to fix an htop rendering issue in his toy terminal emulator. The first prompt told the agent to use the feedback loop; the second never mentioned that a --capture mode existed. The results are worlds apart, especially for this kind of task.
While I was watching the video, I couldn’t help thinking about my little Mac app’s settings window. Would it have been less painful if I had built a feedback loop, a --capture mode of sorts, into the codebase while I was vibe coding the app?
From my perspective—and maybe this analogy is imperfect—it’s like: I know FSD (Full Self-Driving) is a thing, and it’s getting better and better. But it’s another thing entirely when you actually see a car find its own parking spot after you get out, then drive itself back to pick you up when you need it.
At the end of the video, Thorsten Ball said, “I think that’s how codebases are going to change.” “… you want the ability for the agent to get feedback about what it’s working [on] and that’s not just good for the agent; it’s good for the human.”
At this point, if you think I’m exaggerating, listen to Ryan Carson’s reaction: “… oh my God, I just—I never thought I’d get goosebumps talking about code, but I just did.”
“I think with these models getting better, I’ve been starting to think that the goalposts have shifted again,” Thorsten Ball said.
No wonder Zed Industries, the company behind the Zed editor, wants to develop a new kind of database for future collaboration between human engineers and AI agents.3
It’s a great time to witness the AI-driven evolution—if not the outright revolution—of human–computer interaction. I can only imagine how dramatically things will change in the next few years (or months).
1. I guess there are some third-party apps that can do that. ↩︎
2. Apparently, I missed Ball’s follow-up post. He later shared some threads showing how he had worked with Amp to add the feedback loop, and how the agent used it to finish the task. ↩︎
3. In Zed Industries’ latest fundraising announcement, they introduced DeltaDB, describing it as “a new kind of version control that tracks every operation, not just commits.” “DeltaDB uses CRDTs to incrementally record and synchronize changes as they happen. It’s designed to interoperate with Git, but its operation-based design supports real-time interactions that aren’t supported by Git’s snapshots.” ↩︎
