YPWU
Trying to move text and other things
By Titan Wu
Mastodon
  • Subscribe
  • Archive
  • About
  • “The Goalposts Have Shifted Again.”

    Sunday, December 7, 2025
    “The Goalposts Have Shifted Again.”

    When I was developing my little Mac app with Claude Code, I took many screenshots and shared them with the agent as feedback. You know the drill: press the keyboard shortcut to capture the screenshot directly to the clipboard, and then paste it into Claude Code’s message composer. Sometimes I have to capture many screenshots at once. However, there is no built-in feature that allows you to save multiple screenshots as separate items and then paste them individually into the message composer. I have to pick the images I need in Finder and copy and paste them one at a time.1 When things aren’t going well, that process becomes especially painful.

    About ten days ago, I saw a post on X from Thorsten Ball, an engineer at Amp. He said:

    Dude, this was magical.

    I had the agent add a feedback loop to the emulator, so the agent itself can run a command in the terminal emulator and capture the GPU buffer in a PNG file, which it can then look at.

    Then it knocked out a 2D renderer for box drawing characters. https://t.co/GluQt0WVDN pic.twitter.com/Srky1fk4Hz

    — Thorsten Ball (@thorstenball) November 25, 2025

    I didn’t quite understand what he had actually done at the time, because I missed his follow-up post.2 But it sounded really compelling. A few days ago, he mentioned the feedback loop again on X. This time, I did find the Amp thread he shared, but I thought the --capture flag came from a custom command or an MCP, and I couldn’t figure it out. Still, I figured he would talk about it on a live stream someday.

    And here it came: Thorsten Ball and his colleague Ryan Carson did a live stream on X. One of the topics was Thorsten Ball demonstrating how the agent uses the feedback loop to make his terminal emulator display colors correctly (around 22:43). And I finally understood, in broad strokes, what the --capture mode actually is and how the feedback loop works. As someone who isn’t an engineer, I found this genuinely eye-opening.

    What Thorsten Ball did was ask Amp to build a feedback loop for itself, so it could see what was rendered on the GPU. And the result is not an external script or an MCP; it is built into the codebase of the terminal emulator Thorsten Ball was developing. It is a feature built into the terminal emulator for the agent to use, not for the user.

    Ryan Carson asked him, “How long did it take you to build the --capture feedback loop? Was that like a day?” “No, this was 20 minutes,” Thorsten Ball answered.

    Did I say that I found it eye-opening?

    You need to watch the demo yourself to feel it. The agent not only wrote the code but also took screenshots to verify that its implementations worked as expected.

    Thorsten Ball also made another fun demo. They called it a “prompt shootout” (around 44:47). He opened two terminal windows with Amp running in them, side by side. Then he wrote a different prompt for each agent, asking them to fix an htop display issue in his toy terminal emulator. In the first prompt, he told the agent to use the feedback loop; in the second, he didn’t mention that there was a --capture mode to use. You can see that the results are worlds apart, especially for this kind of task.

    While I was watching the video, I couldn’t help thinking about my little Mac app’s settings window. Would it have been less painful if I had built a feedback loop—a --capture mode—into the codebase when I was vibe coding the app?

    From my perspective—and maybe this analogy is imperfect—it’s like: I know FSD (Full Self-Driving) is a thing, and it’s getting better and better. But it’s another thing entirely when you actually see a car find its own parking spot after you get out, then drive itself back to pick you up when you need it.

    At the end of the video, Thorsten Ball said, “I think that’s how codebases are going to change.” “… you want the ability for the agent to get feedback about what it’s working [on] and that’s not just good for the agent; it’s good for the human.”

    At this point, if you think I’m exaggerating, hear Ryan Carson’s reaction: “… oh my God, I just—I never thought I’d get goosebumps talking about code, but I just did.”

    “I think with these models getting better, I’ve been starting to think that the goalposts have shifted again,” Thorsten Ball said.

    No wonder Zed Industries, the company behind the Zed editor, wants to develop a new kind of database for future collaboration between human engineers and AI agents.3

    It’s a great time to witness the AI-driven evolution—if not the outright revolution—of human–computer interaction. I can only imagine how dramatically things will change in the next few years (or months).


    1. I guess there are some third-party apps that can do that. ↩︎

    2. Apparently, I missed Ball’s follow-up post. He later shared some threads and showed us how he worked with Amp to add the feedback loop, and how the agent used it to finish the task. ↩︎

    3. In Zed Industries’ latest round of fundraising announcement, they introduced DeltaDB, describing it as “a new kind of version control that tracks every operation, not just commits.” “DeltaDB uses CRDTs to incrementally record and synchronize changes as they happen. It’s designed to interoperate with Git, but its operation-based design supports real-time interactions that aren’t supported by Git’s snapshots.” ↩︎


  • “Treat Agent Threads as One-Off Notes and Rip Them Off Frequently”

    Saturday, December 6, 2025
    “Treat Agent Threads as One-Off Notes and Rip Them Off Frequently”

    → Flowing with agents with Beyang Liu, CTO of Sourcegraph (Changelog Interviews #658)

    When working with a coding agent, do you consciously keep the conversation short and start a new thread for a new task?

    In September, Sourcegraph’s co-founder and CTO Beyang Liu gave an interview on the podcast Changelog Interviews.1 In the show, he shares his observations on how senior software engineers use coding agents. Contrasting that with how non-engineering users tend to use them, he offers the following advice:

    I would actually recommend… you should treat threads sort of like one-and-done, rip-off notes. Rip them off frequently rather than do the whole… like You don’t need to build the entire app inside a single thread. In fact, I would probably recommend against doing that, because you will get lower quality, higher latency, and more cost if you do that.

    (The quote actually starts at 01:01:05, but I set the video to play from 59:06 to give more context.)

    I am writing this post as a reminder to myself, because I was one of those people who kept working with the coding agent in the same thread, even when the tasks weren’t related, until we ran into the context limit.

    A couple of months ago, I used ChatGPT to turn an idea into a little Mac app. I first used ChatGPT’s web app, and I made a workable prototype in an afternoon. (It’s incredible!) Later, I started using ChatGPT’s Mac app alongside Xcode. Eventually, I switched to Claude Code because of its reputation for coding.

    I had been working with Claude Code in the same thread the whole time, handing it one task after another, until auto compact kicked in. Back then, I knew the thread would need compaction once we got close to the context limit, and I never really thought about why there was even a command to compact the conversation manually. Then one day I realized something was off: Claude Code suddenly seemed “dumber.” Even though we were still in the same thread, it seemed to have forgotten what we had talked about earlier and the tasks it had already completed. It was frustrating.

    That was the moment I understood that the thread itself had become the problem.

    Now I get it. After I listened to the podcast, I changed the way I work with coding agents. When I normalize my news database and build things on top of it with Amp, I keep threads short and, when possible, start a new one or use Amp’s handoff feature.

    If you have the same issue I had, you can also read this guide on managing context from Amp. As the guide puts it, “The longer your conversation goes on, the higher the chances are the model goes ‘off the rails’: hallucinating things that don’t exist, failing to do the same things over and over again, declaring victory while standing on a mountain of glass shards.” It clearly explains why the user should manage context consciously, illustrated with diagrams created in Monodraw. And if you happen to be an Amp user, the guide also walks through a series of Amp features for working with the context window.


    1. Now Sourcegraph and Amp are two separate companies. See “Amp, Inc.” for more details. ↩︎


  • My AI-Powered News Clipping Workflow

    Thursday, November 20, 2025
    My AI-Powered News Clipping Workflow

    I’ve been following the news about technology, startups, and venture capital firms for a long time. Every once in a while, I’ve tried to keep track of the names, the people and companies involved, and what they did while reading tech news. But it always ended up like my childhood attempts at news clipping: I eventually gave up because it was too time-consuming and labor-intensive.

    I still take notes when I read the news. However, if I could extract that kind of structured information from the news, that would be a good add-on.

    Like many others, I’ve been playing around with AI since ChatGPT came out. One day, I read a post from Simon Willison. Through the links in it, I learned about LLM, a CLI tool and Python library he developed for interacting with LLMs (large language models). (To avoid confusion, I will refer to this tool as “LLM CLI” throughout the post.) As I read through the user manual, I discovered an interesting use of LLM CLI in the Schemas section, and eventually, I put together an AI-powered news clipping workflow.

    In this post, I want to share how I built my AI-powered news clipping workflow. I’ve been using and tweaking this workflow from time to time since the end of May, when I started using LLM CLI. I’ll walk through the tools I used and how the system is structured, and hopefully this post will inspire you to adapt the idea for other types of news that interest you. If you are familiar with CLI tools, I think it will be easy to build. If you’re not, it can still be done with some help from ChatGPT or Claude. (I’m in the latter camp.)

    Table of Contents
    • How it Works
    • Why I Built This Workflow and How I Use It
    • The Workflow Breakdown
    • What We Need
    • Step-by-Step Guide to Building the Workflow
    • Some Remaining Issues and Future Potential
    • A Byproduct of Learning

    How it Works

    Basically, I want an LLM to do the news clipping job for me.

    It’s like hiring an intern to read the news I assigned, make news clippings, and organize them into collections that I can easily refer to whenever needed.

    What I built is an automated workflow that extracts and classifies information from tech news, especially news about tech startups and VC firms, using the LLM of my choice. Here is a simplified overview of the workflow, and I will leave some details for later:

    1. I read a tech news article about a startup that raised funding from several venture capital firms. Then I decide whether to add those people, organizations, and events to my database. You may wonder why I don’t record as much startup news as possible without reading it first. I will address this question later.
    2. I send the news text to a remote LLM, such as ChatGPT or Gemini, through LLM CLI. The model extracts structured information from the article, including the people and organizations mentioned, whether they are VCs, their roles, and the actions they took, and returns it in valid JSON format.
    3. I review and import the extracted data into a SQLite database, which I can later query or review as needed.

    Take this news article, 〈Zed Raises $32M Series B Led by Sequoia to Scale Collaborative AI Coding Vision〉, as an example. The returned structured data looks like the following:

    {"items":[{"name":"Nathan Sobo","organization":"Zed Industries","role":"CEO and co-founder","is_vc":"unknown","learned":"Nathan Sobo is the CEO and co-founder of Zed, emphasizing the importance of collaborative coding and linking conversations directly to specific sections of the code.","article_headline":"Zed Raises $32M Series B Led by Sequoia to Scale Collaborative AI Coding Vision","article_date":"2025-08-20"},{"name":"Sonya Huang","organization":"Sequoia Capital","role":"Partner","is_vc":"yes","learned":"Sonya Huang is a Partner at Sequoia Capital and expressed excitement about Zed's innovative approach to collaborative coding, indicating it represents a significant shift in software development.","article_headline":"Zed Raises $32M Series B Led by Sequoia to Scale Collaborative AI Coding Vision","article_date":"2025-08-20"},{"name":"none","organization":"Zed Industries","role":"none","is_vc":"unknown","learned":"Zed Industries is the creator of Zed, a high-performance open-source code editor focused on collaborative coding capabilities, which recently raised $32 million in Series B funding led by Sequoia Capital.","article_headline":"Zed Raises $32M Series B Led by Sequoia to Scale Collaborative AI Coding Vision","article_date":"2025-08-20"},{"name":"none","organization":"Sequoia Capital","role":"none","is_vc":"yes","learned":"Sequoia Capital led the $32 million Series B funding round for Zed Industries, indicating their support for innovative technology in developer tools.","article_headline":"Zed Raises $32M Series B Led by Sequoia to Scale Collaborative AI Coding Vision","article_date":"2025-08-20"}]}

    After importing, it will look like the following screenshot in a database:

    • A Datasette table displaying four rows of structured news-extracted records showing names, organizations, roles, learned summaries, article headlines, dates, and VC status.
      ▲ After importing the JSON data into the database, this is the view of that database in the browser using Datasette

    Eventually, I combined parts 2 and 3 into one command. If I want to skip the review step, I can just read the news and send it to an LLM, and the news clipping job gets done.

    Why I Built This Workflow and How I Use It

    Before we move on, I must emphasize that I only apply this workflow to news articles I have personally read. I’m not inclined to process as much news as possible just because it can all be done quickly and automatically. I built this tool to give myself a “sort of” reliable source that helps me keep a close eye on tech and the VC industry.

    Another thing I hope you keep in mind is that my workflow is not intended to achieve 100% accuracy. And that’s why I say this workflow is “sort of” reliable.

    In my experience, an LLM usually succeeds in determining whether someone is a VC based on the news content, but for various reasons it sometimes misses. Its accuracy will never reach 100%, so I don’t count on it to generate perfectly correct data. And that’s why I only use it to process what I’ve read, not what I haven’t.

    After all, I have some confidence in remembering what I read. If I need to check something, the database is always a few keystrokes away. I can retrieve the source (the news article I read) from it.

    A good way to think about LLMs is to treat them like interns. Sometimes you feel grateful for their contribution and think they have a lot of potential, and sometimes you feel you should have done the work yourself. Technology analyst Benedict Evans often compares AI to interns; he once said:1

    If you have 100 interns, you can ask them to do a bunch of work, and you would need to check the results and some of the results would be bad, but that would still be much better than having to do all of the work yourself from scratch.

    The Workflow Breakdown

    Now, let’s take a closer look at how my workflow operates. The workflow consists of the following steps:

    1. Read a news article
    2. Prepare the text content of the news articles I’ve read
    3. Pipe the article text to LLM CLI. It treats the piped content as part of the prompt it sends, along with the other arguments, to the LLM you designate.
    4. Apply the schema and template. They are the keys to the workflow: we need a way to make sure the LLM returns the structured data we want in valid JSON format, and LLM CLI’s schema and template features do exactly that. We will talk more about them later.
    5. Get the LLM’s response in valid JSON format
    6. Import the data into a SQL database
    7. Review, query, or update the data as needed
    • A vertical flowchart of my AI-powered news clipping workflow. It shows the pipeline from extracting article text, sending it through LLM CLI with a template, receiving structured JSON, and importing it into a SQLite database.
      ▲ A vertical flowchart of my AI-powered news clipping workflow

    What We Need

    Most parts of my workflow run in the terminal shell. I use zsh in Ghostty. And to build up this news clipping workflow, we also need:

    • An LLM API key (or you can use a local LLM)
    • A tool that can fetch the news content. I use the Readwise Reader API to retrieve the full text of the articles I’ve saved and their metadata. There are other options, such as Instapaper’s Full Developer API or Jina AI. In the LLM CLI user manual, Simon Willison uses strip-tags in an example. LLM CLI also supports attaching a PDF with a prompt and sending it to the LLM. So if, for some reason, I can’t extract the text, I can use PDF as a fallback
    • A database. I use SQLite, which comes with macOS. I also use sqlite-utils and Datasette to work with the database, both developed by Simon Willison.

    Step-by-Step Guide to Building the Workflow

    Now that we have all the ingredients, let’s put the workflow together. Here is an overview of the steps:

    1. Get the text
    2. Install and Set Up LLM CLI
    3. Design the Schema and the Template
    4. Pick a Model
    5. Import the Data
    6. Play with the Database

    1. Get the Text

    The first step is to get the full text of the news article I’ve read. I asked ChatGPT to write me a zsh function that lets me designate the article I want, either with Readwise Reader’s tag feature or by the article ID (or just grab the latest article I saved), and then pipes the full text, along with some metadata such as the title and date, to LLM CLI.
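
    A rough sketch of the idea looks something like this. It’s not my actual function: the reader-latest name is made up, and the endpoint and field names (withHtmlContent, html_content, published_date) are from my memory of the Readwise Reader API docs, so double-check them. It also needs jq installed.

    # Sketch: fetch the most recently saved Reader document and print its title,
    # date, and plain text. Assumes READWISE_TOKEN is set in your environment.
    reader-latest() {
      curl -s -H "Authorization: Token $READWISE_TOKEN" \
        "https://readwise.io/api/v3/list/?withHtmlContent=true" |
        jq -r '.results[0] | "\(.title)\n\(.published_date)\n\n\(.html_content)"' |
        uvx strip-tags
    }

    # Usage: reader-latest | llm -t people
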

    If you just want to try it quickly, you can use Jina AI to get the full text of an online post (although it won’t be as clean as the results I got from Readwise Reader):

    
    curl "https://r.jina.ai/https://www.example.com"
    # replace `https://www.example.com` with the url of the news article you want to process
    
    

    The cleaner the full text of the news article you send to the LLM is, the higher the chance you’ll get high-quality output. Or you can simply use a powerful (and usually more expensive) LLM to get a better output.

    2. Install and Set Up LLM CLI

    Before we move to the next step, you need to install LLM CLI. You can find the instructions on the website. I used pipx to install LLM CLI. But I’m considering using uv instead. If you want to use Homebrew to install LLM CLI, please read the warning note.

    
    pipx install llm
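
    If you’d rather go the uv route instead, the equivalent is a one-liner (assuming uv is already installed):

    uv tool install llm
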
    
    

    After installation, you need to set an API key for the LLM provider you want to use. If you want to quickly test many different LLMs to find the one that returns the results you need, you will also need to install plugins for remote models from other providers, such as Anthropic or Google Gemini.

    
    llm keys set openai
    
    

    Then you will be prompted to enter the key like this:

    
    % llm keys set openai
    Enter key:
    
    

    The default model is gpt-4o-mini, but you can change it to whatever you want. With the corresponding plugins installed, you can use many models other than OpenAI’s.
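
    For example, to try Google’s models you can install the Gemini plugin, set its key, and make one of its models the default. The plugin name and key name below should be right, but model IDs change often, so run llm models to see what’s actually available:

    llm install llm-gemini
    llm keys set gemini
    # list the models the plugin provides, then pick one as the default
    llm models
    llm models default gemini-2.5-flash-lite
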

    After that, you can start designing and setting up the schema and template. Later, you can add the template as an argument to the -t option. You can find very detailed tutorials on setting up LLM CLI and other things we need on its website.

    3. Design the Schema and the Template

    The most crucial part is to make sure the LLM does precisely what you ask and returns the data in the correct format. And that, for LLM CLI, means designing the right schema and template, which determine what and how the LLMs extract information from the article and return it to you.

    Fortunately, Simon Willison already provided a good foundation in the Schemas section of LLM CLI’s user manual. I basically added a few more items and tried it with different models to see what I got from the news articles.

    As he writes there, “Large Language Models are very good at producing structured output as JSON or other formats. LLM’s schema feature allows you to define the exact structure of JSON data you want to receive from a model.” (In that quote, “LLM” refers to Simon Willison’s LLM CLI.)

    In LLM CLI, a schema defines the exact JSON structure you want the model to return, including fields such as name, organization, role, and so on.

    A template builds on top of that schema and includes the system prompt, model selection, and other options. By combining these parts, it tells the LLM what to extract and how to perform the extraction.

    In practice, you can use a template to ask an LLM to extract specific information from a news article and return it as structured JSON data, which can then be imported into a database. Together, the schema and template features make it possible to reliably extract structured data from unstructured text.

    Here, you can define a schema for what we want the LLM to return and save it as a template:

    
    llm --schema $'items:\n  name\n  organization\n  role\n  is_vc bool\n  learned\n  article_headline\n  article_date' \
      --save people
    
    

    Next, you can edit the template we just saved to tell the LLM how we want it to do the news clipping job. To edit a saved template, we can run:

    
    llm templates edit people
    
    

    This opens the template in your default shell editor (for example, the editor specified by $EDITOR or $VISUAL). After saving and closing the editor, the template is updated. The edited template may look like this:

    system: |
      Extract all people and organizations from a startup funding article.
    schema_object:
        type: object
        properties:
            items:
                type: array
                items:
                    type: object
                    properties:
                        name:
                            type: string
                        organization:
                            type: string
                        role:
                            type: string
                        is_vc:
                            type: boolean
                        learned:
                            type: string
                        article_headline:
                            type: string
                        article_date:
                            type: string
                    required:
                    - name
                    - organization
                    - role
                    - is_vc
                    - learned
                    - article_headline
                    - article_date
        required:
        - items
    
    

    There are some interesting challenges for LLMs in extracting the correct information. For instance, sometimes Readwise Reader picks up promotional text in an article, and that text happens to be about VCs and startups. To prevent this irrelevant content from contaminating the data, you can add one line to the template: “Do not include any information that comes from advertisements, sponsored content, or promotional material.”

    4. Pick a Model

    The quality of the output we get (using the same template) depends on which model we use. LLM CLI provides a handy interface that lets us quickly switch between many LLMs via its plugin system.

    After several iterations, I found that gpt-4o-mini is quite suitable for its cost. Of course, you can use a more powerful LLM like GPT-5 to get better results, but it’s way more expensive. Considering the API cost and the quality of the results, I think Google’s Gemini 2.5 Flash-Lite is also one of the best choices. With these two models, I consistently get good results across different news articles from various outlets.

    Because I don’t see the LLMs as 100% accurate tools, I think the key is to find the balance between cost and good-enough results.

    If you want to check the LLM’s output before importing the data into the database, you can split the workflow into two parts, using the LLM’s response as the break point.

    5. Import the Data

    LLM CLI has a great feature: it automatically logs every conversation to a SQLite database. (It can be turned off.) This means we don’t need to capture the model’s output immediately after receiving the returned data, because we may want to ask the LLM to modify the data or try another LLM. Later, we can extract the structured JSON directly from the logs and import it into our own SQLite database using sqlite-utils. (Here’s a link that explains how to work with the LLM CLI logs.)

    To set up the uniqueness key and import the data:

    
    # 1. Retrieve the structured JSON from the most recent LLM response
    llm logs -c --schema t:funding_people --data > /tmp/funding.ndjson
    
    # 2. Insert it into our SQLite database using a unique constraint
    sqlite-utils insert data.db people_orgs /tmp/funding.ndjson --nl \
      --unique name organization article_headline
    
    

    Using --unique here is intentional. Because we may need to make further adjustments in the near term, a unique constraint provides duplicate protection without forcing the table to use a rigid primary key. This makes the pipeline safe to rerun and easier to adjust as the structure of your extracted data changes over time.

    Putting everything together, here is a one-liner that demonstrates how all of these individual steps come together. This example fetches the article, strips the HTML, applies the template, sends it to the remote LLM, and finally imports the returned structured data into your SQLite database using the logged result:

    
    curl "<ARTICLE_URL>" | \
      uvx strip-tags | \
      llm -t people > /dev/null && \
      llm logs -c --schema t:people --data-key items --data | \
      sqlite-utils insert data.db people --nl --unique name organization article_headline
    
    

    This compact pipeline shows the entire workflow executed in one command: retrieve the full text, clean it, extract structured data using your template, read the result back from LLM CLI’s logs, and load it into your database with a unique constraint.

    6. Play with the Database

    After that, you can use Datasette to review the database in your web browser:

    
    datasette "<PATH_TO_DATABASE>"
    
    

    It will show you something like:

    
    INFO:     Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
    
    

    Then you can open the URL in a browser to interact with the database.

    Now that we have a database, we can run some queries. For now, what I need is simple: filtering for the VC people and assembling them into a table. So I wrote some scripts that run the query and display that information quickly in my terminal. Later, I happened to learn about Nushell and really liked the way it displays data. I’ve included two screenshots for reference: one shows the data extracted from the Zed fundraising news as rendered in Nushell, and the other shows part of the list of VC people in the database.

    Four separate Nushell-formatted blocks showing individual records, each listing name, organization, role, article date, headline, and learned fields.
    A terminal table listing multiple investors with columns for name, organization, and role, including entries from General Catalyst, Long Journey Ventures, Chemistry, and others.
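
    For the VC list, the underlying query can be as simple as the sketch below. One caveat: depending on how the model fills in is_vc, the column may hold strings like "yes" and "unknown" (as in the JSON above) or 1/0 values, so adjust the WHERE clause to match your data:

    # Print a quick table of the people marked as VCs
    sqlite-utils data.db \
      "select name, organization, role from people where is_vc = 'yes' order by organization" \
      --table
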

    In an ideal circumstance, I would export the data into separate tables such as founders, venture capitalists, startups, and VC firms. I could take it a step further by asking an LLM to return another JSON array describing the funding activities, including the amount of money, the valuation if mentioned in the article, the round type, and so on. For now, I am keeping everything in a single table because the data is still fairly simple.

    As some readers may be thinking by now, typing this series of commands manually is a bit cumbersome. Therefore, I asked ChatGPT to write some scripts to automate everything, and then I turned the scripts into zsh aliases. Now my whole workflow is:

    1. Read the article
    2. Send it to Readwise Reader with a tag using a web browser extension or a single Alfred command
    3. Open Ghostty’s quick terminal and input the alias. Done
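
    As an illustration, one of those wrappers might look roughly like this. The clipnews name is hypothetical, reader-latest is the article-fetching helper from step 1, and the pipeline is the same one from step 5:

    # Hypothetical zsh function: pull the latest saved article, run the template,
    # then import the logged JSON into the database.
    clipnews() {
      reader-latest | llm -t people > /dev/null && \
        llm logs -c --schema t:people --data-key items --data | \
        sqlite-utils insert data.db people --nl --unique name organization article_headline
    }
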

    I think I can even make this shorter by combining steps 2 and 3 via a self-made Alfred Workflow.

    Some Remaining Issues and Future Potential

    There is another well-known characteristic of LLMs to consider: they can’t produce deterministic results, which means we can’t expect the LLM to generate precisely the same role title, in the same style, for the same person or organization. For instance, some articles refer to the firm as “a16z,” while others use its full name, “Andreessen Horowitz.” Similarly, even for the same role, such as “CEO,” an LLM might output “ceo” in lowercase or spell it out as “chief executive officer,” not to mention other inconsistencies in capitalization. An LLM simply can’t retain memory across calls in the way I use the API. (There are many traditional solutions for these kinds of issues, though.)

    Recently, I’ve been playing with Sourcegraph’s coding agent, Amp Code. It’s a very good product, and the pace of the team’s development is incredible. I used its smart mode to normalize my flat table, then kept building on it using its free mode. I have already completed the database migration and started building a new version of my workflow. For instance, I can now answer questions like “Who is involved in Zed Industries’s latest round of fundraising?” more efficiently.

    Or even better, with this database, I can use an LLM CLI plugin like llm-tools-sqlite to ask an LLM questions about the data in natural language, and the model will generate the necessary SQL queries to retrieve the answer. I can also connect this database to Claude Desktop using an MCP server, which allows me to query it in natural language directly from the app.
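
    I haven’t settled on this setup yet, so take the invocation below as an assumption based on my reading of the plugin’s README rather than something I run daily, and check the llm-tools-sqlite documentation for the exact syntax:

    llm install llm-tools-sqlite
    # Ask a question in natural language; the model writes and runs the SQL itself.
    # --td prints the tool calls so you can see the generated queries.
    llm -T 'SQLite("data.db")' "Which investors appear most often in my clippings?" --td
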

    A Byproduct of Learning

    I’ve spent more and more time in the terminal and shell environment after I started playing with Raspberry Pi a few years ago. By building up this workflow, I got familiar with LLM CLI and a bunch of other new tools. I’ve learned that shell scripts and Python scripts are nimble and versatile. With the help of ChatGPT and coding agents like Claude Code or Amp Code, I can quickly experiment with many ideas to improve the workflow. In the end, this AI news clipping workflow is more like a byproduct of my learning to use the software. It is really fun.

    Again, I don’t use this workflow to process as many articles as possible. I use it as a reference to what I’ve already read. It’s more like a pitcher’s pitch-by-pitch tracking data: it doesn’t capture the whole game, and it certainly doesn’t replace the conversation between the pitcher and the pitching coach. The core is still the same: reading the news, taking notes, and writing down my own thoughts.

    As I mentioned earlier, it’s possible to apply this workflow to many other kinds of news, like professional sports player trades, business, or politics, once you can break the news content down into structured elements, such as people, organizations, events, and so on. If you give it a try, I’d be happy to hear how it goes.


    1. Are better models better? — Benedict Evans ↩︎


  • “Tailscale was made for this”

    Wednesday, October 15, 2025

    → NVIDIA DGX Spark: great hardware, early days for the ecosystem

    This post from Simon Willison is mainly about NVIDIA’s DGX Spark, which has just started shipping, but what really caught my attention was the part where he mentions Tailscale, under the subheading “Tailscale was made for this”:

    Having a machine like this on my local network is neat, but what’s even neater is being able to access it from anywhere else in the world, from both my phone and my laptop.

    Tailscale is perfect for this. I installed it on the Spark (using the Ubuntu instructions here), signed in with my SSO account (via Google)… and the Spark showed up in the “Network Devices” panel on my laptop and phone instantly.

    I can SSH in from my laptop or using the Termius iPhone app on my phone. I’ve also been running tools like Open WebUI which give me a mobile-friendly web interface for interacting with LLMs on the Spark.

    That is what I’ve been talking about with my friends. Although I don’t have a powerful Mac mini or a shiny DGX Spark, the way I use Tailscale is approximately what Simon Willison described in his post.

    When I take a break between bouldering sessions, believe it or not, I sometimes use the Termius app on my iPhone to SSH into my Mac at home, check Claude Code’s work, or assign it new tasks via tmux running in Ghostty on my Mac.1 Or when I come across noteworthy startup fundraising news on the go, I can use my iPhone to pull the same trick: ask an LLM to do the news clipping for me via Simon Willison’s LLM CLI running on my Mac at home. (Hopefully I will write about my LLM news clipping workflow soon. 2025-11-20 Update: I wrote a post about it.)

    Sometimes how I use Tailscale has nothing to do with AI. For instance, I host Linkding, an open source bookmark web app, on my Raspberry Pi, and I want to use it on my iPhone without exposing it on the internet. In this case, I can use Tailscale Serve to securely access it through my tailnet as if I were on the same local network.
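
    On the Raspberry Pi, that part is roughly a one-liner. The port below assumes Linkding’s default (9090), a reasonably recent Tailscale release, and that HTTPS certificates are enabled for the tailnet:

    # Proxy the local Linkding instance to HTTPS on the tailnet, running in the background
    sudo tailscale serve --bg 9090
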

    Another use case is about safely using untrusted Wi-Fi. I use one of my Raspberry Pis as a Tailscale exit node, so when I’m at a coffee shop with untrusted Wi-Fi, I can turn on Tailscale on my MacBook Air and securely route all my traffic through the exit node—Tailscale encrypts every packet between my MacBook Air and my Raspberry Pi using WireGuard,2 so even on an untrusted Wi-Fi, no one can snoop on my connection. In fact, I created a Keyboard Maestro automation that connects my MacBook Air to the Tailscale exit node whenever it joins a Wi-Fi network that’s not on my allowlist.
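
    If you want to replicate the exit-node part, the broad strokes look like this. The hostname is a placeholder, the Pi needs IP forwarding enabled (see Tailscale’s exit-node docs), and the exit node still has to be approved in the admin console:

    # On the Raspberry Pi: advertise this machine as an exit node
    sudo tailscale up --advertise-exit-node

    # On the Mac: route all traffic through the Pi (replace raspberrypi with your node's name)
    tailscale set --exit-node=raspberrypi
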

    As a user, I appreciate how easy Tailscale is to set up, even though I’m only using a fraction of its capabilities. As an observer who is interested in the tech startup scene, I’ll definitely keep a close eye on how Tailscale grows as a business.


    1. I wrote about Ghostty earlier this year. It is a project from Mitchell Hashimoto that I can’t recommend highly enough. ↩︎

    2. Tailscale encryption · Tailscale Docs ↩︎


  • Macrowave: An Easy Way to Turn your Mac Into a Radio Station

    Monday, August 11, 2025

    In July, I came across a thread on Mastodon about a new app called Macrowave. It is “a native macOS & iOS app that makes it easy and fun to share system audio with friends to listen to music together,” as the co-creator, Lucas Fischer, said in this toot. A few days ago, Macrowave officially launched.

    Macrowave's Mac app interface in a retro style showing an “ON AIR” button, audio controls,  and text that reads “Turn Your Mac Into a Private Radio Station.”
    ▲ Macrowave’s Mac app in “Broadcaster” mode (Source: Macrowave’s press kit)

    What caught my eye at first was, of course, the retro-style visual design. It resembles a portable radio and reminds me of the Apple Podcasts app from 2012. I don’t know the two developers behind Macrowave personally, but I bet they had fun designing it.

    But the real deal for me is the ability to easily share my Mac’s system audio, which means I can run my own station. I’ve always been into audio broadcasting: not only have I been a long-time radio and podcast listener, but I also produced and hosted several podcasts.

    Three Steps to Start Your Station

    Macrowave streams audio via low-latency WebRTC peer-to-peer connections.1 Its Mac app has two modes: “Broadcaster” lets you stream audio, and “Receiver” is for listening to stations.

    If you want to make a live audio broadcast, you just need three steps:

    1. Sign up with your Apple Account
    2. Set up a username
    3. Give your station a name and a short description

    After that, you can share the link to your station with others to let them tune in. If your audience has enabled notifications, they will be notified when your station is on air. People can listen to the station through Macrowave’s Mac and iOS apps, or via a webpage.

    Macrowave app shown on an iPhone and Mac, with green retro-style interfaces, displaying listener count and track info, and captioned “Listeners can tune in from iPhone, Mac or a web browser.”
    ▲ Macrowave’s iOS and Mac apps in Receiver mode, with the iOS version limited to listening to stations. (Source: Macrowave’s press kit)

    You can choose what audio to stream—whether it’s the system audio (everything you normally hear from your Mac), audio from a specific app, or even from a single window. (Important: you must use this app wisely, as Macrowave states at the bottom of its website, “broadcasters are responsible for obtaining appropriate licenses for any copyrighted content they stream.”)

    In an update released shortly after launch, Macrowave revealed its subscription pricing for broadcasting: $3 per week (with a three-day free trial), $8 per month, and $60 per year.

    I’ve been using Rogue Amoeba’s Audio Hijack for years. One of its features, the Broadcast block, can stream audio from your Mac to an internet streaming server powered by Shoutcast or Icecast,2 allowing you to run a live broadcast. While powerful, it requires more setup and technical know-how. Macrowave, on the other hand, makes the process far simpler and more approachable for casual broadcasters.

    My Experience

    From my brief experience using the app, I found that it doesn’t work well with my Audient iD4 audio interface. The mic input works fine, but for some reason, I couldn’t get the music playback to work in conjunction with it. However, if you use the Mac’s built-in mic (or something like EarPods connected via the audio jack), it works well.

    As mentioned earlier, people can listen to your station via a link. In the first version, changing your username didn’t automatically update the link, but yesterday’s update (version 1.0.2) fixed this. If you encounter this issue, make sure you’re running the latest version.

    Since broadcasting your Mac’s system audio requires specific permissions, you might wonder why it asks for “Screen & System Audio Recording” instead of just “System Audio Recording.” According to Lucas Fischer, this is due to a technical limitation of their upstream provider, even though the app only uses the audio portion of screen sharing. They are working on it and plan to release a new version that requests only the “System Audio Recording” permission, but they must wait for the necessary changes from their provider.3

    Thoughts and Questions

    Macrowave has been officially available for less than a week, yet I already have many questions about where it’s headed. Will it become a two-way audio platform like Clubhouse or X Spaces, or stay as a simple broadcasting app? Will it introduce a text-based chatroom alongside each station? How does the app help users discover new stations? While I don’t think Macrowave is necessarily destined to be a major hit, there’s certainly room for a niche product like it.

    Some aspects of the interface and visual design could be improved. For instance, the app lacks safeguards to prevent accidental quitting while broadcasting, and certain UI elements like button shadows don’t feel right to me. Still, Macrowave is genuinely a fun and intuitive app to use.


    1. Macrowave – Radio Broadcaster ↩︎

    2. Notes on Audio Hijack’s Broadcast block – Rogue Amoeba Support ↩︎

    3. post.lurk.org – Lucas ✦: “@mcg Currently, yes. That is a…” ↩︎


  • Customizing MailMate: Display Sender Addresses in Your Message List

    Monday, July 7, 2025

    Although I didn’t join MailMate’s mailing list, I visit the archive from time to time, and sometimes I learn good tips and tweaks from it. Today, I’d like to share a useful tweak I recently came across in this thread: how to add a custom column to display a sender’s email address in MailMate’s message list, along with some variations.

    In short, the OP wanted to have a column in MailMate’s message list to display the sender’s email address, so he can spot potential phishing emails instead of opening them to check From: in the message view.1 By default, the From column in the message list shows only the sender’s name, and there is no such Address column in the View > Columns menu.

    Out of curiosity, I checked Gmail’s web app, Mimestream (a 3rd party Mac email client for Gmail), and Fastmail, but none of them show the sender’s email address in the message list by default. They only show the address information in the message view.

    A few days later, Benny Kjær Nielsen, the developer of MailMate, replied with a solution:

    If you create the following path and then save the attached file then you should have a new column available for the message list (named “From Address”):

    /Users/<username>/Library/Application\ Support/MailMate/Resources/MmMessageListView/

    Here is the content of the plist file attached at the end of Benny’s response:

    {
        columns =
        {
            fromAddress =
            {
                title = "From Address";
                sortKey = "from.address";
                formatting =
                {
                    formatString = "${from.address}";
                    placeholderString = "(No Sender)";
                    doubleClick =
                    {
                        titleImage = "NSUser";
                        titleSymbol = "person.fill";
                        titleFormatting = { prefixString = "From "; formatString = "“${from.address}”"; separator = " or "; };
                        queryFormatting = { formatString = "from.address = '${from.address}'"; separator = " or "; escapeSingleQuotes = 1; };
                    };
                };
                relatedSearches =
                (
                    {
                        titleImage = "NSUser";
                        titleSymbol = "person.fill";
                        titleFormatting = { formatString = "From “${from.address}”"; separator = " or "; };
                        queryFormatting = { formatString = "from.address = '${from.address}'"; separator = " or "; escapeSingleQuotes = 1; };
                    },
                    {
                        titleImage = "NSUser";
                        titleSymbol = "person.fill";
                        titleFormatting = { formatString = "From “${from.name}”"; separator = " or "; };
                        queryFormatting = { formatString = "from.name = '${from.name}'"; separator = " or "; escapeSingleQuotes = 1; };
                    },
                    {
                        titleImage = "NSUser";
                        titleSymbol = "person.fill";
                        titleFormatting = { formatString = "From or To “${from.address}”"; separator = " or "; };
                        queryFormatting = { formatString = "#any-address.address = '${from.address}'"; separator = " or "; escapeSingleQuotes = 1; };
                    },
                    {
                        titleImage = "NSUser";
                        titleSymbol = "person.fill";
                        titleFormatting = { formatString = "From or To “${from.name}”"; separator = " or "; };
                        queryFormatting = { formatString = "#any-address.name = '${from.name}'"; separator = " or "; escapeSingleQuotes = 1; };
                    },
                );
            };
        };
    }
    
    

    But Benny didn’t mention the name of the attached file. (As I said, I didn’t join the mailing list, so I don’t know what the attached file would look like in a real email.) Then I recalled that I had read something about it in MailMate’s user manual; it should be named outlineColumns.plist.
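
    In other words, the setup boils down to creating that folder, dropping the file in, and relaunching MailMate:

    mkdir -p ~/Library/Application\ Support/MailMate/Resources/MmMessageListView
    # save the plist content above into that folder as outlineColumns.plist, then relaunch MailMate
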

    Once you have finished the setup and relaunched MailMate, you should see a new option named From Address in the View > Columns menu. Now you can combine the default From and the new From Address column to display the complete information about the sender in the message list.

    • A MailMate message list with both "From" and "From Address" columns visible. The "From" column displays the sender’s name, while the "From Address" column shows the full standard email address, including both the username and the domain. This layout takes up more space compared to the single-column display.
      ▲ You can combine the default “From” and the new “From Address” column to display the complete information about the sender in the message list. (The text in the screenshot looks odd because I enabled “Distortion Mode” in MailMate.)

    An Alternative Tweak

    However, if you feel that those two columns use too much space on the screen, the OP provided a tweak to mitigate it. You can replace formatString = "${from.address}"; in line 10 (of the content of the plist file I showed above) with formatString = "${from.name:-${from.address}}${from.name:+ (@${from.domain})}";. Now you get Someone (@example.com) instead of someone@example.com in the From Address.

    After all, what the OP wants is to verify whether the domain name of the sender’s address looks suspicious. There is probably no need to display the username in the message list. Additionally, since we have both the sender’s name and the domain of the address in From Address, this makes the From column redundant.

    If you make this tweak, you may want to hide the From column. But at this moment, the From option is not only checked but also grayed out in the View > Columns menu. To remove the From column, you first have to choose the From Address option in View > Columns > Outline Column. Then you can uncheck the From option in the View > Columns menu.

    • A MailMate message list showing only the "From Address" column, where each row displays the sender’s name followed by the email address in angle brackets. The user name part before the "@" is removed, so only the domain is shown, resulting in a more compact display that saves horizontal space.
      ▲ The OP’s design can replace the default “From” column and save some screen space. As you can see, I replaced “()” with “<>” to mimic the usual way of displaying sender information, such as “Someone <someone@example.com>”.

    A Little Tweak, Huge Satisfaction

    I appreciate the OP’s design for presenting more information in a compact and streamlined way. The discussion thread I mentioned earlier also demonstrates just how customizable MailMate is.

    In today’s Mac ecosystem, it’s increasingly rare to find apps that allow this kind of low-level tweaking. Apple could do more to encourage users to explore slightly more advanced interactions with macOS.

    Don’t get me wrong. It’s not that Apple has done nothing. Shortcuts is a very good example that encourages users to try building automations that fit their needs. But Apple could go further.

    As users become more comfortable with this kind of control, they’ll start to expect—and value—more flexible and powerful Mac software.

    For users, in an age where help is just a search or a chat away, discovering that they can interact with their computers in this way will not only help them get things done but also reveal that they can accomplish much more. And more importantly, it’s fun and delightful.

    Related Posts

    • Three Useful Hidden Preferences in MailMate

    1. Speaking of phishing emails, Benny wrote a post about how he implemented a feature that warns the user about email address spoofing. The mechanism is simple and clever:

      Whenever the name part of an address header contains a @ then it’s replaced with a skull (💀). That should at least make the user aware of simple attempts to spoof an address header. ↩︎


  • Three Useful Hidden Preferences in MailMate

    Thursday, March 27, 2025

    I like MailMate and use it as my default email app on my Mac. I wrote about MailMate in this post (in Traditional Chinese, though). One of its biggest strengths is that it’s highly customizable.

    However, many of its customizations aren’t available in the app’s settings panel. You have to set up these hidden preferences through the Terminal. You can find the relevant instructions in MailMate’s user manual. Today, I want to share three useful hidden preferences in MailMate that make it more convenient and better suited to my needs.

    1. Set a “Delayed Sending Time”

    The first one is setting a default delayed sending time. As the name suggests, this lets you specify a delay before emails are actually sent.

    MailMate does have a built-in “Send Later” option in the composer. There’s a list icon (with a “⌄” symbol) in the upper-left corner of the window, below the paper plane icon (the “Send Message” button). “Send Later” is in the dropdown menu.

    A screenshot of a dropdown menu in MailMate’s composer. The menu displays a list of email header fields that can be toggled on or off. The “Send Later” option is highlighted.
    ▲ MailMate does have a built-in “Send Later” option in the composer.

    But this is a one-time setting—it only applies to that specific message. I want to show you how to set up a default Send Later time. This setting effectively combines the benefits of both Undo Send and scheduled sending.

    To enable it and set the default delay, run the following command in Terminal:

    
    defaults write com.freron.MailMate MmSendMessageDelayEnabled -bool YES
    
    

    To disable it:

    
    defaults write com.freron.MailMate MmSendMessageDelayEnabled -bool NO
    
    

    The default delay is 5 minutes. If you want to change it, use:

    
    defaults write com.freron.MailMate MmSendMessageDelayString -string "3 minutes"
    
    

    My setup is 40 seconds, so the command would be:

    
    defaults write com.freron.MailMate MmSendMessageDelay -integer 40
    
    

    (If you don’t include a time unit, seconds will be used by default.)

    Please note: you must relaunch MailMate after running these defaults commands for the changes to take effect. After relaunching, you’ll see a new header in the composer named “Send Later,” and the time you set will appear there every time you compose a new message or reply.

    After hitting the send button, the message will be temporarily placed in a mailbox named “Outbound.” Before the timer runs out, you can still edit or cancel the email.

    Scheduled Sending

    With this new “Send Later” header, you can also schedule messages using natural language. For example, if you want to send an email at 10 am tomorrow, type “10 am tomorrow” in the “Send Later” field. And if you’re not sure whether your input is valid, the manual explains:

    If MailMate cannot parse the expression and the user tries to send the message, then a sheet is shown which allows the user to correct the problem or set an exact time using a calendar and a clock.

    As shown in the screenshot below, if you see the text in the “Send Later” field turn red, it means that MailMate can’t parse the expression. Then, after you hit the send button, a reminder with an orange background will appear, along with a sheet containing a calendar and a clock that let you correct the “Send Later” time.

    MailMate displays an error message “Unable to parse the ‘Send Later’ string” due to the invalid input “10am yesterday”. A pop-up appears with a calendar and clock interface, prompting the user to correct the date and time.
    ▲ If MailMate is unable to parse your expression, a reminder and a sheet will appear, allowing you to correct the “Send Later” field.

    One limitation of this feature is that MailMate must be open and your Mac must be connected to the internet when the message is scheduled to send.

    Since this changes the default sending behavior, MailMate will display a warning about pending delayed messages when you quit the app. You can also disable or modify this warning if needed.

    2. Change the Focus in the Composer

    If, like me, you prefer to always have the focus in the text editor when opening a composer window, you can use this command:

    
    defaults write com.freron.MailMate MmComposerInitialFocus -string "alwaysTextView"
    
    

    To revert to the default:

    
    defaults delete com.freron.MailMate MmComposerInitialFocus
    
    

    I find this setting useful because it lets you start writing the email as soon as the composer window opens. There’s also a nice side effect: since the focus is in the text editor and not the “To” field, you haven’t filled in the recipient’s address. This significantly lowers the chance of accidentally sending the email. (Well, if your cat somehow hits the keys in a perfect sequence, an accident could still happen.) However, this “nice side effect” doesn’t work when replying to a message, so be careful.

    3. Enable Auto-Expanding Email Threads

    I prefer viewing emails in thread mode, and I like MailMate to automatically expand all threads at all times. You can enable this with:

    
    defaults write com.freron.MailMate MmAutomaticallyExpandThreadsEnabled -bool YES
    
    

    (New messages will also trigger auto-expansion.)

    If you want MailMate to expand only those threads that contain unread messages, use:

    
    defaults write com.freron.MailMate MmAutomaticallyExpandOnlyWhenCounted -bool YES
    
    

    Bonus: Turn on Automatic Message Selection After Switching Mailbox

    When Benny Kjær Nielsen, the developer of MailMate, introduced this setting in revision 6216 (BETA) on January 26, 2025, he mentioned it as an experimental feature:

    Changed: Automatic message selection (after switching mailbox) can now be enabled (MmAutomaticMessageSelectionEnabled). Consider it experimental for now.

    Although it’s experimental, I’ve found it useful. I use the “Correspondence” layout, and with this enabled, MailMate automatically displays the last message (because it’s selected) after I switch mailboxes—saving me the extra step of pressing Tab.

    To enable it:

    
    defaults write com.freron.MailMate MmAutomaticMessageSelectionEnabled -bool YES
    
    

    To disable it:

    
    defaults write com.freron.MailMate MmAutomaticMessageSelectionEnabled -bool NO
    
    

    I think MailMate has so many “hidden preferences” because it’s hard to fit them all into the settings panel. And, according to Benny, most of them are experimental:

    Some have just not been added to a Preferences pane yet, but most of them are used to enable experimental features which cannot yet be considered stable or complete features.

    However, in my experience, the first three “hidden preferences” I introduced have been stable for the past two and a half years.

    If you’re a MailMate user and haven’t explored these hidden preferences yet, I encourage you to check out the MailMate user manual. There’s a lot of useful information there, and I’m sure you’ll find something that fits your workflow.

    Even if you’re not comfortable applying the settings yourself, you can export the manual pages as PDFs and upload them to an LLM chatbot like ChatGPT or Claude. Then, you can ask it to help you identify features that might be useful for you and how to enable them.

    I plan to share more of my settings and tips on using MailMate in the future. I’d love to hear what features you enjoy most, or any handy settings you’d recommend.

    Related Posts

    • Customizing MailMate: Display Sender Addresses in Your Message List


  • “Oh, the Real World’s Pretty Nice Too.”

    Saturday, March 15, 2025

    I appreciate this video. Both Adam and Norm share their genuine opinions based on over a year of experience with the Vision Pro. (Btw, I’m a fan of Adam and his crew!)

    Norm clearly enjoys using the Vision Pro, but what I find most interesting is how he feels after taking it off. At the end of the video, he says:

    I might not be using it every day. I’m certainly using it at least once a week, but every time I put it on, it’s like, “Wow, I forget how nice it is in here.” At the same time, every time I take it off, I’m like, “Oh, the real world’s pretty nice too.”

    “Oh, the real world’s pretty nice too” reminds me of something Jaron Lanier wrote in his essay “Where Will Virtual Reality Take Us?” for The New Yorker last year:

    In the nineteen-eighties, we used to try to sneak flowers or pretty crystals in front of people before they would take off their headsets; it was a great joy to see their expressions as they experienced awe. In a sense, this was like the awe someone might experience when appreciating a flower while on a psychedelic drug. But it was actually the opposite of that. They were perceiving the authentic ecstasy of the ordinary, anew.


  • “It’s 5 A.M.”

    Sunday, March 9, 2025

    I like watching videos about people working, especially when they articulate their thoughts through monologues or provide commentary.

    About a year ago, I went through a period of frequently watching YouTube videos about cameras and photography. One day, I watched a video from The Verge about the Hasselblad 907X & CFV 100C. After that, I ended up watching a few more videos about the camera. One of them was made by Willem Verbeeck.

    Willem Verbeeck is a Belgian film photographer and YouTuber based in Los Angeles. Yes, he is a film photographer and made a video about the digital Hasselblad camera. But soon, my focus shifted from Verbeeck talking about the camera to how he was using it, and then my thoughts were all about his projects. At that point, I no longer wanted to watch more videos about Hasselblad.

    In the video, Verbeeck said he wanted to use the Hasselblad for a project about the freeway landscapes in Los Angeles. The next scene shows him driving to a spot: a slope with a distant view of the freeway. “It’s 5 a.m. here in Los Angeles. It’s a really cold, windy morning. And I’m back at the spot I’ve photographed…” As Verbeeck says this, he sets up the tripod and gets ready to shoot. After a few shots, while I was still thinking about the photo he had just made, he moved on to the next spot.

    On the second day, he did the same thing again: he left home by car at about 4:30 a.m. to reach a spot he wanted to shoot before sunrise. I’ve seen the sky before sunrise many times. It’s beautiful. Then I realized this is not about being a YouTuber or working as a freelance film photographer (although I believe many jobs require you to get up early). It’s about the way he lives. He wants to shoot some photos, so he just drives out at midnight or early in the morning. Free and simple.

    At that moment, the words “I envy this YouTuber” surfaced in my mind. I know it’s not an easy life. (I guess there is no such thing as “an easy life.”) What I envy is his mindset and lifestyle. It’s like: I want to do something, and I can do it right away. Of course, I can get up at 4:30 a.m. and go out for a jog or a walk, but driving to a few spots to get a good shot on several consecutive days is a different thing.

    The Morro Rock

    Putting my envy aside, I really appreciate his projects and related videos. There are two other fascinating projects I want to talk about in this post.

    The first project is about a massive rock: Morro Rock, a giant volcanic plug in Morro Bay, California. Before the project, Verbeeck went to Morro Bay once to shoot some photos. One day, while reviewing those images and prints, he noticed a theme: many of them had Morro Rock in them. In some photos, the rock is the main subject; in others, in Verbeeck’s words, “it’s lurking in the background.” He then came up with an idea: how about making a project about it?

    Compared to one-time photography—whether professional or tourist-style—continuously and repeatedly photographing the same subject, or even the same object, over a long period is not only more challenging but also a constraint that hones and stimulates creativity. Verbeeck said the following in a video titled “The Importance of Long Term Photography Projects,” where he reflected on the project called “The Morro”:

    Instead of constantly trying to turn new corners now to look for the next best landmark that I hadn’t gotten a picture of yet, I find it just as exciting, if not more, to go back to the same place that I’ve photographed a hundred times and just see how it looks different this time around.

    This reminds me of what Dimitri Bruni and Manuel Krebs, the Swiss graphic designer duo, said in the documentary “Helvetica”:1

    Dimitri Bruni: “We like restrictions. We can’t operate, we can do nothing without restrictions. The more restrictions we have, the more happy we are.”

    […]

    Manuel Krebs: “When it comes to type, we will only use, if possible, one typeface, or two, and if possible we will use one size.”

    They feel excited and happy when they face challenges and constraints.

    Verbeeck’s work is impressive. I particularly like a few of the photos. For example, one shows a storefront full of hand-painted signs, which nicely reflects Morro Rock.

    Another good one is a series of photos, or I can call it a sequence of time shifts. Those were taken in the morning as sunlight and shadows swiftly moved across Morro Rock. Verbeeck captured this fleeting transformation in four photos, showing the rapid shift in light over a brief moment—almost like a time-lapse of the moon’s phases. You can see the “dark side” of Morro Rock gradually fading until the entire surface is bathed in golden light. It’s fascinating.

    I also like an image of two people playing basketball on the court while the massive Morro Rock fades into the misty background. When Verbeeck pressed the shutter, one player released the ball while the other took a shot. In reality, the mist was constantly shifting, and in the next second, both balls would fall. Rather than simply capturing that fleeting moment, Verbeeck’s camera seems to freeze time itself.

    Of course, Verbeeck shows many other good photos in the videos. But I have to stop right here and leave the rest to tempt you into watching the video.

    I like how Verbeeck sets up a project in the video. As a former student of a specialized art program in middle school, I find it similar to sketching and painting the same objects over and over—sometimes hundreds of times. It’s a good way to hone my skills and push myself to explore different angles and expressions. I can also apply the concept to a personal photography project, perhaps in a place where I live or one I visit frequently.

    Verbeeck has visited Morro Bay about 20 times. Recently, he finished the last shot of the project and made a video about it. I recommend you watch all three videos. You may come up with some ideas to start your own projects. I’m looking forward to seeing the results if he puts them in a photobook.

    The Purple Glow at Night

    The other project is about streetlights. The story is that one day Verbeeck learned that in Los Angeles, due to some kind of coating problem, a few LED streetlights had started emitting a purple glow. Again, he drove out at night to find and shoot the purple streetlights, turning it into a personal project. People tipped him off about where to find those purple streetlights through Instagram DMs, and he marked the locations on Google Maps. “It’s like a treasure hunt,” Verbeeck said.

    When I saw the purple streetlights in the video, I kept thinking about a scene in the Pokémon Go app or its predecessor, Ingress.2 A location with a purple streetlight looks as if it’s marking a “Gym” or a “Portal.” And it’s not on your smartphone screen; it’s in real life. Isn’t that cool?

    Although I love shooting photos in bright sunlight, night photography has its own unique charm. Sometimes I even take more good photos at night than during the day. And since night photography usually involves long exposures (in Verbeeck’s case, on film), what we see and what the camera captures can be drastically different. This visual dissonance—or even conflict—can be intriguing.

    The other reason I like this project is that it’s fun, and its theme comes from unexpected events in everyday life. “There was a time when, every evening, the neighborhood would be shrouded in a purple glow.” I don’t know if any local residents will recall their childhood this way, but I hope they can use Verbeeck’s video as proof when sharing this strange memory with friends.

    Verbeeck’s YouTube videos cover various photography-related topics in multiple formats. While some may not be as popular or entertaining, they are genuine. I truly enjoy watching his passion for photography unfold. I think you will find some of them truly inspiring and beautiful.

    Thanks to Verbeeck, I’m planning to write a post about a photobook I discovered through another one of his videos, “A Tour of My Photobook Shelf.” I’m so glad I had a chance to read a photobook like that. Feel free to take a guess at which one it might be.


    1. Helvetica — Gary Hustwit, a great documentary. ↩︎

    2. I found an image from the support page of Ingress as a reference. ↩︎


  • Useful macOS Tips and Tricks

    Tuesday, March 4, 2025

    → Link to the original post, “macOS Tips & Tricks,” by Saurabh

    I came across this post through a toot from Jeff Johnson. If you’re like me and always looking for ways to operate your Mac more easily and efficiently, be sure to check it out.

    Here are five useful tips I selected, arranged in the order they appear in the post:

    1. Press ⇧⌘/ to search all of the current app’s menu items. Then use the Up/Down arrow keys to navigate the results and press Return to execute the selected menu bar action.
    2. Press ⌃⌘D while holding the pointer over a word to view an inline dictionary definition.
    3. After pressing ⇧⌘4, press the Space bar to select a window to screenshot. Hold down Option while taking the screenshot to remove the window’s shadow.
    4. Press ⌥⌘C to copy the full pathname of the currently selected file.
    5. Press ⇧⌘A to select the output from the previous command.

    I use the first keyboard shortcut most often—it’s so convenient for searching and executing commands. It works best for native apps. Alternatively, you can use this Alfred workflow, “Menu Bar Search.” (I wrote a paragraph about this workflow in this post in Traditional Chinese.)

    The second and third tips are already well known. I can add something to the second one: you can choose dictionaries and arrange their order in the built-in Dictionary app’s settings.

    The fourth tip is for Finder. It’s better than right-clicking the file, holding Option, and selecting “Copy [File] as Pathname” in the context menu. However, it’s quite annoying that macOS Sequoia now encloses the copied pathname in single quotes. I even made a Shortcut to clean it up. If you know a better way to get a “clean pathname,” please let me know!
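
    For what it’s worth, a minimal Terminal one-liner can do a similar cleanup (just a sketch using the standard pbpaste, sed, and pbcopy tools, not the Shortcut mentioned above; it only strips a leading and a trailing single quote from whatever is currently on the clipboard):

    # Remove a single quote from the start and the end of the clipboard text
    pbpaste | sed "s/^'//; s/'\$//" | pbcopy

    You could run it right after pressing ⌥⌘C, or wrap it in a shell alias.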

    Finally, the fifth tip is for the built-in Terminal, and it’s what I miss most when using Ghostty. Another terminal emulator, Warp, also provides this handy feature, and does it even better: it lets me copy the command, the output, or both using different keyboard shortcuts.

    The original post contains many other useful tips and tricks. Although it’s an update for macOS 14 Sonoma, it’s still worth your time.

    Going back to where I found the post: the context is actually more important than the post itself. I agree with what Jeff Johnson said about “peak Apple”:

    They used to be great at progressive disclosure. You don’t eliminate complexity, you just hide it from new users, progressively disclosing the complexity as users become more experienced and knowledgeable.

    That’s exactly how I used to feel about Apple.

