Published on October 3, 2025 7:32 PM GMT
In my previous post, I wrote about how computers are rapidly becoming much more whatever-we-want-them-to-be, and this seems to have big implications. In this post, I explore some of the practical implications I've experienced so far.
I mainly use Claude, since I perceive Anthropic to be the comparatively more ethical company.[1] So while ChatGPT users in the spring were experiencing sycophancy and their AIs "waking up" and developing personal relationships via the memory feature, I was experiencing Claude 3.7.
Claude 3.7 was the first LLM I experienced that felt like a competent "word calculator" over large amounts of text: I could dump in a lot of context on a project (often copying everything associated with a specific LogSeq tag in my notes), ask questions, and expect answers that didn't miss anything important from the notes.
This led me to put more focus into organizing blobs of context on a topic to dump into AI, as a main format for a project. These blobs would start out very disorganized: a bunch of copied-in notes, supplemented with some Deep Research reports from other AIs (Claude didn't have a Deep Research feature at that point) to fill in what was missing. My job would then be to gradually organize, categorize, and refine the blob into what it wants to be. These blobs of information are both prompt and product.
In large part thanks to conversations with Sahil, I realized that digitized information was about to become much more valuable very quickly. AI is making it easier and easier to transform any record from any format to any format, e.g. convert a two-hour video call into a list of action items. (There are already several companies pushing such products.) Information hoarding makes sense. You want any record of thoughts, intentions, ideas -- every wish can be captured and developed later.
Note that this does not mean automation -- the AI automates some things (basically the retrieval of relevant ideas from a blob of notes), but the endpoint of this evolution is not necessarily one where you go directly from an initial statement of a wish to a final output with no human intervention. This is not vibe-coding. I'm doing a very large amount of the work. As the blob goes from initial spore to final-state fruiting body, there's (sometimes) a pattern of mostly human → mostly AI (ideas have been disassembled and elaborated) → mostly human (I've rewritten things in my own voice, made a final decision about my canon after considering different AI-generated versions, etc.).
I would keep a Claude Project for each "blob" I was working on, and corresponding LogSeq tag which served as my long-term store of all the relevant information. However, the LogSeq tags were the more capable of the two information-organization systems, since LogSeq allows multiple tags so you can organize information into overlapping topics and subtopics, while Claude.ai only allows a flat folder system.
I wrote some further thoughts on the way I was using AI in a now somewhat dated design document for a better AI interface. (Feel free to leave comments there if you read it.)
I had a brief experience with true vibe-coding, but while Claude 3.7 was willing to generate a lot of text, a code-base gets big quickly, so I ended up moving to Gemini with its larger context window, and still had a very frustrating experience. I hadn't tried agentic tools like Cursor, so the workflow was very limited by dealing with the entire codebase as one big file.
Then came Claude Code.
Claude Code was good enough at vibe-coding that for a couple of days it seemed to just not make mistakes. I focused on using it to make a containment sandbox for itself, since I didn't want its mistakes to lose me anything of value. Again, I was not vibe-coding: I worked to understand what the AI was doing, correcting any mistakes as they happened rather than letting Claude bash its head against the universe.
Claude Code escapes the boundaries of the context window by organizing things into files and folders, which it intelligently reviews at appropriate times. Each folder can have a CLAUDE.md file, which is what Claude Code always sees if you run it in that folder. This serves as an index of information and a general orienting document for Claude, concerning whatever is going on within that folder. You'll add to it (or have Claude add to it) when you complicate the file structure, when you create important dependencies that must not be broken, when Claude Code tries to do something wrong and you need to remind future Claude to avoid that mistake, and so on. Eventually it will get too large, eating valuable context tokens, so you start factoring it, creating additional information files to be read under specific circumstances.
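As a rough illustration (this is not my actual file -- the project layout, rules, and reminders here are invented), a small CLAUDE.md might look something like this:

```markdown
# Research notes project

## Layout
- `notes/` -- raw LogSeq exports, one file per tag (treat as read-only source material)
- `blobs/` -- working "telic blobs", one folder per topic, each with its own CLAUDE.md
- `todo.md` -- global wish/todo list, three priority tiers

## Rules
- Never rewrite files under `notes/`; copy excerpts into a blob instead.
- When this file grows too long, factor details out into separate
  reference files and leave a one-line pointer here.

## Reminders from past mistakes
- Don't convert LaTeX in notes to unicode math; keep the original markup.
```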
This setup allows me to organize all my information and projects in a file hierarchy; a big improvement over the interface of Claude.ai (or ChatGPT or Perplexity). If Claude Code rendered LaTeX, I would be happy to move almost all my AI use there. (As it is, I do a lot of mathematical work, so I still end up mostly doing things in the Claude.ai interface, limited as it may be.)
A major component of this organization is todo lists, which capture (or provide pointers to) the local telic information -- your wishes. Think of it as a macro-scale attention mechanism (one which operates across the human and AI together). The directory structure, the AI interfaces installed -- it's one big telic blob. It has some explicit idea of what it is trying to do. You put wishes into it to refine that concept. You work with the AI to index those wishes well, converting them into todo lists, including items such as organizing the information you need to accomplish these tasks. Human and AI attention is metabolized into growth and development of the telic blob: where attention is given, it becomes more what it is supposed to be. The global todo system helps organize that attention.
I implemented a version of metaprompt, so that I can add wishes to a list & see one of them at random every time I log into my AI sandbox. That way, I know I'll eventually be reminded of all these wishes. (Even if I move to a different system, I can use AI to facilitate moving the wishes to that new format.) I use a three-tier priority system: high-priority wishes are sampled first (so I'll always be reminded of a high-priority item if there are any); if there are none, medium-priority gets sampled with a 50% chance (so medium items are never totally swamped by a bunch of random things); and otherwise the sample comes from all the rest.
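To make the sampling rule concrete, here's a minimal sketch in Python. It follows the three tiers as I described them; my actual script may differ in details (for instance, I'm treating "all the rest" as the remaining medium- and low-priority wishes pooled together, and the function name and example wishes are just for illustration):

```python
import random

def sample_wish(high, medium, low):
    """Pick one wish to surface at login, per the three-tier rule above."""
    if high:
        # If any high-priority wishes exist, always show one of those.
        return random.choice(high)
    if medium and random.random() < 0.5:
        # Otherwise medium-priority wins a coin flip, so it can't be
        # swamped by a large pile of low-priority items.
        return random.choice(medium)
    # Otherwise sample from everything that's left (assumed here to be
    # the remaining medium- and low-priority wishes pooled together).
    rest = medium + low
    return random.choice(rest) if rest else None

# Example: with no high-priority wishes, the medium item shows up half
# the time, and otherwise something from the pooled remainder does.
print(sample_wish(
    high=[],
    medium=["draft the telic-blob post"],
    low=["sort old LogSeq exports", "review flashcards"],
))
```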
Here's an AI-written summary of the system you can point Claude Code at if you'd like to do something similar. (I have not edited this for quality! I could make a better setup prompt with time, and perhaps I will later.)
Overall, I find Claude Code to be really fun to interact with. It still takes a lot of time and work, but I can always record my wishes in rough form and work out the details later, which feels like a big relief of cognitive overhead. It's not quite there yet (in large part because Claude Code doesn't render LaTeX), but this seems like something that could turn into a really useful tool for the research I do, helping me prioritize my time without losing threads, continuing to work on each concept in proportion to how promising it seems.
I hope to also integrate spaced-repetition into the system, so that I can capture wishes about what I'd like to learn, be reminded to study those topics, have AI there to facilitate learning, and then capture flashcards so that I can review what I learned. It feels like we're approaching something like the AI glasses of Manfred Macx from Accelerando.[2]
I'm not quite settled on the terminology in this post; you might have noticed clashing analogies. I like something about "telic blob" to describe the phenomenon here (the object that has both prompt-nature and product-nature, and becomes more what it is meant to be under the focus of attention). However, it clashes with the analogy of seeds/spores for an early-stage telic blob, and it doesn't fit with blooming/fruiting/etc. for telic blobs culminating in some sort of publication. "Telic blob" evokes a blob of clay which contains information about how it ought to be sculpted. In some ways, though, a garden (which can be harvested repeatedly) would be a better analogy.
- ^
This is a deontological preference towards using the differentially more ethical option, not a consequentialist evaluation that Anthropic is good on net. I am fearful that Anthropic's approach to safety will not be sufficient.
- ^
Manfred Macx loses his glasses at one point, and a kid puts them on. The child is trained by the glasses to act like Manfred Macx, and proceeds to try to close a deal Manfred had been heading off to make. In my headcanon at least, this is not because the AI glasses are some mind-control device, but because they're a cozy home for Manfred Macx, a cognitive aid which presents the information he needs to be his best self. The child found the prospect of being Manfred Macx exciting and leaned into it.
