
I've built my knowledge system four times and I'm not done.

I've been hunting for the way to capture my knowledge so my work with Claude compounds.

Six months into building with Claude, I can tell you exactly what my goal is. Every hour of work I put into explaining how the business runs should make the next hour more productive than the last. The context should build on itself. The knowledge should transfer. The value should compound.

What I've learned is that compounding doesn't happen by default. It happens when you build the system that makes it happen. And the system is harder to get right than I expected.

I've built it four times in those six months. Each version taught me something the previous one didn't. None of them has solved it yet. Whether the fourth will is still an open question.

Version 1: Managing Claude's memory directly

The first attempt was the obvious one. Claude has a memory feature. Use it. I was deliberate about it. I told Claude what to add, what to remove, what to keep. I tried to shape the layer the same way I'd shape any document I relied on.

Two things broke that approach. The first was consumption. Even when I was explicit about what belonged in memory, Claude ignored it most of the time. A rule I had put there intentionally would not surface in the next session. Sometimes it did. Usually it didn't. There was no way to predict which.

The second was control. I assumed that if I was explicit about curation, the memory layer would reflect my choices. It didn't. Claude would add entries on its own and remove ones I had put in. The layer I thought I was managing was actually being managed alongside me, by a system whose logic I couldn't see. What I put in wasn't always what was there the next time I looked.

"I thought I had sole control of my knowledge layer. I didn't."

If you can't predict when your context will be used, and you can't trust that what you curated is still there, you can't rely on it. And if you can't rely on it, it doesn't compound. You're still bringing your context in manually every session, just with the extra cost of trying to maintain a layer that isn't really yours to maintain.

I moved on.

Version 2: Project files

Project files gave me control. I could put context in a file, be deliberate about what belonged there, and trust that what I put in would still be there the next time I opened it. The layer was mine to manage.

Consumption still wasn't reliable. Claude ignored the file at least some of the time, the same way memory entries got ignored. What project files added was recoverability. When Claude behaved as if the context wasn't there, I had somewhere explicit to point. "Read the project file." The context was in a known location, and I could push Claude back to it without rebuilding it from scratch. That was real progress over version 1.

What broke was maintenance. The specs and scripts I was working with were evolving constantly. A script would land as final for a few days, sometimes not even that long, and then we'd need to iterate again. Every iteration meant updating the file in the project, which meant deleting the old version and re-uploading the new one. The work itself was getting done, but the tax of keeping the project files current with what we had actually built was too high.

The maintenance cost grew faster than the leverage. I was spending more time managing the context than working with it. That's the opposite of compounding.

I moved on.

Version 3: Google Drive with an index

Version 3 kept the control of project files and solved the maintenance nightmare. The project file became a pointer. A short document that told Claude what existed in Google Drive and where to find it. The real content lived in Drive.

The key shift was that Claude could edit the Drive docs directly. I wasn't the one doing the updating anymore. When a spec changed or a new pattern was worth capturing, Claude updated the doc as part of the work itself. The maintenance moved off my plate.

Consumption still wasn't solved. Claude still needed to be pushed to the right document at the right time, and sometimes it fetched and sometimes it didn't. The recoverability from version 2 carried over, but the underlying issue hadn't gone away.

The new problem was scale. Two parts to it. First, every call to Drive routed through an API relay I had built for authentication. That worked, but it was another layer of infrastructure I had to maintain, and it sat between Claude and the content on every fetch. Second, the fetches themselves pulled full documents into context. That was fine for three or four docs per session, but the knowledge base was growing. Pulling whole documents into context wasn't going to scale. I couldn't quantify exactly where the ceiling was, but I could feel it coming.

"I could feel the ceiling before I hit it."

Control was good. Maintenance was off my plate. But the architecture had me fetching more than I needed to answer most questions, and I knew it wouldn't hold.

I moved on.

Version 4: A structured index in a git repository

This is less than a week old, so I'll be honest about what's resolved and what isn't.

The relay problem is gone. Content lives in a git repository, which means there's no authentication layer between Claude and the knowledge. One less piece of infrastructure to maintain, one less dependency in every fetch.

The token scale problem is plausibly better. A structured index means Claude can pull the specific sections that are relevant instead of whole documents. In principle that scales much further than version 3. In practice it depends on two things I don't know yet: whether the index is granular enough that partial reads are useful, and whether Claude actually does partial reads instead of grabbing whole files anyway. The architecture makes the improvement possible. It doesn't guarantee it.
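To make the partial-read idea concrete, here is a minimal sketch, not my actual implementation: a structured index maps topics to named sections of files in the repo, so a reader can pull just the relevant section instead of the whole document. The file names, topics, and headings below are invented for illustration.

```python
# Sketch of a structured index over markdown files: pull one named section
# rather than loading the entire document into context.
# All paths, topics, and headings here are hypothetical examples.
import tempfile, os

SAMPLE_DOC = """# Business knowledge
## Pricing
Standard rate is negotiated per client.
## Onboarding
Send the welcome packet first.
"""

def read_section(path, heading):
    """Return only the named section, up to the next heading of the same level."""
    level = heading.split()[0]                 # e.g. "##"
    out, capturing = [], False
    with open(path) as f:
        for line in f:
            stripped = line.rstrip("\n")
            if stripped == heading:
                capturing = True
            elif capturing and stripped.startswith(level + " "):
                break                          # next same-level heading ends the section
            if capturing:
                out.append(stripped)
    return "\n".join(out)

# Build a throwaway knowledge file and an index pointing into it.
path = os.path.join(tempfile.mkdtemp(), "knowledge.md")
with open(path, "w") as f:
    f.write(SAMPLE_DOC)

INDEX = {"pricing": (path, "## Pricing")}      # topic -> (file, section heading)

file_path, heading = INDEX["pricing"]
section = read_section(file_path, heading)
print(section)
```

Whether this pays off in practice hinges on exactly the two unknowns above: the index has to be granular enough that a section answers the question, and the reader has to actually use the index instead of grabbing the whole file.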

The consumption problem is still the open question. It has been across all four versions, and version 4 doesn't solve it by structure alone. What it does is create the best conditions yet for consumption to work. The content is in a format Claude works with natively. The newest generation of models may handle structured context better than the previous one did. Both of those give me reason to hope. Neither of them is a guarantee.

If consumption finally gets reliable, this is the version where compounding reaches a new level. Each previous version got me one step closer; I'm hoping version 4 delivers more than all three of them combined. I'll know in a few weeks.

What I've learned

Four versions in, the pattern underneath them matters more than any of the specific tools. Every version has been a better answer to the same three questions. Can I trust Claude to consume what I've captured? Can I stay in control of what the layer contains? Can I keep it current without the maintenance cost eating the leverage?

Version 1 failed on all three. Version 2 gave me control but the maintenance was brutal. Version 3 fixed maintenance and kept control but the architecture had a ceiling. Version 4 is the first version where all three could land at once, and the consumption question is the one that will decide whether the bet pays off.

I've gotten compounding at every step. Each version has been more productive than the one before it, and the knowledge I've captured has transferred forward every time. What I haven't gotten is compounding that means I never have to repeat myself. That's probably an unreachable standard. It's still the one I'm aiming at.

What stays the same across versions is the asset itself. The knowledge I've captured about how the business works has migrated from memory entries to project files to Google Docs to a git-based index. The storage keeps changing. The knowledge doesn't. The discipline of capturing what I know in a form that can hand it back, when it's needed, without me having to remember to bring it in. That's the work that compounds, regardless of which version is hosting it.

Where I landed

Still hunting. Version 4 might be the method. It might not. What I'm more sure of is that whoever finds the method that compounds fastest is going to be out ahead of the market. They're the Driven.

Compounding isn't what AI does for you. It's what you build the system to do.