Anyone Can Build Software Now. So What's Your Moat?
Milla Jovovich — The Fifth Element, Resident Evil — shipped an AI memory tool last week. It's called MemPalace, it's open source, and it's inspired by the ancient "method of loci" memory technique. She designed the concept and architecture. Ben Sigman, a developer and CEO of Libre Labs, engineered it. Claude helped shape the project.
The tool is interesting. But the more interesting thing isn't the tool.
The Pattern Worth Paying Attention To
Jovovich isn't a software engineer. The key is that she doesn't need to be. She had opinions about how to solve the "AI amnesia" problem (re-explaining context across sessions burns real tokens, which is real money). Claude and a developer she knew turned those opinions into something real. She shipped it to her audience.
This collaboration pattern is going to produce a lot of software over the next few years.
This isn't new exactly. Celebrities have been attaching their names to products for decades. What's new is the cost and friction of building the thing.
When the build cost drops to near-zero, the bottleneck shifts entirely to distribution. And distribution is the one thing Milla Jovovich has that most developers don't: her tool will reach people that a developer building in isolation never could.
A Brief Technical Note
I've been experimenting with my own memory system for AI agents, backed by a vector database and semantic search, inspired by Nate B. Jones' Open Brain project. It's worth a quick comparison, since the two architectures make different bets.
MemPalace takes a "store everything, then structure it spatially" approach inspired by the ancient method of loci — wings, halls, rooms, drawers. Every word verbatim, locally, no cloud. The pitch includes a performance claim: 96.6% on LongMemEval versus ~85% for Mem0 and Zep. Meanwhile, my setup makes the opposite architectural bet — remote, portable, accessible from any client that speaks MCP. The goal isn't to beat benchmarks; it's to ensure your memories travel with you across models and contexts.
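To make the vector-search side of this concrete, here's a minimal sketch of the "store, embed, retrieve by similarity" loop that a semantic memory system is built around. Everything here is illustrative: a bag-of-words counter stands in for a real embedding model, a Python list stands in for a vector database, and the `MemoryStore` class and its method names are my own invention, not MemPalace's or Open Brain's API.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class MemoryStore:
    """Minimal semantic memory: store snippets, retrieve top-k by similarity."""

    def __init__(self):
        self.items = []  # list of (text, vector) pairs

    def remember(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]


store = MemoryStore()
store.remember("decided to use a vector database for agent memory")
store.remember("project deadline moved to Friday")
store.remember("agent memory should be queried before every task")
print(store.recall("vector memory for agents", k=2))
```

Swap the toy embedding for a real model and the list for a hosted vector store, and the same loop is what sits behind an MCP memory server: the client sends a query, the server returns the nearest stored memories as context.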
Neither is objectively correct. They're different answers to the same question: where should your AI memory live, and who controls it?
On the benchmark claims: Ewan Mak published a detailed teardown on Medium worth reading. The short version is that the performance numbers are real in a narrow sense, but they're largely an artifact of the approach, so the comparison is apples to oranges. The more interesting question, which Mak also raises, is whether "store everything, structure it spatially" will prove better than "let AI extract what matters" as these systems mature. Remains to be seen.
But there's a more fundamental question worth asking: even if the performance difference is real, does it matter?
A 2-3% recall improvement means nothing if you don't change your behavior to actually use the system. The bottleneck in personal AI memory isn't retrieval latency — it's adoption. Do you capture decisions consistently? Do your agents query memory before acting? Does the system fit into how you actually work, or does it require you to work around it?
A slightly slower system you actually use beats a faster one you don't by an infinite margin.
What a Normal Developer Is Supposed to Do
So if anyone can build software now — celebrities, domain experts, non-engineers with opinions and a collaborator — what's the actual moat for a developer?
I've been thinking about this since the Jovovich announcement landed. My honest take: technical skill is table stakes. Taste is the differentiator. And taste without receipts is invisible.
Taste is underrated and hard to fake
Knowing which problem is worth solving, which architectural bet is correct, which tradeoff matters — that judgment compounds over time and doesn't come from a prompt. Jovovich had taste about the memory problem. The developer who wins isn't the one who can build fastest; it's the one who knows what to build.
Building in public is how you prove the taste exists
One post at a time, with receipts. Milla has 10M Instagram followers because she's spent decades doing interesting things visibly. The developer version of that is showing the work, sharing the failures, naming the patterns before anyone else does. That's what I'm hoping to do here: one post, one commit at a time.
Building the data layer behind your AI product?
Distribution is the moat — but the technical foundation still has to hold. Bad data pipelines, silent integration failures, and unscalable architectures are the things that quietly kill promising products after the audience shows up.
Whether you need a stack audit, custom pipeline development, or ongoing data engineering support, let's talk.