Essays on local-first AI, autonomous agents, self-improving systems, and what we're learning while building in public.
The cloud-first AI revolution was necessary, but it created new problems: vendor lock-in, data leakage, and AI infrastructure controlled by a handful of megacorps. We're building the alternative: a founder-controlled, local-first AI operating system that proves you don't need the cloud to build something powerful.
Three patterns from our evaluation layer that catch fabrication, prevent brand safety violations, and keep agents grounded in reality. Rule-based checks + LLM scoring = human-level quality gates.
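The "rule-based checks + LLM scoring" combination can be sketched as a two-stage gate: cheap deterministic rules run first, and the LLM's quality score is only consulted when the rules pass. This is a hypothetical illustration, not the system's actual code; the banned phrases, the citation rule, and the 0.8 threshold are all invented for the example.

```python
import re

# Assumed brand-safety rules for illustration only.
BANNED_PHRASES = ["guaranteed results", "as an ai language model"]
# Assumed grounding rule: output must cite at least one source like [1].
CITATION_PATTERN = re.compile(r"\[\d+\]")

def rule_checks(text: str) -> list[str]:
    """Return a list of rule violations (empty list means the rules pass)."""
    violations = []
    lowered = text.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            violations.append(f"banned phrase: {phrase!r}")
    if not CITATION_PATTERN.search(text):
        violations.append("no citations found")
    return violations

def quality_gate(text: str, llm_score: float, threshold: float = 0.8) -> bool:
    """Pass only if the rules are clean AND the LLM score clears the bar."""
    return not rule_checks(text) and llm_score >= threshold
```

Running the rules first keeps the expensive LLM judgment off the hot path for content that would fail anyway.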
Every night, the system evaluates its own work, generates hypotheses, and tests improvements. By morning, you've got measurable quality gains. Here's how we set it up.
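The evaluate-hypothesize-test loop can be sketched as a simple hill climb over a config: score the baseline, propose tweaks, and keep only candidates that measurably beat it. Everything here is a stand-in, assuming a toy metric and two invented knobs (`temperature`, `retrieval_docs`); a real run would score against the system's own evaluation layer.

```python
import random

def evaluate(config: dict) -> float:
    """Stand-in metric: rewards more retrieval docs and lower temperature."""
    return config["retrieval_docs"] - config["temperature"]

def propose(config: dict, rng: random.Random) -> dict:
    """Hypothesis generation: tweak one knob at random."""
    candidate = dict(config)
    if rng.random() < 0.5:
        candidate["temperature"] = round(
            max(0.0, candidate["temperature"] + rng.choice([-0.1, 0.1])), 2)
    else:
        candidate["retrieval_docs"] = max(
            1, candidate["retrieval_docs"] + rng.choice([-1, 1]))
    return candidate

def nightly_run(config: dict, trials: int = 20, seed: int = 0) -> dict:
    """One night's loop: test hypotheses, keep only measurable gains."""
    rng = random.Random(seed)
    best, best_score = config, evaluate(config)
    for _ in range(trials):
        candidate = propose(best, rng)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

The strict `>` comparison is the "measurable gains" guarantee: the morning config is never worse than the evening one.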
How to coordinate 15+ AI agents in a single local system. DAG-based execution, typed message passing, circuit breakers, and the state bus pattern that makes it all work.
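The DAG-execution idea can be sketched with the standard library's topological sorter: each agent declares its upstream dependencies, the scheduler runs them in dependency order, and results flow through a shared state dict (a minimal stand-in for the state-bus pattern). The agent names are invented, and `run_agent` is a placeholder for invoking a local model.

```python
from graphlib import TopologicalSorter

# Hypothetical agent graph: each agent maps to its upstream dependencies.
AGENTS = {
    "research": [],
    "draft": ["research"],
    "fact_check": ["draft"],
    "publish": ["draft", "fact_check"],
}

def run_agent(name: str, state: dict) -> str:
    """Stand-in for real agent work; reads upstream results from the bus."""
    inputs = {dep: state[dep] for dep in AGENTS[name]}
    return f"{name}:ok({len(inputs)} inputs)"

def run_dag(agents: dict) -> dict:
    """Execute agents in topological order, accumulating results."""
    state = {}
    for name in TopologicalSorter(agents).static_order():
        # A circuit breaker would wrap this call and skip downstream
        # agents once an upstream one trips.
        state[name] = run_agent(name, state)
    return state
```

Because `static_order()` only yields a node after all its predecessors, every agent is guaranteed to find its inputs already on the bus.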
No vaporware. No vanity metrics. Real data from real systems. Here's why we're sharing everything, including the failures, and what we've learned from building in the open.
15 pages on setting up local AI agents — hardware, Ollama, your first agent, memory systems, and the pitfalls that kill most projects. Working code included.
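A "first agent" along these lines can be as small as a chat loop against Ollama's local HTTP API (default port 11434), assuming the server is running and a model like `llama3` has been pulled. The `llm` callable is injectable so the memory-keeping loop can be exercised without a live server; this is a sketch, not the guide's actual code.

```python
import json
import urllib.request

def ollama_chat(messages: list, model: str = "llama3") -> str:
    """Call Ollama's /api/chat endpoint (non-streaming) and return the reply."""
    payload = json.dumps(
        {"model": model, "messages": messages, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat", data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

def agent_turn(history: list, user_input: str, llm=ollama_chat) -> str:
    """One agent turn: record the user message, get a reply, keep memory."""
    history.append({"role": "user", "content": user_input})
    reply = llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Keeping the full `history` list is the simplest possible memory system; the pitfall is that it grows without bound, which is where summarization or retrieval comes in.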
New insights on autonomous systems, local AI infrastructure, and self-improving agents delivered to your inbox.