node: roma-eterna
mode: persistent / discord / obsidian
CASE STUDY • 2026

Building a Scalable, Personal AI System

How I designed and built an always-on AI agent — named after the first king of Rome — to close the gap between conception and shipping.

$ system.boot --name Romulus --host "Roma Eterna"
$ status persistent agent / discord interface / obsidian memory / scheduled workflows

The Problem

I've spent the past decade in product — designing, building, and shipping across startups in health, web3, energy, and consumer apps. The recurring bottleneck was never ideas. It was throughput.

Good opportunities pile up fast when you know how to spot them. Building them one by one, alone, is slower than it should be. Romulus started as an attempt to change that without giving up taste, judgment, or control.


Roma Eterna

Romulus runs on OpenClaw and lives on a dedicated Mac mini called Roma Eterna. It's always on — running scheduled jobs, monitoring things in the background, and sending a morning brief to Discord every day at 8 AM.

The interface is Discord. The memory layer is an Obsidian vault: daily notes, project files, operating context, decisions, and lessons learned going back to the first boot session on February 3, 2026.

Right now that memory spans 83 files, 29 project notes, 31 daily logs, and 11 key decisions. That's the actual point of the system: context that compounds instead of disappearing between chats.
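A minimal sketch of what writing back to that memory layer might look like. The folder layout (`daily/` under a vault root) and the `appendDailyLog` helper are assumptions for illustration, not the actual vault structure:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical vault location — the real path is configuration.
const VAULT = process.env.VAULT_PATH ?? "./vault";

// Append a bullet to today's daily log, creating the file if missing.
// Each session writes back, so context persists between chats.
function appendDailyLog(entry: string, date = new Date()): string {
  const day = date.toISOString().slice(0, 10); // e.g. "2026-02-03"
  const file = path.join(VAULT, "daily", `${day}.md`);
  fs.mkdirSync(path.dirname(file), { recursive: true });
  fs.appendFileSync(file, `- ${entry}\n`);
  return file;
}
```

The point is the write path, not the helper: plain Markdown files on disk mean any session (or any tool) can read the same state back.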


How the Work Actually Runs

Every project follows the same loop:

01 · Research: Grok pulls in live signal from X — what people are talking about, what feels noisy, and where there might actually be an opening.
02 · Analysis: OpenAI models stress-test the idea for technical feasibility, stack options, product shape, and whether the thing is actually worth building.
03 · Synthesis: Everything routes back through Romulus. That's where the filtering happens. The system helps surface options; product judgment still makes the call.
04 · Execution: Claude Code handles the heavier implementation work. Romulus orchestrates, tracks state, and preserves context between sessions. Git and Vercel close the loop.
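The loop above can be sketched as a simple staged pipeline, where each stage's output becomes the next stage's input. The stage functions here are placeholders — the real system calls Grok, OpenAI models, Romulus, and Claude Code at those points:

```typescript
// Each stage transforms the running context string; the chain mirrors
// research → analysis → synthesis → execution.
type Stage = (input: string) => string;

const pipeline: [name: string, run: Stage][] = [
  ["research",  (idea)  => `${idea} + signal`],      // Grok: live signal from X
  ["analysis",  (brief) => `${brief} + feasibility`], // OpenAI: stress-test
  ["synthesis", (opts)  => `${opts} + decision`],     // Romulus: filter, decide
  ["execution", (spec)  => `${spec} + shipped`],      // Claude Code: build
];

function runLoop(idea: string): string {
  // Context carries forward through every stage rather than resetting.
  return pipeline.reduce((acc, [, run]) => run(acc), idea);
}
```

The design choice worth noting is that context accumulates across stages instead of each tool starting cold.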

Model Routing Philosophy

Not every task deserves the same model. I treat model selection the same way I'd treat any system design decision: match capability and cost to the job.

Minimax M2.7: the nervous system. Morning briefs, balance checks, scheduled tasks. Fast, cheap, reliable. Zero overhead.
Claude Sonnet: the thinking partner. Research synthesis, strategic discussions, things that need actual depth — not just output.
Claude Opus: the workhorse. Multi-file implementations, complex refactors, the builds that actually matter.
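That tiering can be expressed as a small routing function. The task kinds and the exact selection rules here are assumptions — the point is matching capability and cost to the job, not these specific thresholds:

```typescript
// Route a task to a model tier by kind and rough size.
type TaskKind = "scheduled" | "research" | "build";

function routeModel(task: { kind: TaskKind; files?: number }): string {
  switch (task.kind) {
    case "scheduled":
      return "minimax-m2.7"; // cheap, reliable routine work
    case "research":
      return "claude-sonnet"; // depth over raw throughput
    case "build":
      // Multi-file builds go to the top tier; small ones don't need it.
      return (task.files ?? 1) > 1 ? "claude-opus" : "claude-sonnet";
  }
}
```

Centralizing the decision in one function also makes the cost/capability trade-off auditable as models change.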

Projects Built with This System

Chronicle · shipped · thischronicle.com

A daily history guessing game. One event per day, three chances to guess the year. Wordle-style digit feedback. 90 hand-curated puzzles spanning five centuries.

I designed the game first — share card format, difficulty calibration, no-clues Sunday mode, and the editorial voice. Then I used the system to move from spec to production quickly.

Next.js + Tailwind
Caveat · shipped · trycaveat.com

AI contract analyzer. Upload any PDF or DOCX, get a risk report in 60 seconds. Flags bad terms, missing clauses, unfavorable language. Built on Next.js 15, Stripe, GPT-4o. Privacy-first — contracts are never stored.

The core insight came from genuine signal: contract review is expensive and slow, and most people skip it. The product addresses a real friction point that freelancers, contractors, and founders face every day.

Next.js 15 + Stripe + GPT-4o
The Morning Brief · live · delivered daily to Discord

Every morning at 8 AM: weather for Park Slope, subway departures for the R train from Prospect Ave to Canal St, and a product signal digest. All three in Discord. Custom Node.js script pulling live MTA GTFS-RT protobuf feeds.

Built something I actually use every day. Real utility, not a demo. The kind of tool that makes the system feel worth it before breakfast.

Node.js + MTA GTFS-RT
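A sketch of the departure-filtering step such a script would need. The interfaces mirror the GTFS-RT FeedMessage shape after protobuf decoding (for instance via the gtfs-realtime-bindings package); the stop ID and field subset here are illustrative assumptions:

```typescript
// Minimal slices of the decoded GTFS-RT trip update structure.
interface StopTimeUpdate { stopId: string; departure?: { time: number } }
interface TripUpdate { trip: { routeId: string }; stopTimeUpdate: StopTimeUpdate[] }

// Return seconds-until-departure for the next few trains on one route
// at one stop, given trip updates decoded from the realtime feed.
function nextDepartures(
  trips: TripUpdate[], routeId: string, stopId: string, now: number,
): number[] {
  return trips
    .filter((t) => t.trip.routeId === routeId)
    .flatMap((t) => t.stopTimeUpdate)
    .filter((u) => u.stopId === stopId && (u.departure?.time ?? 0) > now)
    .map((u) => u.departure!.time - now) // seconds until departure
    .sort((a, b) => a - b)
    .slice(0, 3); // next three trains is plenty for a brief
}
```

Fetching and decoding the protobuf feed is the other half of the job; this is just the part that turns a decoded feed into a readable brief.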

The Memory Architecture

The Obsidian vault is the part of this system I'm most proud of.

83 files. 12 core memory docs. 29 project notes. 31 daily logs. 11 key decisions. Every session reads from it. Every session writes back to it. Romulus keeps context because the system stores it, not because I restate it every time.

This is what makes it compound. Each project makes the next one faster. The vault grows more useful over time. The system learns from itself.


Legion

The next layer is already designed. Legion is an orchestration system — Romulus as conductor, multiple Claude Code workers running in parallel with git worktree isolation.

Task decomposition, parallel execution, quality gates before merge, results synthesized and delivered to Discord. The goal is to move from one person building with an AI partner to something closer to a small studio.

romulus@roma-eterna:~/legion$ cat README.md
Phase 1: MVP Swarm — 2-3 parallel workers, collected results, completion notification.
Phase 2: Worktree isolation — each worker in its own git branch.
Phase 3: Quality gates — automated tests before merge.
Phase 4: Knowledge accumulation — past learnings prime future tasks.
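Phases 1 and 2 can be sketched as a planning step: decompose a task list into one isolated git worktree per worker. The branch and path naming here is an assumption; the real orchestrator would then launch a Claude Code worker inside each worktree:

```typescript
// One plan entry per parallel worker.
interface WorkerPlan { branch: string; path: string; setup: string }

function planWorkers(tasks: string[], repo = "."): WorkerPlan[] {
  return tasks.map((task, i) => {
    const branch = `legion/${task}`;
    const path = `${repo}/.worktrees/worker-${i + 1}`;
    return {
      branch,
      path,
      // An isolated worktree per worker keeps parallel edits from colliding;
      // quality gates run per-branch before anything merges.
      setup: `git worktree add ${path} -b ${branch}`,
    };
  });
}
```

Worktrees are the key primitive here: each worker gets a full checkout on its own branch, so merges stay explicit and gated.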

"Romulus commands Legion." — Roman military structure. A disciplined coordinated force under single command.

What I'd Tell Another Builder

The difference between a useful AI setup and a toy is system design. Memory matters. Routing matters. Context management matters. The model alone is not the product.

Judgment still matters more. AI can accelerate research, writing, coding, and execution. It can't decide what deserves your time. That part is still human.

The best test is whether the thing becomes part of your real workflow. Romulus did. It runs every day, keeps state across projects, and helps me move faster without losing the thread.