zzoo.dev

A Team of One

claude-code · ai · solo-developer · setup

What's the biggest thing you lack when building a service alone?

It's not ability. You can learn anything. Security, databases, marketing — none of it is gatekept. The real problem is simultaneity. When you're writing code, you only think about code. That's what focus means. But while you're focused, security slips, analytics sit unexamined, and marketing gets forgotten. Pour yourself into one thing and everything else falls behind. That's why companies have departments.

What I wanted was to automate every domain. Not hire people, but build a system where each domain runs on its own. Where security reviews happen and analytics accumulate and marketing drafts appear even while I'm buried in code. I've been shaping my Claude Code setup in this direction.

The Constitution

Every project gets a CLAUDE.md at the root β€” the rules for how the AI behaves in that codebase. Tech stack, architecture, constraints, which skills and agents to use. The core pattern:

1. All implementation MUST use specified skills
2. After implementation, run two agents in parallel:
   - security-reviewer β†’ audit β†’ fix
   - verifier β†’ run tests, E2E, browser verify β†’ report

These are hard rules. Claude Code reads this file at the start of every session. It doesn't need reminding.
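To make this concrete, here's a hypothetical sketch of such a file — the stack and skill names are illustrative, not my actual configuration:

```markdown
# CLAUDE.md

## Stack
Next.js, TypeScript, PostgreSQL (Neon), deployed on Vercel.

## Hard rules
1. All implementation MUST use the specified skills (see .claude/skills/).
2. After implementation, run two agents in parallel:
   - security-reviewer: audit the change, then fix findings
   - verifier: run tests, E2E, and browser verification, then report
3. Anything touching payments or authentication requires my explicit approval.
```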

I've also standardized the scaffolding. Instead of thinking through project structure every time, I copy a template. App directories, docs (PRD, UX, architecture), business operations (marketing, analytics, growth), task boards β€” all included, with a plan β†’ build β†’ ship workflow baked in. I'm the kind of person who writes tests when I feel like it and does security reviews when I remember. That forced structure was something I needed.
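A rough sketch of what that template looks like, reconstructed from the description above (directory names are illustrative):

```text
my-project/
├── CLAUDE.md            # the constitution
├── app/                 # application code
├── docs/
│   ├── prd.md
│   ├── ux.md
│   └── architecture.md
├── biz/
│   ├── marketing/
│   ├── analytics/
│   └── growth/
└── tasks/               # plan → build → ship board
```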

Skills as Constraints

I run 34 skills. Nearly half of them have nothing to do with code.

The technical ones are obvious. Architecture design. Database modeling. API specifications. Frontend patterns. Mobile. Internationalization. Security checklists. 3D web. Alongside those sit skills for copywriting, email marketing, social media content, ad creative, pricing strategy, competitor analysis, conversion optimization, churn prevention, search visibility, product briefs, and PRDs.

I didn't plan this. I started building skills for code quality and kept going. Looking back, I think the key difference between a skill and documentation is that documentation describes; a skill constrains. When the security skill loads, it doesn't just "know" the OWASP checklist β€” it walks through every item. When the copywriting skill loads, it follows persuasion principles. The gap between knowing something and having it applied every time turned out to be bigger than I expected.
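For reference, a Claude Code skill is a directory containing a SKILL.md file with YAML frontmatter. A minimal security-review skill might look like this — checklist items are abbreviated and the wording is my own sketch, not the actual skill:

```markdown
---
name: security-review
description: Run before merging any change that touches input handling, auth, or data access.
---

Walk through every item below. Do not skip items; report each as pass or fail.

1. Validate and sanitize all external input (OWASP A03: Injection).
2. Check authorization on every new endpoint (A01: Broken Access Control).
3. No secrets in code or logs (A02: Cryptographic Failures).
...
```

The imperative framing is the point: the file doesn't describe the checklist, it instructs the model to execute it.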

The Automation Structure

I run 14 agents organized into three domains. Started with 7 and added more as I needed them.

Plan β€” four agents. The product manager writes briefs and PRDs. The architect handles system design and technical decisions. The UX designer maps information architecture and user flows. The task manager extracts implementation tasks from design documents. When I start a new feature, these four define what gets built before I write a line of code. It's helped me break the habit of coding first and thinking later.

Dev — six agents. Originally just a security reviewer and a verifier; since then I've added frontend, backend, mobile, and desktop developers. Each works with its domain-specific skills, then the reviewer runs a security audit and the verifier runs tests and browser verification in parallel. Anything touching payment or authentication gets escalated to me. I keep that line intentionally.

Biz β€” four agents. No live service with real users yet, so these haven't been battle-tested. The agents and skills are ready, not proven. The data analyst connects to PostHog for retention analysis, the growth optimizer handles conversion funnels and churn prevention, the marketer builds launch strategies and positioning, and the content marketer manages content calendars. This is the first area I want to validate once a service has real users.
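Each agent is a markdown file under .claude/agents/ with YAML frontmatter. A sketch of what the verifier might look like — the tool list and prompt wording are illustrative:

```markdown
---
name: verifier
description: Runs after every implementation. Executes tests, E2E, and browser verification, then reports.
tools: Bash, Read, Grep
---

You verify implementations. Never fix code yourself — report findings.

1. Run the unit test suite and record any failures.
2. Run the E2E suite.
3. Open the app in a browser (via the Playwright MCP) and verify the changed UI.
4. Report: what passed, what failed, what you could not verify.
```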

The Connective Tissue

Agents become useful when they're connected to real systems. Nineteen plugins provide the infrastructure β€” TypeScript, Python, and Rust LSPs give type-level analysis, while Expo and Vercel plugins bring framework conventions.

On top of that, MCP servers connect agents to live data. Context7 provides up-to-date library documentation. Playwright drives a browser for testing. Sentry surfaces production errors. PostHog feeds analytics data. Neon handles database migrations. GitHub opens PRs and issues. Slack and Gmail handle communications.

Each connection is simple on its own. Together they mean the data analyst queries real metrics instead of guessing, and the verifier opens a browser to check UI instead of assuming.
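A project-scoped .mcp.json wires these up. A minimal sketch with two of the servers — exact package names and args should be checked against each server's own docs:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
```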

The Boundaries

Some domains automate well. Security reviews, testing, code quality gates β€” the repetitive stuff. These don't get skipped even on bad days. Quality stays roughly consistent across projects.

Some things clearly don't automate. The system can't tell me I'm solving the wrong problem. It can't talk to users and find out what they actually need. Strategic judgment is still mine.

34 skills, 14 agents, 19 plugins, a constitution and scaffolding for every project. The investment went into the system, not any individual project. Applying it to a new project costs relatively little.

What solo developers lack isn't ability β€” it's simultaneity. Focus on one thing and the rest falls behind. This setup is my attempt at solving that with automation. It's not perfect. But having the other domains keep running while I'm deep in code β€” that's changed how working alone feels.