By the Numbers
Auto-audited from the codebase. Client/server split measured by the "use client" directive.
What Would This Cost to Build?
Estimates based on actual codebase scope: 16K+ LOC, 12 DB models, 62 server actions, role-based auth, complex business logic.
Cost Comparison
Scope Included
- 12-model PostgreSQL schema with migrations and seed data
- Role-based auth (JWT) with 3 credential providers and rate limiting
- 62 server actions with auth guards, transactions, and Zod validation
- 39 React components across 5 feature modules (organizer, judge, captain flows)
- KCBS scoring engine with weighted averages, tiebreaking, and DQ handling
- Box distribution algorithm with Latin square constraint
- 113 unit tests + E2E simulation script (2,000+ assertions)
- Full documentation suite (15 docs in Diataxis framework)
- Dark/light theme, mobile responsive, WCAG accessibility
| Approach | Rate | Hours | Estimated Cost |
|---|---|---|---|
| US Dev Shop | $150–250/hr | 400–600 | $60,000–$150,000 |
| Offshore (India/China) | $25–60/hr | 500–800 | $12,500–$48,000 |
| AI-Assisted (this project) | $20/mo API | ~50 human hrs | $20–$60 API cost |
Disclaimer: These are rough estimates for comparison purposes. Actual costs vary by team, timeline, requirements clarity, and project management overhead. The AI-assisted cost reflects API/subscription costs only — human time was spent on architecture decisions, code review, and prompt engineering (~50 hours across 10 days).
AI Development Workflow
How this project was built using Claude Code as the primary development tool.
Terminal-Only Development
Every line of code was generated through Claude Code (CLI). No IDE code generation, no copy-paste from ChatGPT. The terminal is the entire development environment — prompts in, code out, diffs reviewed.
CLAUDE.md — Cross-Session Memory
The 103-line CLAUDE.md file gives Claude full project context at the start of every session: stack constraints (Prisma must stay on v5), business rules (KCBS scoring), auth patterns, seed data. Without it, every session starts from zero. With it, Claude makes architecturally consistent decisions immediately.
Session = 1 Conversation = 1 PR
Each development session was one Claude Code conversation, typically producing one PR. Sessions ranged from focused fixes (PR #3: security hardening, 15 files) to large feature pushes (PR #1: full app scaffold, 130 files).
Human Directs, AI Implements
I made architecture decisions (feature modules, barrel exports), defined the security model (auth guards, session-derived IDs), designed UX flows (judge phases, captain workflow), and decided what to build next. Claude generated the implementation, tests, and documentation. I reviewed every diff.
What I Delegate vs. Direct
Delegated to Claude
- Component implementation from description
- Server action boilerplate + Prisma queries
- Test generation for pure utility functions
- Prisma schema design from requirements
- Accessibility fixes (ARIA, keyboard nav)
- Documentation generation (Diataxis structure)
Directed by me
- Architecture decisions (feature modules, barrel exports)
- Security model (auth guards, session-derived IDs)
- UX flows (judge phases, captain workflow)
- What to build next (session sequencing)
- What to cut (no per-user PINs for MVP)
- Code review — approving or rejecting diffs
Development Stats
Architecture Overview
How the system pieces fit together — from browser to database.
Tech Stack
Grouped by layer. Versions are pinned — see constraints in CLAUDE.md.
Frontend
Backend & Data
Testing & Quality
Infrastructure
Database Schema
12 models in PostgreSQL (via Prisma), grouped by domain.
Competition Setup
Scoring & Corrections
Audit Trail
Feature Modules
Each module in src/features/ has the same structure: actions/, components/, hooks/, store/, schemas/, types/, utils/, and an index.ts barrel export. Nothing leaks outside the barrel.
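The structure described above looks roughly like this (sketched for the judging module; the layout generalizes to all five):

```text
src/features/judging/
├── actions/      # server actions (guard → validate → transactional write)
├── components/
├── hooks/
├── store/
├── schemas/
├── types/
├── utils/
└── index.ts      # barrel export: the module's only public surface
```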
- competition/ (27 actions · 12 components): Competition lifecycle — CRUD, category round advancement, box distribution, judge/table management.
- judging/ (16 actions · 13 components): Judge experience — setup flow (4 phases), score card submission, appearance-first workflow, comment cards.
- scoring/ (9 actions · 6 components): Table captain dashboard — score review, correction request approval/denial, category submission.
- tabulation/ (7 actions · 7 components): Results engine — KCBS weighted scoring, tiebreaking, winner declaration, score audit, audit log.
- users/ (3 actions · 1 component): Judge import (single + bulk) and search. Lightweight — most user logic lives in competition.
Build Journal
Session-by-session timeline, most recent first. Includes mistakes — this is what actually happened, not a highlight reel.
Full Build Timeline
Rewrote the /tech page from a simple stats page into a comprehensive 11-section build teardown with Mermaid diagrams, cost comparison, AI workflow documentation, and build journal.
Created the initial /tech page with codebase stats, tools inventory, and integrations.
Restructured all documentation into Diataxis framework (tutorials, how-to, reference, explanation). Built E2E competition simulation script — runs full KCBS lifecycle with 2,000+ assertions and generates a markdown report. Then a refactor pass: split the monolithic competition/actions/index.ts into 6 focused files. Cleaned up stale files.
Course correction: The simulation script found 3 bugs that unit tests missed (all in edge cases around DQ handling). Should have written the simulation earlier — it's a better integration test than anything I had.
WCAG pass: ARIA labels, keyboard navigation fixes, focus management. OWASP security headers in next.config. Rate limiting on both login providers. Three new edge-case test suites (box distribution edge cases, all schema validation, tabulation edge cases). Test count jumped from ~40 to 113.
Course correction: Rate limiter is in-memory — resets on deploy. Fine for MVP, but would need Redis or similar for production. Chose to document this rather than over-engineer.
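A minimal sketch of the in-memory rate limiter described above. State lives in a process-local Map, which is exactly why it resets on deploy; a production version would swap the Map for Redis. The window size, attempt cap, and names here are illustrative assumptions, not the project's actual values.

```typescript
// Sliding-window rate limiter kept in process memory (resets on deploy).
// WINDOW_MS and MAX_ATTEMPTS are assumed values for illustration.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_ATTEMPTS = 5;   // attempts allowed per window

const attempts = new Map<string, number[]>();

function isRateLimited(key: string, now: number = Date.now()): boolean {
  // Keep only timestamps that are still inside the window.
  const recent = (attempts.get(key) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_ATTEMPTS) {
    attempts.set(key, recent);
    return true; // over the limit — reject this attempt
  }
  recent.push(now);
  attempts.set(key, recent);
  return false;
}
```

Because the Map is per-process, two server instances (or a restart) silently split or reset the counters — the documented MVP trade-off.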
Major restructure of organizer pages: new routes for check-in, table management, box distribution. Added competition selector dropdown, score audit view with per-judge formula breakdown. Enhanced DataTable with client-side search and pagination. 41 files touched.
Course correction: Navigation structure changed three times in this session. Should have sketched the IA before coding. Ended up with dead routes that had to be cleaned up later.
Built the box distribution generator with Latin square constraint (no repeat competitor at same table across categories). Added 188 unit tests for the algorithm. Captain dashboard polished with category submission screen.
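The Latin-square idea above can be sketched with a simple rotation: shift each competitor's table by one per category, so no competitor's box lands at the same table twice (as long as there are at least as many tables as categories). This is a simplification for illustration, not the project's actual generator.

```typescript
// Rotation-based table assignment satisfying the Latin-square constraint:
// result[category][competitor] = table index, with no competitor repeating
// a table across categories.
function assignTables(
  competitorCount: number,
  tableCount: number,
  categoryCount: number
): number[][] {
  if (categoryCount > tableCount) {
    throw new Error("Cannot avoid repeats with more categories than tables");
  }
  return Array.from({ length: categoryCount }, (_, category) =>
    Array.from({ length: competitorCount }, (_, competitor) =>
      (competitor + category) % tableCount
    )
  );
}
```

With the seed data's shape (24 teams, 4 tables, 4 categories), each competitor visits all four tables exactly once.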
Extended judge flow with event info screen and comment cards. Then a focused security pass: created auth-guards.ts with requireAuth/requireOrganizer/requireJudge/requireCaptain. Retrofitted every server action to start with an auth guard. Wrapped DB writes in transactions. Derived all user IDs from session (removed client-supplied IDs).
Course correction: Auth guards should have been in the first commit. Building all the features before security meant retrofitting 10+ action files. The guard pattern was simple — no reason not to do it from day one.
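The guard-first pattern described above is simple enough to sketch: every server action starts with a role check and derives the user ID from the session, never from the request payload. Identifiers below are illustrative, not the real auth-guards.ts API, and the session is passed in as a parameter so the sketch stays self-contained.

```typescript
type Role = "ORGANIZER" | "JUDGE" | "CAPTAIN";
interface Session { userId: string; role: Role }

// Guard: fails fast if there is no session or the role doesn't match.
function requireRole(session: Session | null, role: Role): Session {
  if (!session) throw new Error("Not authenticated");
  if (session.role !== role) throw new Error(`Requires ${role} role`);
  return session;
}

// A retrofitted action: guard runs first, and createdById comes from the
// session rather than from client-supplied input. (The real action would
// then perform the write inside a DB transaction.)
async function createComment(session: Session | null, text: string) {
  const s = requireRole(session, "JUDGE");
  return { text, createdById: s.userId };
}
```

Because the guard is the first statement, an unauthenticated or wrong-role caller never reaches the write path — the property that had to be retrofitted across 10+ action files.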
Single massive commit: Prisma schema (12 models), all 5 feature modules with actions/components/types/schemas, auth system (JWT + Credentials), dashboard shell with role-based navigation, KCBS scoring constants, seed data (24 judges, 24 teams, 4 tables), design system (11 common components + 10 UI primitives), 3 test suites, and the rules page. 130 files changed.
Course correction: Built too much in one commit. Made the next round of fixes harder to isolate. Should have shipped Prisma schema + auth first, then features incrementally.
Created Next.js 14 app with TypeScript strict mode. Added shadcn/ui configured for Tailwind v3 (not v4 — shadcn v4 generates incompatible code). Set up path aliases.
Lessons Learned
What worked well and what was hard when building a real application with AI.
7 Insights
CLAUDE.md is the single most important file
103 lines that give Claude full context across sessions: stack constraints (Prisma must stay on v5), business rules (KCBS scoring), auth patterns, seed data credentials. Without it, every new session starts from zero. With it, Claude can make architecturally consistent decisions immediately.
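An illustrative excerpt of what such a file might contain — this is a hypothetical reconstruction from the constraints mentioned in this document, not the project's actual CLAUDE.md:

```markdown
# CLAUDE.md (illustrative excerpt)

## Stack constraints
- Prisma must stay on v5 — do not upgrade
- Tailwind v3, not v4 (shadcn v4 output is incompatible)

## Business rules
- KCBS scoring: weighted criteria, tiebreaking, DQ handling
- Seed data: 24 judges, 24 teams, 4 tables (see prisma/seed)
```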
Pure functions for business logic
Extracting KCBS scoring rules, box distribution, and validation into pure utility functions made them trivially testable (113 tests, all passing). No database mocking needed. The E2E simulation composes these same functions to validate a full competition lifecycle.
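As a hedged sketch of what "pure functions for business logic" looks like here, the standard KCBS weighted-scoring rule can be written with no database in sight (published KCBS multipliers: appearance 0.5600, taste 2.2972, tenderness 1.1428; with six judges, the lowest weighted total is dropped). This mirrors the spirit of the project's utilities, not their actual code.

```typescript
interface ScoreCard { appearance: number; taste: number; tenderness: number }

// Weighted total for one judge's card, using the standard KCBS multipliers.
function weightedTotal(c: ScoreCard): number {
  return c.appearance * 0.56 + c.taste * 2.2972 + c.tenderness * 1.1428;
}

// Entry score: sum of judge totals with the lowest dropped (when 6+ cards).
function entryScore(cards: ScoreCard[]): number {
  const totals = cards.map(weightedTotal).sort((a, b) => a - b);
  const kept = totals.length >= 6 ? totals.slice(1) : totals;
  return Number(kept.reduce((sum, t) => sum + t, 0).toFixed(4));
}
```

Because both functions are pure, tests need no mocking — feed in cards, assert on numbers — and an E2E simulation can compose them across a whole competition.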
Feature module pattern keeps things navigable
Each of the 5 feature modules has the same structure (actions/, components/, types/, schemas/, store/, index.ts barrel). Claude can work on one module without accidentally breaking another. The barrel export forces you to define the public API.
E2E simulation catches what unit tests miss
The simulation script found 3 bugs that passed code review and unit tests — all in edge cases around DQ handling and scoring with missing judges. A full-lifecycle simulation exercises interactions between components that isolated unit tests never reach.
Security is easier to build in than bolt on
Auth guards, session-derived user IDs, and table ownership verification were added in PR #3 — after the entire app was already built. Retrofitting 10+ action files was tedious and error-prone. The pattern itself was simple. Should have been there from commit one.
AI generates code faster than you can review it
The biggest risk isn't wrong code — it's code you approved without fully understanding. The simulation script caught 3 bugs that passed code review. Adding more automated verification (tests, E2E simulation) is more reliable than trying to review faster.
Don't build the whole app in one commit
PR #1 changed 130 files in one shot. When bugs surfaced, it was nearly impossible to git bisect or isolate the cause. Incremental PRs (schema → auth → features → polish) would have been much easier to debug and review.
Tools Used
Development tools and services used to build, test, and deploy.