Under the Hood

BBQ Judge — How It Was Built

A KCBS competition judging app built entirely with Claude Code over 10 days. One developer, one AI, zero boilerplate generators. This page is a complete teardown — architecture, costs, session-by-session build log, and lessons learned. Every figure on this page is derived directly from the actual codebase.

Last updated: March 19, 2026

By the Numbers

Auto-audited from the codebase. Client/server split measured by "use client" directive.

  • 16,424 – Lines of code (8,280 client / 8,144 server)
  • 136 – Source files (.ts and .tsx)
  • 39 – React components (feature + shared)
  • 62 – Server actions (across 10 action files)
  • 12 – Database models (Prisma schema, PostgreSQL)
  • 113 – Unit tests (25 test suites, all passing)
  • 5 – Feature modules (domain-driven architecture)
  • 8 – Prisma migrations (incremental schema evolution)
  • 19 – Git commits (Mar 6 – Mar 16, 2026)
  • 10 – PRs merged (feature-branch workflow)
  • 1,561 – Documentation lines (across 15 docs, Diataxis)
  • 103 – CLAUDE.md lines (AI context)
  • 38 – Dependencies (25 prod + 13 dev)

What Would This Cost to Build?

Estimates based on actual codebase scope: 16K+ LOC, 12 DB models, 62 server actions, role-based auth, complex business logic.

Cost Comparison

Scope Included

  • 12-model PostgreSQL schema with migrations and seed data
  • Role-based auth (JWT) with two Credentials providers and rate limiting
  • 62 server actions with auth guards, transactions, and Zod validation
  • 39 React components across 5 feature modules (organizer, judge, captain flows)
  • KCBS scoring engine with weighted averages, tiebreaking, and DQ handling
  • Box distribution algorithm with Latin square constraint
  • 113 unit tests + E2E simulation script (2,000+ assertions)
  • Full documentation suite (15 docs in Diataxis framework)
  • Dark/light theme, mobile responsive, WCAG accessibility
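The server-action shape listed above (auth guard, then validation, then a transactional write) can be sketched roughly as follows. The names (`requireOrganizer`, `assignJudgeToTable`) and the inline stand-ins for the Zod schema and Prisma transaction are assumptions, kept dependency-free so the sketch runs on its own; they are not the app's actual code.

```typescript
// Stand-ins for the real session, Zod, and Prisma pieces (illustrative only).
type Role = "ORGANIZER" | "JUDGE" | "CAPTAIN";
type Session = { user: { id: string; role: Role } };

function requireOrganizer(session: Session | null): Session {
  if (!session || session.user.role !== "ORGANIZER") throw new Error("Unauthorized");
  return session;
}

// Tiny runtime check standing in for a Zod `schema.parse(input)` call.
function parseAssignInput(input: unknown): { judgeId: string; tableId: string } {
  const o = input as { judgeId?: unknown; tableId?: unknown };
  if (typeof o?.judgeId !== "string" || typeof o?.tableId !== "string") {
    throw new Error("Invalid payload");
  }
  return { judgeId: o.judgeId, tableId: o.tableId };
}

// The action pattern: 1) auth guard, 2) validate, 3) write.
// In the real app both writes would sit inside one prisma.$transaction(...).
function assignJudgeToTable(session: Session | null, input: unknown) {
  const s = requireOrganizer(session);                  // identity from session only
  const { judgeId, tableId } = parseAssignInput(input); // never trust the raw payload
  return {
    assignment: { judgeId, tableId },
    audit: { actorId: s.user.id, action: "ASSIGN_JUDGE" },
  };
}
```

The ordering is the point: nothing touches the database until both the caller's role and the payload have been checked.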
| Approach | Rate | Hours | Estimated Cost |
|---|---|---|---|
| US Dev Shop | $150–250/hr | 400–600 | $60,000–$150,000 |
| Offshore (India/China) | $25–60/hr | 500–800 | $12,500–$48,000 |
| AI-Assisted (this project) | $20/mo API plan | ~50 human hrs | $20–$60 API cost |

Disclaimer: These are rough estimates for comparison purposes. Actual costs vary by team, timeline, requirements clarity, and project management overhead. The AI-assisted cost reflects API/subscription costs only — human time was spent on architecture decisions, code review, and prompt engineering (~50 hours across 10 days).

AI Development Workflow

How this project was built using Claude Code as the primary development tool.

Terminal-Only Development

Every line of code was generated through Claude Code (CLI). No IDE code generation, no copy-paste from ChatGPT. The terminal is the entire development environment — prompts in, code out, diffs reviewed.

CLAUDE.md — Cross-Session Memory

The 103-line CLAUDE.md file gives Claude full project context at the start of every session: stack constraints (Prisma must stay on v5), business rules (KCBS scoring), auth patterns, seed data. Without it, every session starts from zero. With it, Claude makes architecturally consistent decisions immediately.
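An illustrative excerpt (not the actual file) of what such a context file looks like, assembled only from constraints mentioned on this page:

```markdown
# CLAUDE.md (illustrative excerpt)

## Stack constraints
- Prisma must stay on v5; do not upgrade
- Tailwind CSS v3 + shadcn/ui (new-york style); no Tailwind v4

## Business rules
- KCBS scoring: weighted averages, tiebreaking, DQ handling
- Box distribution must satisfy the Latin square constraint

## Auth
- NextAuth v5 (beta), JWT sessions, 24h expiry, Credentials providers
- Every server action starts with an auth guard; user IDs derive from the session

## Seed data
- Local test credentials come from the seed script (24 judges, 24 teams, 4 tables)
```

The file is cheap to maintain and pays off immediately: every session opens with the same constraints instead of rediscovering them.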

Session = 1 Conversation = 1 PR

Each development session was one Claude Code conversation, typically producing one PR. Sessions ranged from focused fixes (PR #3: security hardening, 15 files) to large feature pushes (PR #1: full app scaffold, 130 files).

Human Directs, AI Implements

I made architecture decisions (feature modules, barrel exports), defined the security model (auth guards, session-derived IDs), designed UX flows (judge phases, captain workflow), and decided what to build next. Claude generated the implementation, tests, and documentation. I reviewed every diff.

What I Delegate vs. Direct

Delegated to Claude

  • Component implementation from description
  • Server action boilerplate + Prisma queries
  • Test generation for pure utility functions
  • Prisma schema design from requirements
  • Accessibility fixes (ARIA, keyboard nav)
  • Documentation generation (Diataxis structure)

Directed by me

  • Architecture decisions (feature modules, barrel exports)
  • Security model (auth guards, session-derived IDs)
  • UX flows (judge phases, captain workflow)
  • What to build next (session sequencing)
  • What to cut (no per-user PINs for MVP)
  • Code review — approving or rejecting diffs

Development Stats

  • 10 PRs merged
  • 10 development days
  • 113 unit tests
  • 2,000+ E2E assertions

Architecture Overview

How the system pieces fit together — from browser to database.

Tech Stack

Grouped by layer. Versions are pinned — see constraints in CLAUDE.md.

Frontend
  • Next.js 14.2 – App Router, server components, server actions
  • React 18 – Hooks, context, Suspense
  • TypeScript 5 – Strict mode across entire codebase
  • Tailwind CSS v3 – Utility-first, custom theme (deep red primary)
  • shadcn/ui + Radix UI – Accessible headless primitives (new-york style)
  • Zustand – Lightweight client state management
  • React Hook Form + Zod – Form handling with runtime schema validation
  • next-themes – Dark/light mode with system preference
  • Lucide React – Icon library
  • Mermaid.js – Diagrams as code (ERDs, flowcharts)

Backend & Data
  • Prisma 5 – Type-safe ORM, migrations, transactions, seeding
  • Supabase (PostgreSQL) – Managed Postgres with connection pooling
  • NextAuth.js v5 (beta) – JWT auth, two Credentials providers, 24h expiry
  • bcryptjs – Password hashing for all stored credentials
  • Custom rate limiter – In-memory sliding window (5 attempts / 15 min)
  • date-fns – Date manipulation and formatting

Testing & Quality
  • Vitest – 113 unit tests across 25 suites
  • E2E simulation script – Full KCBS lifecycle, 2,000+ assertions
  • ESLint – Next.js linting configuration

Infrastructure
  • Vercel – Hosting with edge middleware and preview deploys
  • GitHub – Source control with PR workflows (10 PRs merged)
  • tsx – TypeScript execution for scripts and seeding

Database Schema

12 models in PostgreSQL (via Prisma), grouped by domain.

Competition Setup
Scoring & Corrections
Audit Trail

Feature Modules

Each module in src/features/ has the same structure: actions/, components/, hooks/, store/, schemas/, types/, utils/, and an index.ts barrel export. Nothing leaks outside the barrel.
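Concretely, each module directory follows this layout (the tree is illustrative; file names inside the folders vary by module):

```
src/features/competition/
├── actions/      # server actions, split into focused files
├── components/
├── hooks/
├── store/
├── schemas/
├── types/
├── utils/
└── index.ts      # barrel export: the module's only public surface
```

Anything another module needs must be re-exported through `index.ts`, which keeps cross-module dependencies explicit and reviewable.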

competition/ – 27 actions · 12 components

Competition lifecycle — CRUD, category round advancement, box distribution, judge/table management.

judging/ – 16 actions · 13 components

Judge experience — setup flow (4 phases), score card submission, appearance-first workflow, comment cards.

scoring/ – 9 actions · 6 components

Table captain dashboard — score review, correction request approval/denial, category submission.

tabulation/ – 7 actions · 7 components

Results engine — KCBS weighted scoring, tiebreaking, winner declaration, score audit, audit log.

users/ – 3 actions · 1 component

Judge import (single + bulk) and search. Lightweight — most user logic lives in competition.

Build Journal

Session-by-session timeline, most recent first. Includes mistakes — this is what actually happened, not a highlight reel.

Full Build Timeline
Mar 16 – Tech page overhaul (PR #10)

Rewrote the /tech page from a simple stats page into a comprehensive 11-section build teardown with Mermaid diagrams, cost comparison, AI workflow documentation, and build journal.

+485 / -347
Mar 13 – Tech page v1 (PR #9)

Created the initial /tech page with codebase stats, tools inventory, and integrations.

+347
Mar 11 AM – Docs, E2E simulation, architecture refactor (PR #7, PR #8)

Restructured all documentation into Diataxis framework (tutorials, how-to, reference, explanation). Built E2E competition simulation script — runs full KCBS lifecycle with 2,000+ assertions and generates a markdown report. Then a refactor pass: split the monolithic competition/actions/index.ts into 6 focused files. Cleaned up stale files.

+5,318 / -2,166

Course correction: The simulation script found 3 bugs that unit tests missed (all in edge cases around DQ handling). Should have written the simulation earlier — it's a better integration test than anything I had.

Mar 10 – Accessibility, security, edge cases (PR #6)

WCAG pass: ARIA labels, keyboard navigation fixes, focus management. OWASP security headers in next.config. Rate limiting on both login providers. Three new edge-case test suites (box distribution edge cases, all schema validation, tabulation edge cases). Test count jumped from ~40 to 113.

+1,657 / -258

Course correction: Rate limiter is in-memory — resets on deploy. Fine for MVP, but would need Redis or similar for production. Chose to document this rather than over-engineer.
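A minimal sketch of an in-memory sliding-window limiter of the kind described, assuming 5 attempts per 15-minute window per key; function and variable names here are assumptions, not the app's actual code.

```typescript
// Sliding-window rate limiter sketch: per-key attempt timestamps in memory.
// State lives in a Map, so it resets on every deploy (the MVP trade-off noted above).
const WINDOW_MS = 15 * 60 * 1000; // 15 minutes
const MAX_ATTEMPTS = 5;
const attempts = new Map<string, number[]>();

function isRateLimited(key: string, now = Date.now()): boolean {
  // Keep only timestamps still inside the sliding window.
  const recent = (attempts.get(key) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_ATTEMPTS) {
    attempts.set(key, recent);
    return true; // refuse without recording another attempt
  }
  recent.push(now);
  attempts.set(key, recent);
  return false;
}
```

Swapping the Map for Redis (keyed sorted sets with the same window logic) would make this survive deploys and scale across instances.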

Mar 9 PM – Organizer UX overhaul (PR #5)

Major restructure of organizer pages: new routes for check-in, table management, box distribution. Added competition selector dropdown, score audit view with per-judge formula breakdown. Enhanced DataTable with client-side search and pagination. 41 files touched.

+2,377 / -652

Course correction: Navigation structure changed three times in this session. Should have sketched the IA before coding. Ended up with dead routes that had to be cleaned up later.

Mar 9 mid-day – Box distribution algorithm (PR #4)

Built the box distribution generator with Latin square constraint (no repeat competitor at same table across categories). Added 188 unit tests for the algorithm. Captain dashboard polished with category submission screen.

+1,006 / -100
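The Latin square constraint itself can be satisfied with a simple rotation, sketched below under the assumption that the number of categories does not exceed the number of tables. This is illustrative only; the app's real generator presumably balances additional constraints.

```typescript
// result[category][competitor] = table index for that competitor's box.
// Rotating the assignment by category guarantees a competitor never lands
// on the same table twice, as long as categories <= tables.
function boxDistribution(
  competitors: number,
  tables: number,
  categories: number
): number[][] {
  if (categories > tables) throw new Error("Need categories <= tables for this scheme");
  return Array.from({ length: categories }, (_, cat) =>
    Array.from({ length: competitors }, (_, comp) => (comp + cat) % tables)
  );
}
```

With 24 competitors, 4 tables, and 4 categories (the seed-data shape), each competitor visits all 4 tables exactly once and each table receives 6 boxes per category.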
Mar 9 AM – Judge flow + security hardening (PR #2, PR #3)

Extended judge flow with event info screen and comment cards. Then a focused security pass: created auth-guards.ts with requireAuth/requireOrganizer/requireJudge/requireCaptain. Retrofitted every server action to start with an auth guard. Wrapped DB writes in transactions. Derived all user IDs from session (removed client-supplied IDs).

+1,854 / -752

Course correction: Auth guards should have been in the first commit. Building all the features before security meant retrofitting 10+ action files. The guard pattern was simple — no reason not to do it from day one.
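The guard pattern is indeed cheap to express. A hypothetical sketch follows; the real auth-guards.ts reads the NextAuth session itself, whereas here the session is passed in so the sketch runs standalone.

```typescript
type Role = "ORGANIZER" | "JUDGE" | "CAPTAIN";
type Session = { userId: string; role: Role };

// requireAuth: any signed-in user. Role guards narrow further.
function requireAuth(session: Session | null): Session {
  if (!session) throw new Error("Not authenticated");
  return session;
}

// Factory so each role guard is one line, and retrofitting an action
// means adding a single call at the top of its body.
function makeGuard(required: Role) {
  return (session: Session | null): Session => {
    const s = requireAuth(session);
    if (s.role !== required) throw new Error(`Requires ${required} role`);
    return s; // callers derive user IDs from here, never from client input
  };
}

const requireOrganizer = makeGuard("ORGANIZER");
const requireJudge = makeGuard("JUDGE");
const requireCaptain = makeGuard("CAPTAIN");
```

Returning the session from the guard is what enables session-derived IDs: the action takes its actor identity from the guard's return value instead of any client-supplied field.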

Mar 8 – Full app built in one session (PR #1)

Single massive commit: Prisma schema (12 models), all 5 feature modules with actions/components/types/schemas, auth system (JWT + Credentials), dashboard shell with role-based navigation, KCBS scoring constants, seed data (24 judges, 24 teams, 4 tables), design system (11 common components + 10 UI primitives), 3 test suites, and the rules page. 130 files changed.

+14,396 / -1,605

Course correction: Built too much in one commit. Made the next round of fixes harder to isolate. Should have shipped Prisma schema + auth first, then features incrementally.

Mar 6 – Project scaffolding

Created Next.js 14 app with TypeScript strict mode. Added shadcn/ui configured for Tailwind v3 (not v4; the shadcn/ui Tailwind v4 output generates code incompatible with this setup). Set up path aliases.

+11,118

Lessons Learned

What worked well and what was hard building a real application with AI.

7 Insights
Worked well

CLAUDE.md is the single most important file

103 lines that give Claude full context across sessions: stack constraints (Prisma must stay on v5), business rules (KCBS scoring), auth patterns, seed data credentials. Without it, every new session starts from zero. With it, Claude can make architecturally consistent decisions immediately.

Worked well

Pure functions for business logic

Extracting KCBS scoring rules, box distribution, and validation into pure utility functions made them trivially testable (113 tests, all passing). No database mocking needed. The E2E simulation composes these same functions to validate a full competition lifecycle.
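As an example of the shape these utilities take, here is a sketch of KCBS entry scoring using the commonly published KCBS weighting factors; the app's actual constants and tiebreak logic live in its scoring module and may differ.

```typescript
// Commonly published KCBS weights; a perfect six-judge entry scores 180.
const WEIGHTS = { appearance: 0.56, taste: 2.2972, tenderness: 1.1428 };

type CardScore = { appearance: number; taste: number; tenderness: number };

// Weighted total for a single judge's card.
function cardTotal(c: CardScore): number {
  return (
    c.appearance * WEIGHTS.appearance +
    c.taste * WEIGHTS.taste +
    c.tenderness * WEIGHTS.tenderness
  );
}

// Entry score: drop the lowest judge total and sum the rest
// (with six judges, the top five count). Pure function, no I/O:
// trivially unit-testable with no database mocking.
function entryScore(cards: CardScore[]): number {
  const totals = cards.map(cardTotal).sort((a, b) => a - b);
  return totals.slice(1).reduce((sum, t) => sum + t, 0);
}
```

Because the function is pure, the E2E simulation can feed it thousands of generated score cards and assert invariants (perfect entries score 180, one weak card never drags an otherwise perfect entry down) without any infrastructure.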

Worked well

Feature module pattern keeps things navigable

Each of the 5 feature modules has the same structure (actions/, components/, types/, schemas/, store/, index.ts barrel). Claude can work on one module without accidentally breaking another. The barrel export forces you to define the public API.

Worked well

E2E simulation catches what unit tests miss

The simulation script found 3 bugs that passed code review and unit tests — all in edge cases around DQ handling and scoring with missing judges. A full-lifecycle run exercises interactions between modules that no isolated unit test reaches.

Hard

Security is easier to build in than bolt on

Auth guards, session-derived user IDs, and table ownership verification were added in PR #3 — after the entire app was already built. Retrofitting 10+ action files was tedious and error-prone. The pattern itself was simple. Should have been there from commit one.

Hard

AI generates code faster than you can review it

The biggest risk isn't wrong code — it's code you approved without fully understanding. The simulation script caught 3 bugs that passed code review. Adding more automated verification (tests, E2E simulation) is more reliable than trying to review faster.

Hard

Don't build the whole app in one commit

PR #1 changed 130 files in one shot. When bugs surfaced, it was nearly impossible to git bisect or isolate the cause. Incremental PRs (schema → auth → features → polish) would have been much easier to debug and review.

Tools Used

Development tools and services used to build, test, and deploy.

AI Development

  • Claude Code (CLI) – Primary development tool: all code generation, architecture decisions, and iteration
  • Claude Opus 4 – LLM powering code generation, planning, and review

Development Tools

  • VS Code – Editor for code review and manual edits
  • GitHub – Source control with PR-based workflow (10 PRs merged)
  • Vercel – Deployment platform with preview deploys per PR
  • Supabase – Managed PostgreSQL hosting with connection pooling

Build & Runtime

  • Node.js – JavaScript runtime
  • npm – Package management (38 dependencies)
  • tsx – TypeScript execution for scripts and seeding
  • PostCSS – CSS processing pipeline for Tailwind