
Command: /execute-plan

Required Knowledge

Load only these knowledge files before executing:

  • knowledge/coding-standards.md
  • knowledge/architecture-guide.md
  • knowledge/ui-standards.md
  • knowledge/engineering-lessons.md

Purpose: Execute the implementation plan generated by /create-plan.

This command activates the engineering agents responsible for building the system.

Agents involved:

  • Frontend Engineer Agent
  • Backend Engineer Agent


Role

You are responsible for implementing the product plan using clean, maintainable code.

You must follow the architecture and design specifications defined earlier.

Do not change system architecture unless a critical issue is discovered.


Input

You will receive:

  • Plan Summary
  • Product Specification
  • UX Design
  • System Architecture
  • Database Schema
  • Implementation Tasks

Generated from the /create-plan command.


Process

Follow this sequence.


0 Task Breakdown (MANDATORY — before writing any code)

Before implementing anything, decompose the plan into atomic tasks:

  1. Read the plan manifest from experiments/plans/manifest-<issue_number>.json or experiments/plans/plan-<issue_number>.md
  2. Break implementation into atomic tasks. For each task define:
    • Task name (e.g., "Create schema.sql", "Build /api/scores endpoint")
    • Target file path(s) (exact files to create or modify)
    • Size: S (< 2 min, single file), M (2-5 min, 1-2 files), L (5-10 min, multi-file)
    • Dependencies (which tasks must complete first)
  3. Present the full task list to the PM for review and approval
  4. Do NOT write any code until the PM approves the task breakdown
  5. Execute tasks sequentially. After each L-sized task, run tests (npm test) before proceeding
  6. Mark each task done as it completes

Example task breakdown format:

| # | Task | Files | Size | Depends On |
|---|------|-------|------|------------|
| 1 | Create database schema | schema.sql | S | — |
| 2 | Build scoring API endpoint | src/app/api/scores/route.ts | M | 1 |
| 3 | Create score display component | src/components/ScoreCard.tsx | M | 2 |
| 4 | Wire homepage to API | src/app/page.tsx | M | 2, 3 |
| 5 | Add PostHog instrumentation | src/lib/posthog.ts, all routes | L | 2, 3, 4 |

1 Frontend Implementation

Use the Frontend Engineer Agent to:

  • implement UI pages
  • build reusable components
  • connect frontend with backend APIs
  • implement loading states and error handling


2 Backend Implementation

Use the Backend Engineer Agent to:

  • implement API endpoints
  • implement service logic
  • integrate database operations
  • handle errors and validation

Sentry setup is a backend deliverable — not a deploy-check task. During backend implementation:

  1. npm install @sentry/nextjs
  2. npx @sentry/wizard@latest -i nextjs (creates sentry.client.config.ts, sentry.server.config.ts, updates next.config.ts)
  3. Add NEXT_PUBLIC_SENTRY_DSN, SENTRY_AUTH_TOKEN, SENTRY_ORG, SENTRY_PROJECT to .env.local.example
  4. Wrap at least one API error handler with Sentry.captureException(e)
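A minimal sketch of step 4, with the capture function injected so the pattern is testable in isolation; in the real route you would pass `Sentry.captureException` from `@sentry/nextjs`. The handler name and response shape are illustrative, not from the plan.

```typescript
// Error-handler wrapping sketch. `captureException` stands in for
// Sentry.captureException; route and handler names are illustrative.
type Capture = (e: unknown) => void;

export async function handleRequest(
  work: () => Promise<unknown>,
  captureException: Capture,
): Promise<{ status: number; body: unknown }> {
  try {
    return { status: 200, body: await work() };
  } catch (e) {
    captureException(e); // report to Sentry before returning a generic 500
    return { status: 500, body: { error: "internal_error" } };
  }
}
```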

Added: 2026-04-03 — Move Sentry setup to execute-plan; deploy-check is verification not first setup


3 Database Setup

Implement the database schema defined earlier.

Ensure tables, relationships, and indexes are created correctly.


4 API Integration

Ensure frontend communicates correctly with backend endpoints.

Verify request and response formats.


5 Implementation Validation

Verify:

  • core user journey works
  • data flows correctly through the system
  • UI interactions behave correctly

Read path / write path checkpoint (required for every page in the plan):

For every page that displays data, verify BOTH paths are implemented before marking it complete:

  • Write path: mutation fires (POST/upload) → result displayed in same request cycle
  • Read path: page loads fresh (refresh, direct URL, email deep link) → same result hydrated from DB via authenticated GET endpoint

If only the write path is implemented, the page is incomplete. Any page linked from an email CTA, push notification, or external URL that has no implemented read endpoint is a blocking gap.
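A minimal sketch of the read path, with the DB accessor injected so it can be checked in isolation; `ScoreRow` and `getScoreById` are hypothetical names, not from the plan.

```typescript
// Read-path sketch: an authenticated GET that hydrates the same record the
// write path stored. All names here are illustrative.
type ScoreRow = { id: string; value: number };

export async function getScore(
  userId: string | null, // resolved by the auth layer
  id: string,
  getScoreById: (id: string) => Promise<ScoreRow | null>, // DB accessor
): Promise<{ status: number; body: unknown }> {
  if (!userId) return { status: 401, body: { error: "unauthenticated" } };
  const row = await getScoreById(id); // hydrate from the DB, not in-memory state
  if (!row) return { status: 404, body: { error: "not_found" } };
  return { status: 200, body: row };
}
```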

Third-party library API verification (required for every new npm integration):

After wiring any npm package for the first time:

  1. Check the installed version in package.json.
  2. Verify the generated call pattern against the package's TypeScript types or exported index — not against training knowledge.
  3. Run npm test to confirm the integration behaves as expected.
  4. Training knowledge of library APIs is not sufficient for version-sensitive properties (e.g., result.total vs result.pages?.length).
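Illustrative only: the kind of version-sensitive shape drift the check above guards against. Both result shapes and the helper are hypothetical; always confirm the real shape against the installed package's exported types.

```typescript
// Hypothetical drift between two releases of the same package:
// older versions expose `total`, newer ones expose `pages`.
type ResultV1 = { total: number };            // older API shape
type ResultV2 = { pages?: { id: string }[] }; // newer API shape

export function resultCount(r: ResultV1 | ResultV2): number {
  if ("total" in r && typeof r.total === "number") return r.total;
  return "pages" in r ? r.pages?.length ?? 0 : 0;
}
```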

Added: 2026-04-03 — MoneyMirror (issue-009)


Output Format

Return output using this structure.


Frontend Implementation

Backend Implementation

Database Setup

API Integration

Implementation Notes

Known Issues


5b File Size Budget Requirement

The 300-line pre-commit limit must be applied during code generation, not discovered at commit time.

Rules:

  • API route handlers: must stay under 200 lines. If a route handles more than 2 logical phases (e.g., validate → AI call → DB write → telemetry), extract each phase into a named helper function in a separate file before writing the route past 150 lines.
  • Page components: must stay under 250 lines. If a page includes multiple UI states (loading, upload, result), extract each state into a named sub-component before writing the page past 200 lines.
  • Never write a large file and refactor later. Identify extraction points upfront during task breakdown (Step 0). If a file is projected to exceed the limit, add an extraction task to the task list before writing any code.

Violations discovered at deploy-check (pre-commit hook rejection) are execute-plan failures, not deploy-check tasks.

Added: 2026-04-03 — MoneyMirror (issue-009)


6 Telemetry Completeness Requirement

For every API route calling an external AI service, implement PostHog events in ALL branches:

  • Success path: event with latency_ms + key response properties
  • Timeout branch: event with timeout_ms property
  • Parse/AI failure branch: event with error_type property
  • Rate limit branch: event (no PII)
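The four branches above can be sketched as one capture call per control-flow path; `capture` stands in for posthog.capture, the event names are illustrative, and the rate-limit branch assumes the AI client throws an error carrying `status: 429`.

```typescript
// One telemetry event per branch: success, timeout, rate limit, failure.
type Capture = (event: string, props: Record<string, unknown>) => void;

export async function scoreWithTelemetry(
  callAi: () => Promise<{ score: number }>,
  capture: Capture,
  timeoutMs = 10_000,
): Promise<number | null> {
  const start = Date.now();
  try {
    const result = await Promise.race([
      callAi(),
      new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error("timeout")), timeoutMs),
      ),
    ]);
    capture("score_succeeded", { latency_ms: Date.now() - start, score: result.score }); // success path
    return result.score;
  } catch (e) {
    if (e instanceof Error && e.message === "timeout") {
      capture("score_timed_out", { timeout_ms: timeoutMs });                             // timeout branch
    } else if ((e as { status?: number })?.status === 429) {
      capture("score_rate_limited", {});                                                 // rate limit branch, no PII
    } else {
      capture("score_failed", { error_type: e instanceof Error ? e.name : "unknown" });  // parse/AI failure branch
    }
    return null;
  }
}
```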

For every cron worker route, implement:

  • Per-user failure event inside the catch block (e.g., reminder_trigger_failed)
  • Aggregate cron_run_completed event after Promise.allSettled resolves
  • Experiment lifecycle events at every guard evaluation (EXPERIMENT_END_DATE check, opt-out threshold check)
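The worker shape above can be sketched as follows; `capture` stands in for posthog.capture, and the send function and event names follow the examples in this section rather than any real route.

```typescript
// Cron worker telemetry sketch: a per-user failure event inside each catch
// block, plus one aggregate event after Promise.allSettled resolves.
type Capture = (event: string, props: Record<string, unknown>) => void;

export async function runReminderCron(
  userIds: string[],
  sendReminder: (userId: string) => Promise<void>,
  capture: Capture,
): Promise<{ sent: number; failed: number }> {
  const results = await Promise.allSettled(
    userIds.map(async (userId) => {
      try {
        await sendReminder(userId);
      } catch (e) {
        capture("reminder_trigger_failed", {
          user_id: userId,
          error_type: e instanceof Error ? e.name : "unknown",
        });
        throw e; // keep the settled result rejected so the aggregate count is accurate
      }
    }),
  );
  const failed = results.filter((r) => r.status === "rejected").length;
  capture("cron_run_completed", { total: userIds.length, failed }); // aggregate event
  return { sent: userIds.length - failed, failed };
}
```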

A worker catch block with no telemetry event is incomplete. Error-path events are blocking requirements, not production-only enhancements.

An API route with partial telemetry is equivalent to no telemetry for funnel analysis. The metric plan verifies event schema — it does not define which events exist.

For pageview tracking: if capture_pageview: false is set on the PostHog provider, add an explicit posthog.capture('page_viewed') call in a useEffect on the root page component.

Added: 2026-03-19 — SMB Feature Bundling Engine

Updated: 2026-03-21 — Ozi Reorder Experiment (error-path + lifecycle events)


7 Single Emission Source Rule

Each PostHog event has exactly one authoritative emission point.

  • If an event is fired server-side on API confirmation, it must not also be fired client-side via useEffect or user interaction handlers.
  • If an event is fired client-side, no API route should fire the same event name on the same user action.
  • North Star metric events fired server-side must not be re-fired client-side. Dual-emission of a North Star event is a critical violation.
  • Document the canonical emission point in an inline comment: // Single emission source: server-side in /api/[route]

Before wiring any POST route, confirm its auth header requirement from the architecture spec. Worker routes without a named auth mechanism are a blocking violation.

Added: 2026-03-21 — Ozi Reorder Experiment


9 TDD Mandate (RED-GREEN-REFACTOR)

For every API route and service function, follow this cycle before marking the task complete:

  1. RED: Write a failing test in __tests__/ that describes the expected behavior. Run it and confirm it fails.
  2. GREEN: Implement the minimum code to make the test pass.
  3. REFACTOR: Clean up the implementation without breaking tests.

Test file structure:

apps/[project]/
  __tests__/
    api/
      [route-name].test.ts   # One test file per API route
    lib/
      [util-name].test.ts    # One test file per utility

Required test coverage for each API route:

  • Happy path (valid input → expected output)
  • Missing required fields (expect 400)
  • Auth failure (expect 401)
  • Downstream service failure (expect 500 or graceful fallback)

Test framework: Vitest (preferred) or Jest.

# Add to package.json
"scripts": {
  "test": "vitest run",
  "test:watch": "vitest"
}
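The four required cases can be sketched framework-agnostically as below; in the project each would be a Vitest `it()` block in `__tests__/api/`, and the handler here is a stand-in for the real route, not its implementation.

```typescript
// Stand-in handler plus the four required coverage cases.
// All names and the auth token are illustrative.
type Req = { auth?: string; body?: { text?: string } };

async function postScores(
  req: Req,
  scoreText: (t: string) => Promise<number>,
): Promise<{ status: number }> {
  if (req.auth !== "valid-token") return { status: 401 }; // auth failure
  if (!req.body?.text) return { status: 400 };            // missing required field
  try {
    await scoreText(req.body.text);
    return { status: 200 };                               // happy path
  } catch {
    return { status: 500 };                               // downstream service failure
  }
}

export async function runCoverage() {
  const ok = async () => 42;
  const boom = async () => { throw new Error("downstream"); };
  return {
    happy: (await postScores({ auth: "valid-token", body: { text: "hi" } }, ok)).status,
    missing: (await postScores({ auth: "valid-token", body: {} }, ok)).status,
    unauthed: (await postScores({ body: { text: "hi" } }, ok)).status,
    downstream: (await postScores({ auth: "valid-token", body: { text: "hi" } }, boom)).status,
  };
}
```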

Tests must pass (npm test exits 0) before execute-plan is marked complete. A stage with failing tests is equivalent to a blocked stage.

Added: 2026-03-22 — claude-caliper integration (TDD mandate)


10 Telemetry Verification Checklist

After implementation, generate a verification checklist from the metric-plan events and confirm each event is present in the codebase.

Process:

  1. Read experiments/plans/plan-<issue_number>.md and extract all PostHog event names from the analytics/metrics section.
  2. For each event, verify it is grep-able in the codebase:
    grep -r "posthog.capture\|posthog\.identify\|capture(" apps/[project]/src --include="*.ts" --include="*.tsx"
  3. Produce a verification table:
| Event Name | Expected In | Found? | File:Line |
|------------|-------------|--------|-----------|
| reorder_reminder_sent | worker route | Yes | api/worker/route.ts:42 |
| reorder_page_viewed | page component | No | MISSING |

  4. Any row marked MISSING is a blocking violation. Do not mark execute-plan complete until all metric-plan events are present in the codebase.

Rule: The metric plan verifies event schema. Execute-plan verifies event presence. Both are required.

Added: 2026-03-22 — Telemetry verification (fixes Issues 004/005/006 pattern)


11 Parallel Execution via Git Worktrees (Optional)

When the implementation plan has independent frontend and backend phases with no shared state dependencies, they can be executed in parallel using git worktrees.

When to use:

  • Frontend pages do not depend on backend being complete (uses mock data or stubs)
  • Backend routes do not depend on frontend components
  • Both phases are defined as separate task groups in the JSON manifest

How to set up parallel worktrees:

# Create a worktree for frontend work
git worktree add ../[project]-frontend feature/[issue]-frontend

# Create a worktree for backend work
git worktree add ../[project]-backend feature/[issue]-backend

# After both complete, merge to feature branch
git merge feature/[issue]-frontend
git merge feature/[issue]-backend

Reconciliation rule: If merge conflicts arise in shared files (layout.tsx, lib/db.ts), resolve in favor of the backend implementation for data/auth files and the frontend implementation for UI/styling files. Flag unresolvable conflicts for human review before proceeding.

Default: Sequential execution is safe. Use parallel worktrees only when phases are clearly independent.

Added: 2026-03-22 — Parallel execution guidance (claude-caliper alignment)


8 Execute-Plan Completion Checklist

Before marking execute-plan complete, verify:

  1. README.md exists in apps/[project]/ and contains:

    • One-liner (what it does + who it's for)
    • Numbered user journey
    • Stack table (all layers)
    • All env vars listed by name
    • Schema apply step (tables, where to run)
    • npm run dev instructions and what success looks like
    • Every HTTP endpoint documented (method, path, body, response)
    • PostHog analytics events table (event name, trigger, properties)
    • Key design decisions
  2. .env.local.example lists every process.env.* reference in the codebase — including any variables added during peer-review or fix cycles.

    • Mandatory grep verification: Run grep -r 'process\.env\.' src/ | grep -oP 'process\.env\.\K[A-Z_]+' | sort -u and compare against every key in .env.local.example. Any key in the grep output absent from .env.local.example is a blocking gap. Any key name that diverges (e.g., NEXT_PUBLIC_ added or removed) is a deploy blocker. .env.local.example must be generated from source, never from memory.

    Added: 2026-04-03 — MoneyMirror (issue-009)

  3. Infrastructure provisioning is complete (blocking — do not mark done until all pass):

    • Neon/Supabase project created and DATABASE_URL filled in .env.local
    • Database schema applied (schema.sql run in SQL editor; all tables verified)
    • Auth provider provisioned (e.g., Neon Auth NEON_AUTH_BASE_URL obtained and filled)
    • All non-optional env vars have real values in .env.local — no empty strings
    • Sentry project created; NEXT_PUBLIC_SENTRY_DSN, SENTRY_AUTH_TOKEN, SENTRY_ORG, SENTRY_PROJECT filled in .env.local
    • npm run dev boots without errors and the core user flow works end-to-end locally

If any item above is incomplete, execute-plan is not done — it is blocked. Infra gaps discovered at deploy-check are execute-plan failures.

  4. Enum input validation contract: For every new input field that persists an enum value (type, status, network, purpose, category, etc.):
    • Client uses a picker, select, or radio group constrained to valid values — free-text inputs for enum columns are not acceptable
    • Server returns HTTP 4xx on invalid enum input — silent sanitization to null is a blocking violation (it gives users false confidence their input was saved)
    • schema.sql includes a CHECK constraint for the enum column
    • Mandatory: All three must be present. A feature missing any one is incomplete.

    Added: 2026-04-04 — MoneyMirror Phase 2
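    The server half of the contract can be sketched as below; the column name and valid values are illustrative, and the real valid set must mirror schema.sql's CHECK constraint.

    ```typescript
    // Server-side enum validation sketch: reject invalid input with a 4xx
    // rather than silently sanitizing it to null. Values are illustrative
    // and must stay in sync with the CHECK constraint in schema.sql.
    const VALID_CATEGORIES = new Set(["income", "expense", "transfer"]);

    export function validateCategory(
      input: unknown,
    ): { status: 200; category: string } | { status: 400; error: string } {
      if (typeof input === "string" && VALID_CATEGORIES.has(input)) {
        return { status: 200, category: input };
      }
      // Never null out bad input: the user must learn the value was rejected.
      return {
        status: 400,
        error: `category must be one of: ${[...VALID_CATEGORIES].join(", ")}`,
      };
    }
    ```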

Added: 2026-03-21 — Ozi Reorder Experiment

Updated: 2026-04-03 — Add infra provisioning checklist + Sentry setup as execute-plan hard deliverables (shift-left from deploy-check)

Updated: 2026-04-04 — MoneyMirror Phase 2 (item 4: enum input validation contract)


Rules

Follow architecture defined earlier.

Avoid unnecessary complexity.

Prioritize working MVP over perfect implementation.