Commands Reference

All commands are available as slash commands in your AI coding agent after running specify init.

Core Workflow

These commands form the main Spec-Driven Development pipeline:

/speckit.constitution

Create or update project governing principles and development guidelines.

```bash
/speckit.constitution Code quality, testing standards, security, accessibility
```
  • Output: .specify/memory/constitution.md
  • When to use: Once at project start, update as principles evolve
  • Learning loop: After implementation cycles, it can analyze patterns and suggest new principles

/speckit.specify

Define what you want to build — requirements, user stories, and acceptance criteria.

```bash
/speckit.specify Build a photo album app with drag-and-drop organization
```
  • Output: .specify/specs/NNN-feature-name/spec.md
  • When to use: For each new feature
  • Creates: Feature branch, numbered directory, structured spec from template
  • Maturity tracking: Specs start at draft and evolve through refined → validated → implemented → evolved

/speckit.plan

Create a technical implementation plan with architecture and tech stack decisions.

```bash
/speckit.plan Use React with TypeScript, PostgreSQL, deploy to AWS
```
  • Output: plan.md, data-model.md, research.md, contracts/
  • When to use: After spec is written and clarified
  • Reads: spec.md, constitution, architecture spec

/speckit.tasks

Generate an actionable, ordered task breakdown from the implementation plan.

```bash
/speckit.tasks
```
  • Output: tasks.md
  • When to use: After plan is created
  • Features: Dependency ordering, parallel markers [P], file paths, TDD structure, checkpoints

/speckit.implement

Execute all tasks to build the feature according to the plan.

```bash
/speckit.implement
```
  • When to use: After tasks are generated (and optionally analyzed)
  • Reads: All spec artifacts (constitution, spec, plan, tasks)
  • Behavior: Executes tasks in order, follows TDD, reports progress

/speckit.estimate

Enrich tasks.md with hour estimates, complexity classifications, and a planning summary.

```bash
/speckit.estimate
/speckit.estimate Consider we're a junior team, be conservative
```
  • Output: Enriched tasks.md (in-place) with estimation tooltips and summary section
  • When to use: After /speckit.tasks, before /speckit.implement or /speckit.swarm
  • Reads: tasks.md (required), plan.md (tech stack calibration), spec.md, data-model.md, contracts/, research.md (optional)
  • Idempotent: Re-running strips previous estimates before recalculating
  • Complexity tiers:
    • 🟢 Low (0.5h – 2h) — scaffolding, config, simple models, docs
    • 🟡 Medium (2h – 6h) — CRUD endpoints, standard services, middleware, tests
    • 🔴 High (4h – 12h) — auth/security, multi-system integration, performance-critical code
  • Enrichments:
    • Inline tooltips on each task line: <sup title="2h | Medium | reason">⏱2h 🟡</sup>
    • Execution graph JSON gains estimates and summary keys (existing keys preserved)
    • Estimation Summary section appended with totals by complexity, phase, and user story
  • Calibration: Estimates adjust based on available artifacts (contracts reduce endpoint estimates, data-model reduces model estimates, research reduces exploration time)
  • Disclaimer: Estimates are AI-generated approximations — review and adjust based on team velocity
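
The inline tooltip format shown above can be parsed back out of an enriched task line by downstream scripts. A minimal Python sketch (the regex and the sample task line are illustrative, not the tool's actual parser):

```python
import re

# Matches the <sup title="hours | complexity | reason">...</sup> tooltip
# that /speckit.estimate appends to task lines (illustrative regex).
TOOLTIP_RE = re.compile(r'<sup title="([^|"]+)\s*\|\s*([^|"]+)\s*\|\s*([^"]*)">')

def parse_estimate(task_line: str):
    """Extract (hours, complexity, reason) from an enriched task line, or None."""
    m = TOOLTIP_RE.search(task_line)
    if not m:
        return None
    hours, complexity, reason = (part.strip() for part in m.groups())
    return hours, complexity, reason

line = '- [ ] T012 Create user model <sup title="2h | Medium | standard CRUD model">⏱2h 🟡</sup>'
print(parse_estimate(line))  # ('2h', 'Medium', 'standard CRUD model')
```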

/speckit.export

Export all tasks from tasks.md into a CSV file for project management tools and reporting.

```bash
/speckit.export
```
  • Output: tasks-export.csv in the feature directory
  • When to use: After /speckit.tasks (or /speckit.estimate for richer data)
  • Reads: tasks.md
  • CSV columns: spec_id, spec_name, task_id, task_name, done, estimation, complexity
  • Features:
    • Captures completion status (done: true/false) from checkbox state
    • Extracts estimation and complexity from <sup> tags (if present from /speckit.estimate)
    • Includes all tasks from all phases (Setup, Foundational, User Stories, Polish)
    • Standard CSV format compatible with Excel, Google Sheets, Jira import, etc.
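
Since the export is standard CSV, it can be consumed with any CSV library. A quick Python sketch against the documented columns (the sample rows are invented for illustration):

```python
import csv
import io

# Sample content mirroring the documented tasks-export.csv columns
# (rows are invented for illustration).
sample = """spec_id,spec_name,task_id,task_name,done,estimation,complexity
003,auth,T001,Set up project scaffolding,true,1h,Low
003,auth,T002,Implement login endpoint,false,4h,High
"""

rows = list(csv.DictReader(io.StringIO(sample)))
remaining = [r["task_id"] for r in rows if r["done"] == "false"]
print(remaining)  # ['T002']
```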

Design & Infrastructure

/speckit.brand

Initialize or update the project's branding specification with design tokens.

```bash
/speckit.brand Modern SaaS with teal colors and clean typography
```
  • Output: .specify/branding/branding.md + .specify/branding/tokens.json
  • When to use: Once at project start, before creating design specs
  • Covers: Colors (light/dark themes), typography scale, spacing, shadows, borders, motion, iconography

/speckit.design

Create UI component specs, user flow specs, or screen layout specs for a feature.

```bash
/speckit.design Design the user dashboard with navigation, stats cards, and activity feed
```
  • Output: .specify/specs/NNN-feature-name/design.md
  • When to use: After branding is defined, alongside or after /speckit.specify
  • Reads: Branding spec for token consistency
  • Validates: WCAG contrast ratios, accessibility requirements, token references
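
The WCAG contrast validation mentioned above follows the standard relative-luminance ratio formula. A self-contained sketch of that check (the formula is WCAG 2.x; its use here as the tool's exact implementation is an assumption):

```python
def _channel(c: float) -> float:
    # sRGB channel to linear light, per WCAG 2.x relative luminance.
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(rgb1, rgb2) -> float:
    """WCAG contrast ratio between two sRGB colors, from 1.0 to 21.0."""
    def lum(rgb):
        r, g, b = (_channel(v) for v in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    l1, l2 = sorted((lum(rgb1), lum(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background: maximum contrast, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

AA-level conformance requires at least 4.5:1 for normal body text, which is the kind of threshold a design-token validator would assert against.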

/speckit.infra

Create or update the infrastructure specification for development and production environments.

```bash
/speckit.infra Docker Compose for dev, Kubernetes with Helm for production
```
  • Output: .specify/infra/infra.md
  • When to use: At project start or when infrastructure changes
  • Scans: Existing Docker Compose, Helm charts, CI/CD, Makefiles, env templates
  • Covers: Dev environment, prod deployment (5-phase), data layer, CI/CD, observability, DR
  • Scaffolding: Can generate docker-compose.yml, Makefile, Helm charts, .gitlab-ci.yml, deploy.sh

Consistency & Quality

/speckit.reconcile

Cross-artifact conflict detection and interactive resolution. Runs automatically after every spec-modifying command.

```bash
/speckit.reconcile
```
  • Triggers after: /speckit.specify, /speckit.clarify, /speckit.constitution, /speckit.brand, /speckit.infra, /speckit.design
  • Reads: All existing specs, plans, tasks, constitution, branding, infrastructure
  • Detection passes: Contradiction detection, constitution violations, terminology drift, scope overlap, dependency conflicts, branding inconsistency, infrastructure misalignment
  • Interactive: Presents each conflict with options (keep new / keep existing / merge / defer)
  • Log: .specify/memory/reconciliation-log.md
  • Learn more: Auto-Reconciliation

/speckit.clarify

Interactive Q&A to fill specification gaps. Asks structured questions about underspecified areas.

```bash
/speckit.clarify
```
  • When to use: After /speckit.specify, before /speckit.plan
  • Behavior: Sequential coverage-based questioning, records answers in spec

/speckit.analyze

Cross-artifact consistency and quality analysis across spec, plan, and tasks.

```bash
/speckit.analyze
```
  • When to use: After /speckit.tasks, before /speckit.implement
  • Read-only: Never modifies files
  • Checks: Coverage gaps, orphan tasks, terminology drift, constitution violations, ambiguity, design token consistency
  • Scoring: 100-point rubric (completeness 30, consistency 25, testability 25, clarity 20)
  • Rating: Excellent (90+), Good (80+), Fair (70+), Poor (<70)
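
The rubric reduces to a weighted sum over the four dimensions. A sketch of how the score and rating might be combined, using the weights and thresholds listed above (the per-dimension fractions are illustrative inputs):

```python
# Weights from the 100-point rubric above.
WEIGHTS = {"completeness": 30, "consistency": 25, "testability": 25, "clarity": 20}

def quality_score(fractions: dict) -> int:
    """Combine per-dimension fractions (0.0-1.0) into the 100-point score."""
    return round(sum(WEIGHTS[k] * fractions[k] for k in WEIGHTS))

def rating(score: int) -> str:
    if score >= 90: return "Excellent"
    if score >= 80: return "Good"
    if score >= 70: return "Fair"
    return "Poor"

score = quality_score({"completeness": 0.9, "consistency": 0.8,
                       "testability": 0.8, "clarity": 1.0})
print(score, rating(score))  # 87 Good
```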

/speckit.checklist

Generate custom quality checklists that validate requirements completeness and clarity.

```bash
/speckit.checklist
```
  • When to use: Anytime you want to validate spec quality
  • Output: Markdown checklist ("unit tests for English")

/speckit.refine

Review and apply auto-generated refinement suggestions for specs.

```bash
/speckit.refine
```
  • When to use: After implementation, or after constitution updates
  • Detects: Gaps (thin specs vs. complex implementation), staleness (code changed since spec), consistency issues
  • Interactive: Presents suggestions, you approve/reject each one
  • Updates: Spec maturity tracking, Project History Record

/speckit.levelup

Capture team conventions, patterns, and lessons learned as reusable directives.

```bash
/speckit.levelup We always use kebab-case for file names
```
  • Output: .specify/memory/directives/{slug}.md + updates index.md
  • When to use: When a team pattern emerges that should be consistent
  • Types: Convention, Pattern, Lesson, Guideline
  • Alignment: Checks against constitution, suggests amendments if needed

Advanced Implementation

/speckit.swarm

Dispatch parallel subagents to implement features using the execution graph from tasks.md. Each wave of tasks runs simultaneously with consensus checkpoints between waves.

```bash
/speckit.swarm
```
  • Output: swarm-log.md
  • When to use: After /speckit.tasks has generated an execution graph
  • Requires: tasks.md with a fenced JSON execution-graph block
  • Behavior: Classifies tasks by specialist type (infrastructure, data, logic, interface, test), dispatches waves in parallel, runs consensus checkpoints, executes critical path sequentially
  • Agents used: Infrastructure, Data, Logic, Interface, Test (from .claude/agents/)
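
Conceptually, the waves are topological levels of the task dependency graph: every task in a wave depends only on tasks in earlier waves. A minimal sketch of that grouping (the graph shape is illustrative, not the exact execution-graph schema):

```python
def waves(deps: dict) -> list:
    """Group tasks into waves; each wave depends only on earlier waves."""
    done, result = set(), []
    while len(done) < len(deps):
        wave = sorted(t for t, ds in deps.items()
                      if t not in done and set(ds) <= done)
        if not wave:
            raise ValueError("dependency cycle")
        result.append(wave)
        done.update(wave)
    return result

# Illustrative task graph: T002 and T003 can run in parallel after T001.
graph = {"T001": [], "T002": ["T001"], "T003": ["T001"], "T004": ["T002", "T003"]}
print(waves(graph))  # [['T001'], ['T002', 'T003'], ['T004']]
```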

/speckit.tdd

Implement a feature using strict Test-Driven Development — red-green-refactor cycles anchored to spec acceptance criteria.

```bash
/speckit.tdd
```
  • Output: Implementation code with full test coverage
  • When to use: Instead of /speckit.implement when you want test-first discipline
  • Requires: spec.md and tasks.md
  • Behavior: For each acceptance criterion: write failing test (red), write minimum code (green), refactor. Verifies 100% acceptance criteria coverage.

/speckit.pseudocode

Generate language-agnostic pseudocode and algorithm design from the feature spec, bridging specification and technical planning.

```bash
/speckit.pseudocode
```
  • Output: pseudocode.md
  • When to use: After /speckit.specify, before /speckit.plan
  • Behavior: Translates acceptance criteria into algorithms, selects data structures, identifies design patterns, defines module boundaries
  • Includes: Complexity analysis (Big-O) for performance-constrained requirements

/speckit.tests

Generate failing test scaffolding from GIVEN/WHEN/THEN acceptance scenarios in spec.md. Produces TDD red-phase tests.

```bash
/speckit.tests
/speckit.tests --framework pytest
/speckit.tests --type contract --update-tasks
```
  • Output: Test files in tests/contract/, tests/integration/, tests/unit/ + test-coverage.md
  • When to use: After /speckit.specify and /speckit.plan
  • Flags: --framework <name> (override auto-detected), --type <contract|integration|unit|all>, --update-tasks
  • Generates: Contract tests (API shape), integration tests (full GIVEN/WHEN/THEN), unit test stubs (complex logic)
  • Auto-detects: pytest, jest, vitest, rspec, go test, junit5, rust, XCTest
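
Framework auto-detection of this kind typically keys off marker files in the project root. A hedged sketch of the idea (the marker files and their ordering here are an assumption, not the tool's actual detection logic):

```python
# Marker files mapped to test frameworks (an illustrative heuristic,
# not the tool's actual detection logic).
MARKERS = [
    ("pytest.ini", "pytest"),
    ("pyproject.toml", "pytest"),
    ("vitest.config.ts", "vitest"),
    ("jest.config.js", "jest"),
    ("Gemfile", "rspec"),
    ("go.mod", "go test"),
    ("Cargo.toml", "rust"),
]

def detect_framework(project_files: set) -> str:
    """Return the first framework whose marker file is present."""
    for marker, framework in MARKERS:
        if marker in project_files:
            return framework
    return "unknown"

print(detect_framework({"go.mod", "main.go"}))  # go test
```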

Verification & Monitoring

/speckit.drift

Detect semantic drift between implemented code and the feature specification. Read-only analysis.

```bash
/speckit.drift
/speckit.drift 003-auth
```
  • Output: Drift report + .specify/memory/drift-log.md
  • When to use: Before PRs, after implementation
  • Categories:
    • Ghost — code exists but has no spec requirement
    • Phantom — spec requirement with no implementation
    • Mutated — both exist but differ semantically (always CRITICAL)
  • Read-only: Only writes the drift log, never modifies spec or code
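
The three categories reduce to set differences between specified requirements and implemented behavior. A simplified sketch, where semantic comparison is reduced to an equality check on behavior summaries (illustrative, not the tool's actual analysis):

```python
def classify_drift(spec: dict, code: dict) -> dict:
    """spec/code map requirement ID -> behavior summary (simplified)."""
    return {
        "phantom": sorted(set(spec) - set(code)),   # specified, never built
        "ghost": sorted(set(code) - set(spec)),     # built, never specified
        "mutated": sorted(k for k in set(spec) & set(code)
                          if spec[k] != code[k]),   # both exist, diverged
    }

spec = {"FR-001": "email login", "FR-002": "password reset"}
code = {"FR-001": "email+SSO login", "FR-003": "audit log"}
print(classify_drift(spec, code))
# {'phantom': ['FR-002'], 'ghost': ['FR-003'], 'mutated': ['FR-001']}
```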

/speckit.health

Generate a portfolio-level health dashboard across all features.

```bash
/speckit.health
/speckit.health --save
/speckit.health --feature 003-auth
```
  • Output: Dashboard printed to console (or saved with --save)
  • When to use: Weekly check, after running /speckit.analyze on features
  • Flags: --save (write to .specify/memory/health-report.md), --feature <id> (detailed breakdown)
  • Shows: Maturity distribution, quality scores, task completion, drift findings, risk classification (HIGH/MEDIUM/LOW)
  • Risk heuristics: Based on quality score, drift score, maturity level, spec staleness
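
A hedged sketch of how heuristics like these might combine into a HIGH/MEDIUM/LOW classification (the thresholds are invented for illustration, not the tool's actual rules):

```python
def risk(quality: int, drift_findings: int, stale_days: int) -> str:
    """Classify feature risk from quality score, drift findings, and staleness.
    Thresholds are invented for illustration, not the tool's actual rules."""
    if quality < 70 or drift_findings > 5 or stale_days > 90:
        return "HIGH"
    if quality < 80 or drift_findings > 0 or stale_days > 30:
        return "MEDIUM"
    return "LOW"

print(risk(quality=92, drift_findings=0, stale_days=7))  # LOW
```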

/speckit.security

Security review of implementation against spec requirements and OWASP Top 10.

```bash
/speckit.security
```
  • Output: security-review.md
  • When to use: After /speckit.implement or /speckit.swarm
  • Phases:
    1. Spec security compliance (auth, data sensitivity, compliance)
    2. OWASP Top 10 audit (injection, XSS, broken access control, etc.)
    3. Secrets and configuration audit
  • Severity levels: CRITICAL (block deployment), HIGH (fix before merge), MEDIUM (follow-up)

Released under the MIT License.