
Workflow

The Maistik Spec Kit workflow is a structured pipeline. Each step produces a Markdown artifact that feeds the next. You can enter at any point — not every project needs every step.

Auto-Reconciliation

After every spec-modifying command (steps 1-5), /speckit.reconcile runs automatically. It scans all existing artifacts for conflicts with your changes and asks you to resolve any contradictions before proceeding.

Step 1: Constitution

Command: /speckit.constitution

The constitution defines your project's immutable principles. These guide every subsequent decision — from spec writing to code generation.

bash
/speckit.constitution
  Code quality: all functions must have tests.
  Performance: pages must load under 2 seconds.
  Security: all inputs must be validated.
  UX: follow WCAG 2.1 AA accessibility standards.

Output: .specify/memory/constitution.md

The AI will walk you through defining principles interactively. You can also provide them as a prompt.

TIP

Run this once at project start. Update it as your team learns — the /speckit.refine command will flag specs that drift from updated principles.

Step 2: Branding (Optional)

Command: /speckit.brand

Define your product's visual identity: colors, typography, spacing, shadows, motion, and iconography.

bash
/speckit.brand Modern SaaS product with teal primary color, clean typography, 4px spacing grid

Output: .specify/branding/branding.md + .specify/branding/tokens.json

The branding spec becomes context for all /speckit.design runs — components reference these tokens.

Step 3: Infrastructure (Optional)

Command: /speckit.infra

Plan your full development and production stack. The command scans existing artifacts (Docker Compose, Helm charts, CI/CD config) and pre-fills the spec.

bash
/speckit.infra Docker Compose for dev with PostgreSQL and Redis, Kubernetes with Helm for production

Output: .specify/infra/infra.md

Covers:

  • Development environment (Docker Compose services, env vars, feature flags)
  • Production environment (Kubernetes deployment phases, Helm charts, auto-scaling)
  • Data layer (databases, caching, message brokers, backups)
  • CI/CD pipeline (stages, jobs, container registry, deployment strategy)
  • Observability (logging, metrics, tracing, alerting)
  • Disaster recovery (RPO/RTO targets, recovery procedures)

Step 4: Specify

Command: /speckit.specify

The core of SDD. Describe what you want to build — focus on the what and why, not the tech stack.

bash
/speckit.specify
  Build a team task management app. Users can create projects, assign tasks,
  drag tasks between kanban columns. Five predefined users, no auth for v1.
  Comments on tasks, assignee highlighting, mobile-responsive.

Output: .specify/specs/001-feature-name/spec.md

The spec includes:

  • Overview and context
  • Functional requirements
  • Non-functional requirements
  • User stories with acceptance criteria
  • Edge cases
  • Review checklist

WARNING

Be explicit about what you want. Mark anything unclear with [NEEDS CLARIFICATION] rather than guessing.

Step 5: Clarify

Command: /speckit.clarify

Interactive Q&A that identifies and fills gaps in your spec. The AI asks structured questions about underspecified areas.

bash
/speckit.clarify

Run this before planning to reduce rework. The answers are recorded in a Clarifications section of the spec.

Step 6: Plan

Command: /speckit.plan

Now specify your tech stack and architecture. The AI creates a detailed implementation plan.

bash
/speckit.plan
  Use Next.js 14 with App Router, PostgreSQL with Prisma ORM,
  Tailwind CSS for styling, deploy on Vercel.

Output: .specify/specs/001-feature-name/plan.md (plus supporting files like data-model.md, research.md, contracts/)

The plan includes:

  • Architecture decisions with rationale
  • Data model design
  • API contracts
  • Implementation phases
  • Pre-implementation gates (simplicity, anti-abstraction)

Step 7: Tasks

Command: /speckit.tasks

Generates an actionable task breakdown from the plan.

bash
/speckit.tasks

Output: .specify/specs/001-feature-name/tasks.md

Features:

  • Tasks ordered by dependency
  • Parallel execution markers [P]
  • File path specifications
  • Test-first ordering
  • Checkpoint validations per phase
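The [P] markers can be picked out mechanically. The sketch below assumes a hypothetical tasks.md line format (checkbox, task ID, optional [P] tag, file path) purely for illustration — the real layout may differ.

```python
# Hypothetical parse of tasks.md lines: tasks tagged [P] may run in parallel
# with their neighbours. The line format here is an assumption, not the real one.
import re

lines = [
    "- [ ] T001 Set up project structure (src/)",
    "- [ ] T002 [P] Write user model tests (tests/models/user.test.ts)",
    "- [ ] T003 [P] Write task model tests (tests/models/task.test.ts)",
]

parallel = [line for line in lines if re.search(r"\[P\]", line)]
print(len(parallel))  # 2
```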

Step 8: Analyze

Command: /speckit.analyze

Cross-artifact consistency check before implementation. Scores your specs on a 100-point rubric.

bash
/speckit.analyze

This is read-only — it never modifies files. It checks for:

  • Requirements with no tasks (coverage gaps)
  • Tasks with no requirements (orphans)
  • Terminology drift across artifacts
  • Constitution violations
  • Ambiguous or underspecified requirements
  • Design token consistency (if design specs exist)

Score breakdown: Completeness (30pts) + Consistency (25pts) + Testability (25pts) + Clarity (20pts)
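As a mental model, the four rubric components are weighted and summed to 100. This is only a sketch of how such a score could be combined — the component names mirror the breakdown above, but the ratio inputs are invented:

```python
# Illustrative weighting of the /speckit.analyze rubric (sums to 100 points).
# How the tool actually derives each component ratio is not specified here.
WEIGHTS = {"completeness": 30, "consistency": 25, "testability": 25, "clarity": 20}

def rubric_score(ratios):
    """ratios: component name -> fraction satisfied (0.0-1.0)."""
    return sum(WEIGHTS[name] * ratios.get(name, 0.0) for name in WEIGHTS)

# Example: perfect completeness, weaker clarity.
print(rubric_score({"completeness": 1.0, "consistency": 0.8,
                    "testability": 0.8, "clarity": 0.5}))
```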

Step 9: Estimate (Optional)

Command: /speckit.estimate

Add hours estimation and complexity classification to every task in tasks.md.

bash
/speckit.estimate
/speckit.estimate Consider we're a junior team, be conservative

This command enriches tasks.md in-place:

  • Adds inline estimation tooltips: ⏱2h 🟡 on each task line
  • Classifies complexity: 🟢 Low (0.5–2h), 🟡 Medium (2–6h), 🔴 High (4–12h)
  • Appends an Estimation Summary section with totals by complexity, phase, and user story
  • Enriches the execution graph JSON with estimates and summary keys
  • Calibrates based on available artifacts (contracts, data-model, research)
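The hours-to-complexity mapping can be pictured as simple banding. Note that the published bands overlap (Medium 2–6h, High 4–12h), so this sketch assumes clean cut-offs at 2h and 6h — an assumption, not the tool's actual rule:

```python
# Illustrative mapping from an hours estimate to the complexity markers used
# in tasks.md. Cut-offs at 2h and 6h are assumed; the real bands overlap.
def classify(hours: float) -> str:
    if hours <= 2:
        return "🟢 Low"
    if hours <= 6:
        return "🟡 Medium"
    return "🔴 High"

for h in (1.5, 4, 8):
    print(f"⏱{h}h {classify(h)}")
```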

TIP

Estimates are AI-generated approximations. Review and adjust based on team velocity and domain knowledge.

Step 10: Implement

Command: /speckit.implement

Executes all tasks in order, building your feature according to the plan.

bash
/speckit.implement

The AI:

  • Validates prerequisites (constitution, spec, plan, tasks all exist)
  • Executes tasks respecting dependency order
  • Follows TDD approach (tests before implementation)
  • Reports progress and handles errors

TIP

The AI will run local CLI commands (npm, dotnet, etc.). Make sure you have the required tools installed.

Step 11: Export (Optional)

Command: /speckit.export

Export all tasks to CSV for spreadsheets, project management tools, or reporting dashboards.

bash
/speckit.export

  • Output: tasks-export.csv in the feature directory
  • Columns: spec_id, spec_name, task_id, task_name, done, estimation, complexity
  • Includes estimation and complexity data if /speckit.estimate was run first
  • Compatible with Excel, Google Sheets, Jira CSV import, etc.
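A consumer of the export only needs the column names listed above. The row values in this sketch are made up for illustration; only the header matches the documented schema:

```python
# Hypothetical round-trip of tasks-export.csv using the documented columns.
# The sample row is invented; real files come from /speckit.export.
import csv
import io

sample = (
    "spec_id,spec_name,task_id,task_name,done,estimation,complexity\n"
    "001,task-app,T001,Set up project,true,2h,🟢\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["task_name"], rows[0]["done"])
```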

Step 12: Refine (After Implementation)

Command: /speckit.refine

Reviews specs against implementation reality. Detects gaps, staleness, and suggests improvements.

bash
/speckit.refine

The refinement engine:

  • Compares spec complexity vs. implementation complexity
  • Flags specs whose linked code has changed significantly
  • Propagates constitution updates to affected specs
  • Scores specs and suggests maturity upgrades

Verification (After Implementation)

After building, use these commands to verify quality and catch drift:

Drift Detection

Command: /speckit.drift

Compares actual code against the spec to find semantic mismatches. Classifies findings as Ghost (code not in spec), Phantom (spec not in code), or Mutated (both exist but differ).

bash
/speckit.drift

This is read-only — it writes only a drift log, never modifies spec or code.
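The Ghost/Phantom/Mutated taxonomy can be read as set logic over behaviours extracted from spec and code. The identifiers and behaviour strings below are invented; the hard part — extracting comparable behaviours from real artifacts — is elided:

```python
# Sketch of the drift taxonomy as set operations over behaviour maps.
# Keys and values are illustrative, not real /speckit.drift output.
spec = {"create_project": "returns 201", "assign_task": "notifies assignee"}
code = {"assign_task": "silent", "archive_project": "soft delete"}

ghosts = code.keys() - spec.keys()      # Ghost: code not in spec
phantoms = spec.keys() - code.keys()    # Phantom: spec not in code
mutated = {k for k in spec.keys() & code.keys() if spec[k] != code[k]}

print(sorted(ghosts), sorted(phantoms), sorted(mutated))
# ['archive_project'] ['create_project'] ['assign_task']
```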

Security Review

Command: /speckit.security

Audits the implementation against OWASP Top 10 and any security requirements in the spec.

bash
/speckit.security

Health Dashboard

Command: /speckit.health

Portfolio-level view across all features. Shows maturity distribution, quality scores, task completion, drift findings, and risk classification.

bash
/speckit.health --save

Parallel Implementation with Swarm

For larger features, /speckit.swarm dispatches tasks to specialized agents that work in parallel:

bash
/speckit.swarm

Tasks execute in waves — all tasks in a wave run simultaneously, with consensus checkpoints between waves. Each task is routed to the appropriate specialist agent (Infrastructure, Data, Logic, Interface, or Test) based on its description.

See Swarm Agent Architecture for details.
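Wave scheduling amounts to grouping tasks by dependency depth: every task whose prerequisites are satisfied joins the current wave and runs concurrently. A minimal sketch using Python's standard topological sorter, with invented task names and dependencies:

```python
# Rough sketch of wave-based dispatch: each wave holds all tasks whose
# dependencies completed in earlier waves. Task names/deps are illustrative.
from graphlib import TopologicalSorter

deps = {
    "schema": set(),
    "api": {"schema"},
    "ui": {"api"},
    "tests": {"api"},
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = list(ts.get_ready())  # everything runnable right now
    waves.append(sorted(ready))
    ts.done(*ready)

print(waves)  # [['schema'], ['api'], ['tests', 'ui']]
```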

Iteration

After implementation, the cycle continues:

  1. New features: Run /speckit.specify for the next feature
  2. Improvements: Run /speckit.refine to improve existing specs
  3. Team learning: Run /speckit.levelup to capture conventions
  4. Design updates: Run /speckit.design for UI component specs
  5. Verify alignment: Run /speckit.drift before every PR
  6. Monitor health: Run /speckit.health weekly across all features

Released under the MIT License.