# Workflow
The Maistik Spec Kit workflow is a structured pipeline. Each step produces a Markdown artifact that feeds the next. You can enter at any point — not every project needs every step.
## Auto-Reconciliation
After every spec-modifying command (steps 1–5), `/speckit.reconcile` runs automatically. It scans all existing artifacts for conflicts with your changes and asks you to resolve any contradictions before proceeding.
## Step 1: Constitution

**Command:** `/speckit.constitution`
The constitution defines your project's immutable principles. These guide every subsequent decision — from spec writing to code generation.
```
/speckit.constitution
Code quality: all functions must have tests.
Performance: pages must load under 2 seconds.
Security: all inputs must be validated.
UX: follow WCAG 2.1 AA accessibility standards.
```

**Output:** `.specify/memory/constitution.md`
The AI will walk you through defining principles interactively. You can also provide them as a prompt.
> **TIP**
> Run this once at project start. Update it as your team learns — the `/speckit.refine` command will flag specs that drift from updated principles.
## Step 2: Branding (Optional)

**Command:** `/speckit.brand`
Define your product's visual identity: colors, typography, spacing, shadows, motion, and iconography.
```
/speckit.brand Modern SaaS product with teal primary color, clean typography, 4px spacing grid
```

**Output:** `.specify/branding/branding.md` + `.specify/branding/tokens.json`

The branding spec becomes context for all `/speckit.design` runs — components reference these tokens.
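As a rough sketch of how generated tokens might be consumed, the snippet below loads a token set and resolves values from it. The key names (`color`, `spacing`, `typography`) and values are invented for illustration; the real `tokens.json` contains whatever `/speckit.brand` produced for your project.

```python
import json

# Hypothetical excerpt of .specify/branding/tokens.json -- the actual
# structure depends on what /speckit.brand generated.
tokens_json = """
{
  "color": {"primary": "#0d9488", "surface": "#ffffff"},
  "spacing": {"unit": 4},
  "typography": {"body": {"family": "Inter", "size": 16}}
}
"""

tokens = json.loads(tokens_json)

def spacing(steps: int) -> str:
    """Resolve a spacing step against the grid unit defined in the tokens."""
    return f"{steps * tokens['spacing']['unit']}px"

print(tokens["color"]["primary"])  # base brand color
print(spacing(3))                  # three steps on a 4px grid -> 12px
```

Components generated later by `/speckit.design` would reference these token values rather than hard-coding colors or sizes.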
## Step 3: Infrastructure (Optional)

**Command:** `/speckit.infra`
Plan your full development and production stack. The command scans existing artifacts (Docker Compose, Helm charts, CI/CD config) and pre-fills the spec.
```
/speckit.infra Docker Compose for dev with PostgreSQL and Redis, Kubernetes with Helm for production
```

**Output:** `.specify/infra/infra.md`
Covers:
- Development environment (Docker Compose services, env vars, feature flags)
- Production environment (Kubernetes deployment phases, Helm charts, auto-scaling)
- Data layer (databases, caching, message brokers, backups)
- CI/CD pipeline (stages, jobs, container registry, deployment strategy)
- Observability (logging, metrics, tracing, alerting)
- Disaster recovery (RPO/RTO targets, recovery procedures)
## Step 4: Specify

**Command:** `/speckit.specify`
The core of SDD. Describe what you want to build — focus on the what and why, not the tech stack.
```
/speckit.specify
Build a team task management app. Users can create projects, assign tasks,
drag tasks between kanban columns. Five predefined users, no auth for v1.
Comments on tasks, assignee highlighting, mobile-responsive.
```

**Output:** `.specify/specs/001-feature-name/spec.md`
The spec includes:
- Overview and context
- Functional requirements
- Non-functional requirements
- User stories with acceptance criteria
- Edge cases
- Review checklist
> **WARNING**
> Be explicit about what you want. Mark anything unclear with `[NEEDS CLARIFICATION]` rather than guessing.
## Step 5: Clarify (Recommended)

**Command:** `/speckit.clarify`
Interactive Q&A that identifies and fills gaps in your spec. The AI asks structured questions about underspecified areas.
```
/speckit.clarify
```

Run this before planning to reduce rework. The answers are recorded in a **Clarifications** section of the spec.
## Step 6: Plan

**Command:** `/speckit.plan`
Now specify your tech stack and architecture. The AI creates a detailed implementation plan.
```
/speckit.plan
Use Next.js 14 with App Router, PostgreSQL with Prisma ORM,
Tailwind CSS for styling, deploy on Vercel.
```

**Output:** `.specify/specs/001-feature-name/plan.md` (plus supporting files like `data-model.md`, `research.md`, `contracts/`)
The plan includes:
- Architecture decisions with rationale
- Data model design
- API contracts
- Implementation phases
- Pre-implementation gates (simplicity, anti-abstraction)
## Step 7: Tasks

**Command:** `/speckit.tasks`
Generates an actionable task breakdown from the plan.
```
/speckit.tasks
```

**Output:** `.specify/specs/001-feature-name/tasks.md`
Features:
- Tasks ordered by dependency
- Parallel execution markers `[P]`
- File path specifications
- Test-first ordering
- Checkpoint validations per phase
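To illustrate what the `[P]` markers enable, here is a minimal sketch that groups a task list into execution batches, treating consecutive `[P]`-marked tasks as safe to run in parallel. The task line format is an assumption; check your generated `tasks.md` for the real layout.

```python
# Hypothetical task lines; real tasks.md entries include file paths too.
tasks = [
    "T001 Set up database schema",
    "T002 [P] Create Task model",
    "T003 [P] Create Project model",
    "T004 Wire API routes",
]

def batches(task_lines):
    """Group tasks into batches: [P]-marked runs may execute in parallel."""
    out, parallel = [], []
    for line in task_lines:
        if "[P]" in line:
            parallel.append(line)      # accumulate the parallel batch
        else:
            if parallel:
                out.append(parallel)   # flush any pending parallel batch
                parallel = []
            out.append([line])         # sequential task runs on its own
    if parallel:
        out.append(parallel)
    return out

for batch in batches(tasks):
    print(len(batch), "task(s) in this batch")
```

Here T002 and T003 land in one batch because neither depends on the other, while T001 and T004 each run alone.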
## Step 8: Analyze (Recommended)

**Command:** `/speckit.analyze`
Cross-artifact consistency check before implementation. Scores your specs on a 100-point rubric.
```
/speckit.analyze
```

This is read-only — it never modifies files. It checks for:
- Requirements with no tasks (coverage gaps)
- Tasks with no requirements (orphans)
- Terminology drift across artifacts
- Constitution violations
- Ambiguous or underspecified requirements
- Design token consistency (if design specs exist)
Score breakdown: Completeness (30pts) + Consistency (25pts) + Testability (25pts) + Clarity (20pts)
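The rubric arithmetic above can be sketched as follows; the per-dimension maxima come from the breakdown, while the sample sub-scores are invented for illustration.

```python
# Per-dimension maxima from the documented rubric (sums to 100).
MAX = {"completeness": 30, "consistency": 25, "testability": 25, "clarity": 20}

# Hypothetical sub-scores for one analysis run.
scores = {"completeness": 27, "consistency": 20, "testability": 22, "clarity": 18}

# Sanity-check each dimension stays within its cap, then total them.
for dim, pts in scores.items():
    assert 0 <= pts <= MAX[dim], f"{dim} out of range"

total = sum(scores.values())
print(f"{total}/100")  # 87/100 for this sample
```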
## Step 9: Estimate (Optional)

**Command:** `/speckit.estimate`
Adds hour estimates and a complexity classification to every task in `tasks.md`.
```
/speckit.estimate
/speckit.estimate Consider we're a junior team, be conservative
```

This command enriches `tasks.md` in place:
- Adds inline estimation tooltips (e.g. `⏱2h 🟡`) on each task line
- Classifies complexity: 🟢 Low (0.5–2h), 🟡 Medium (2–6h), 🔴 High (4–12h)
- Appends an Estimation Summary section with totals by complexity, phase, and user story
- Enriches the execution graph JSON with `estimates` and `summary` keys
- Calibrates based on available artifacts (contracts, data-model, research)
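As an illustration of the hours-to-class mapping, here is a toy classifier. Note the documented bands overlap (Medium 2–6h, High 4–12h), so the cutoffs below are an assumption, not the tool's exact rule.

```python
def classify(hours: float) -> str:
    """Map an hour estimate to the complexity emoji used in tasks.md.
    Cutoffs are illustrative; the documented bands overlap at 4-6h."""
    if hours <= 2:
        return "🟢"   # Low
    if hours <= 6:
        return "🟡"   # Medium
    return "🔴"       # High

print(classify(1.5), classify(4), classify(8))
```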
> **TIP**
> Estimates are AI-generated approximations. Review and adjust based on team velocity and domain knowledge.
## Step 10: Implement

**Command:** `/speckit.implement`
Executes all tasks in order, building your feature according to the plan.
```
/speckit.implement
```

The AI:
- Validates prerequisites (constitution, spec, plan, tasks all exist)
- Executes tasks respecting dependency order
- Follows TDD approach (tests before implementation)
- Reports progress and handles errors
> **TIP**
> The AI will run local CLI commands (`npm`, `dotnet`, etc.). Make sure you have the required tools installed.
## Step 11: Export (Optional)

**Command:** `/speckit.export`
Export all tasks to CSV for spreadsheets, project management tools, or reporting dashboards.
```
/speckit.export
```

- Output: `tasks-export.csv` in the feature directory
- Columns: `spec_id`, `spec_name`, `task_id`, `task_name`, `done`, `estimation`, `complexity`
- Includes estimation and complexity data if `/speckit.estimate` was run first
- Compatible with Excel, Google Sheets, Jira CSV import, etc.
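The export is plain CSV, so it can be post-processed with standard tooling. This sketch totals the estimated hours and lists unfinished tasks; the two sample rows are invented, but the column names match the list above.

```python
import csv
import io

# Hypothetical two-row sample of tasks-export.csv using the documented columns.
sample = """\
spec_id,spec_name,task_id,task_name,done,estimation,complexity
001,task-app,T001,Set up schema,true,2,Low
001,task-app,T002,Create Task model,false,4,Medium
"""

rows = list(csv.DictReader(io.StringIO(sample)))
total_hours = sum(float(r["estimation"]) for r in rows)
remaining = [r["task_id"] for r in rows if r["done"] == "false"]

print(total_hours)  # 6.0
print(remaining)    # ['T002']
```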
## Step 12: Refine (After Implementation)

**Command:** `/speckit.refine`
Reviews specs against implementation reality. Detects gaps, staleness, and suggests improvements.
```
/speckit.refine
```

The refinement engine:
- Compares spec complexity vs. implementation complexity
- Flags specs whose linked code has changed significantly
- Propagates constitution updates to affected specs
- Scores specs and suggests maturity upgrades
## Verification (After Implementation)
After building, use these commands to verify quality and catch drift:
### Drift Detection

**Command:** `/speckit.drift`
Compares actual code against the spec to find semantic mismatches. Classifies findings as Ghost (code not in spec), Phantom (spec not in code), or Mutated (both exist but differ).
```
/speckit.drift
```

This is read-only — it writes only a drift log, never modifies spec or code.
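The three finding types reduce to set logic over requirement identifiers, sketched below. The real command compares semantics, not just names; the two maps here are invented for illustration.

```python
# Hypothetical requirement -> behavior maps extracted from spec and code.
spec = {"login": "email+password", "export": "CSV download"}
code = {"login": "email+password+2fa", "audit_log": "append-only log"}

ghosts   = set(code) - set(spec)   # Ghost: in code, never specified
phantoms = set(spec) - set(code)   # Phantom: specified, never built
mutated  = {k for k in set(spec) & set(code) if spec[k] != code[k]}  # Mutated

print(sorted(ghosts))    # ['audit_log']
print(sorted(phantoms))  # ['export']
print(sorted(mutated))   # ['login']
```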
### Security Review

**Command:** `/speckit.security`
Audits the implementation against OWASP Top 10 and any security requirements in the spec.
```
/speckit.security
```

### Health Dashboard
**Command:** `/speckit.health`
Portfolio-level view across all features. Shows maturity distribution, quality scores, task completion, drift findings, and risk classification.
```
/speckit.health --save
```

## Parallel Implementation with Swarm
For larger features, `/speckit.swarm` dispatches tasks to specialized agents that work in parallel:
```
/speckit.swarm
```

Tasks execute in waves — all tasks in a wave run simultaneously, with consensus checkpoints between waves. Each task is routed to the appropriate specialist agent (Infrastructure, Data, Logic, Interface, or Test) based on its description.
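One way to picture wave scheduling: a task's wave number is one past the deepest wave among its dependencies, so everything within a wave can run concurrently. The task names and dependency graph below are invented for illustration; the actual scheduler is internal to the swarm.

```python
from functools import lru_cache

# Hypothetical dependency graph: task -> tasks it depends on.
deps = {
    "schema": [],
    "task_model": ["schema"],
    "project_model": ["schema"],
    "api": ["task_model", "project_model"],
}

@lru_cache(maxsize=None)
def wave_of(task: str) -> int:
    """Wave = 1 + deepest dependency wave; roots land in wave 0."""
    return 1 + max((wave_of(d) for d in deps[task]), default=-1)

waves: dict[int, list[str]] = {}
for t in deps:
    waves.setdefault(wave_of(t), []).append(t)

for n in sorted(waves):
    print(f"wave {n}: {sorted(waves[n])}")
```

The two models share a wave because they only depend on the schema, mirroring how a consensus checkpoint would sit between wave 1 and the API work in wave 2.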
See Swarm Agent Architecture for details.
## Iteration
After implementation, the cycle continues:
- New features: Run `/speckit.specify` for the next feature
- Improvements: Run `/speckit.refine` to improve existing specs
- Team learning: Run `/speckit.levelup` to capture conventions
- Design updates: Run `/speckit.design` for UI component specs
- Verify alignment: Run `/speckit.drift` before every PR
- Monitor health: Run `/speckit.health` weekly across all features