Blaze Platform
A 4-phase agentic SDLC that enforces TDD, BDD, and CDD at every gate — producing a complete compliance evidence chain from requirement to attestation, automatically.
How a Request Flows Through Blaze
A developer describes work ("implement feature X"). Blaze routes it through four phases, dispatching specialized agents at each step, enforcing quality gates between phases, and collecting evidence throughout.
Along the way it dispatches agents such as:
- architecture-reviewer
- critical-thinking
- risk-assessment
- security-reviewer
- code-quality-reviewer
- playwright-e2e-tester
- trust-enforcer
- cdd-methodology
- multi-ai-reviewer
- compliance-manager
Where Things Live
The platform is split into two directories with distinct roles:
| Directory | Role | Contains |
|---|---|---|
| `.opencode/` | Intelligence | 65 agent definitions, commands, schemas, skills, tools — auto-discovered at startup |
| `blaze/` | Enforcement | Python validators, git hooks, config files, evidence engines — loaded on demand |
| MCP servers | Integration | Neo4j (graph), Playwright (browser), Context7 (docs) — external tool access |
Agent Visibility Tiers
Not all 65 agents are user-facing. Three tiers control who can invoke what:
| Tier | Count | Who Invokes |
|---|---|---|
| Primary | 2 | User switches tabs: `sdlc`, `review` |
| Visible | 10 | User types @agent-name in chat |
| Hidden | 55 | Only orchestrators call these — never the user directly |
Strategic Intelligence
Before a single line of code is written, Phase 1 produces a PRD with BDD Gherkin scenarios, an architecture review, risk assessment, and critical-thinking analysis.
Supporting Agents
PRD must contain BDD Gherkin scenarios with Given/When/Then acceptance criteria for every user story. No scenarios = no code.
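The "no scenarios = no code" rule can be sketched as a small gate check. Everything below is illustrative: the PRD shape (a story-to-scenario mapping) and the function names are assumptions, not Blaze's actual schema.

```python
import re

# Hypothetical sketch of the Phase 1 -> 2 BDD gate: every user story must
# carry a Gherkin scenario with Given/When/Then steps. The dict-of-strings
# PRD shape is an assumption made for illustration.

GHERKIN_KEYWORDS = ("Given", "When", "Then")

def scenario_is_complete(scenario: str) -> bool:
    """A scenario passes only if all three step keywords are present."""
    return all(
        re.search(rf"^\s*{kw}\b", scenario, re.MULTILINE)
        for kw in GHERKIN_KEYWORDS
    )

def gate_phase1(prd: dict[str, str]) -> list[str]:
    """Return the stories that block the gate (no scenarios = no code)."""
    return [story for story, scenario in prd.items()
            if not scenario_is_complete(scenario)]

prd = {
    "login": """Scenario: Successful login
      Given a registered user
      When they submit valid credentials
      Then they see their dashboard""",
    "export": "Scenario: Export report",  # missing Given/When/Then steps
}

print(gate_phase1(prd))  # → ['export']
```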
Automated Development
All development happens in isolated git worktrees. Tests are written first (TDD), validated against BDD scenarios, and evidence is collected continuously (CDD).
TDD Red-Green-Refactor Cycle
1. RED: write failing tests from the BDD scenarios
2. GREEN: write the minimum code to pass the tests
3. REFACTOR: clean up while keeping the tests green
4. EVIDENCE: collect coverage, results, and the BDD mapping
≥80% test coverage • All tests passing • Tests written before implementation (git history verified) • BDD scenarios executable • Worktree isolation enforced
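The red-green-refactor loop can be walked through with a toy function; `slugify` is illustrative and not part of Blaze. The test exists first and fails (the function is not yet defined), then the minimum implementation turns it green.

```python
# RED: the test is written first, against behavior that does not exist yet.
# Running it at this point fails with a NameError, which is the "red" state.
def test_slugify():
    assert slugify("Blaze Platform") == "blaze-platform"

# GREEN: the minimum implementation that makes the test pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# REFACTOR happens next with the test kept green; EVIDENCE (coverage,
# results, BDD scenario mapping) is harvested from the same test run.
test_slugify()
print("green")  # → green
```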
Orchestrated Deployment
CI/CD pipelines run in isolation, trust verification ensures code integrity, and deployment evidence is collected for the compliance chain.
Evidence Directory Structure
Pipeline must complete successfully • Trust verification must pass • Deployment evidence must be collected
Lifecycle Management
Phase 4 runs two complementary review mechanisms: pr-orchestrator coordinates 8 specialized review agents for code analysis, then multi-ai-reviewer runs cross-model validation across 4 AI models. Both must pass before human approval.
PR Orchestrator (8 Agents)
Dispatches 8 specialized review agents in parallel.
Multi-AI Reviewer (4 Models)
Sequential cross-model validation for consensus:
- Quality & patterns
- Architecture & security
- Cross-model validation
- Consensus score
Consensus Rule
3 of 4 models must agree (majority consensus) on no critical issues. Disagreements trigger escalation to human reviewer with full diff context.
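A minimal sketch of the 3-of-4 rule, assuming a simple pass/fail verdict per model; the verdict shape and model names are hypothetical.

```python
# Sketch of the majority-consensus rule: a model's verdict is True when it
# found no critical issues. Names and shapes are illustrative only.

def consensus(verdicts: dict[str, bool]) -> str:
    clean = sum(verdicts.values())
    if clean >= 3:               # 3 of the 4 models agree: pass
        return "pass"
    return "escalate-to-human"   # disagreement: human gets full diff context

verdicts = {"model-a": True, "model-b": True, "model-c": True, "model-d": False}
print(consensus(verdicts))  # → pass
```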
Compliance Attestation
CDD methodology agent generates a final attestation linking every requirement to its test, code, and evidence artifacts.
Human Approval Required
No PR merges without human approval. AI review is advisory — the merge decision always requires a human.
≥90% compliance score • Attestation generated • All BDD acceptance criteria met • Human merge approval
Quality Gate Reference
Every phase transition is gated. These are hard blocks — no override, no bypass (except documented emergency protocol).
| Transition | Pillar | Criteria | Hard Block? |
|---|---|---|---|
| Phase 1 → 2 | BDD | PRD contains Gherkin scenarios for every user story | ✓ |
| Phase 1 → 2 | CDD | Strategic intelligence evidence collected | ✓ |
| Phase 2 → 3 | TDD | Test coverage ≥80%, all tests pass * | ✓ |
| Phase 2 → 3 | TDD | Tests written before implementation (git history) | ✓ |
| Phase 2 → 3 | BDD | BDD scenarios executable as automated tests | ✓ |
| Phase 2 → 3 | Worktree | Development in isolated git worktree | ✓ |
| Phase 3 → 4 | CDD | Pipeline green, trust verified, evidence collected | ✓ |
| Phase 4 → Done | CDD | Compliance score ≥90% | ✓ |
| Phase 4 → Done | CDD | Final attestation generated | ✓ |
| Phase 4 → Done | Review | PR approved by human reviewer | ✓ |
| Always | Security | 0 critical issues, ≤2 high (with justification) | ✓ |
| Always | TDD | Every implementation file has a corresponding test file | ✓ |
* The SDLC phase gate enforces ≥80% on new code. CI enforces a project-wide floor of ≥50% on lines/branches/functions/statements.
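A hard-block gate of this kind can be sketched as a table of named criteria. The criterion set below mirrors the Phase 2 → 3 rows above, but the evidence dict is a hypothetical stand-in, not workflow-validator.py's real input format.

```python
# Minimal sketch of a hard-block phase gate. Any failed criterion blocks
# the transition; there is no override path.

GATES = {
    "phase2->3": [
        ("coverage >= 80% on new code",    lambda e: e["new_code_coverage"] >= 80),
        ("all tests pass",                 lambda e: e["failed_tests"] == 0),
        ("tests committed before code",    lambda e: e["tests_first_in_history"]),
        ("work done in isolated worktree", lambda e: e["in_worktree"]),
    ],
}

def validate(transition: str, evidence: dict) -> list[str]:
    """Return the failed criteria; an empty list means the gate opens."""
    return [name for name, check in GATES[transition] if not check(evidence)]

evidence = {"new_code_coverage": 85, "failed_tests": 0,
            "tests_first_in_history": True, "in_worktree": False}
print(validate("phase2->3", evidence))  # → ['work done in isolated worktree']
```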
Enforcement Layers
Pre-commit hooks block direct commits to main. SDLC orchestrator validates phase transitions. PR orchestrator coordinates 8+ review agents. Trust enforcer prevents code integrity violations.
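The first layer, blocking direct commits to main, can be sketched as a Python pre-commit hook. Blaze's actual hooks may differ; this only illustrates the mechanism.

```python
import subprocess
import sys

# Hedged sketch of a pre-commit hook that blocks direct commits to
# protected branches. A non-zero exit code aborts the commit.

PROTECTED = {"main", "master"}

def current_branch() -> str:
    return subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def check(branch: str) -> int:
    if branch in PROTECTED:
        print(f"commit blocked: direct commits to '{branch}' are not allowed")
        return 1  # non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(check(current_branch()))
```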
Evidence Paths
CDD Evidence Chain
Compliance-Driven Development produces a continuous, machine-readable evidence chain from requirement through attestation.
1. Requirement: PRD + work item
2. BDD Scenario: Gherkin Given/When/Then
3. Implementation: Code + tests (TDD)
4. Verification: Pipeline + trust check
5. Attestation: Compliance ≥90%
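The attestation gate at the end of the chain can be sketched as a toy compliance score: the link names and the 90% threshold come from the chain above, while the data shape is purely illustrative, not compliance-monitor.py's real model.

```python
# Toy compliance score: the fraction of (requirement, chain-link) pairs
# that have evidence, attested only at >= 90%.

CHAIN = ("requirement", "bdd_scenario", "implementation", "verification")

def score(items: dict[str, set[str]]) -> float:
    """Percentage of chain links with evidence across all requirements."""
    total = len(items) * len(CHAIN)
    have = sum(len(links & set(CHAIN)) for links in items.values())
    return 100 * have / total

items = {
    "REQ-1": {"requirement", "bdd_scenario", "implementation", "verification"},
    "REQ-2": {"requirement", "bdd_scenario", "implementation"},  # unverified
}
s = score(items)
print(f"{s:.1f}% -> {'attest' if s >= 90 else 'block'}")  # → 87.5% -> block
```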
Full Evidence Directory
Neo4j Graph Model
Evidence relationships are stored in a Neo4j knowledge graph via MCP. Nodes represent requirements, tests, code changes, and attestations. Edges encode traceability links — enabling queries like "show me every test that covers requirement X" or "which requirements lack evidence."
Enforcement Engines
Three Python enforcement engines run continuously:
- evidence-generator.py — Collects artifacts at each phase
- compliance-monitor.py — Real-time compliance scoring
- workflow-validator.py — SDLC phase gate validation

BPMN & DMN Capabilities
Blaze includes a full process modeling pipeline: 7 specialized agents for creating, validating, testing, and deploying BPMN 2.0 process diagrams and DMN decision tables, with support for both Camunda 7 and Camunda 8 (Zeebe).
Agent Pipeline
1. Create: bpmn-specialist builds BPMN 2.0 XML from requirements
2. Validate: bpmn-validator (C7) + bpmn8-validator (C8) check schema & engine compat
3. Test: bpmn-tester generates gateway, boundary event, and flow tests
4. Simulate: process-simulator runs Monte Carlo what-if analysis via PM4Py
5. Commit: bpmn-commit-agent validates XML + visual overlaps before commit
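The pre-commit XML validation in step 5 can be sketched with a stdlib parse plus two structural checks. The sample diagram and the check list are illustrative; the real validators check far more, including visual overlaps and engine compatibility.

```python
import xml.etree.ElementTree as ET

# Minimal BPMN 2.0 sanity check: the XML must be well-formed, contain a
# process element, and each process must have a start event.

BPMN_NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}

SAMPLE = """<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <bpmn:process id="order" isExecutable="true">
    <bpmn:startEvent id="start" name="Order received"/>
    <bpmn:endEvent id="end" name="Order shipped"/>
  </bpmn:process>
</bpmn:definitions>"""

def basic_checks(xml_text: str) -> list[str]:
    """Return a list of problems; an empty list allows the commit."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed: {exc}"]
    problems = []
    processes = root.findall("bpmn:process", BPMN_NS)
    if not processes:
        problems.append("no <bpmn:process> element")
    for proc in processes:
        if proc.find("bpmn:startEvent", BPMN_NS) is None:
            problems.append(f"process '{proc.get('id')}' has no start event")
    return problems

print(basic_checks(SAMPLE))  # → []
```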
Enforced Standards
Two rule sets govern all BPMN output:
- modeling-standards.md — Correct element types, labeling conventions, gateway flow labels, event types, pool/lane usage
- visual-clarity.md — Left-to-right flow, minimum spacing (50px), no overlaps, grid alignment, consistent diamond sizes

Dual Engine Support
Separate validators cover each Camunda generation: bpmn-validator targets Camunda 7, bpmn8-validator targets Camunda 8 (Zeebe).
Integration Layer
Blaze connects to external systems via Model Context Protocol (MCP) servers, supports multiple AI platforms, and integrates with major PM tools.
MCP Servers
Neo4j
Knowledge graph for evidence relationships, requirement traceability, and agent interaction history.
Playwright
Browser automation for E2E testing, screenshot evidence, and design reviews via MCP tool calls.
Context7
Up-to-date library documentation retrieval. Agents query Context7 for current API references.
Memory Bank
Persistent session context, decision logs, and active context handoff between conversations.
Platform Support
| Capability | OpenCode (L1) | Claude Code (L2) | Cursor (L3) |
|---|---|---|---|
| Agent orchestration | ✓ | ✓ | ● |
| MCP server integration | ✓ | ✓ | ✓ |
| SDLC phase enforcement | ✓ | ✓ | ● |
| Multi-AI review pipeline | ✓ | ✓ | ✗ |
| CDD evidence collection | ✓ | ✓ | ● |
| Git worktree isolation | ✓ | ✓ | ✓ |
✓ Full support • ● Partial • ✗ Not available
PM Tool Integration
All PM operations go through dedicated manager agents — never direct CLI. Each agent handles format conversion, field mapping, BDD validation, and evidence linking.