AGENTIC SDLC PLATFORM

Blaze Platform

A 4-phase agentic SDLC that enforces TDD, BDD, and CDD at every gate — producing a complete compliance evidence chain from requirement to attestation, automatically.

65 Specialized Agents
4 SDLC Phases
3 Pillars (TDD/BDD/CDD)
4 AI Models in Review
≥90% Min Compliance Score

How a Request Flows Through Blaze

A developer describes work ("implement feature X"). Blaze routes it through four phases, dispatching specialized agents at each step, enforcing quality gates between phases, and collecting evidence throughout.

Developer: "implement feature X"
  ↓
sdlc-orchestrator (routes to phase · dispatches agents · enforces gates)
  ↓
PHASE 1 · Strategic Intelligence
  Agents: prd-generator, architecture-reviewer, critical-thinking, risk-assessment
  GATE: BDD scenarios
PHASE 2 · Automated Development
  Agents: test-coverage-analyzer, security-reviewer, code-quality-reviewer, playwright-e2e-tester
  GATE: ≥80% coverage
PHASE 3 · Orchestrated Deployment
  Agents: pipeline-orchestrator, trust-enforcer, cdd-methodology
  GATE: Pipeline + trust
PHASE 4 · Lifecycle Management
  Agents: pr-orchestrator, multi-ai-reviewer, compliance-manager
  GATE: ≥90% compliance
  ↓
Evidence Chain: evidence/development/{feature}/*.json
  ↓
Approved PR: multi-AI reviewed, human-approved, merged
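The routing above can be sketched as a small dispatcher. The phase names, agent rosters, and gates come from the diagram; the dictionary shape and the route function are illustrative assumptions, not Blaze's actual API.

```python
# Minimal routing sketch. Phase names, agents, and gates are taken from
# the flow diagram; the data structure itself is an assumption.
PHASES = {
    1: {"name": "Strategic Intelligence",
        "agents": ["prd-generator", "architecture-reviewer",
                   "critical-thinking", "risk-assessment"],
        "gate": "BDD scenarios"},
    2: {"name": "Automated Development",
        "agents": ["test-coverage-analyzer", "security-reviewer",
                   "code-quality-reviewer", "playwright-e2e-tester"],
        "gate": ">=80% coverage"},
    3: {"name": "Orchestrated Deployment",
        "agents": ["pipeline-orchestrator", "trust-enforcer",
                   "cdd-methodology"],
        "gate": "Pipeline + trust"},
    4: {"name": "Lifecycle Management",
        "agents": ["pr-orchestrator", "multi-ai-reviewer",
                   "compliance-manager"],
        "gate": ">=90% compliance"},
}

def route(phase: int) -> dict:
    """Return the agent roster and exit gate for a phase."""
    return PHASES[phase]
```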

Where Things Live

The platform is split into two directories with distinct roles, plus an external integration layer:

| Directory | Role | Contains |
| --- | --- | --- |
| .opencode/ | Intelligence | 65 agent definitions, commands, schemas, skills, tools — auto-discovered at startup |
| blaze/ | Enforcement | Python validators, git hooks, config files, evidence engines — loaded on demand |
| MCP servers | Integration | Neo4j (graph), Playwright (browser), Context7 (docs) — external tool access |

Agent Visibility Tiers

Not all 65 agents are user-facing. Three tiers control who can invoke what:

| Tier | Count | Who Invokes |
| --- | --- | --- |
| Primary | 2 | User switches tabs: sdlc, review |
| Visible | 10 | User types @agent-name in chat |
| Hidden | 55 | Only orchestrators call these — never the user directly |

Strategic Intelligence

Before a single line of code is written, Phase 1 produces a PRD with BDD Gherkin scenarios, an architecture review, risk assessment, and critical-thinking analysis.

prd-generator
Generates Product Requirements Documents with mandatory BDD Gherkin acceptance criteria for every user story.
BDD Phase 1
architecture-reviewer
Reviews system architecture, design patterns, and structural integrity. Validates ADRs against hypothesis framework.
Architecture Phase 1
critical-thinking
Applies rigorous 7-phase analytical framework. Challenges assumptions, identifies logical gaps, stress-tests decisions.
Analysis Phase 1

Supporting Agents

risk-assessment · codebase-mapper · goal-verifier · regulatory-analysis · ai-governance-advisor · design-review
Quality Gate: Phase 1 → Phase 2

PRD must contain BDD Gherkin scenarios with Given/When/Then acceptance criteria for every user story. No scenarios = no code.
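A minimal sketch of what this gate checks: every user story block in the PRD must carry a Given/When/Then scenario. The PRD layout and function name here are hypothetical; the real validation lives in the blaze/ enforcement layer.

```python
import re

def gherkin_gate(prd_text: str) -> bool:
    """Phase 1 -> 2 gate sketch: every 'User Story' section must contain
    Given/When/Then lines. The '## User Story' heading convention is an
    assumption for illustration, not Blaze's actual PRD format."""
    stories = re.split(r"^## User Story", prd_text, flags=re.M)[1:]
    if not stories:
        return False  # no scenarios = no code
    return all(
        all(re.search(rf"^\s*{kw}\b", story, flags=re.M)
            for kw in ("Given", "When", "Then"))
        for story in stories
    )
```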

Automated Development

All development happens in isolated git worktrees. Tests are written first (TDD), validated against BDD scenarios, and evidence is collected continuously (CDD).

TDD Red-Green-Refactor Cycle

1. RED: Write failing tests from BDD scenarios
2. GREEN: Write minimum code to pass tests
3. REFACTOR: Clean up while maintaining green tests
4. EVIDENCE: Collect coverage, results, BDD mapping
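The first two steps of the cycle, illustrated with a toy requirement (the cart example and all names below are hypothetical, not from the platform):

```python
# RED: this test is written first, derived from a BDD scenario such as
# "Given a cart with items, When totalled, Then the sum includes tax."
# It fails until cart_total exists.
def test_cart_total():
    assert cart_total([10.0, 5.0], tax=0.1) == 16.5

# GREEN: the minimum code that makes the test pass.
def cart_total(items, tax):
    return round(sum(items) * (1 + tax), 2)
```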

test-coverage-analyzer
Verifies test-first approach via git history. Enforces ≥80% coverage threshold.
TDD
security-reviewer
OWASP Top 10 vulnerability scan, secrets detection, dependency audit.
Security
code-quality-reviewer
Standards compliance, best practices, code smells, complexity analysis.
Quality
playwright-e2e-tester
Maps E2E tests to BDD Gherkin scenarios. Browser automation via MCP.
BDD
Quality Gate: Phase 2 → Phase 3

≥80% test coverage • All tests passing • Tests written before implementation (git history verified) • BDD scenarios executable • Worktree isolation enforced
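The coverage portion of this gate reduces to a simple threshold check. The report shape below is an assumption for illustration, not the actual schema of phase-2-coverage.json.

```python
def coverage_gate(report: dict, threshold: float = 80.0) -> bool:
    """Phase 2 -> 3 gate sketch: new-code line coverage must be at or
    above the threshold and no tests may fail. `report` mimics a
    coverage summary (hypothetical shape, not Blaze's real schema)."""
    lines = report["lines"]
    pct = 100.0 * lines["covered"] / lines["total"]
    return pct >= threshold and report.get("failures", 0) == 0
```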

Orchestrated Deployment

CI/CD pipelines run in isolation, trust verification ensures code integrity, and deployment evidence is collected for the compliance chain.

pipeline-orchestrator
Orchestrates CI/CD pipeline execution. Coordinates build, test, and deploy stages with dependency-aware sequencing.
CI/CD Phase 3
trust-enforcer
Validates code integrity claims. Prevents trust violations, verifies no unauthorized modifications to critical paths.
Trust Phase 3
cdd-methodology
Collects deployment evidence, validates compliance controls, and produces the evidence artifacts required for Phase 4 review.
CDD Phase 3

Evidence Directory Structure

# Evidence collected at Phase 3
evidence/development/{feature}/
  phase-3-pipeline-results.json        # CI/CD execution log
  phase-3-trust-verification.json      # Code integrity attestation
  phase-3-deployment-evidence.json     # Deployment confirmation
  screenshots/
    phase-3-deployment-verification.png
Quality Gate: Phase 3 → Phase 4

Pipeline must complete successfully • Trust verification must pass • Deployment evidence must be collected

Lifecycle Management

Phase 4 runs two complementary review mechanisms: pr-orchestrator coordinates 8 specialized review agents for code analysis, then multi-ai-reviewer runs cross-model validation across 4 AI models. Both must pass before human approval.

PR Orchestrator 8 Agents

Dispatches specialized review agents in parallel:

code-quality-reviewer · security-reviewer · test-coverage-analyzer · dependency-checker · architecture-reviewer · documentation-reviewer · compliance-manager · trust-enforcer

Multi-AI Reviewer 4 Models

Sequential cross-model validation for consensus:

1. Sonnet: quality & patterns
2. Opus: architecture & security
3. Gemini + GPT-4o: cross-model validation
4. Synthesis: consensus score

Consensus Rule

3 of 4 models must agree (majority consensus) on no critical issues. Disagreements trigger escalation to human reviewer with full diff context.
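The 3-of-4 rule is a straightforward majority count. A sketch, where the verdict mapping and return strings are illustrative assumptions:

```python
def consensus(verdicts: dict) -> str:
    """3-of-4 majority rule from the text. `verdicts` maps a model name
    to True (no critical issues found) or False. Shape is illustrative."""
    approvals = sum(verdicts.values())  # True counts as 1
    if approvals >= 3:
        return "pass"
    # Disagreement: escalate with full diff context per the rule above.
    return "escalate-to-human"
```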

Compliance Attestation

CDD methodology agent generates a final attestation linking every requirement to its test, code, and evidence artifacts.

Human Approval Required

No PR merges without human approval. AI review is advisory — the merge decision always requires a human.

Quality Gate: Phase 4 → Complete

≥90% compliance score • Attestation generated • All BDD acceptance criteria met • Human merge approval

Quality Gate Reference

Every phase transition is gated. These are hard blocks — no override, no bypass (except documented emergency protocol).

| Transition | Pillar | Criteria | Hard Block? |
| --- | --- | --- | --- |
| Phase 1 → 2 | BDD | PRD contains Gherkin scenarios for every user story | Yes |
| Phase 1 → 2 | CDD | Strategic intelligence evidence collected | Yes |
| Phase 2 → 3 | TDD | Test coverage ≥80%, all tests pass * | Yes |
| Phase 2 → 3 | TDD | Tests written before implementation (git history) | Yes |
| Phase 2 → 3 | BDD | BDD scenarios executable as automated tests | Yes |
| Phase 2 → 3 | Worktree | Development in isolated git worktree | Yes |
| Phase 3 → 4 | CDD | Pipeline green, trust verified, evidence collected | Yes |
| Phase 4 → Done | CDD | Compliance score ≥90% | Yes |
| Phase 4 → Done | CDD | Final attestation generated | Yes |
| Phase 4 → Done | Review | PR approved by human reviewer | Yes |
| Always | Security | 0 critical issues, ≤2 high (with justification) | Yes |
| Always | TDD | Every implementation file has a corresponding test file | Yes |

* The SDLC phase gate enforces ≥80% on new code. CI enforces a project-wide floor of ≥50% on lines/branches/functions/statements.

Enforcement Layers

Pre-commit hooks block direct commits to main. SDLC orchestrator validates phase transitions. PR orchestrator coordinates 8+ review agents. Trust enforcer prevents code integrity violations.

Evidence Paths

evidence/development/{feature}/   # Feature work
evidence/platform/{category}/     # Infrastructure
evidence/trace-backups/           # Cross-cutting

CDD Evidence Chain

Compliance-Driven Development produces a continuous, machine-readable evidence chain from requirement through attestation.

Requirement (PRD + work item)
  → BDD Scenario (Gherkin Given/When/Then)
  → Implementation (code + tests, TDD)
  → Verification (pipeline + trust check)
  → Attestation (compliance ≥90%)

Full Evidence Directory

evidence/development/{feature}/
  phase-1-strategic-intelligence.json   # Risk, architecture, PRD summary
  phase-2-test-results.json             # Unit + integration test output
  phase-2-coverage.json                 # Line/branch/function coverage
  bdd-scenario-mapping.json             # Gherkin ↔ test file mapping
  phase-3-pipeline-results.json         # CI/CD execution log
  phase-3-trust-verification.json       # Code integrity attestation
  phase-4-pr-review.json                # Multi-AI review consensus
  phase-4-final-attestation.json        # Compliance sign-off
  context-capsule.json                  # Cross-phase evidence summary
  screenshots/
    phase-3-deployment-verification.png
    phase-4-e2e-test-results.png
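The context capsule is described as a cross-phase summary; one way to picture its construction is folding the per-phase artifacts into a single document. The function and the capsule's shape below are assumptions, not the real context-capsule.json format.

```python
import json

def build_context_capsule(artifacts: dict) -> dict:
    """Sketch: combine per-phase JSON artifacts (filename -> raw JSON
    text) into one cross-phase summary. Only phase-* files are folded
    in; the output shape is a hypothetical illustration."""
    return {
        name: json.loads(text)
        for name, text in artifacts.items()
        if name.startswith("phase-")
    }
```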

Neo4j Graph Model

Evidence relationships are stored in a Neo4j knowledge graph via MCP. Nodes represent requirements, tests, code changes, and attestations. Edges encode traceability links — enabling queries like "show me every test that covers requirement X" or "which requirements lack evidence."

Enforcement Engines

Three Python enforcement engines run continuously:

evidence-generator.py — Collects artifacts at each phase
compliance-monitor.py — Real-time compliance scoring
workflow-validator.py — SDLC phase gate validation
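The text does not specify the scoring formula compliance-monitor.py uses; assuming a simple unweighted pass-fraction, real-time scoring could be sketched as:

```python
def compliance_score(gates: dict) -> float:
    """Hypothetical scoring sketch: percentage of gate checks passing.
    Equal weighting is an assumption; the real engine may weight gates."""
    passed = sum(1 for ok in gates.values() if ok)
    return 100.0 * passed / len(gates)
```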

BPMN & DMN Capabilities

Blaze includes a full process modeling pipeline: 7 specialized agents for creating, validating, testing, and deploying BPMN 2.0 process diagrams and DMN decision tables, with support for both Camunda 7 and Camunda 8 (Zeebe).

Agent Pipeline

1. Create: bpmn-specialist builds BPMN 2.0 XML from requirements
2. Validate: bpmn-validator (C7) + bpmn8-validator (C8) check schema & engine compatibility
3. Test: bpmn-tester generates gateway, boundary event, and flow tests
4. Simulate: process-simulator runs Monte Carlo what-if analysis via PM4Py
5. Commit: bpmn-commit-agent validates XML + visual overlaps before commit

bpmn-specialist
Full lifecycle BPMN 2.0 management: creation, repair, optimization, deployment. Enforces layout standards and visual clarity rules.
Primary Camunda 7+8
dmn-decision-architect
Creates and validates DMN 1.3 decision tables with FEEL expressions and proper hit policy configuration.
DMN FEEL
process-simulator
What-if scenario analysis using PM4Py simulation and Monte Carlo methods. Models throughput, bottlenecks, and resource contention.
Simulation PM4Py
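The Monte Carlo idea behind the simulator can be shown in miniature: sample random service times and average them. This is only the sampling concept; the real agent runs PM4Py over full BPMN models, and everything below is a toy assumption.

```python
import random

def simulate_throughput(service_min: float, service_max: float,
                        n_tokens: int = 1000, seed: int = 7) -> float:
    """Toy Monte Carlo sketch: sample uniform service times for a single
    task and return the mean completion time. Not the PM4Py pipeline."""
    rng = random.Random(seed)  # seeded for reproducible what-if runs
    times = [rng.uniform(service_min, service_max) for _ in range(n_tokens)]
    return sum(times) / n_tokens
```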

Enforced Standards

Two rule sets govern all BPMN output:

modeling-standards.md — Correct element types, labeling conventions, gateway flow labels, event types, pool/lane usage
visual-clarity.md — Left-to-right flow, minimum spacing (50px), no overlaps, grid alignment, consistent diamond sizes
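One visual-clarity rule (50px minimum spacing, no overlaps) reduces to a bounding-box gap check. The box format and function below are illustrative assumptions, not the bpmn-commit-agent's actual implementation.

```python
def violates_spacing(a: dict, b: dict, min_gap: float = 50.0) -> bool:
    """Sketch of the spacing rule: two elements, given as x/y/w/h boxes,
    must keep at least min_gap pixels of clearance on some axis.
    A negative gap on an axis means the boxes overlap on that axis."""
    gap_x = max(a["x"], b["x"]) - min(a["x"] + a["w"], b["x"] + b["w"])
    gap_y = max(a["y"], b["y"]) - min(a["y"] + a["h"], b["y"] + b["h"])
    return max(gap_x, gap_y) < min_gap
```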

Dual Engine Support

Separate validators for each Camunda generation:

bpmn-validator — Camunda 7 compatibility, legacy element checks
bpmn8-validator — Camunda 8.7+ (Zeebe), connector validation, FEEL expressions

Integration Layer

Blaze connects to external systems via Model Context Protocol (MCP) servers, supports multiple AI platforms, and integrates with major PM tools.

MCP Servers

Neo4j

Knowledge graph for evidence relationships, requirement traceability, and agent interaction history.

Playwright

Browser automation for E2E testing, screenshot evidence, and design reviews via MCP tool calls.

Context7

Up-to-date library documentation retrieval. Agents query Context7 for current API references.

Memory Bank

Persistent session context, decision logs, and active context handoff between conversations.

Platform Support

CapabilityOpenCode (L1)Claude Code (L2)Cursor (L3)
Agent orchestration
MCP server integration
SDLC phase enforcement
Multi-AI review pipeline
CDD evidence collection
Git worktree isolation

✓ Full support  •  ● Partial  •  ✗ Not available

PM Tool Integration

GitHub Issues · Jira · Azure DevOps · Linear

All PM operations go through dedicated manager agents — never direct CLI. Each agent handles format conversion, field mapping, BDD validation, and evidence linking.