Improve self-review, e2e plan consolidation, and DOCS story templates#42
Conversation
The /sync command was silently dropping the parent field when creating epics, then listing it as a manual "Next Step" instead of treating it as a failure. This adds explicit Jira API field syntax, post-creation verification for both epics and stories, and prohibits deferring parent linking to manual follow-up. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
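A minimal sketch of what this verification amounts to, using the Jira Cloud REST API v3 payload shape (the `parent` field on create, then a read-back check); the endpoint shape and field names are assumptions for other Jira deployments:

```python
def epic_payload(project: str, summary: str, parent_key: str) -> dict:
    """Jira Cloud REST v3 create-issue payload with an explicit parent link."""
    return {
        "fields": {
            "project": {"key": project},
            "issuetype": {"name": "Epic"},
            "summary": summary,
            # Parent is set at creation time, never deferred to a manual step
            "parent": {"key": parent_key},
        }
    }

def verify_parent(issue_json: dict, expected_parent: str) -> None:
    """Post-creation check: a missing or wrong parent link is a failure."""
    actual = issue_json.get("fields", {}).get("parent", {}).get("key")
    if actual != expected_parent:
        raise RuntimeError(f"parent link missing or wrong: {actual!r}")
```

`verify_parent` would run against the `GET /issue/{key}?fields=parent` response immediately after each epic or story is created, turning a silently dropped field into a hard failure.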
Scenarios that share the same setup (given) and action (when) are now merged into consolidated tests with multiple validations, reducing e2e suite execution time by avoiding repeated expensive setup+action. Adds a consolidation step, 8-validation cap, AC-tagged traceability, and updates downstream phases (revise, publish) for consistency. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
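The grouping rule can be sketched as follows; the scenario dict shape is an assumption, and the cap of 8 is the value at this point in the PR (raised to 15 later in the review):

```python
from collections import defaultdict

CAP = 8  # validations per consolidated scenario (raised to 15 later in this PR)

def consolidate(scenarios):
    """Merge scenarios sharing the same (given, when) into consolidated tests.

    Each validation is tagged with its source AC to preserve traceability,
    and oversized groups are split rather than exceeding the cap.
    """
    groups = defaultdict(list)
    for s in scenarios:
        groups[(s["given"], s["when"])].append(s)

    merged = []
    for (given, when), members in groups.items():
        tagged = [f"[{m['ac']}] {v}" for m in members for v in m["validations"]]
        for i in range(0, len(tagged), CAP):
            merged.append({"given": given, "when": when,
                           "validations": tagged[i:i + CAP]})
    return merged
```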
Replace inline three-tier review logic in implement and e2e /code phases with references to _shared/recipes/self-review-gate.md. Add SUPPLEMENTARY_CRITERIA parameter to the recipe so callers can pass domain-specific evaluation criteria (e2e anti-patterns) and review focus directives (cross-cutting concerns) through the formal channel into the reviewer subagent. Reframe publish-time review from full re-review to cross-cutting review focused on inter-task interactions, since each sub-task is now individually reviewed during /code. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add Design as evaluation criterion #4 in the shared review protocol with concrete detection signals (parameter count, import spread, repeated error handling, exposed internal types). Add "Challenge decisions, not just implementation" as the first core principle, pushing reviewers to question whether abstractions should exist and whether boundaries are in the right place — not just whether the implementation is correct. Replace inline code quality checklists in implement and e2e /validate Step 6 with self-review gate invocations, bringing subagent isolation, iterative fix-and-re-review, and the full protocol to the validation phase. Preserve domain-specific checks as SUPPLEMENTARY_CRITERIA. Remove stale criteria enumeration from code-review workflow. Update validation report templates to reflect gate-based review. Reframe self-review gate preamble to scope what it can and cannot catch without enumerating specific criteria. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the self-graded 17-item checklist in /decompose with a subagent-based review gate that evaluates decomposition artifacts against the PRD without access to the design document. The reviewer sees only what the artifacts actually say, catching gaps the decomposer's mental model papers over. - Add design/decomposition-review.md with 6 evaluation criteria (requirement coverage, epic structure, story completeness, testing commitment, integration stability, documentation), finding format, severity definitions, and validation rules - Replace decompose Step 7 checklist with Steps 7-12: structural verification, review invocation (subagent with inline fallback), validate-and-assess, re-review loop (max 2 rounds), report, and updated presentation with FLAG handling - Fix self-review-gate.md Step 4 fresh subagent path to include SUPPLEMENTARY_CRITERIA, CONTEXT_FILES, and AGENTS.md/CLAUDE.md Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
[DOCS] stories previously shared the same template as [DEV]/[QE]/etc stories, inheriting code-oriented sections (Implementation Guidance, Testing Approach) that are inappropriate for documentation work. This replaces those sections with Documentation Scope (what the reader needs to understand) and Documentation Inputs (inventory of user-facing artifacts to document), keeping [DOCS] stories focused on *what* needs documenting rather than *how* to build it. Documentation Inputs are explicitly framed as guidance, not authority — implementation may diverge from design, so documentation authors should verify against shipped code. Updates the decomposition review protocol and sync phase to recognize and evaluate the [DOCS]-specific template structure. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…stories Same consistency fix applied to decompose.md quality checks and decomposition-review.md — guidelines.md was missed. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
"Every story includes functionality and testing" was only true for [DEV] stories. Scoped it to [DEV] and added a line noting [DOCS] stories use their own template. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Walkthrough

This PR standardizes code review workflows across implementation and E2E tracks by introducing a shared self-review gate recipe with supplementary criteria, adds a new decomposition review protocol for design quality assurance, refines story templates to distinguish documentation (`[DOCS]`) from other work, and introduces scenario consolidation patterns for E2E testing to avoid duplicate setup and actions.

Changes: workflow standardization and review protocols
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
e2e/skills/publish.md (1)
86-94: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: Require a validation refresh after cross-cutting fix commits.
This step can create new commits after `/validate` already passed. Add an explicit requirement to re-run validation (or return to `/validate`) before PR creation when Step 2 applies fixes.

Proposed patch:

```diff
 If the gate made code fixes, commit them before proceeding:
 @@
 git commit -m "{JIRA-KEY}: address cross-cutting review findings"
+If Step 2 produced a fix commit, re-run /validate (or at minimum the
+validation profile's required checks) before continuing to Step 3.
```

🤖 Prompt for AI Agents: Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/skills/publish.md` around lines 86-94, update the publish workflow docs to require re-running validation when Step 2 creates commits: after the existing "If the gate made code fixes, commit them" block (referencing Step 2 and the git commit instructions), add a sentence that if Step 2 produced any fix commits the contributor must re-run `/validate` (or at minimum the validation profile's required checks) and wait for those checks to pass before proceeding to Step 3 or creating the PR; mention explicitly that validation must be green after fix commits to proceed.

🧹 Nitpick comments (2)
e2e/skills/revise.md (1)
94-97: ⚡ Quick win: Add a uniqueness check for scenario names/IDs in consistency review.

Given consolidation and splitting, include an explicit check that scenario identifiers remain unique (e.g., no duplicate `C1`/`S2` labels or repeated titles) to protect AC/task traceability.

Proposed patch:

```diff
 - Does the Scenario Consolidation table still match the current scenario list?
+- Are scenario names/identifiers unique across the plan (no duplicate consolidated or standalone scenario labels)?
 - Are commit messages still properly formatted?
```

🤖 Prompt for AI Agents: Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/skills/revise.md` around lines 94-97, add an explicit uniqueness check to the consistency review: verify that scenario identifiers (e.g., labels like C1, S2) and scenario titles are unique after consolidation/splitting; update the checklist items to include "Are all scenario IDs and titles unique?" and mention a deterministic validation step that scans the Scenario Consolidation table and the scenario list for duplicate IDs or duplicate titles and fails the review if any duplicates are found, to preserve AC/task traceability.

e2e/skills/plan.md (1)
270-273: ⚡ Quick win: Add explicit unique-scenario enforcement to the self-review checklist.
Consolidation introduces more renaming/merging; add a checklist item ensuring scenario identifiers and titles are unique to avoid traceability collisions in coverage/task mappings.
Proposed patch
```diff
 - [ ] Scenarios sharing setup+action are consolidated, not duplicated as separate tests
 - [ ] No consolidated scenario exceeds 8 validations
 - [ ] Each validation in a consolidated scenario is tagged with its source AC
+- [ ] Scenario identifiers/titles are unique across the plan (no duplicate C#/S# or repeated names)
 - [ ] The plan is achievable — no scenarios depend on unmerged features or unavailable test infrastructure methods
```

🤖 Prompt for AI Agents: Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/skills/plan.md` around lines 270-273, add a new self-review checklist item that enforces explicit uniqueness of scenario identifiers and titles to prevent traceability collisions when consolidating tests; update the checklist near the existing bullets (the block containing "Scenarios sharing setup+action are consolidated..." and "No consolidated scenario exceeds 8 validations") to include a line such as "All scenario identifiers and titles are unique across the plan to avoid coverage/task mapping collisions" and ensure the guidance mentions verifying both ID fields and human-readable titles during consolidation and renaming.

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate.

Inline comments:

In `@e2e/skills/code.md`:
- Around lines 254-256: Add an explicit re-run of task-scoped checks (unit tests, linters, and any task-specific validation) immediately after the gate may have made code fixes (i.e., after Step 3c/3d) and before the commit in Step 3f; if fixes are applied, re-stage the affected files and only proceed to commit when these re-run checks pass, and record any dismissed findings in the Implementation Report (Discoveries section). Ensure the flow invokes the same check commands used earlier, re-adds changed files to the index, and gates the commit on the re-run success to avoid committing unverified fixes.

In `@implement/skills/validate.md`:
- Around lines 204-212: Add a mandatory re-run step immediately after the commit instructions in validate.md: after the git commit line, instruct the user to re-run the affected pre-PR checks referenced in Step 3 and, if the committed changes touch covered packages, re-run the coverage analysis from Step 4 before proceeding; reference "Step 3" and "Step 4" so reviewers know which checks to execute and ensure the text explicitly states to re-run only the affected checks and coverage when relevant.

---

Outside diff comments:

In `@e2e/skills/publish.md`:
- Around lines 86-94: Update the publish workflow docs to require re-running validation when Step 2 creates commits: after the existing "If the gate made code fixes, commit them" block (referencing Step 2 and the git commit instructions), add a sentence that if Step 2 produced any fix commits the contributor must re-run `/validate` (or at minimum the validation profile's required checks) and wait for those checks to pass before proceeding to Step 3 or creating the PR; mention explicitly that validation must be green after fix commits to proceed.

---

Nitpick comments:

In `@e2e/skills/plan.md`:
- Around lines 270-273: Add a new self-review checklist item that enforces explicit uniqueness of scenario identifiers and titles to prevent traceability collisions when consolidating tests; update the checklist near the existing bullets (the block containing "Scenarios sharing setup+action are consolidated..." and "No consolidated scenario exceeds 8 validations") to include a line such as "All scenario identifiers and titles are unique across the plan to avoid coverage/task mapping collisions" and ensure the guidance mentions verifying both ID fields and human-readable titles during consolidation and renaming.

In `@e2e/skills/revise.md`:
- Around lines 94-97: Add an explicit uniqueness check to the consistency review: verify that scenario identifiers (e.g., labels like C1, S2) and scenario titles are unique after consolidation/splitting; update the checklist items under the consistency review (the questions currently listing consolidation opportunities, the 8-validation cap, and the Scenario Consolidation table) to include "Are all scenario IDs and titles unique?" and mention a deterministic validation step that scans the Scenario Consolidation table and the scenario list for duplicate IDs or duplicate titles and fails the review if any duplicates are found, to preserve AC/task traceability.

🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
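The two uniqueness nitpicks above reduce to one deterministic scan; a sketch, assuming each scenario carries `id` and `title` fields:

```python
from collections import Counter

def duplicate_scenarios(scenarios):
    """Return (duplicate_ids, duplicate_titles); both empty means the plan is clean."""
    ids = Counter(s["id"] for s in scenarios)
    # Normalize titles so "Device enrolls" and "Device Enrolls" collide
    titles = Counter(s["title"].strip().lower() for s in scenarios)
    return ([i for i, n in ids.items() if n > 1],
            [t for t, n in titles.items() if n > 1])
```

Failing the review when either list is non-empty keeps AC/task traceability intact through consolidation and splitting.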
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 043ef183-7b1d-4c83-8bfc-375c35889106

📒 Files selected for processing (17)

- _shared/recipes/self-review-gate.md
- _shared/review-protocol.md
- code-review/skills/start.md
- design/README.md
- design/decomposition-review.md
- design/guidelines.md
- design/skills/decompose.md
- design/skills/sync.md
- e2e/guidelines.md
- e2e/skills/code.md
- e2e/skills/plan.md
- e2e/skills/publish.md
- e2e/skills/revise.md
- e2e/skills/validate.md
- implement/skills/code.md
- implement/skills/publish.md
- implement/skills/validate.md
The self-review gate can modify code, but all three workflows (e2e /code, implement /code, implement /validate) committed the fixes without re-running tests and lint. Now each re-runs the task-scoped checks to verify the post-fix state before committing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
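The pattern this commit describes, re-running the task-scoped checks and gating the commit on their success, can be sketched as follows (the check commands and the `run` hook are placeholders, not the workflows' actual command names):

```python
import subprocess

def checks_green(checks):
    """Re-run the task-scoped checks; True only if every one passes."""
    return all(subprocess.run(cmd).returncode == 0 for cmd in checks)

def commit_gate_fixes(checks, message, run=subprocess.run):
    """Commit self-review-gate fixes only after the checks pass again."""
    if not checks_green(checks):
        raise RuntimeError("post-fix checks failed; fix before committing")
    # Re-stage the files the gate touched, then commit the verified state
    run(["git", "add", "-A"], check=True)
    run(["git", "commit", "-m", message], check=True)
```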
| Design section: {§4.3 API Changes, or specific subsection} |
| ``` |
| **For `[DOCS]` stories** (see `[DOCS]` story requirements below for the |
DOCS has a dedicated workflow (docs-writer) for the implementation, which currently does not depend on these stories.
If we choose to follow this approach, we can simply ignore the DOCS stories in this context.
[DOCS] = downstream docs.
[DEV] is used for upstream docs.
This applies to all DOCS-related references in this PR and is pending the decision about properly handling the upstream docs.
The template change is orthogonal to the workflow-wiring question. Regardless of whether docs-writer eventually consumes [DOCS] stories, replacing code-oriented Implementation Guidance and Testing Approach with Documentation Scope and Documentation Inputs is the right structure for documentation work. The previous template would generate guidance like "interface shapes, function/method names, patterns to follow" for docs stories, which doesn't make sense.
We explicitly decided not to wire docs-writer to [DOCS] stories in this PR — the Documentation Inputs are design-time guidance (not authority), and the actual implementation may diverge. The docs-writer still discovers what changed from PRs and diffs. This change doesn't touch docs-writer at all.
| - **Create only — never update or delete.** Once Jira issues are created, they evolve independently — developers add implementation notes, QA adds test details, PMs adjust criteria. Pushing file content back to Jira would clobber those additions. If the decomposition is revised after sync, `/revise` will tell the user exactly which Jira issues need manual updates. |
| - **Link to source.** Every Jira issue description references the design document. |
| - **Jira-native references.** Local identifiers (`Story 1.01`, `Epic 1`) have meaning only within the `.artifacts/` directory. When constructing Jira issue descriptions, resolve all local references (in Dependencies, Design Reference, and any other cross-references) to Jira issue keys using the sync manifest. Jira is the source of truth — readers of a Jira issue should never need to decode a local artifact numbering scheme. | ||
| - **Jira-native references.** Local identifiers (`Story 1.01`, `Epic 1`) have meaning only within the `.artifacts/` directory. When constructing Jira issue descriptions, resolve all local references (in Dependencies, Documentation Inputs, Design Reference, and any other cross-references) to Jira issue keys using the sync manifest. Jira is the source of truth — readers of a Jira issue should never need to decode a local artifact numbering scheme. |
It's better not to make changes to the docs handling until we have a final decision about the upstream docs.
See reply on decompose.md — the template change is independent of the upstream docs decision. We're not wiring any workflow to [DOCS] stories here, just fixing the template so it doesn't generate code-oriented guidance for documentation work.
| (`**Story 1.01 — {title}:**` → `**EDM-XXXX — {title}:**`). |
| **For `[DOCS]` stories:** | ||
| ```markdown |
Will it be more readable to have a folder with "templates"?
I considered this but prefer keeping the templates inline. They're consumed in-place during sync and are short enough that inlining keeps context local. Extracting into a templates/ folder adds indirection — the agent would need to read the template file separately to understand what Jira descriptions look like, rather than seeing it right where it's used.
| @@ -0,0 +1,160 @@ |
| # Decomposition Review Protocol |
This file seems very desirable, but I wonder about its token consumption/economics.
There are a lot of judgment calls and high-level reviews here that try to capture the big picture.
I guess we just need to try and improve over time.
Just to be clear, I don't see an action item with this review comment. It's more for sharing thoughts.
Agreed on the token economics concern — this is expensive. The tradeoff we discussed: decomposition errors caught early (missing requirements, broken dependency chains, vague acceptance criteria) tend to compound downstream. A story with a missing AC becomes a PR with missing tests becomes a feature gap in production. The review cost is front-loaded but should reduce rework in the implementation and QE phases. We'll need to see how it plays out in practice and adjust.
| - Follow the section guidance (`templates/section-guidance.md`) for content standards. |
| - Design-scoped goals are **implementation constraints**, not product outcomes. They complement — not duplicate — the PRD's goals. |
| - Acceptance criteria must be **behavioral outcomes** (what the system does, testable from outside), not activities or implementation specifications. Implementation details belong in Implementation Guidance. |
| - Acceptance criteria must be **behavioral outcomes** (what the system does, testable from outside), not activities or implementation specifications. For non-`[DOCS]` stories, implementation details belong in Implementation Guidance. For `[DOCS]` stories, implementation details belong in Documentation Scope. |
Referring DOCS again...
See reply on decompose.md.
| 2. **Merge validations.** For each group, combine the validation blocks |
| into a single consolidated scenario. Tag each validation with its |
| source AC (e.g., `[AC-1]`, `[AC-3]`) to preserve traceability. |
| 3. **Apply the cap.** A consolidated scenario must not exceed **8 |
If each validation represents an "assert" statement, I suspect that a cap of 8 for a consolidated scenario is too limiting. Maybe 25?
Agreed — 8 was too tight. We looked at the flightctl e2e suite: median is 9 assertions per test, 52% exceed 8. Raised the cap to 15 in 1a7ebc5, which covers 75% of existing tests. The remaining 25% are large integration-style tests (CSR lifecycle, inline config) that naturally have more assertions.
| - **Setup (given):** {shared setup — what the test needs beyond suite-level per-test setup} |
| - **Action (when):** |
| 1. {first action using test infrastructure methods} |
| 2. {subsequent action — multi-step sequences are common in e2e} |
Best practices for e2e tests try to minimize the number of actions in a single scenario, in many cases to only a single action. The rationale is the ability to attribute failures when multiple actions change the state of the system.
Which action caused the validation failure, the first or the fifth?
Using multiple actions is required only in cases where you cannot achieve the desired state change in a single action.
This principle should be expressed as part of the guidelines.
Good point — added a minimize-actions principle to e2e/guidelines.md and updated the plan template to express single-action preference instead of normalizing multi-step sequences. Fixed in 1a7ebc5.
The guideline now reads: "Each scenario should use the fewest state-changing actions needed to reach the condition under test — ideally one. When a test fails after five actions, it's unclear which action caused the failure. Multi-action sequences are acceptable only when the desired state cannot be reached in fewer steps."
amir-yogev-gh
left a comment
Picked the wrong option when providing the last feedback...
The cap of 8 validations per consolidated scenario was too tight — 52% of existing flightctl e2e tests exceed it (median is 9). Raised to 15, which covers 75% of existing tests. Added a minimize-actions-per-scenario principle to e2e guidelines: prefer the fewest state-changing actions needed to reach the condition under test, so failures are diagnosable. Updated the plan template to express single-action preference instead of normalizing multi-step sequences. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
♻️ Duplicate comments (1)
e2e/skills/plan.md (1)
64-67: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win: Unify validation cap references (15 vs 8) to avoid contradictory plan behavior.
Lines 64-67 set the cap to 15, but lines 167 and 272 still enforce 8. This contradiction will cause inconsistent consolidation and self-review decisions.
Suggested fix
```diff
-  merging would exceed the 8-validation cap).}
+  merging would exceed the 15-validation cap).}
-- [ ] No consolidated scenario exceeds 8 validations
+- [ ] No consolidated scenario exceeds 15 validations
```

Also applies to: 167-167, 272-273
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/skills/plan.md` around lines 64 - 67, The doc has conflicting validation caps (15 vs 8); update all enforcement references that still say "8" to use the unified cap "15" so consolidation and self-review logic is consistent—search for occurrences of the numeric cap "8" (e.g., the enforcement statements around the consolidation/self-review rules that reference 8 validations) and replace them with "15", and ensure any explanatory text and examples (mentions of "cap", "validations", or rule names) reflect the single cap value "15" consistently.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Duplicate comments:
In `@e2e/skills/plan.md`:
- Around line 64-67: The doc has conflicting validation caps (15 vs 8); update
all enforcement references that still say "8" to use the unified cap "15" so
consolidation and self-review logic is consistent—search for occurrences of the
numeric cap "8" (e.g., the enforcement statements around the
consolidation/self-review rules that reference 8 validations) and replace them
with "15", and ensure any explanatory text and examples (mentions of "cap",
"validations", or rule names) reflect the single cap value "15" consistently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 0ca8f8b6-dc8a-4e72-9627-9a785f17b955
📒 Files selected for processing (5)
- e2e/guidelines.md
- e2e/skills/code.md
- e2e/skills/plan.md
- implement/skills/code.md
- implement/skills/validate.md
✅ Files skipped from review due to trivial changes (3)
- implement/skills/code.md
- e2e/guidelines.md
- implement/skills/validate.md
🚧 Files skipped from review as they are similar to previous changes (1)
- e2e/skills/code.md
Updated remaining references to the old 8-validation cap in e2e/skills/plan.md (two places) and e2e/skills/revise.md to 15. Added re-run of validation checks after gate fixes in e2e/skills/publish.md — same pattern fixed in code.md and validate.md earlier. Added scenario identifier uniqueness checks to plan.md self-review checklist and revise.md consistency review. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- Self-review gate (`_shared/recipes/self-review-gate.md`) with a deepened adversarial review protocol, replacing the previous checklist-based approach
- `/plan` phase — merges overlapping scenarios, enforces unique naming, and streamlines the plan-to-code pipeline
- `/sync` to prevent orphaned epics
- `/decompose` — a subagent-based review step that evaluates epics/stories against the PRD without seeing the design document
- `[DOCS]` stories: a dedicated template in design `/decompose` and `/sync` — replaces code-oriented `Implementation Guidance` and `Testing Approach` with `Documentation Scope` and `Documentation Inputs`, keeping documentation stories focused on what needs documenting rather than how to build it

Details
Self-review gate (`_shared/`, `code-review/`, `implement/`, `e2e/`)

E2e plan consolidation (`e2e/`)
- A consolidation step in `/plan` that merges overlapping scenarios and enforces unique naming
- Updates to `/code`, `/validate`, `/publish`, and `/revise` for consistency with the updated plan structure

Design workflow (`design/`)
- `/sync` — verifies `parent.key` after creation
- `/decompose` with a dedicated review protocol (`decomposition-review.md`)
- Non-`[DOCS]` stories keep `Implementation Guidance` + `Testing Approach`; `[DOCS]` stories get `Documentation Scope` + `Documentation Inputs`
- `Documentation Inputs` are explicitly framed as guidance, not authority — implementation may diverge from design
- Updates to `guidelines.md`, `README.md`, `decomposition-review.md`, and `sync.md` for consistency with the template split

Test plan
- `/decompose` on a feature with both `[DEV]` and `[DOCS]` stories — verify the correct template is used for each
- `/sync` on a decomposition with `[DOCS]` stories — verify Jira descriptions use `Documentation Scope` and `Documentation Inputs` (not `Implementation Guidance` and `Testing Approach`)
- `/decompose` with the decomposition review enabled — verify the reviewer evaluates `[DOCS]` stories against the `[DOCS]` template criteria
- `/plan` and verify scenario consolidation merges overlapping scenarios
- `/code` and verify the self-review gate uses the shared review protocol

🤖 Generated with Claude Code