Harden review prompts for consistency and noise reduction#579
Conversation
roborev: Combined Review (
|
roborev: Combined Review (
|
roborev: Combined Review (
|
roborev: Combined Review (
|
roborev: Combined Review (
|
roborev: Combined Review (
|
roborev: Combined Review (
|
roborev: Combined Review (
|
|
Going to handle all this overflow crap with some follow on templating work |
|
I'm going to cut 0.50.0 before merging this so I can have an opportunity to spend a day with these new prompts to see how they perform |
|
Sounds good to me @wesm |
9d9edae to
bc27b65
Compare
roborev: Combined Review (
|
Squashed series of prompt-quality improvements: - 🔬 Define impact-based severity levels in review prompts - 🔬 Require concrete harm articulation in review findings - 🔬 Add evidence thresholds to suppress speculative findings - 🔬 Add intent-implementation alignment check (cold-read prediction) - 🔬 Add self-review quality gate before output - 🔬 Add evidence thresholds to insights analysis - 🔬 Gracefully degrade intent-alignment check for vague commit messages - 🔬 Fix quality gate and range intent-alignment over-constraints - 🔬 Demarcate commit messages as untrusted external data - 🔬 XML-escape commit metadata to prevent tag injection - 🔬 Fix errcheck lint for xml.EscapeText - 🔬 Guard intent-alignment check against trimmed commit messages
bc27b65 to
d352d4b
Compare
roborev: Combined Review (
|
wesm
left a comment
There was a problem hiding this comment.
a/b tested locally and looks good
Summary
high/medium/lowlabels with concrete definitions tied to real-world impact (data loss, exploitability, blast radius). Gives all agents a shared calibration standard so severity is consistent across reviews.🤖 Generated with Claude Code