fix(mcp): surface actionable error when adhoc_filters passed to generate_chart #39507

Draft
sadpandajoe wants to merge 1 commit into master from fix-mcp-adhoc_filters-error

Conversation

@sadpandajoe
Member

SUMMARY

When LLMs pass adhoc_filters (Superset's internal format) to the generate_chart MCP tool, the validation pipeline returns an opaque validation_system_error with message "An error occurred", empty details, and no suggestions — making it impossible for the LLM to self-correct.

Root cause: ChartErrorBuilder.TEMPLATES had no "validation_error" key, so build_error() fell through to defaults. Additionally, Pydantic's discriminated union error strings list all variant type names, blowing past the 200-char security truncation before the actual "did you mean 'filters'?" message.
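The truncation interaction can be illustrated with a minimal, self-contained sketch. The 200-char limit and the variant names below are assumptions drawn from this description, not the real Superset constants:

```python
MAX_ERROR_LEN = 200  # assumed security truncation limit from the description above


def truncate(msg: str, limit: int = MAX_ERROR_LEN) -> str:
    """Naive truncation standing in for the security sanitizer."""
    return msg[:limit]


# A discriminated-union error string begins by listing every variant type name
# (hypothetical names), pushing the useful hint past the truncation point.
variants = ", ".join(f"ChartConfigVariant{i}" for i in range(20))
raw = (
    f"Input should be a valid dictionary or instance of {variants}; "
    "Unknown field 'adhoc_filters' -- did you mean 'filters'?"
)

print("did you mean" in truncate(raw))  # → False: the actionable hint is cut off
```

This is why the fix extracts the short per-field messages from the `__cause__` chain before truncation rather than truncating the raw union string.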

Fix:

  • Add "validation_error" template to ChartErrorBuilder.TEMPLATES
  • Extract meaningful messages from Pydantic's __cause__ chain before truncation (only accesses err["msg"], never err["input"])
  • Catch config validation errors specifically with targeted error_type/error_code instead of the generic catch-all
  • Add skip_generic flag to prevent keyword-based replacement from masking field-specific messages
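The template fall-through in the first bullet can be sketched as follows. `ChartErrorBuilder` is simplified to a module-level dict and function here, so the names and shapes are illustrative, not the actual Superset code:

```python
# Illustrative sketch only: build_error() looks up a template by key and falls
# back to an opaque default when the key is missing.
DEFAULT_TEMPLATE = {"message": "An error occurred", "details": "", "suggestions": []}

TEMPLATES = {
    # Before the fix, this key was absent, so lookups fell through to the default.
    "validation_error": {
        "message": "Configuration validation failed: {reason}",
        "details": "{reason}",
        "suggestions": [
            "Check field names in your chart configuration",
            "Read the chart://configs resource for valid fields and examples",
        ],
    },
}


def build_error(template_key: str, reason: str = "") -> dict:
    template = TEMPLATES.get(template_key, DEFAULT_TEMPLATE)
    return {
        "message": template["message"].format(reason=reason),
        "details": template["details"].format(reason=reason),
        "suggestions": list(template["suggestions"]),
    }
```

With the key present, `build_error("validation_error", "Unknown field 'adhoc_filters'")` yields a populated message and suggestions; an unknown key still degrades to the generic default.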

BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF

Before: adhoc_filters in config returns:

{
  "success": false,
  "error_type": "validation_system_error",
  "message": "An error occurred",
  "details": "",
  "suggestions": [],
  "error_code": "VALIDATION_PIPELINE_ERROR"
}

After:

{
  "success": false,
  "error_type": "config_validation_error",
  "message": "Configuration validation failed: Unknown field 'adhoc_filters' — did you mean 'filters'?",
  "details": "Unknown field 'adhoc_filters' — did you mean 'filters'?",
  "suggestions": [
    "Check field names in your chart configuration",
    "Read the chart://configs resource for valid fields and examples",
    "Use the 'filters' field for structured filters (column/op/value) — do NOT use 'adhoc_filters'"
  ],
  "error_code": "CONFIG_VALIDATION_ERROR"
}

TESTING INSTRUCTIONS

  1. Call generate_chart with adhoc_filters in the config — it should now return an error mentioning "did you mean 'filters'?" with the CONFIG_VALIDATION_ERROR code
  2. Run: pytest tests/unit_tests/mcp_service/chart/validation/test_pipeline_unknown_fields.py -v
  3. Run full MCP suite: pytest tests/unit_tests/mcp_service/ -v (1237 tests pass)
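Step 1 can be expressed as a hedged sketch of the assertions involved. The helper and sample payload below are illustrative; the real regression tests live in test_pipeline_unknown_fields.py:

```python
# Illustrative assertions mirroring the expected "after" response from the PR
# body. check_error_payload is a hypothetical helper, not Superset code.
def check_error_payload(payload: dict) -> None:
    assert payload["success"] is False
    assert payload["error_type"] == "config_validation_error"
    assert "did you mean 'filters'?" in payload["message"]
    assert payload["error_code"] == "CONFIG_VALIDATION_ERROR"
    assert payload["suggestions"], "suggestions must not be empty"


# Sample payload copied from the "After" example above (abbreviated).
sample = {
    "success": False,
    "error_type": "config_validation_error",
    "message": "Configuration validation failed: Unknown field 'adhoc_filters' — did you mean 'filters'?",
    "error_code": "CONFIG_VALIDATION_ERROR",
    "suggestions": ["Check field names in your chart configuration"],
}

check_error_payload(sample)  # passes silently when the payload is actionable
```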

ADDITIONAL INFORMATION

  • Has associated issue:
  • Required feature flags:
  • Changes UI
  • Includes DB Migration (follow approval process in SIP-59)
    • Migration is atomic, supports rollback & is backwards-compatible
    • Confirm DB migration upgrade and downgrade tested
    • Runtime estimates and downtime expectations provided
  • Introduces new feature or API
  • Removes existing feature or API

🤖 Generated with Claude Code

…ate_chart

The validation pipeline caught unknown-field errors from parse_chart_config
but used a template_key ("validation_error") that didn't exist in
ChartErrorBuilder.TEMPLATES, causing the error to fall through to the
default "An error occurred" message with empty details and suggestions.

Additionally, Pydantic's discriminated union error strings start with a
very long type name listing all variants, which caused the 200-char
security truncation to cut off the actual helpful message before it
reached the user.

Fixes:
- Add "validation_error" template to ChartErrorBuilder.TEMPLATES
- Extract meaningful error messages from Pydantic's __cause__ chain
  before truncation (only accesses err["msg"], never err["input"])
- Catch config validation errors specifically in the pipeline with a
  targeted error_type/error_code instead of the generic catch-all
- Skip generic-message replacement for config errors so field-specific
  "did you mean?" suggestions are preserved

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@netlify

netlify Bot commented Apr 21, 2026

Deploy Preview for superset-docs-preview ready!

🔨 Latest commit: d666ca9
🔍 Latest deploy log: https://app.netlify.com/projects/superset-docs-preview/deploys/69e7156e03b71a00080e2aec
😎 Deploy Preview: https://deploy-preview-39507--superset-docs-preview.netlify.app

@codecov

codecov Bot commented Apr 21, 2026

Codecov Report

❌ Patch coverage is 0% with 26 lines in your changes missing coverage. Please review.
✅ Project coverage is 64.36%. Comparing base (78fb096) to head (d666ca9).
⚠️ Report is 9 commits behind head on master.

Files with missing lines Patch % Lines
superset/mcp_service/chart/validation/pipeline.py 0.00% 26 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #39507      +/-   ##
==========================================
- Coverage   64.47%   64.36%   -0.11%     
==========================================
  Files        2558     2560       +2     
  Lines      133243   133570     +327     
  Branches    30956    31012      +56     
==========================================
+ Hits        85902    85972      +70     
- Misses      45850    46105     +255     
- Partials     1491     1493       +2     
Flag Coverage Δ
hive 39.74% <0.00%> (-0.11%) ⬇️
mysql 60.24% <0.00%> (-0.22%) ⬇️
postgres 60.32% <0.00%> (-0.22%) ⬇️
presto 41.52% <0.00%> (-0.12%) ⬇️
python 61.89% <0.00%> (-0.23%) ⬇️
sqlite 59.96% <0.00%> (-0.22%) ⬇️
unit 100.00% <ø> (ø)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

Contributor

Copilot AI left a comment


Pull request overview

Improves MCP chart validation error surfacing so LLM clients get actionable, field-specific feedback (not opaque “An error occurred”) when invalid chart configs (e.g., adhoc_filters) are provided.

Changes:

  • Add a validation_error template to ChartErrorBuilder so validation failures don’t fall back to the generic default.
  • Extract concise, meaningful Pydantic validation messages (from the __cause__ ValidationError) before security truncation, with an option to skip generic message replacement.
  • Add regression tests ensuring unknown config fields yield actionable messages/suggestions and the CONFIG_VALIDATION_ERROR code.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

File Description
superset/mcp_service/utils/error_builder.py Adds a missing validation_error template used by the validation pipeline to produce structured error responses.
superset/mcp_service/chart/validation/pipeline.py Improves sanitization/extraction of config parsing validation errors and returns a specific config-validation error response.
tests/unit_tests/mcp_service/chart/validation/test_pipeline_unknown_fields.py Adds regression coverage for unknown config fields (including adhoc_filters) to ensure actionable errors/suggestions.

Comment on lines +178 to +185
"validation_error": {
    "message": "Configuration validation failed: {reason}",
    "details": "{reason}",
    "suggestions": [
        "Check field names in your chart configuration",
        "Read the chart://configs resource for valid fields and examples",
    ],
},
Comment on lines +18 to +25
"""
Regression tests for validation pipeline error surfacing.

Ensures that unknown fields (like adhoc_filters) produce actionable error
messages instead of opaque "An error occurred" responses.

See: sc-103356
"""
Comment on lines +88 to +101
if not isinstance(cause, PydanticValidationError):
    return None

# Extract just the message parts from Pydantic's structured errors
messages = []
for err in cause.errors()[:3]:
    msg = err.get("msg", "")
    # Pydantic prefixes model_validator errors with "Value error, "
    if msg.startswith("Value error, "):
        msg = msg[len("Value error, ") :]
    if msg:
        messages.append(msg)

return " | ".join(messages) if messages else None
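The extraction logic quoted above can be exercised without the real validation pipeline by stubbing Pydantic's ValidationError. The stub class and the wrapper function name below are hypothetical:

```python
class FakeValidationError:
    """Stub standing in for pydantic.ValidationError's errors() interface."""

    def __init__(self, errs):
        self._errs = errs

    def errors(self):
        return self._errs


def extract_messages(cause):
    # Mirrors the snippet above: keep at most three messages, strip the
    # "Value error, " prefix Pydantic adds to model_validator errors,
    # and drop empty messages.
    messages = []
    for err in cause.errors()[:3]:
        msg = err.get("msg", "")
        if msg.startswith("Value error, "):
            msg = msg[len("Value error, ") :]
        if msg:
            messages.append(msg)
    return " | ".join(messages) if messages else None


cause = FakeValidationError(
    [
        {"msg": "Value error, Unknown field 'adhoc_filters' -- did you mean 'filters'?"},
        {"msg": ""},  # empty messages are skipped
    ]
)
print(extract_messages(cause))  # → Unknown field 'adhoc_filters' -- did you mean 'filters'?
```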