Commits (22)
83b22f6
fix 4x exception logging
codeconsole Apr 8, 2026
e85c094
test: update StackTraceFiltererSpec
matrei Apr 9, 2026
ce60438
test: update StackTraceFiltererSpec
matrei Apr 9, 2026
d1ba16d
Merge branch '7.1.x' into 7.1.x-stop-4x-exceptionlogging
codeconsole Apr 11, 2026
90e375e
Merge branch '7.1.x-stop-4x-exceptionlogging' of https://github.com/c…
codeconsole Apr 11, 2026
024b9c8
filterer no longer logs; exception logging is the resolver's job
codeconsole Apr 11, 2026
7678cf4
add grails.exceptionresolver.logFullStackTrace opt-in for pre-filter …
codeconsole Apr 11, 2026
2963cd9
document grails.exceptionresolver.logFullStackTrace
codeconsole Apr 11, 2026
c68360f
Merge branch '7.1.x' into 7.1.x-stop-4x-exceptionlogging
codeconsole Apr 16, 2026
d5d70a6
Introduce mechanism for resolving user id with stack traces
codeconsole Apr 16, 2026
cbf9f70
add mechanism for logging ip addresses associated with stack traces
codeconsole Apr 16, 2026
c67f460
fix for ip logging tests
codeconsole Apr 16, 2026
9c7a7a3
fix spec
codeconsole Apr 16, 2026
870f9a3
Merge branch '7.1.x' into 7.1.x-stop-4x-exceptionlogging
codeconsole Apr 22, 2026
e4c678a
default logging ip address to false
codeconsole Apr 27, 2026
2ba566d
Merge branch '7.1.x' into 7.1.x-stop-4x-exceptionlogging
codeconsole Apr 30, 2026
b999949
default logRemoteAddr to false
codeconsole May 1, 2026
627a1e9
add grails.exceptionresolver.logFullStackTraceOnFilter
codeconsole May 8, 2026
1583d59
Merge branch '7.1.x' into 7.1.x-stop-4x-exceptionlogging
codeconsole May 8, 2026
92eae8b
grailsSpringSecurityVersion 7.0.3-SNAPSHOT
codeconsole May 15, 2026
f5d28ec
simplify AuditorAwareLookup.resolve() return type
codeconsole May 15, 2026
e4b96a8
disable logAuditor by default and cache shouldLog* config reads
codeconsole May 15, 2026
@@ -81,7 +81,7 @@ public Throwable filter(Throwable source, boolean recursive) {
if (recursive) {
Contributor:
I noticed that filter(source) is called again after the cause chain loop. Since the loop already filters the source throwable (when current == source), is the second call on line 89 necessary?

Contributor (author):
@sanjana2505006 that's true of the old code, but after this PR the early return makes the two calls mutually exclusive.
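The mutual exclusivity described above can be sketched as follows (hypothetical stand-in names, not the actual Grails source):

```java
// Minimal sketch: with the early return in the recursive branch, the
// per-cause loop and the trailing single-throwable call can never both
// run for the same invocation, so filter() fires once per chain link.
public class FilterSketch {
    static int filterCalls = 0;

    static Throwable filter(Throwable source, boolean recursive) {
        if (recursive) {
            Throwable current = source;
            while (current != null) {
                filter(current);            // filters each link, incl. source
                current = current.getCause();
            }
            return source;                  // early return: no second filter(source)
        }
        return filter(source);
    }

    static Throwable filter(Throwable source) {
        filterCalls++;                      // stand-in for the real sanitizing work
        return source;
    }

    public static void main(String[] args) {
        Throwable chain = new RuntimeException("outer",
                new IllegalStateException("cause"));
        filter(chain, true);
        // source + one cause => exactly 2 filter() calls, not 3
        System.out.println(filterCalls);
    }
}
```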

Throwable current = source;
while (current != null) {
current = filter(current);
doFilter(current);
current = current.getCause();
}
}
@@ -90,27 +90,38 @@ public Throwable filter(Throwable source, boolean recursive) {

public Throwable filter(Throwable source) {
if (shouldFilter) {
StackTraceElement[] trace = source.getStackTrace();
List<StackTraceElement> newTrace = filterTraceWithCutOff(trace, cutOffPackage);

if (newTrace.isEmpty()) {
// filter with no cut-off so at least there is some trace
newTrace = filterTraceWithCutOff(trace, null);
}

// Only trim the trace if there was some application trace on the stack
// if not we will just skip sanitizing and leave it as is
if (!newTrace.isEmpty()) {
// We don't want to lose anything, so log it
boolean modified = doFilter(source);
if (modified) {
// Log the full stack trace once for the top-level exception (includes causes)
STACK_LOG.error(FULL_STACK_TRACE_MESSAGE, source);
Contributor:

Isn't this a different meaning now? Before, it would have been the full stack trace, while now it's the filtered one. Wouldn't a better solution be to set the logging to trace level for these detailed stack traces? Instead of this change, why not disable STACK_LOG if you don't want this output?
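For reference, the disable-the-logger route mentioned here is possible without code changes: Grails writes these entries through a logger named `StackTrace`, so (assuming a standard logback setup) it can be silenced in `logback.xml`:

```xml
<!-- Assumes the default Grails logback configuration; "StackTrace" is the
     logger name these full-stack-trace entries are emitted through. -->
<logger name="StackTrace" level="OFF" />
```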

Contributor (author):
@jdaugherty sensible defaults. Logging a stack trace 4x that has no useful information (especially with the Groovy 4 clutter) quickly leads to a scenario where you run out of disk space fast.

I have yet to see an example where the 4x logging provides anything useful. Do you have one?

Contributor:

We discussed this and we think this should stay with a configuration setting.

Contributor (author):

Consulted AI and it thinks it is a really bad idea:

can you just introduce another config value and default it to false?

It would work, but the mess comes back for opt-in users. Let me trace it.

Setup

  • Keep logFullStackTrace where it is (resolver emits once per request exception when on).
  • Add logFullStackTraceOnFilter (default false). When true, filter(Throwable) emits to STACK_LOG as it did pre-PR.

Output matrix for a 3-cause request exception

| logFullStackTrace | logFullStackTraceOnFilter | STACK_LOG entries | Resolver entries | Total |
|---|---|---|---|---|
| false (default) | false (default) | 0 | 1 | 1 |
| true | false | 1 | 1 | 2 |
| false | true | 3 (one per cause via filter loop) | 1 | 4 |
| true | true | 4 (1 from resolver + 3 from filter loop) | 1 | 5 |

The bottom two rows are exactly the per-cause duplication the PR fixes. The user who opts into logFullStackTraceOnFilter for
non-resolver coverage pays for it with 3+ STACK_LOG entries every time a request exception happens.

Non-resolver sanitize path

| logFullStackTraceOnFilter | STACK_LOG entries from one sanitizeRootCause(ex) call |
|---|---|
| false | 0 |
| true | 1 |

This is the only clean win. But it only kicks in when the flag is on, at which point the user is also paying the 4-entry penalty on
every request exception.

Plus the layering problem

DefaultStackTraceFilterer lives in grails-bootstrap, which doesn't depend on grails-core where Config lives. Options:

  1. Pull Config into bootstrap — module cycle.
  2. Setter on the filterer — resolver reads the flag and calls filterer.setLogFullStackTraceOnFilter(flag) at construction.
    Every other caller (new DefaultStackTraceFilterer() in TestStacktraceSanitizer, user subclasses, plugin code) defaults to
    false because nobody wires the flag for them.
  3. System property (Boolean.getBoolean("grails.full.stacktrace.onFilter")) — matches the existing
    SYS_PROP_DISPLAY_FULL_STACKTRACE pattern in this file. Avoids the module coupling. Costs YAML-config discoverability.

Option 3 is the cleanest plumbing-wise but means users opt in via JVM flag rather than application.yml. A bit inconsistent with
the other grails.exceptionresolver.* settings, all of which live in YAML.
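For illustration, the system-property read in option 3 would look roughly like this (the property name is the one proposed above, not an existing Grails flag):

```java
// Sketch of option 3: an opt-in read from a JVM system property,
// mirroring the existing SYS_PROP_DISPLAY_FULL_STACKTRACE pattern.
public class SysPropOptIn {
    static final String PROP = "grails.full.stacktrace.onFilter";

    static boolean shouldLogOnFilter() {
        // Boolean.getBoolean returns true only if the property exists
        // and equals "true" (case-insensitive); absent => false.
        return Boolean.getBoolean(PROP);
    }

    public static void main(String[] args) {
        System.out.println(shouldLogOnFilter()); // property unset => false
        System.setProperty(PROP, "true");
        System.out.println(shouldLogOnFilter()); // opted in => true
    }
}
```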

Cleaner alternative

Document one line in the upgrade guide for teams relying on non-resolver sanitize coverage:

// before: relied on side-effect log
Throwable cleaned = GrailsUtil.sanitizeRootCause(ex);
log.error("job failed", cleaned);

// after: explicit
Throwable cleaned = GrailsUtil.sanitizeRootCause(ex);
DefaultStackTraceFilterer.STACK_LOG.error(StackTraceFilterer.FULL_STACK_TRACE_MESSAGE, cleaned);
log.error("job failed", cleaned);

No new flag. No layering workaround. No duplicate-per-cause regression for users who happen to need non-resolver coverage.

My call

If there is a concrete non-resolver path you want covered, the two-line migration is the better fix. If the project wants blanket pre-PR compatibility, the second flag works, but I'd go the system-property route (option 3) to avoid the layering mess, and I'd document clearly that turning it on without also turning logFullStackTrace off produces N+1 entries per exception.

Honestly though: the second flag is solving the problem by re-exposing the old mechanism. The cleaner fix is the two-line code change at the actual call sites that want the behavior. That's explicit, discoverable at the code site, and doesn't involve tuning two interacting flags.

Contributor:

A config setting preserves both, and we agreed in the weekly that this was the way forward. I am -1 on this without it.

StackTraceElement[] clean = new StackTraceElement[newTrace.size()];
newTrace.toArray(clean);
source.setStackTrace(clean);
}
Copilot AI commented (Apr 8, 2026):

filter(Throwable source) currently calls doFilter(source) (which mutates source via setStackTrace) before logging FULL_STACK_TRACE_MESSAGE. That means the "Full Stack Trace" log is no longer the full/original trace, but the already-filtered trace. To preserve prior behavior and the PR description, log the exception before applying the filtered stack trace (e.g., have doFilter compute/return the cleaned trace without mutating, then apply after logging, or temporarily save/restore the original trace around the log call).
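A minimal sketch of the save-before-mutate ordering Copilot suggests (illustrative filter rule and names, not the actual Grails implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class LogThenFilter {
    public static void main(String[] args) {
        Throwable source = new RuntimeException("boom");

        // 1. Capture the original trace BEFORE any mutation.
        StackTraceElement[] original = source.getStackTrace();

        // 2. ... the real code would log `original` via STACK_LOG here ...

        // 3. Only then compute and apply the filtered trace.
        List<StackTraceElement> kept = new ArrayList<>();
        for (StackTraceElement e : original) {
            if (!e.getClassName().startsWith("jdk.")) { // toy filter rule
                kept.add(e);
            }
        }
        source.setStackTrace(kept.toArray(new StackTraceElement[0]));

        // The mutated trace can only shrink relative to the saved original.
        System.out.println(source.getStackTrace().length <= original.length);
    }
}
```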

Copilot AI commented (Apr 8, 2026):

This change modifies when/how often the StackTrace logger logs (STACK_LOG.error(...)) during recursive filtering, but existing specs (e.g., grails-core/src/test/.../StackTraceFiltererSpec.groovy) only assert the sanitized stack trace contents and don't cover log emission/count. Adding a test that captures the StackTrace logger output and asserts a single "Full Stack Trace" entry for filter(e, true) would help prevent regressions.
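The capture-and-count idea can be sketched with stdlib java.util.logging; the real spec would presumably use Spock with the project's slf4j/logback setup instead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class CountingLogDemo {
    public static void main(String[] args) {
        Logger stackLog = Logger.getLogger("StackTrace");
        stackLog.setUseParentHandlers(false);

        // Collect every record the logger emits so the test can count them.
        List<LogRecord> records = new ArrayList<>();
        stackLog.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { records.add(r); }
            @Override public void flush() { }
            @Override public void close() { }
        });

        // Simulate filtering a 3-cause chain that should log exactly once.
        Throwable e = new RuntimeException("a",
                new RuntimeException("b", new RuntimeException("c")));
        stackLog.log(Level.SEVERE, "Full Stack Trace:", e); // once, at top level

        System.out.println(records.size());
    }
}
```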

}
return source;
}

private boolean doFilter(Throwable source) {
    if (!shouldFilter) {
        return false;
    }
    StackTraceElement[] trace = source.getStackTrace();
    List<StackTraceElement> newTrace = filterTraceWithCutOff(trace, cutOffPackage);

    if (newTrace.isEmpty()) {
        // filter with no cut-off so at least there is some trace
        newTrace = filterTraceWithCutOff(trace, null);
    }

    // Only trim the trace if there was some application trace on the stack;
    // if not we will just skip sanitizing and leave it as is
    if (!newTrace.isEmpty()) {
        StackTraceElement[] clean = new StackTraceElement[newTrace.size()];
        newTrace.toArray(clean);
        source.setStackTrace(clean);
        return true;
    }
    return false;
}

private List<StackTraceElement> filterTraceWithCutOff(StackTraceElement[] trace, String endPackage) {
    List<StackTraceElement> newTrace = new ArrayList<>();
    boolean foundGroovy = false;