Filter Test Output with a PreToolUse Hook to Cut Token Costs
Running pytest or npm test can dump thousands of lines into Claude's context, and most of that output is passing tests you don't care about. A PreToolUse hook can intercept test commands and pipe their output through a filter before Claude ever sees the results. Register the hook in your Claude Code settings file (for example, ~/.claude/settings.json):
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/filter-test-output.sh"
          }
        ]
      }
    ]
  }
}
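Before the tool call runs, the hook receives a JSON payload on stdin describing it. For a Bash call, the part this script cares about looks roughly like this (shape per the Claude Code hooks documentation; other fields such as session metadata are omitted here):

```json
{
  "tool_name": "Bash",
  "tool_input": {
    "command": "pytest tests/"
  }
}
```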
The hook script checks if the command is a test runner and rewrites it to only show failures:
#!/bin/bash
input=$(cat)
cmd=$(jq -r '.tool_input.command // empty' <<<"$input")

# Keep the regex in a variable: an unquoted space inside [[ =~ ]]
# is a syntax error in bash
test_re='^(npm test|pytest|go test)'

if [[ "$cmd" =~ $test_re ]]; then
  filtered_cmd="$cmd 2>&1 | grep -A 5 -E '(FAIL|ERROR|error:)' | head -100"
  # Build the response with jq so quotes in the command can't break the JSON
  jq -cn --arg c "$filtered_cmd" \
    '{hookSpecificOutput: {hookEventName: "PreToolUse", permissionDecision: "allow", updatedInput: {command: $c}}}'
else
  echo "{}"  # no opinion: the command runs unmodified
fi
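You can sanity-check the hook by hand before wiring it into your settings, by piping a fake payload through it. The sketch below inlines a copy of the script into a temp file (assuming jq is installed) so it runs anywhere; in practice the script lives at ~/.claude/hooks/filter-test-output.sh:

```shell
#!/bin/bash
# Write the hook to a temp file so this demo is self-contained
hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/bin/bash
input=$(cat)
cmd=$(jq -r '.tool_input.command // empty' <<<"$input")
test_re='^(npm test|pytest|go test)'
if [[ "$cmd" =~ $test_re ]]; then
  filtered_cmd="$cmd 2>&1 | grep -A 5 -E '(FAIL|ERROR|error:)' | head -100"
  jq -cn --arg c "$filtered_cmd" \
    '{hookSpecificOutput: {hookEventName: "PreToolUse", permissionDecision: "allow", updatedInput: {command: $c}}}'
else
  echo "{}"
fi
EOF
chmod +x "$hook"

# A test command gets rewritten to show only failures
echo '{"tool_input":{"command":"pytest tests/"}}' | "$hook"
# → JSON with updatedInput containing the grep-filtered command

# Any other command passes through untouched
echo '{"tool_input":{"command":"ls -la"}}' | "$hook"
# → {}
```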
This can reduce context from tens of thousands of tokens to a few hundred. Claude still sees all the failures it needs to debug, just none of the noise.
Less noise in, fewer tokens burned, same debugging power.