Why blocklists can't protect your Claude Code hooks
A user filed #40408 this week describing what happened when their model hit a deadlock. They had built 22 regex patterns across 3 layers blocking sed, awk, inline python, echo redirects, cat heredocs, tee, touch, rm, cp, mv, and more. The model respected all of them. Then it needed to fix a bug in its own safety system, and it used perl -i -pe.
Perl was not on the list.
The model did not try to bypass the rules. It genuinely needed to edit a file, and it found a tool that could do the job. The fact that this tool happened to be unblocked was incidental. The model does not think about security policies the way a human attacker would. It thinks about completing tasks.
This distinction matters because it changes what the problem actually is. The problem is not “the model is adversarial.” The problem is that Unix has too many ways to write files for any blocklist to cover them all.
What a blocklist has to catch
Here is a partial list of commands that can modify files, beyond the usual sed, awk, and shell redirects:
```
perl -i -pe        # in-place edit, identical to sed -i
perl -e '...'      # arbitrary file I/O via open()
ruby -e '...'      # same via File.write
node -e '...'      # same via fs.writeFileSync
lua -e '...'       # same via io.open
php -r '...'       # same via file_put_contents
busybox sed        # aliased tools
xxd + redirect     # binary writes
dd of=file         # block copy
install src dst    # copy with permissions
rsync src dst      # same
scp, sftp          # remote-to-local writes
socat              # network-to-file
expect             # scriptable terminal, can drive any editor
screen -X stuff    # inject keystrokes into a running editor
```
Each of these can be blocked individually. Each new block is another regex that has to handle flag ordering, quoting, escaping, variable expansion, subshells, and aliases. The surface keeps growing. The attacker (or in this case, the helpful model) only needs to find one gap.
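To make the gap concrete, here is a minimal sketch of a regex blocklist. The patterns are hypothetical, loosely modeled on the kind of rules described in #40408, not the actual hook's rules; the point is that a list tuned for sed, awk, and redirects passes perl untouched:

```python
import re

# Hypothetical blocklist patterns -- illustrative, not the rules from #40408.
BLOCKLIST = [
    r"\bsed\b.*\s-i\b",      # in-place sed
    r"\bawk\b",              # awk can write via print > "file"
    r"\bpython3?\s+-c\b",    # inline python
    r">{1,2}\s*\S",          # shell redirects (>, >>)
    r"\btee\b",
    r"\brm\b|\bmv\b|\bcp\b|\btouch\b",
]

def is_blocked(command: str) -> bool:
    """Return True if any blocklist pattern matches the command string."""
    return any(re.search(p, command) for p in BLOCKLIST)

print(is_blocked("sed -i 's/a/b/' config.yml"))       # True: caught
print(is_blocked("perl -i -pe 's/a/b/' config.yml"))  # False: same edit, not caught
```

Both commands perform the identical in-place edit; only the first matches a pattern.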
This is the blocklist problem from security engineering, applied to AI tool use. It is well-understood in other domains. Web application firewalls learned this lesson decades ago: blocking known-bad inputs is always one step behind.
What the alternatives look like
Allowlists. Instead of blocking dangerous commands, only permit known-safe ones. If the hook only allows git, npm, cargo, and python3 -m pytest, then perl is blocked by default. The downside is that every legitimate new command requires an update. This trades false negatives (missed attacks) for false positives (blocked legitimate work).
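A minimal allowlist check can be sketched in a few lines (illustrative command set, and deliberately ignoring pipes, `&&`, `;`, and subshells, which a real hook must parse):

```python
import shlex

# Illustrative allowlist: only these argv[0] values are permitted.
ALLOWED = {"git", "npm", "cargo", "python3"}

def is_allowed(command: str) -> bool:
    """Permit a command only if its first word is on the allowlist.
    Everything unrecognized -- including perl -- is denied by default."""
    try:
        argv = shlex.split(command)
    except ValueError:
        return False  # unparseable quoting: reject by default
    return bool(argv) and argv[0] in ALLOWED

print(is_allowed("git status"))              # True
print(is_allowed("perl -i -pe 's/a/b/' f"))  # False: default-deny
```

The default-deny posture is what closes the perl gap; the cost, as noted above, is that every new legitimate tool needs an allowlist entry.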
Filesystem monitoring. Instead of intercepting the command, watch the files. inotifywait, fswatch, or macOS FSEvents can detect writes to protected paths regardless of which tool made them. The hook does not need to know about perl. It just needs to know that /etc/passwd changed. The downside is latency: the write happens before the detection.
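Kernel-level watchers like inotifywait or fswatch are the right tools here; as a toy stand-in, a stdlib polling sketch shows the idea (and the latency problem: the write has already happened by the time it is detected):

```python
import os

class MtimeWatch:
    """Track a protected file's mtime. changed() reports writes made
    by any tool -- sed, perl, dd, it makes no difference which.
    Real monitors use inotify/FSEvents instead of polling."""
    def __init__(self, path: str):
        self.path = path
        self.last = os.stat(path).st_mtime_ns

    def changed(self) -> bool:
        now = os.stat(self.path).st_mtime_ns
        if now != self.last:
            self.last = now
            return True
        return False
```

Tool-agnosticism is the whole appeal: the watcher never needs to learn that perl exists.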
Sandboxing. Let the model run any command, but inside a container or namespace that restricts which paths are writable. Claude Code’s built-in sandbox does some of this, but #40213 and #39987 document gaps in the current implementation.
Hybrid. Block the obvious dangerous commands (rm -rf, git push --force) with a blocklist, and use filesystem monitoring or sandboxing for everything else. Accept that the blocklist is a speed bump, not a wall.
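The hybrid policy amounts to a short dispatch function: a small, high-confidence blocklist for fast rejection of obvious disasters, with everything else allowed through and handled by the lower layers. A sketch (hypothetical patterns):

```python
import re

# Short, high-confidence blocklist: only the obvious disasters.
FATAL_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b",  # rm -rf variants
    r"git\s+push\b.*--force",
]

def decide(command: str) -> str:
    """Return 'block' for known-fatal commands, else 'allow-and-log'.
    The blocklist is a speed bump; real enforcement (sandbox,
    read-only mounts, file monitoring) lives below this layer."""
    for p in FATAL_PATTERNS:
        if re.search(p, command):
            return "block"
    return "allow-and-log"

print(decide("rm -rf /tmp/build"))           # block
print(decide("git push origin main --force"))  # block
print(decide("perl -i -pe 's/a/b/' f"))      # allow-and-log: monitored, not blocked
```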
What a blocklist can realistically do
After #40408, we added perl and ruby detection to our bash-guard hook. But we also documented in the README that this is a blocklist and will always have gaps.
Blocklist hooks reduce risk. They catch the common cases. They prevent accidents. They turn “the model deleted my .env” into a blocked tool call with a clear log entry. For most users, that is enough.
But they cannot prevent a sufficiently creative model from finding an alternative, because Unix was designed to give users many ways to accomplish the same thing. If your threat model requires that the model absolutely cannot write to certain files, you need something below the hook layer: a sandbox, a read-only filesystem, or a container. Hooks are application-layer controls, and application-layer controls can be routed around.
The deeper question
#40408 also raises a design question that the Claude Code team will eventually need to answer: should the permission model be a blocklist (deny known-bad operations) or an allowlist (permit known-good operations)?
The current system mixes both. settings.json supports allow and deny rules for tools and paths. But the Bash tool is a universal escape hatch: any command the user can run, the model can run. Path deny rules do not apply to Bash. Permission caching can bypass allowlist updates. And the model can simply ignore rules it finds inconvenient.
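For reference, the mixed allow/deny shape looks roughly like this in settings.json. These rules are illustrative; check the Claude Code permissions documentation for the exact pattern syntax:

```json
{
  "permissions": {
    "allow": ["Bash(git diff:*)", "Bash(npm run test:*)"],
    "deny": ["Read(./.env)", "Bash(curl:*)"]
  }
}
```

Note that the Read deny rule above constrains the Read tool, not Bash; a `cat .env` through the Bash tool is exactly the kind of path the deny rule does not reach.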
Hooks exist in this gap. They are the best tool available today for enforcing constraints the built-in system does not cover. But they inherit the fundamental limitations of the layer they operate in. A PreToolUse hook can inspect and block tool calls. It cannot prevent the model from finding a tool call that achieves the same result through a path the hook does not check.
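For context, a PreToolUse hook is typically a script that reads the tool call as JSON on stdin and signals a block via its exit code (exit 2 blocks the call and feeds stderr back to the model, per current Claude Code hook conventions). A stripped-down sketch with illustrative patterns:

```python
#!/usr/bin/env python3
"""Stripped-down PreToolUse hook sketch. Reads the tool call as JSON
on stdin; exit code 2 blocks the call and surfaces stderr to the model."""
import json
import re
import sys

BLOCKED = [r"\bperl\s+.*-i\b", r"\bsed\s+.*-i\b"]  # illustrative, not complete

def verdict(event):
    """Return a rejection message if the Bash command matches, else None."""
    if event.get("tool_name") != "Bash":
        return None
    cmd = event.get("tool_input", {}).get("command", "")
    for p in BLOCKED:
        if re.search(p, cmd):
            return f"blocked: matched {p}"
    return None

if __name__ == "__main__":
    msg = verdict(json.load(sys.stdin))
    if msg:
        print(msg, file=sys.stderr)
        sys.exit(2)  # non-zero exit 2 = block this tool call
```

The hook sees exactly one tool call at a time, which is the limitation described above: it can veto what it inspects, but it cannot reason about the other ways the same write could happen.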
This is not a reason to stop using hooks. It is a reason to be honest about what they can and cannot do.