
From Conceptual Quality Gates to Automated Enforcement
Part 3 of our AI Technical Debt series covered CI/CD quality gates as a conceptual framework: block merges on security findings, require test coverage thresholds, enforce technical debt ratios. But concepts need implementation.
Claude Code hooks are that implementation layer. Released in early 2026, hooks are user-defined commands, prompts, or agents that execute automatically at specific points in Claude Code's lifecycle. They transform best-practice guidelines into enforced rules that run every time Claude touches your codebase.
This post covers what hooks are, the three handler types, practical production patterns, and how they connect to the governance framework described in our thread-based engineering guide.
Hook Lifecycle Flow
Hooks fire at four key points in every tool use cycle:
- PreToolUse: evaluate before the action
- Tool execution: Claude performs the action
- PostToolUse: check results, auto-format
- Stop: final quality validation
Key point: PreToolUse is the only hook that can block actions. Use it for security gates, file protection, and mandatory review enforcement.
Understanding the Hook Architecture
The official hooks documentation uses precise terminology for three levels:
- Hook Event: The lifecycle point where hooks can fire (PreToolUse, PostToolUse, Stop, etc.)
- Matcher Group: A regex filter that determines which tool uses trigger the hook
- Hook Handler: The command, prompt, or agent that runs when the matcher matches
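For illustration, a matcher group is just a regular expression tested against tool names. A quick sketch (full-name anchoring is an assumption here; check the hooks documentation for the exact matching semantics):

```javascript
// A matcher group like "Edit|Write" behaves as a regex over tool names.
// Anchoring to a full-name match is an assumption for illustration.
const matcher = new RegExp('^(Edit|Write)$')

const tools = ['Edit', 'Write', 'Bash', 'Read']
const matched = tools.filter(t => matcher.test(t))
console.log(matched) // [ 'Edit', 'Write' ]
```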
The 12 Lifecycle Events
Claude Code provides 12 hook events covering the full agent lifecycle. The five you will use most often:
| Event | When It Fires | Can Block? | Primary Use Case |
|---|---|---|---|
| PreToolUse | Before any tool execution | Yes | Security gates, file protection, command blocking |
| PostToolUse | After tool execution completes | No | Auto-formatting, linting, logging |
| Notification | When Claude sends a notification | No | Slack alerts, email triggers, monitoring |
| Stop | When Claude finishes responding | No | Final quality checks, summary generation |
| SubagentStop | When a subagent completes | No | Subagent output validation, coordination |
The PreToolUse event is the most powerful because it can approve or deny the pending action. If your hook returns a deny signal, Claude cannot proceed with that tool use. This makes PreToolUse the enforcement mechanism for security policies, file protection rules, and mandatory review gates.
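The approve/deny decision can be sketched as a pure function. This is an illustration only; the field names (`toolName`, `command`) are assumptions, not a documented schema:

```javascript
// Sketch of a PreToolUse decision as a pure function. Field names
// (toolName, command) are illustrative, not a documented schema.
function decide(event) {
  // Deny obviously destructive shell commands; approve everything else.
  if (event.toolName === 'Bash' && /rm -rf/.test(event.command || '')) {
    return { allow: false, reason: 'destructive shell command' }
  }
  return { allow: true }
}

// A real command hook would read the event JSON from stdin and translate
// the decision into an exit code: process.exit(decide(event).allow ? 0 : 1)
```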
Three Handler Types
| Handler | Complexity | How it works | Best for |
|---|---|---|---|
| Command | Simple | Runs shell commands; receives JSON on stdin, returns pass/fail via exit code | Formatting, linting, file checks |
| Prompt | Moderate | Sends prompt + $ARGUMENTS to the model for a single-turn LLM check | Security classification, pattern review |
| Agent | Advanced | Spawns a subagent with Read, Grep, and Glob tool access to analyze the codebase | Cross-file verification, deep analysis |
Start simple: Begin with Command hooks for formatting, graduate to Prompt hooks for security, then Agent hooks for deep verification.
Claude Code supports three distinct handler types, each suited to different verification needs:
1. Command Hooks (type: "command")
Shell commands that receive the event's JSON input on stdin and communicate results through exit codes and stdout. These are the most straightforward: run a script, get a pass/fail result.
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "handler": {
          "type": "command",
          "command": "npx prettier --write $FILEPATH"
        }
      }
    ]
  }
}
```
This runs Prettier on every file Claude edits or writes, enforcing formatting automatically.
2. Prompt Hooks (type: "prompt")
Send a prompt to a Claude model for single-turn evaluation. Use the $ARGUMENTS placeholder to inject the hook's JSON input data into your prompt text. The model evaluates and returns a decision.
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit",
        "handler": {
          "type": "prompt",
          "prompt": "Review this edit: $ARGUMENTS. If it modifies auth or payments, DENY. Otherwise APPROVE."
        }
      }
    ]
  }
}
```
Prompt hooks enable intelligent verification without writing shell scripts. The LLM evaluates context-dependent conditions that would be difficult to express as regex patterns or exit codes.
3. Agent Hooks (type: "agent")
Spawn a subagent with access to tools like Read, Grep, and Glob to verify conditions before returning a decision. This is the most sophisticated handler type, enabling deep codebase analysis before approving or denying an action.
Agent hooks are particularly useful for checks that require understanding multiple files or project context, such as verifying that a new API endpoint follows the existing authentication pattern or that a database migration includes required RLS policies.
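The post shows no agent hook configuration. As a sketch only, assuming the `agent` handler accepts a `prompt` field the way the `prompt` type does (the exact schema should be checked against the hooks documentation), a pattern-verification hook might look like:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit",
        "handler": {
          "type": "agent",
          "prompt": "Verify that this edit follows the project's existing authentication pattern before approving: $ARGUMENTS"
        }
      }
    ]
  }
}
```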
Production Patterns
Pattern 1: Auto-Format on Every Edit
The simplest and most impactful hook. Every file Claude modifies gets automatically formatted:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "handler": {
          "type": "command",
          "command": "npx prettier --write $FILEPATH"
        }
      }
    ]
  }
}
```
You can chain ESLint by changing the command to `npx prettier --write $FILEPATH && npx eslint --fix $FILEPATH`.
Why this matters: Formatting inconsistencies are one of the top sources of noise in AI-generated code. Auto-formatting eliminates them before you even see the output.
Pattern 2: Protect Critical Files
Block Claude from editing production-critical files without explicit approval:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit",
        "handler": {
          "type": "command",
          "command": "node scripts/block-critical.js"
        }
      }
    ]
  }
}
```
The block-critical.js script reads JSON from stdin and checks the file path against a blocklist:
```javascript
// scripts/block-critical.js
const fs = require('fs')
const data = JSON.parse(fs.readFileSync('/dev/stdin', 'utf8'))

const blocked = ['src/middleware.ts', 'src/app/api/chat/route.ts', '.env']
if (blocked.some(f => data.filePath?.includes(f))) {
  console.error('BLOCKED: requires manual editing')
  process.exit(1)
}
```
This prevents Claude from modifying your middleware, AI chat route, or environment files without you explicitly overriding the protection. These are files where a small change can have outsized production impact.
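One subtlety worth knowing: the blocklist uses substring matching, so an entry like `.env` also covers `.env.local`. The check, factored into a pure function so it can be tested outside the hook:

```javascript
// Same check as scripts/block-critical.js, as a pure function.
// Substring matching means '.env' also blocks '.env.local'.
const BLOCKED = ['src/middleware.ts', 'src/app/api/chat/route.ts', '.env']

function isProtected(filePath, blocked = BLOCKED) {
  return blocked.some(f => (filePath || '').includes(f))
}
```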
Pattern 3: Security-Aware Dependency Changes
Block new dependency installations without review (directly addressing the slopsquatting threat):
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "handler": {
          "type": "command",
          "command": "node scripts/block-deps.js"
        }
      }
    ]
  }
}
```
The block-deps.js script checks for production dependency installs:
```javascript
// scripts/block-deps.js
const fs = require('fs')
const data = JSON.parse(fs.readFileSync('/dev/stdin', 'utf8'))

const cmd = data.command || ''
const isInstall = cmd.match(/npm install|yarn add|pip install|pnpm add/)
if (isInstall && !cmd.includes('--save-dev')) {
  console.error('BLOCKED: Production deps require approval.')
  process.exit(1)
}
```
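The same install-detection logic as a pure function, handy for unit-testing before wiring it into the hook. Note that `--save-dev` is the only escape hatch here; shorthand flags like `-D` or pip-specific conventions would still be blocked and need their own handling:

```javascript
// Install-detection logic from scripts/block-deps.js, factored for testing.
function isBlockedInstall(cmd) {
  const isInstall = /npm install|yarn add|pip install|pnpm add/.test(cmd || '')
  // '--save-dev' is the only allowed escape hatch; '-D' would still block.
  return isInstall && !cmd.includes('--save-dev')
}
```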
Pattern 4: Type Check After Edits
Run TypeScript type checking after every file modification to catch type errors immediately. One caveat: piping through `head` means the hook's exit code comes from `head`, not `tsc`; since PostToolUse cannot block anyway, the point is to surface the first few errors right after the edit:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "handler": {
          "type": "command",
          "command": "npx tsc --noEmit 2>&1 | head -20"
        }
      }
    ]
  }
}
```
Pattern 5: Prompt-Based Security Review
Use a prompt hook to evaluate whether edits touch sensitive areas:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit",
        "handler": {
          "type": "prompt",
          "prompt": "Analyze edit: $ARGUMENTS. Check for: auth, DB, rate limits, payments, secrets. If ANY affected, DENY. Otherwise APPROVE."
        }
      }
    ]
  }
}
```
This delegates security classification to the LLM itself, which can understand semantic meaning better than regex patterns.
Pattern 6: Notification on Completion
Send a notification when Claude finishes a task, useful for long-running operations:
```json
{
  "hooks": {
    "Stop": [
      {
        "handler": {
          "type": "command",
          "command": "osascript -e 'display notification \"Task complete\" with title \"Claude Code\"'"
        }
      }
    ]
  }
}
```
CI/CD Integration
GitHub Actions with Claude Code
Claude Code supports GitHub Actions integration for automated quality gates on pull requests. The pattern:
- A PR is opened with AI-generated changes
- GitHub Actions triggers Claude Code in CI mode
- Hooks run quality gates (lint, type check, security scan, test suite)
- Results are posted as PR comments
- Merge is blocked until all gates pass
This ensures AI-generated code meets the same standards as human-written code before entering the main branch. The quality gates described in Part 3 (Snyk for dependencies, SonarQube for code quality, GitGuardian for secrets) run as hooks in the CI pipeline.
GitLab CI Integration
Claude Code also supports GitLab CI/CD integration for teams on the GitLab platform. The configuration mirrors GitHub Actions, with hooks running at merge request time.
| Integration | Trigger | Hook Pattern | Output |
|---|---|---|---|
| GitHub Actions | PR opened/updated | PostToolUse: lint + type check | PR comment with results |
| GitLab CI | MR opened/updated | PostToolUse: lint + security scan | MR comment with results |
| Pre-commit (local) | git commit | Stop: full quality suite | Pass/fail with details |
| Scheduled | Cron/manual trigger | Agent: codebase audit | Report to dashboard |
How Hooks Compare Across AI Coding Tools
Claude Code is not the only tool with hook support. Here is how the landscape looks in early 2026:
Claude Code
- Three handler types: Command, Prompt, Agent
- 12 lifecycle events including PreToolUse (can block actions)
- Subagent integration: Agent hooks spawn verification subagents with tool access
- Configuration: `.claude/settings.json` at project or user level
Cursor (v1.7+)
Cursor introduced hooks in version 1.7 (October 2025), with lifecycle events including beforeShellExecution, beforeMCPExecution, beforeReadFile, afterFileEdit, and stop. Hooks are configured via JSON and executed as standalone processes. In January 2026, Cursor shipped hook execution that is 10-20x faster.
GitHub Copilot
GitHub Copilot supports hooks stored in .github/hooks/*.json. The preToolUse hook is the most powerful, capable of approving or denying tool executions for security enforcement and compliance logging.
| Feature | Claude Code | Cursor | GitHub Copilot |
|---|---|---|---|
| Handler types | Command, Prompt, Agent | Command | Command |
| Can block actions | Yes (PreToolUse) | Yes (before* events) | Yes (preToolUse) |
| LLM-based hooks | Yes (Prompt type) | No | No |
| Subagent hooks | Yes (Agent type) | No | No |
| CI/CD integration | GitHub Actions, GitLab CI | Limited | GitHub Actions (native) |
| Config location | .claude/settings.json | .cursor/hooks/ | .github/hooks/*.json |
The key differentiator for Claude Code is handler diversity. Command hooks handle straightforward checks. Prompt hooks handle semantic evaluation. Agent hooks handle deep analysis requiring tool access. This three-tier system maps naturally to different quality gate requirements.
Connecting Hooks to the Governance Framework
The Governance Loop
Hooks connect standards to enforcement to measurement: CLAUDE.md defines coding standards and constraints, hooks enforce them at every tool use event, the governance framework sets human review points, the same hooks run in the merge pipeline, and metrics (churn, failure rate, security findings) measure the result.
The full loop: Standards → Enforcement → Governance → Automation → Measurement → Optimization. Each component strengthens the others.
Hooks are the technical implementation of thread-based engineering checkpoints. Here is how they map:
Thread-Based Engineering Says...
"Human review is required for authentication changes, database migrations, and payment logic."
Hooks Enforce It...
A PreToolUse prompt hook evaluates every edit against these categories and blocks changes to sensitive areas until a human reviews them. The governance policy becomes an automated gate.
The Full Loop
- CLAUDE.md (Part 3) defines the coding standards and constraints
- Hooks enforce those standards at every tool use event
- Thread-based engineering (Part 2) provides the governance framework for when human review is needed
- CI/CD integration runs the same hooks in the merge pipeline
- Metrics (code churn, change failure rate, security findings) measure the result
Without hooks, CLAUDE.md is advisory. Claude follows it most of the time, but there is no enforcement mechanism. With hooks, every rule becomes a gate that cannot be bypassed.
Practical Setup Guide
Step 1: Start with PostToolUse Formatting
The lowest-risk, highest-impact hook. Configure auto-formatting on every file edit:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "handler": {
          "type": "command",
          "command": "npx prettier --write $FILEPATH"
        }
      }
    ]
  }
}
```
Step 2: Add PreToolUse File Protection
Identify your critical files and block AI edits to them:
- Middleware (`src/middleware.ts`)
- Auth routes
- Payment processing
- Environment files
- Database connection configuration
Step 3: Add Dependency Guards
Block production dependency installations without approval (prevents slopsquatting):
- Match `Bash` tool use
- Filter for `npm install`, `yarn add`, `pip install` commands
- Block unless explicitly approved
Step 4: Add Prompt Hooks for Semantic Review
Once you are comfortable with command hooks, add prompt hooks for context-dependent checks:
- Security classification of edits
- Architecture pattern verification
- API design review
Step 5: Integrate with CI/CD
Connect hooks to your merge pipeline so the same quality standards apply in both local development and automated review. For teams formalizing this beyond hooks, the broader team collaboration framework covers how testing, deployment, and review responsibilities map to roles at scale.
What This Means for AI Technical Debt Prevention
The AI technical debt crisis exists because AI-generated code enters production without sufficient quality gates. Hooks close that gap by making quality enforcement automatic and consistent.
Consider the three critical failure modes from our series:
- 66% productivity tax (code that is "almost right"): PostToolUse hooks catch formatting, type errors, and lint violations immediately, reducing the "almost right" problem at generation time.
- 41% code churn (code revised within 2 weeks): PreToolUse prompt hooks that verify architectural patterns prevent code that will need refactoring later.
- 45% vulnerability rate: PreToolUse security hooks block edits to sensitive areas without review, and PostToolUse hooks run security scans on every change.
Hooks do not replace human judgment. They automate the repeatable checks so human reviewers can focus on the nuanced decisions: architecture choices, business logic correctness, and system design.
Conclusion: Enforcement Over Documentation
The gap between knowing what quality standards to follow and actually enforcing them is where AI technical debt accumulates. CLAUDE.md defines the standards. Thread-based engineering defines the governance. Hooks close the loop by making enforcement automatic.
Start with formatting hooks (zero risk, immediate impact), add file protection for your critical paths, then layer in prompt hooks for semantic security review. Each hook you add converts a manual review step into an automated gate, freeing human reviewers to focus on the decisions that actually require judgment.
Ready to implement automated quality enforcement in your AI development workflow?
- Full-Stack AI Development - Hooks and governance built into every project
- Contact Us - Let us help you configure production-grade Claude Code hooks
