
I Automated My Dev Workflow With Three Claude-Powered CLI Tools

Commit messages, PR descriptions, and code explanations are high-friction, low-value work. I built three CLI tools that use Claude to eliminate them: ai-commit (staged diff → 3 conventional commit options → you pick), smart-pr (branch diff → structured PR description), and ai-explain (pipe any code, get an explanation). Here's how they work and why raw fetch beats the SDK for CLI tooling.

developer tools · cli · ai · workflow · engineering · anthropic

I spend a lot of time writing commit messages, describing pull requests, and trying to understand unfamiliar code. All of that is high-friction, low-value work that slows down the actual building. So I built three CLI tools that use Claude to eliminate it. Here's what they do, why I built them, and what building them taught me about AI-native development.

The problem with dev workflow friction

Modern software development has a lot of overhead that isn't actually software development. Writing commit messages. Describing pull requests. Reading code you didn't write (or code you wrote six weeks ago). These tasks take time and mental energy, but they produce no new functionality. They're the friction between the work and the artifact.

There's also a quality problem. Commit messages are where developers go fastest and think least. "Update files." "Fix stuff." "WIP." These messages are useless to anyone reading git history — including future you. PR descriptions are similar: either too sparse to be helpful or so much work to write that they get skipped entirely.

AI changes the economics here. A model that has seen your staged diff can write a conventional commit message faster than you can think of one. It never writes "update files." It reads the code and knows what changed.

ai-commit: staged diff → 3 options → you pick

The workflow is: run ai-commit in any git repo. It stages everything, sends the diff to Claude Haiku, and returns three commit messages in conventional commit format. You select with arrow keys and press enter. The commit is made.

$ ai-commit

🤖 ai-commit — generating commit messages...

Changed:
  app/api/stream/route.ts  | 47 +++
  app/page.tsx             | 132 +++
  package.json             |  3 +

💡 Added streaming SSE endpoint and UI for real-time response comparison.

Choose a commit message: (↑↓ or 1-3, Enter to select)

▶ [1] feat(stream): add SSE endpoint and side-by-side comparison UI
  [2] feat: streaming model comparison with TTFT and token metrics
  [3] feat(api): real-time SSE streaming for multi-model response display
  [c] Write my own...

Three options, not one, because there's usually a tradeoff. The first tends to be the most complete and accurate. The second is shorter and punchier. The third focuses on whatever Claude considers the most important change. You pick the one that fits your team's conventions or your mood.

The implementation is deliberately minimal: a Node.js script, no dependencies beyond Node's built-in readline and child_process. It reads your API key from ~/.config/anthropic/api_key or the environment, diffs your staged changes, sends them to the API with raw fetch, parses the JSON response, and renders the interactive picker with ANSI escape codes.

Two flags matter in practice: --staged (skip the auto-staging, use what's already staged) and --push (commit and push in one step). --dry shows suggestions without committing — useful when you want the message but aren't ready to commit yet.

smart-pr: branch diff → structured PR description

PR descriptions are harder than commit messages because they span multiple commits and need to explain intent, not just changes. smart-pr compares your current branch against main, collects all commits and the full diff, and produces a structured markdown description with four sections: Summary, Changes, Testing, and Notes.

$ smart-pr --save

📝 smart-pr — generating PR description vs main...

Branch: feat/streaming-comparison → main
Commits: 4

──────────────────────────────────────────────────────────
## Summary

Adds a real-time side-by-side model comparison interface with a streaming
SSE backend. Users can run the same prompt against multiple Claude models
simultaneously and see TTFT, total time, and token counts per model.

## Changes
- Added /api/stream route using raw fetch to Anthropic API for SSE streaming
- Built two-panel comparison UI with live streaming into each panel
- Added TTFT measurement by recording timestamp of first event_type: delta
- Exposed input/output token counts from the final message_delta usage event
- Added example prompts and temperature/model controls

## Testing
- Test with the default example prompts to verify both panels stream
- Verify TTFT and token counts update after completion
- Test with a long prompt (e.g. code generation) to verify truncation handling
- Check mobile layout (panels should stack)

## Notes
None.
──────────────────────────────────────────────────────────

✅ Saved to /tmp/pr-description.md

The key design decision: Claude writes it as if it were the engineer who made the changes. Not a neutral summary — an explanation from the person who knows why each decision was made. This makes the PR description actually useful for code review, not just a changelog.

--copy pipes to xclip or pbcopy for instant clipboard use. --save writes to /tmp/pr-description.md. On platforms without clipboard support, it prints to stdout — you can paste from there.

ai-explain: pipe any code, get an explanation

The third tool is the simplest: you pipe code or point it at a file, and it explains what the code does. Three modes:

  • Default: "What it does", "How it works", "Key concepts" — full explanation
  • --brief: One paragraph. Fast, for when you just need orientation.
  • --debug: Full explanation plus "Issues & Improvements" — bugs, security problems, things worth refactoring

$ cat some-api-route.ts | ai-explain --brief

🔍 ai-explain [brief]

This Next.js API route implements a streaming SSE endpoint that proxies
requests to the Anthropic API. It reads the request body, constructs a
messages array with the provided system and user content, streams the
response via ReadableStream, and closes the connection when the model
sends message_stop. The caller is responsible for parsing the
event-stream data format.

The most useful flag is --debug. It's the difference between "what does this do" and "what's wrong with this and what would you change." Running ai-explain --debug on a file you're about to refactor is a faster way to identify issues than reading it yourself.
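The mode handling is small enough to sketch in full (the flag names match the ones above; the prompt wording and helper names are my assumptions):

```javascript
// Sketch of ai-explain's flag parsing and prompt selection.
function modeFromArgs(argv) {
  if (argv.includes("--brief")) return "brief";
  if (argv.includes("--debug")) return "debug";
  return "default";
}

function explainPrompt(code, mode) {
  if (mode === "brief") {
    return "Explain this code in one paragraph:\n\n" + code;
  }
  if (mode === "debug") {
    return (
      'Explain this code under "What it does", "How it works", and ' +
      '"Key concepts", then add an "Issues & Improvements" section ' +
      "covering bugs, security problems, and refactoring candidates:\n\n" + code
    );
  }
  return (
    'Explain this code under "What it does", "How it works", ' +
    'and "Key concepts":\n\n' + code
  );
}

// Read piped input so the tool composes with cat, head, git show, etc.
async function readStdin() {
  let input = "";
  for await (const chunk of process.stdin) input += chunk;
  return input;
}
```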

Why raw fetch, not SDK calls

All three tools use raw fetch to api.anthropic.com, not the Anthropic Node.js SDK. This is deliberate — the SDK adds a dependency layer and, in certain runtime environments, has streaming compatibility issues. For CLI tools running in Node 22 on a standard OS, raw fetch is simpler, more debuggable, and has no cold start implications.

The pattern is the same in all three tools: construct the request body, fetch with the API key in the header, parse the JSON response, extract content[0].text. No streaming needed for these tools (responses are short), so the full response JSON parsing works cleanly.
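Written out, that shared pattern looks roughly like this (the model id and max_tokens are placeholders; the endpoint and headers are the standard Anthropic Messages API ones):

```javascript
// The shared non-streaming request pattern across all three tools.

// Pure helper: pull the text out of a Messages API response body.
function extractText(data) {
  return data.content[0].text;
}

async function askClaude(prompt, apiKey) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01", // required API version header
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-haiku-latest", // placeholder model id
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`API error ${res.status}: ${await res.text()}`);
  // Non-streaming: the whole response arrives as one JSON body.
  return extractText(await res.json());
}
```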

The broader pattern

These three tools share a pattern that generalizes across AI-native developer tooling:

  1. Capture context automatically. ai-commit runs git diff --staged. smart-pr runs git log and git diff. ai-explain reads the file. The user shouldn't have to paste anything.
  2. Ask for what you'd ask a senior engineer. The prompt isn't "summarize this diff" — it's "write three commit messages in conventional format, here's the style context, here are the constraints." The prompt is where the real engineering is.
  3. Show options, not one answer. ai-commit gives three options because the right commit message depends on context the AI doesn't have — your team's conventions, what you're trying to communicate, how this commit will be used in git history. The human makes the final call.
  4. Pipe-friendly by default. All three tools work with stdin. They compose with standard Unix tools. They save output to files. They're tools in the Unix sense, not applications.

AI-native development isn't about replacing the engineer — it's about eliminating the overhead that slows the engineer down. Every minute not spent writing commit messages is a minute available for the work that actually requires a human.

What I'd build next

The next obvious tool in this category is ai-review: run it before pushing and get a code review of your staged changes, flagging potential bugs and security issues before they hit CI. The same raw diff input, different prompt, different output format.

After that: ai-test, which generates test cases for new functions based on the implementation. You write the function, the tool writes the tests. You review and edit, but the scaffolding is already there.

The pattern is always the same: find the high-friction, low-value task. Identify the context that's already available (staged diff, file contents, test output). Write a prompt that asks for what a senior engineer would produce. Give the human the final call.