Live

AI Code Reviewer

Code review in seconds, structured by category and severity

Built in one overnight session
Next.js · Anthropic Claude · Structured Output · TypeScript

12 languages

6 categories

4 severity levels

Score ring + streaming fix

01 The Problem

Code review is one of the highest-leverage uses of LLMs — but most AI coding tools just annotate inline. I wanted to build something that returns structured, categorized output: bugs vs. security vs. performance vs. style, each with a severity level, so you can triage and prioritize rather than wade through a monolithic essay.

02 The Approach

Claude Haiku analyzes the pasted code and returns a structured JSON object with a quality score (0–100), a summary, a list of positives, and categorized issues. Each issue has a category (bugs/security/performance/logic/style/docs), severity (critical/major/minor/info), title, description, line reference, and a concrete suggestion. A second endpoint generates streaming corrected code, either for a single issue or all issues at once.
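The review object described above can be modeled with TypeScript interfaces along these lines; the field names here are assumptions based on the shape the section lists, not the project's actual type definitions.

```typescript
// Hypothetical interfaces mirroring the review JSON described above.
type Category = "bugs" | "security" | "performance" | "logic" | "style" | "docs";
type Severity = "critical" | "major" | "minor" | "info";

interface ReviewIssue {
  category: Category;
  severity: Severity;
  title: string;
  description: string;
  line: number | null; // line reference in the pasted code, if any
  suggestion: string;  // concrete fix suggestion
}

interface CodeReview {
  score: number;       // 0–100 quality score
  summary: string;
  positives: string[];
  issues: ReviewIssue[];
}
```

Typing the response this way means the UI components (score ring, issue cards, tabs) can consume the parsed object without defensive casts scattered through the codebase.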

03 Architecture Decisions

Structured JSON output from Claude

The review endpoint enforces a strict JSON schema via the system prompt — no markdown, no explanation, just the object. Claude Haiku is reliable enough at structured output that a well-formed schema + 'return ONLY valid JSON' instruction works consistently. The response is parsed and typed with TypeScript interfaces.
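A minimal sketch of the parse-and-validate step, assuming the model was instructed to return only the JSON object. The defensive fence-stripping and field checks are illustrative, not the project's actual code.

```typescript
// Sketch: parse Claude's text response into a typed review object.
interface ParsedReview {
  score: number;
  summary: string;
  issues: Array<{ category: string; severity: string; title: string }>;
}

function parseReview(raw: string): ParsedReview {
  // Strip accidental markdown fences defensively, then parse.
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "")
    .trim();
  const obj = JSON.parse(cleaned);
  // Minimal shape check before trusting the cast.
  if (typeof obj.score !== "number" || !Array.isArray(obj.issues)) {
    throw new Error("Response did not match the review schema");
  }
  return obj as ParsedReview;
}
```

Failing loudly on a malformed response is preferable to rendering a half-empty review; the endpoint can retry or surface an error state instead.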

Severity-ranked triage

Issues are sorted by severity (critical → major → minor → info) before display, and the score ring and severity badges use color coding that makes the severity immediately obvious. The goal is that you can open the review and know within 5 seconds whether this code needs urgent attention.
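The severity ordering can be implemented with a small rank map; this sketch assumes the critical → major → minor → info ordering described above.

```typescript
// Sketch: rank map drives the triage sort order.
const SEVERITY_RANK: Record<string, number> = {
  critical: 0,
  major: 1,
  minor: 2,
  info: 3,
};

function sortBySeverity<T extends { severity: string }>(issues: T[]): T[] {
  // Unknown severities sink to the bottom rather than crashing the sort.
  return [...issues].sort(
    (a, b) => (SEVERITY_RANK[a.severity] ?? 99) - (SEVERITY_RANK[b.severity] ?? 99)
  );
}
```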

Streaming fix generation

The fix endpoint streams corrected code using Claude Haiku's SSE API. You can fix a single issue (clicking the Fix → button on any card) or all issues at once. The streamed output appears in a code panel below the review with a copy button. This keeps the UX tight — review and fix in the same viewport.
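On the client, the streamed fix arrives as server-sent events. The Anthropic streaming API emits events such as `content_block_delta` carrying text deltas; the event shapes in this sketch are simplified assumptions, and real chunks may split JSON across boundaries.

```typescript
// Sketch: extract text deltas from an SSE chunk of streamed fix output.
function extractTextDeltas(sseChunk: string): string {
  let text = "";
  for (const line of sseChunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    try {
      const event = JSON.parse(payload);
      if (event.type === "content_block_delta" && event.delta?.text) {
        text += event.delta.text;
      }
    } catch {
      // Ignore partial JSON split across chunk boundaries in this sketch.
    }
  }
  return text;
}
```

Appending each extracted delta to the code panel as it arrives is what keeps the fix visible in the same viewport as the review.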

Per-category filtering

Category tabs (Bugs, Security, Performance, Logic, Style, Docs) let you focus on what matters. If you only have time to fix security issues before a release, you can filter to just those. Counts in each tab give immediate triage context.
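The per-tab counts are a simple group-by over the issue list; field names here are assumptions matching the shape described earlier.

```typescript
// Sketch: count issues per category so each tab can show its badge.
function countByCategory(issues: Array<{ category: string }>): Record<string, number> {
  return issues.reduce<Record<string, number>>((acc, issue) => {
    acc[issue.category] = (acc[issue.category] ?? 0) + 1;
    return acc;
  }, {});
}
```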

04 Key Insight

The quality of structured output from Claude Haiku scales directly with schema specificity. A vague 'return JSON with issues' prompt produces inconsistent results. A detailed schema with enum constraints, example severity levels, and explicit instructions about line numbers and suggestions produces reliable, parseable output every time.
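The contrast can be made concrete with two prompt fragments. The exact wording below is hypothetical, but the pattern, enum constraints plus explicit field rules, is what the paragraph describes.

```typescript
// Hypothetical prompt fragments illustrating vague vs. specific schemas.
const VAGUE_PROMPT = "Return JSON with issues.";

const SPECIFIC_PROMPT = `Return ONLY valid JSON, no markdown, matching:
{
  "score": <integer 0-100>,
  "summary": <string, one sentence>,
  "positives": [<string>, ...],
  "issues": [{
    "category": "bugs" | "security" | "performance" | "logic" | "style" | "docs",
    "severity": "critical" | "major" | "minor" | "info",
    "title": <string>,
    "description": <string>,
    "line": <integer line number in the submitted code, or null>,
    "suggestion": <string, a concrete fix>
  }]
}`;
```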

05 Why It Matters

A practical developer tool with immediate everyday utility. It demonstrates structured LLM output, dual-mode generation (structured analysis plus streaming code), and the design pattern of using AI to produce actionable, triage-friendly feedback rather than narrative essays.