
I Built an AI-Powered Job Tracker. Used It to Apply for Jobs the Same Day.

Three Claude tools embedded in a job application Kanban board: streaming cover letter generator, job fit analyzer with visual scoring, and tailored interview prep. The AI has my full background in context — published papers, portfolio apps, role differentiators. It generates letters that are actually specific, not generic.

Tags: ai · engineering · career · claude · workflow · streaming

I built a job application tracker with an AI twist: three Claude-powered tools embedded directly into the workflow — streaming cover letter generator, job fit analyzer with visual scoring, and tailored interview prep. The meta-observation: the AI tools in this app were used to apply for jobs the same day they were built.

The problem with job search tools

Most job search tools do one of two things: they help you track applications (Airtable, spreadsheets, Notion) or they help you write materials (LinkedIn AI suggestions, ChatGPT prompts). Very few combine structured pipeline management with AI tools that are actually contextual — that know your background, your projects, your specific differentiators.

Generic AI cover letters are obvious. They start with "I am writing to express my interest" and reference "your esteemed organization." Hiring managers have seen thousands of them. They're worse than no letter at all because they signal that you didn't think carefully about the role.

The fix is context. An AI cover letter that knows you have two peer-reviewed publications, 24 deployed apps, and a specific research-to-product project that directly maps to what the company is building — that's a different thing entirely. It's only possible if the AI has that context embedded in its system prompt.

Three tools, one workflow

The app has three AI tools, each as a separate API route:

Cover Letter Generator — you paste the job description, pick a tone (professional, warm, or technical), and the letter streams back token-by-token. The system prompt includes my full background: published papers, portfolio apps, current role, GitHub activity. The output references specific projects when they're relevant. If the role is about RAG pipelines, it mentions the research agent I built. If it's about production LLM ops, it mentions Matua. Context-driven, not generic.

Job Fit Analyzer — paste the JD, get back a 0-100 fit score, a verdict (Strong Match / Good Match / Partial Match / Stretch Role), and structured analysis: strengths with evidence, gaps with severity ratings, key talking points to emphasize, and red flags to address proactively. The UI built around this uses a custom SVG score ring: an animated stroke-dasharray draws the score as an arc, so you take in the result visually before reading the number.
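The score-ring math is simple enough to sketch. This is an illustrative helper (not the app's actual code, and `scoreRing` is a name I made up): an SVG circle's stroke-dasharray is set to the circumference, and the dashoffset shrinks as the score grows, revealing the arc.

```typescript
interface RingProps {
  dasharray: number;  // full circumference of the circle
  dashoffset: number; // hidden portion; 0 means the arc is fully drawn
}

// Map a 0-100 score to stroke-dasharray/stroke-dashoffset values.
// Animating dashoffset from the full circumference down to this value
// produces the "ring fills up" effect.
function scoreRing(score: number, radius = 45): RingProps {
  const clamped = Math.max(0, Math.min(100, score));
  const circumference = 2 * Math.PI * radius;
  return {
    dasharray: circumference,
    dashoffset: circumference * (1 - clamped / 100),
  };
}
```

A score of 100 yields a dashoffset of 0 (complete ring); a score of 0 leaves the offset equal to the circumference (nothing drawn), with a CSS transition on stroke-dashoffset handling the animation.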

Interview Prep — 8-10 questions tailored to the role, each with: why interviewers ask it, the specific angle I should take given my background, and key points to hit. Plus an opening statement for "tell me about yourself," smart questions to ask the interviewer, and watch-outs for tricky framings. The questions aren't generic ("tell me about a time you showed leadership") — they're specific to the role and to what my background makes interesting or potentially concerning.
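The prep output described above maps naturally onto a structured response the UI can render. A minimal sketch, assuming the route asks Claude for JSON — the field names here are my guesses, not the app's actual schema:

```typescript
interface PrepQuestion {
  question: string;
  whyTheyAsk: string;  // why interviewers ask it
  angle: string;       // the angle to take given my background
  keyPoints: string[]; // points to hit in the answer
}

interface InterviewPrep {
  questions: PrepQuestion[];         // 8-10 tailored questions
  openingStatement: string;          // "tell me about yourself"
  questionsForInterviewer: string[]; // smart questions to ask back
  watchOuts: string[];               // tricky framings to handle carefully
}

// Validate the model's JSON before rendering — LLM output can drift
// from the requested schema, so check the invariants you rely on.
function parsePrep(raw: string): InterviewPrep {
  const prep = JSON.parse(raw) as InterviewPrep;
  if (!Array.isArray(prep.questions) || prep.questions.length < 8 || prep.questions.length > 10) {
    throw new Error(`expected 8-10 questions, got ${prep.questions?.length ?? 0}`);
  }
  return prep;
}
```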

The UX pattern: URL as state

The cleanest part of the technical implementation is how the app connects the tracker to the AI tools. From any application detail page, three buttons link to /ai:

/ai?tab=cover-letter&company=Anthropic&role=AI+Engineer

The AI page reads these via useSearchParams() and pre-fills the form. One tap, company and role already loaded, paste the JD and generate. No state management, no prop drilling, no shared context object. The URL is the state. It's the simplest architecture that works.
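The parsing side of this pattern fits in a few lines. A hypothetical helper (the app does this inline via Next.js's useSearchParams(); `prefillFromQuery` is my name for illustration) that turns the query string into form state:

```typescript
interface PrefillState {
  tab: string;
  company: string;
  role: string;
}

// Parse the query string into the AI page's initial form state.
// URLSearchParams handles decoding, including '+' as a space.
function prefillFromQuery(query: string): PrefillState {
  const params = new URLSearchParams(query);
  return {
    tab: params.get("tab") ?? "cover-letter", // default tab when linked bare
    company: params.get("company") ?? "",
    role: params.get("role") ?? "",
  };
}

// prefillFromQuery("tab=cover-letter&company=Anthropic&role=AI+Engineer")
// → { tab: "cover-letter", company: "Anthropic", role: "AI Engineer" }
```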

Streaming the cover letter

The cover letter streams using the same SSE pattern I've used in other apps — the API route calls Anthropic with stream: true, reads the response body chunk by chunk, and re-emits content delta events to the client. The client accumulates the text and updates state with each chunk.
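The client side of the pattern above can be sketched as a small parser — a hypothetical helper, not the app's exact code, though the event shape matches Anthropic's content_block_delta / text_delta streaming events:

```typescript
// Extract the text deltas from a chunk of SSE lines. The client calls this
// on each decoded chunk and appends the result to the accumulated letter.
function extractDeltas(sseChunk: string): string {
  let text = "";
  for (const line of sseChunk.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip "event:" lines and blanks
    try {
      const event = JSON.parse(line.slice(6));
      if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
        text += event.delta.text;
      }
    } catch {
      // Ignore non-JSON data lines (keep-alives, terminators).
    }
  }
  return text;
}
```

In the React component, each chunk's extracted text is appended to a state variable, so the letter grows sentence by sentence as deltas arrive.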

This matters for UX. A cover letter is ~300 words. Without streaming, you wait 3-5 seconds for nothing, then get a wall of text. With streaming, you see the first sentence in under a second and can start evaluating whether the direction is right before it finishes. If it opens wrong, you can cancel and tweak the prompt. Faster feedback loop.

The meta-observation

This app was built in one overnight session and was in use the same day. I used the Job Fit Analyzer on three actual applications while testing it. The cover letters it generates are better than what I was writing manually — not because Claude is a better writer, but because it has complete context and never forgets to mention the relevant project.

That's the real value of embedding AI in a workflow tool: the AI has the context that's already in the system. A standalone ChatGPT conversation for cover letters requires you to re-paste your background every time. An embedded tool has it always, and uses it automatically.

The Kanban board tracks 8 stages: Bookmarked → Applied → Phone Screen → Technical → Final → Offer / Rejected / Withdrawn. Each application has a full timeline, contact tracking, follow-up dates, and a notes field. The AI tools are accessible from any application's detail page. The whole thing is local-storage-first (no account required, no data sent to servers except during AI generation).
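The stage list above translates directly into a closed union type, which is what makes a local-storage-first tracker pleasant to work with — a sketch, with names taken from the list above:

```typescript
// The 8 pipeline stages, in board order. `as const` gives a closed union
// type, so an application can never hold an unknown stage.
const STAGES = [
  "Bookmarked", "Applied", "Phone Screen", "Technical",
  "Final", "Offer", "Rejected", "Withdrawn",
] as const;

type Stage = (typeof STAGES)[number];

// Offer, Rejected, and Withdrawn end the pipeline; everything else can advance.
function isTerminal(stage: Stage): boolean {
  return stage === "Offer" || stage === "Rejected" || stage === "Withdrawn";
}
```

Serializing an array of applications tagged with these stages to localStorage is the entire persistence layer: no account, no backend, nothing leaves the browser except the AI calls.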

What I'd build next

The obvious extension is a "response probability" predictor: given my background and the role requirements, what's the rough likelihood of getting a response? This would require training data (applications sent + outcomes) that I don't have yet, but after a few months of use the tracker would have enough signal.

Another extension: auto-generate follow-up emails. You applied three weeks ago, no response — generate a concise, specific follow-up that references the role and a relevant project. One tap. That's the kind of AI augmentation that saves time on exactly the tasks that feel rote but matter.