Now

Updated 3 March 2026 · Vancouver, BC

Status

Available for new opportunities

Looking for AI engineering, ML ops, or full-stack AI roles, specifically at companies where AI is the product, not a bolt-on feature. Based in Vancouver (IEC working holiday visa); open to hybrid or remote.

Job Search

Actively applying to AI engineering roles in Vancouver and remote. Cover letters written and ready for six companies:

  • Clio (Senior Developer, Enterprise AI): built a legal AI workflow app that mirrors exactly what their team ships
  • OpusClip (Growth AI): built a direct clip-detection analogue
  • Sanctuary AI (ML Engineer): research background is the differentiator
  • Cohere (Applied AI Engineer, Agentic Workflows): eval culture and multi-agent experience
  • Giga (Software Engineer): agent orchestration and voice AI
  • Cursor (Technical Support Engineer)


Building

  • Agent Memory Demo — an AI that actually remembers you. Chat on the left, live memory graph on the right. Every message triggers two parallel calls: Claude responds using injected memory, a lightweight extraction call captures new facts. Memory persists across sessions. Makes the extract-store-inject architecture behind AI personalization tangible. live demo.
  • Context Engineering Studio — six context strategies run simultaneously on any prompt: Baseline, Role+Persona, Grounding, Few-Shot, Constraints, and Full Stack. Each panel streams with TTFT and token count. Andrej Karpathy popularized the term in 2025; this demo makes the concept concrete. live demo.
  • Clip Finder — AI YouTube highlight detector. Drop a URL, get the 5-7 most shareable moments with timestamps, captions, and platform recommendations.
  • Interview Prep AI — paste a job description, get 12-15 tailored Q&A with ideal answers in your voice. Claude with your full background baked in as context. live demo.
  • Portfolio (/ask) — AI chat where recruiters and engineers can ask questions about my projects, research, and background. Streaming Claude with Harrison-specific context.
  • beef + Durability Analyzer — the two apps I'd never stop building. Workout tracker with CV rep counting, real-time PR detection, e1RM trend charts, and body weight tracking with relative strength ratios; plus a cycling performance tool built on peer-reviewed research, now live.
  • AthleteIQ — sports science AI chat backed by my published EJAP papers. Built on a hand-crafted knowledge base (CTL/ATL/TSB, VT1, HRV, durability) — precision over RAG for a narrow scientific domain. live demo.
  • DocIQ — document intelligence demo. Paste any contract, spec, or research paper and ask questions in plain language. Citation-required responses, no server storage, full context window over RAG. The same pattern Clio's Enterprise AI team builds at scale. live demo.
  • LLM Orchestration Pipeline — 4-stage document processing pipeline (Extract → Analyze → Synthesize → Act). Every stage shown live with timing and token counts. Enterprise AI architecture made visible. live demo.
  • RAG Pipeline Demo — full retrieval-augmented generation stack built from scratch: Okapi BM25 in TypeScript (no vector DB, no embeddings API), chunk scoring with matched term highlighting, grounded generation with citations. Makes every retrieval decision auditable. live demo.
  • Claude Model Face-Off — real-time side-by-side streaming comparison of Claude Haiku vs Sonnet. Parallel SSE streams, TTFT, tokens/second, winner banner. Makes model tradeoffs visceral instead of theoretical. live demo.
  • LegalFlow — three AI workflows for legal practice: matter activity → client update emails, time entries → billing narratives, court documents → structured calendar events with deadline classification. Built to mirror what Clio's AI team is actually shipping. live demo.
  • Research Canvas — autonomous multi-step research agent. Enter a topic → watch the AI plan, research each section, and synthesize a structured report with live streaming at every step. Shows what production agentic pipelines look like under the hood. live demo.
  • Prompt Lab — side-by-side comparison of four prompting techniques: zero-shot, few-shot, chain-of-thought, and system-prompt tuned. All four stream simultaneously with TTFT and token counts visible. Temperature slider, model selector. Makes prompt engineering concrete instead of theoretical. live demo.
  • AI Eval Lab — systematic LLM prompt evaluation tool. Define a prompt template with variables, add test cases with expected outputs, run all against Claude and get pass/fail scoring with an overall quality score. "Fix my prompt" button streams AI-powered improvement suggestions based on failures. The workflow real AI teams use to ensure reliability.
  • AI Dev Toolkit — seven Claude-powered CLI tools for the daily dev workflow: ai-commit (staged diff → 3 commit options → pick), smart-pr (branch diff → structured PR description), ai-explain (pipe any code → explanation), ai-review (pre-push review with severity ratings), ai-changelog (git log → grouped changelog), project-context (AI-ready context doc for any repo), and ai-standup (git log → daily standup in seconds). All running on a self-hosted VPS.
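The extract-store-inject loop behind the Agent Memory Demo can be sketched in a few lines. This is an illustration under stated assumptions, not the demo's actual code: in the real app the extraction step is a lightweight Claude call running in parallel with the response call, so the `extractFacts` heuristic and all names below are stand-ins of my own.

```typescript
// Hypothetical sketch of the extract-store-inject memory pattern.
type Memory = Map<string, string>; // fact key -> fact value

// "Extract": pull new facts out of a user message.
// Stand-in regex heuristic; the real demo uses an LLM call here.
function extractFacts(message: string): [string, string][] {
  const facts: [string, string][] = [];
  const name = message.match(/my name is (\w+)/i);
  if (name) facts.push(["name", name[1]]);
  const likes = message.match(/i (?:like|love) ([\w ]+)/i);
  if (likes) facts.push(["likes", likes[1].trim()]);
  return facts;
}

// "Store": persist facts across turns (a DB row per user in production).
function store(memory: Memory, facts: [string, string][]): void {
  for (const [k, v] of facts) memory.set(k, v);
}

// "Inject": fold stored facts into the system prompt for the next call.
function inject(memory: Memory): string {
  if (memory.size === 0) return "You are a helpful assistant.";
  const lines = Array.from(memory)
    .map(([k, v]) => `- ${k}: ${v}`)
    .join("\n");
  return `You are a helpful assistant.\nKnown user facts:\n${lines}`;
}

const memory: Memory = new Map();
store(memory, extractFacts("Hi, my name is Harrison and I love cycling"));
console.log(inject(memory));
```

The point of the pattern is that the chat model itself stays stateless: all "memory" lives in the store, and personalization is just prompt construction at call time.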
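The retrieval core of the RAG Pipeline Demo is plain Okapi BM25, which is small enough to sketch whole. A minimal version in TypeScript, assuming the common defaults k1 = 1.5 and b = 0.75 (the demo's actual parameters and tokenizer may differ):

```typescript
// Minimal Okapi BM25 scorer over an in-memory corpus. No vector DB,
// no embeddings: scores come from term frequency, inverse document
// frequency, and document-length normalization only.
const k1 = 1.5; // term-frequency saturation
const b = 0.75; // document-length normalization

function tokenize(text: string): string[] {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

function bm25Scores(query: string, docs: string[]): number[] {
  const tokens = docs.map(tokenize);
  const N = docs.length;
  const avgdl = tokens.reduce((sum, t) => sum + t.length, 0) / N;
  const q = tokenize(query);
  return tokens.map((doc) => {
    let score = 0;
    for (const term of q) {
      const tf = doc.filter((t) => t === term).length;
      if (tf === 0) continue;
      // Number of documents containing this term.
      const n = tokens.filter((d) => d.includes(term)).length;
      const idf = Math.log((N - n + 0.5) / (n + 0.5) + 1);
      score +=
        (idf * tf * (k1 + 1)) /
        (tf + k1 * (1 - b + (b * doc.length) / avgdl));
    }
    return score;
  });
}

const docs = [
  "BM25 is a ranking function used in search engines",
  "Vector databases store embeddings for semantic search",
  "The cat sat on the mat",
];
const scores = bm25Scores("bm25 ranking", docs);
console.log(scores.indexOf(Math.max(...scores))); // 0: the BM25 doc ranks highest
```

Because every score decomposes into per-term contributions, each retrieval decision can be shown and audited, which is exactly what embedding similarity scores make hard.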

Work

  • AI Ops Lead at Supermix — building internal tools, orchestrating AI agents, keeping the ops stack running. We make startup and tech podcasts.
  • Research at SPRINZ — consulting on exercise physiology and durability research projects at the Sports Performance Research Institute New Zealand.

Learning

  • Model Context Protocol — built several MCP servers, using them in production daily
  • Multi-agent systems and long-horizon task orchestration
  • How to build things that ship, not just things that demo well
  • Writing about what I build — 31 posts covering MCP servers, sports science, AI agents, multi-agent systems, RAG, LLM orchestration, prompt engineering, evaluation culture, and context engineering
