AthleteIQ
I built an AI on my own published papers — and it knows more than ChatGPT about exercise physiology
2 published EJAP papers as knowledge base
6 starter topics
Streaming via Haiku
DOI citations in footer
01 The Problem
My published research on physiological durability and carbohydrate metabolism exists behind academic paywalls, described in language that coaches and athletes can't easily act on. ChatGPT gives confidently wrong answers about aerobic thresholds and training load. I wanted a sports science AI that was precise, honest about uncertainty, and grounded in actual research methodology.
02 The Approach
Built a streaming AI chat with a hand-crafted knowledge base encoding my two EJAP papers, the Banister fitness-fatigue model, the Epley e1RM formula, and key exercise physiology concepts. I deliberately chose a curated system prompt over RAG for this narrow scientific domain: precision matters more than breadth, and an answer that gets the Epley formula slightly wrong is worse than no answer at all.
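Both models named above are standard in the literature and can be sketched directly. A minimal sketch follows; the function names, the gain coefficients k1/k2, and the time constants (42 and 7 days) are illustrative defaults, not values taken from the papers:

```python
import math

def epley_e1rm(weight: float, reps: int) -> float:
    """Standard Epley estimated one-rep max: e1RM = weight * (1 + reps / 30)."""
    if reps == 1:
        return weight  # a single rep is already a measured 1RM
    return weight * (1 + reps / 30)

def banister_performance(loads, k1=1.0, k2=2.0, tau1=42.0, tau2=7.0, p0=0.0):
    """Banister fitness-fatigue impulse-response model.

    p(t) = p0 + k1 * sum_i w_i * exp(-(t - i) / tau1)   (fitness, slow decay)
              - k2 * sum_i w_i * exp(-(t - i) / tau2)   (fatigue, fast decay)
    """
    t = len(loads)
    fitness = sum(w * math.exp(-(t - i) / tau1) for i, w in enumerate(loads, start=1))
    fatigue = sum(w * math.exp(-(t - i) / tau2) for i, w in enumerate(loads, start=1))
    return p0 + k1 * fitness - k2 * fatigue
```

With these defaults the model reproduces the qualitative behavior coaches care about: performance dips during a loading block (fatigue dominates) and rises above baseline after recovery (supercompensation), because fatigue decays faster than fitness.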
03 Architecture Decisions
Curated knowledge base over RAG
For narrow scientific domains, a hand-crafted knowledge base gives more reliable results than vector retrieval. The system prompt encodes exact findings, methodologies, and implications from specific papers — no risk of approximate retrieval returning the wrong chunk.
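The curated-knowledge-base pattern can be sketched as structured entries flattened into one deterministic prompt. This is a structural illustration only; the citation and finding strings below are placeholders, not the actual paper contents:

```python
# Placeholder entries: structure is the point, not the content.
PAPERS = [
    {
        "citation": "Author et al., EJAP (placeholder)",
        "finding": "One-sentence summary of the exact finding.",
        "evidence": "single study",  # vs. "well-established"
    },
]

def build_system_prompt(papers) -> str:
    """Flatten curated entries into one deterministic system prompt; every
    entry is always in context, unlike top-k vector retrieval, which may
    return an approximate or wrong chunk."""
    lines = ["You are a sports-science assistant grounded ONLY in the sources below."]
    for p in papers:
        lines.append(f"- {p['finding']} (evidence: {p['evidence']}; {p['citation']})")
    return "\n".join(lines)
```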
Scientific honesty instructions
The system prompt explicitly instructs the model to say 'the evidence is mixed' when it is, distinguish 'one study' from 'well-established', and never extrapolate beyond what the research supports. Epistemic honesty is a design requirement, not an afterthought.
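The honesty instructions described above might look something like the block below; the wording is illustrative, not the project's actual prompt text:

```python
# Illustrative epistemic-honesty rules appended to the system prompt.
HONESTY_RULES = """\
When answering:
- If the evidence is mixed, say so explicitly.
- Distinguish 'one study suggests' from 'well-established'.
- Never extrapolate beyond what the cited research supports.
- State uncertainty plainly rather than guessing."""
```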
Claude Haiku for streaming speed
Haiku 4.5 handles scientific content with sufficient precision while being fast enough that streaming feels conversational. A response that appears in 2 seconds feels like a chat; one that takes 10 seconds doesn't.
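The streaming call shape with the official `anthropic` Python SDK can be sketched as below. The model id is a placeholder (check Anthropic's current model list), and the helper function name is mine:

```python
def make_stream_params(question: str, system_prompt: str) -> dict:
    """Assemble request parameters for a streamed Messages API call."""
    return {
        "model": "claude-haiku-placeholder",  # placeholder: use the current Haiku id
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": question}],
    }

# Usage (requires ANTHROPIC_API_KEY; not run here):
# from anthropic import Anthropic
# with Anthropic().messages.stream(**make_stream_params(q, prompt)) as stream:
#     for text in stream.text_stream:  # tokens print as they are generated
#         print(text, end="", flush=True)
```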
Starter topics for immediate utility
Six pre-built prompts covering the most common training topics: durability, CTL/ATL/TSB, HRV, carbohydrate timing, periodization, and tapering. Users see value immediately without needing to know what to ask.
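The CTL/ATL/TSB metrics in the starter topics are conventionally computed as exponentially weighted moving averages of daily training stress, with 42-day and 7-day time constants in the common performance-management-chart convention. A minimal sketch under that assumption:

```python
def ewma(values, days):
    """Exponentially weighted moving average: prev + (v - prev) / days each day."""
    out, prev = [], 0.0
    for v in values:
        prev = prev + (v - prev) / days
        out.append(prev)
    return out

def tsb(daily_tss):
    ctl = ewma(daily_tss, 42)  # chronic training load ("fitness")
    atl = ewma(daily_tss, 7)   # acute training load ("fatigue")
    return [c - a for c, a in zip(ctl, atl)]  # training stress balance ("form")
```

Because ATL responds faster than CTL, TSB goes negative during a loading block and positive during a taper, which is the behavior athletes are usually asking about.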
04 Key Insight
Domain expertise and AI engineering are multiplicative, not additive. The system prompt quality — knowing which findings to encode, which to caveat, and which to leave out — determines whether the AI is trustworthy. That judgment can't be replicated by someone who doesn't know the research.
05 Why It Matters
A portfolio piece that could only be built by someone who both published the underlying research and can write the code to surface it: the direct intersection of my two specialties.