
Research Canvas

An autonomous research agent that plans, executes, and compiles — step by visible step

Built in 8 hours
Next.js · Anthropic Claude · Agentic AI · SSE Streaming · TypeScript

3-stage pipeline

Real-time SSE per section

Structured plan generation

Sequential context-aware execution

01 · The Problem

Most AI demos treat the model as an answer machine: input → output. But real research work is iterative: define scope, gather information section by section, synthesize across sources. I wanted to build a demo that makes the agentic loop — planning → execution → synthesis — the literal UI, not an implementation detail hidden behind a spinner.

02 · The Approach

Built a 3-stage pipeline with a distinct API route for each phase. Stage 1 (plan) generates a structured research agenda: 3-5 section titles, each with a description and a search query. Stage 2 (research) executes each section sequentially, streaming its content to the client via SSE. Stage 3 (compile) merges all sections into a final formatted report. The client renders each stage's progress live — you see the plan populate, then watch each section write itself in real time.
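The three stages compose into a simple orchestration loop. A minimal sketch, with hypothetical stage signatures injected as functions (the actual routes live behind `fetch` calls; these names and shapes are illustrative, not the project's exact code):

```typescript
// Shape of one planned section, per the plan stage described above.
interface Section {
  id: string;
  title: string;
  description: string;
  searchQuery: string;
}

// Hypothetical stage contracts; in the real app each wraps an API route.
type PlanStage = (topic: string) => Promise<Section[]>;
type ResearchStage = (section: Section, context: string[]) => Promise<string>;
type CompileStage = (sections: { title: string; body: string }[]) => Promise<string>;

// Runs plan → research (sequentially) → compile, threading each completed
// section's text into the next research call as context.
async function runPipeline(
  topic: string,
  plan: PlanStage,
  research: ResearchStage,
  compile: CompileStage,
): Promise<string> {
  const agenda = await plan(topic);
  const results: { title: string; body: string }[] = [];
  for (const section of agenda) {
    const body = await research(section, results.map((r) => r.body));
    results.push({ title: section.title, body });
  }
  return compile(results);
}
```

Injecting the stages as functions keeps the orchestration testable with mocks, independent of any model or network.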

03 · Architecture Decisions

Structured plan generation

The /api/research/plan route prompts Claude to output a strict JSON array of section objects: {id, title, description, searchQuery}. The client uses this to render the research agenda UI before any content is written — the user sees the full plan, then watches it execute section by section.
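Because the agenda UI renders directly from this JSON, the plan output has to be validated before use. A minimal sketch of that validation, assuming the `{id, title, description, searchQuery}` shape above (the parsing details are an assumption, not the project's exact code):

```typescript
// Typed shape of one planned section, matching the prompt's JSON contract.
interface PlanSection {
  id: string;
  title: string;
  description: string;
  searchQuery: string;
}

// Parses the model's raw text into a typed plan, rejecting malformed
// output so the UI never renders a half-formed agenda.
function parsePlan(raw: string): PlanSection[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("plan must be a JSON array");
  return data.map((item, i) => {
    for (const field of ["id", "title", "description", "searchQuery"]) {
      if (typeof item[field] !== "string") {
        throw new Error(`section ${i} is missing string field "${field}"`);
      }
    }
    return item as PlanSection;
  });
}
```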

Sequential SSE streaming per section

Each section is researched via /api/research/section, which streams Anthropic's SSE delta events directly to the client. Sections run sequentially (not in parallel) because each section prompt includes summaries of previous sections as context — the later sections build on earlier ones.
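On the client, the stream arrives as SSE frames whose text deltas are appended as they land. A sketch of extracting those deltas: the event shape (a `content_block_delta` carrying a `text_delta`) follows Anthropic's documented streaming format, but treat the parsing details as illustrative rather than the project's exact code:

```typescript
// Pulls text deltas out of a chunk of Anthropic-style SSE frames.
// Non-data lines (event names, blank separators) and non-text events
// (message_start, message_stop, etc.) are skipped.
function extractTextDeltas(sseChunk: string): string[] {
  const deltas: string[] = [];
  for (const line of sseChunk.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip event/empty lines
    const payload = JSON.parse(line.slice("data: ".length));
    if (payload.type === "content_block_delta" && payload.delta?.type === "text_delta") {
      deltas.push(payload.delta.text);
    }
  }
  return deltas;
}
```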

Graceful partial state

The UI is designed to be live from the first token. The research log panel shows each section as it writes, while the compiled report panel merges completed sections in real time. If a section errors, the pipeline continues with the remaining sections rather than failing entirely.
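The continue-on-failure behavior reduces to a try/catch inside the sequential loop. A hedged sketch (names here are illustrative): each section is attempted in order, a failure is recorded as an error result, and the remaining sections still run:

```typescript
// Outcome of one section attempt: either its text or a recorded error.
interface SectionResult {
  title: string;
  status: "done" | "error";
  body: string;
}

// Attempts every section in order; a throw from one section is captured
// so the rest of the report is still produced.
async function researchAll(
  titles: string[],
  researchOne: (title: string) => Promise<string>,
): Promise<SectionResult[]> {
  const results: SectionResult[] = [];
  for (const title of titles) {
    try {
      results.push({ title, status: "done", body: await researchOne(title) });
    } catch (err) {
      // Record the failure but keep going: partial report > no report.
      results.push({ title, status: "error", body: String(err) });
    }
  }
  return results;
}
```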

04 · Key Insight

Making the plan visible before execution changes how users perceive the output. When users see 'Researching: Historical Context → Current Landscape → Technical Challenges → Future Directions' before any text appears, they understand what they're about to receive and trust the output more. The plan is both a UX feature and a structural constraint on the model — it commits to sections before executing them.

05 · Why It Matters

Demonstrates the plan-then-execute pattern used in production agentic systems: the agent's 'thinking' (planning phase) is separated from its 'doing' (execution phase). This separation makes the system more reliable, more transparent, and easier to debug. The same pattern applies to Clio's legal research workflows: define scope → retrieve relevant materials → synthesize.