Clip Finder
A YouTube clip finder that thinks like a social media editor
2-hour build time
No YouTube API key required
5-7 clips per video
Platform recommendations
01 The Problem
Content creators spend hours scrubbing through long videos to find the 60-second moment worth clipping. YouTube gives you raw video — it doesn't tell you what's worth sharing, who it's for, or how to caption it. I wanted to collapse that work into seconds.
02 The Approach
Built on top of the youtube-transcript package (no YouTube API key needed) and Claude Haiku. The transcript is fetched, cleaned, and sent to Claude with a structured prompt optimized for identifying viral moments. Claude returns a ranked JSON array of clips with type classification, hooks, shareability rationale, and platform-specific recommendations.
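The heart of that pipeline is the structured prompt. A minimal sketch of what such a prompt builder might look like (the function name and exact wording are illustrative, not the actual source):

```javascript
// Illustrative prompt builder. The key idea: pin down an exact JSON schema
// in the prompt so the model's response can be parsed and validated mechanically.
function buildClipPrompt(transcript) {
  return [
    "You are a social media editor. Find 5-7 clip-worthy moments in this transcript.",
    "Return ONLY a JSON array. Each element must match this schema:",
    '{ "startSeconds": number, "endSeconds": number,',
    '  "type": "insight" | "funny" | "quotable" | "surprising" | "emotional" | "actionable",',
    '  "hook": string, "shareability": string, "bestPlatform": "tiktok" | "linkedin" | "twitter" }',
    "For each candidate, ask: would someone stop scrolling for this?",
    "",
    "Transcript:",
    transcript,
  ].join("\n");
}
```

Asking for "ONLY a JSON array" and spelling out every field keeps the response machine-parseable without a second cleanup pass.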
03 Architecture Decisions
No-key transcript extraction
The youtube-transcript package fetches CC transcripts directly from YouTube without requiring an API key or OAuth. This keeps the app zero-config to deploy — just an Anthropic key. Transcripts longer than 60,000 characters are truncated to stay within Claude's context window.
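The truncation guard is a few lines. A sketch, assuming a 60,000-character cap (the constant and function names are hypothetical):

```javascript
// Keep the transcript within Claude's context budget.
const MAX_TRANSCRIPT_CHARS = 60_000;

function truncateTranscript(text) {
  if (text.length <= MAX_TRANSCRIPT_CHARS) return text;
  // Cut at the last word boundary before the limit so no word is split mid-token.
  const cut = text.slice(0, MAX_TRANSCRIPT_CHARS);
  const lastSpace = cut.lastIndexOf(" ");
  return lastSpace > 0 ? cut.slice(0, lastSpace) : cut;
}
```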
Structured JSON extraction via Claude Haiku
Claude Haiku was chosen over Sonnet/Opus for cost and speed — this is a pure text extraction task, not a reasoning task. The prompt specifies the exact JSON schema expected, and the API response is parsed and validated before returning to the client.
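Validation might look like the following sketch (field names are assumptions carried over from the schema above; the real code may differ):

```javascript
// Hedged sketch of response validation before returning clips to the client.
const CLIP_TYPES = new Set([
  "insight", "funny", "quotable", "surprising", "emotional", "actionable",
]);

function parseClips(raw) {
  // Models sometimes wrap JSON in a markdown code fence; strip it first.
  const cleaned = raw.replace(/^```(?:json)?\s*/i, "").replace(/```\s*$/, "");
  const data = JSON.parse(cleaned);
  if (!Array.isArray(data)) throw new Error("Expected a JSON array of clips");
  // Drop any element that fails basic shape checks rather than failing the whole batch.
  return data.filter(
    (c) =>
      Number.isFinite(c.startSeconds) &&
      typeof c.hook === "string" &&
      CLIP_TYPES.has(c.type)
  );
}
```

Filtering malformed elements instead of rejecting the whole response keeps one bad clip from sinking the other six.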
Timestamp deep-linking
Each clip includes a direct YouTube URL with a ?t= parameter pointing to the exact second. This means you can click through to the original video at the precise moment, making it easy to verify and use the clip without seeking.
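Building that link is straightforward. YouTube's watch URLs accept a `t` parameter in seconds, so a helper along these lines (names illustrative) covers it:

```javascript
// Build a deep link that starts playback at the clip's first second.
function clipUrl(videoId, startSeconds) {
  return `https://www.youtube.com/watch?v=${videoId}&t=${Math.floor(startSeconds)}s`;
}
```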
Platform-aware recommendations
Beyond finding clips, Claude recommends which clip is best for each platform (TikTok, LinkedIn, Twitter) based on content type, duration, and tone. A technical insight at 3 minutes is a LinkedIn post; a funny reaction at 45 seconds is a TikTok.
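In the real app this judgment call is Claude's, made in the prompt; but the rule of thumb it encodes can be sketched as a plain heuristic (entirely illustrative, not the production logic):

```javascript
// Rough heuristic matching clip type and length to a platform,
// mirroring the guidance baked into the prompt.
function suggestPlatform(clip) {
  const duration = clip.endSeconds - clip.startSeconds;
  if (clip.type === "funny" && duration <= 60) return "tiktok";
  if ((clip.type === "insight" || clip.type === "actionable") && duration > 120) return "linkedin";
  if (clip.type === "quotable") return "twitter";
  return duration <= 60 ? "tiktok" : "linkedin";
}
```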
04 Key Insight
The quality of clip detection lives entirely in the prompt. The first version asked for 'interesting moments' and got mediocre results. Adding the concept of 'would someone stop scrolling for this?' and specifying the six clip types (insight, funny, quotable, surprising, emotional, actionable) dramatically improved the outputs — Claude needs the vocabulary to think with.
05 Why It Matters
A proof-of-concept for AI-assisted video editing workflows. The same pattern — transcript → structured AI analysis → actionable outputs — applies to podcast summarization, meeting highlight detection, and course content extraction. Built directly in OpusClip's problem space.