
Building 15 Apps in 30 Days: What I Learned About AI-Native Development

A month-long sprint to build a portfolio from scratch. What I shipped, what I scrapped, and what the process taught me about the future of software development.

shipping · ai · development · portfolio

January 17th. I made a decision: build a credible software portfolio from zero, in one month. I shipped 15 projects. Here's what actually happened.

The setup

I had a problem. I was trying to transition into AI engineering roles, and I had a strong resume — AI ops experience, published research, genuine technical understanding — but nothing I could point to and say "I built that." No GitHub green squares. No deployed apps. No portfolio.

The conventional advice is to build one really polished app over several months. I made a different bet: ship fast, ship often, build the habit. My goal was 10 apps in 30 days. I ended up with 15.

What I shipped

The range was intentional. I wanted to cover different problem spaces, different tech approaches, different levels of complexity:

A workout tracker with camera-based rep counting using MediaPipe. A finance app that imports real bank CSVs and builds a spending picture. A snow forecast app aggregating mountain weather for ski planning. A real estate explorer for NZ property data. A durability analyser based on my own published cycling research. An NZ adventure planner with real-time DOC track conditions and weather scoring. A collab whiteboard. A flight scanner. A currency converter. A code explainer. Job and training load trackers.

Each one does something real. Not just a CRUD demo with fake data — these connect to actual APIs, parse real files, handle real edge cases.
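The rep counter is a good example of "something real" being simpler than it sounds: once a pose estimator like MediaPipe gives you a joint angle per frame, counting reps reduces to a small state machine with hysteresis. A minimal sketch of that idea — the thresholds and function name here are illustrative, not the app's actual code:

```python
def count_reps(angles, down_thresh=70.0, up_thresh=160.0):
    """Count reps from a per-frame joint-angle series (degrees).

    One rep is a full cycle: the joint closes past `down_thresh`,
    then opens back past `up_thresh`. The gap between the two
    thresholds acts as hysteresis, so jittery frames hovering
    around a single cutoff can't double-count.
    """
    reps = 0
    stage = "up"  # assume the movement starts extended
    for angle in angles:
        if stage == "up" and angle < down_thresh:
            stage = "down"      # reached the bottom of the rep
        elif stage == "down" and angle > up_thresh:
            stage = "up"        # back to full extension: rep complete
            reps += 1
    return reps

# Two clean curls, with noisy mid-range frames in between.
frames = [170, 150, 90, 60, 55, 100, 165, 172, 80, 50, 120, 168]
print(count_reps(frames))  # → 2
```

The real edge cases — partial reps, occluded landmarks, which landmarks to compute the angle from — are where the actual work lives, but the core loop stays this small.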

What AI-native development actually means

"Vibe coding" is a real thing and a misleading framing at the same time. Yes, I use Cursor and Claude to write most of the implementation code. No, that doesn't mean I'm just pressing buttons and watching apps appear.

The bottleneck isn't typing. It never was. The bottleneck is knowing what to build. Understanding the data model before you write the schema. Knowing why server components matter before you architect the page. Spotting that the component needs a key prop because you understand React reconciliation, not because you memorised the ESLint rule.

AI dramatically accelerates execution. But execution without understanding produces bugs that compound. I spent the first week hitting walls I couldn't debug — not because I couldn't code, but because I didn't understand what the code was doing. The apps that shipped cleanly were the ones where I understood the domain first.

Speed comes from clarity. Clarity comes from understanding. AI is a multiplier, not a substitute.

The meta-lesson: thin vertical slices

The highest-ROI thing I learned: never build horizontally. Don't build the full data model, then all the API routes, then all the UI. Build one feature end-to-end, deployed, working for real users. Then the next.

With AI tools, it's tempting to scaffold everything at once because you can generate it so quickly. That's a trap. You end up with a perfectly structured codebase for an app that doesn't actually work yet.

My rhythm became: ship the MVP in one sitting (3-4 hours), get it deployed, then iterate. That constraint forced every decision to be about what matters most right now. It also meant every project had a deployed URL from day one, which changes how you think about it.

What I scrapped

Three projects didn't make the cut. A mood tracker that felt too generic. A recipe generator that was technically fine but solved nothing interesting. A habit tracker I abandoned halfway through when I realised I was building what every tutorial builds.

The scraps taught as much as the ships. If I couldn't articulate what problem it solved in one sentence, the project wasn't ready to build.

The honest accounting

I have a large language model running on a VPS helping me work. Matua — my AI assistant — handles overnight work, runs cron jobs, scouts for useful tools. This is not cheating. This is the new baseline. Every serious builder is figuring out how to use AI agents effectively. I'm running mine.

The code I'm most proud of is the code I understand well enough to change. The features I'm most proud of are the ones that solve real problems for real people — including myself.

What comes next

The portfolio sprint was always a means to an end. The end is: be a meaningfully better builder than I was 30 days ago, and have the receipts to show it. I hit both.

The next phase is depth over breadth. Some of these apps deserve to be real products. The workout tracker has 3 months of my own data in it now. The finance app processes my actual transactions. The durability analyser is based on research I co-authored. These aren't toy projects — they're tools I use.

That's the best signal I know of that something is worth building.