0 to 1: Frontend Architecture for a Stealth AI Startup
Two ML engineers, a designer, and a Jupyter notebook. Ten weeks to a live product.
Client name withheld under NDA. Industry, scope, and results are accurate.
The Challenge
The founding team had a working AI model and a clear product vision, but nothing on the frontend side. Their demo was a Jupyter notebook. Investors were interested, but "open this notebook and run the cells" wasn't going to close a seed round.
The constraints were real:
- Ten weeks. Non-negotiable. There was an investor milestone that would make or break the raise.
- The UI was interaction-heavy. Real-time AI outputs meant streaming patterns, optimistic updates, and careful loading-state management — not a static marketing page.
- The team had zero frontend experience. Two ML engineers and a designer. Whatever I built needed to be simple enough for them to maintain and extend after I left.
- Budget was tight. Infrastructure costs had to stay near zero until post-raise.
They needed someone who could make fast, opinionated decisions and ship without asking for permission at every step.
The Approach
Week 1 was about decisions, not code. I evaluated three candidate stacks and wrote a one-page decision doc for each, laying out the trade-offs. The final picks:
- Next.js App Router — SSR for marketing pages, client-side rendering for the app. One framework instead of two.
- Tailwind + shadcn/ui — Rapid UI without a custom design system. The designer could tweak styles without learning a component API.
- TanStack Query — Server-state management with built-in caching, retries, and streaming support. The kind of library that handles the boring stuff correctly so you don't have to think about it.
- Zustand — Client state for auth and UI preferences. Nothing more.
- Vercel — Zero-config deploys, preview URLs on every PR, generous free tier. The right call for a pre-seed team that shouldn't be managing infrastructure.
Every tool was chosen with one question: will ML engineers who've never touched React be able to work with this after I'm gone?
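To make "Zustand for auth and UI preferences, nothing more" concrete, here is a sketch of the shape such a store takes. This is a dependency-free stand-in for Zustand's `create` pattern rather than code from the project, and the state fields (`user`, `theme`) are illustrative:

```typescript
// Dependency-free stand-in for a Zustand-style store, holding only the
// two kinds of client state the app needed. Field names are illustrative.
type Listener = () => void;

interface AppState {
  user: { id: string; email: string } | null; // auth
  theme: "light" | "dark";                    // UI preference
}

function createStore(initial: AppState) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    getState: () => state,
    // Shallow-merge a partial update, then notify subscribers.
    setState: (partial: Partial<AppState>) => {
      state = { ...state, ...partial };
      listeners.forEach((l) => l());
    },
    // Returns an unsubscribe function.
    subscribe: (l: Listener) => {
      listeners.add(l);
      return () => listeners.delete(l);
    },
  };
}

const store = createStore({ user: null, theme: "light" });
```

In the real app this is a `create()` call from `zustand` plus a hook; the point is that the client store stays this small, because everything server-derived lives in TanStack Query's cache instead.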
Weeks 2-3 were foundation. Before any feature code, I set up the project skeleton: folder structure, path aliases, ESLint + Prettier, commit hooks, and CI. I also wrote an ARCHITECTURE.md documenting every convention — not for me, but for the founders who'd be maintaining this alone in a few weeks.
Weeks 4-8 were vertical slices. Three surfaces: a prompt workspace with streaming output, a history view, and settings/billing. Each slice shipped end-to-end — UI, API integration, error handling, loading states — before moving to the next. The streaming AI responses used server-sent events (SSE) through a custom useStream hook that managed connection lifecycle, buffering, and reconnection.
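The buffering part of that hook is the subtle bit: SSE frames arrive as arbitrary network chunks, and a chunk can end mid-event. Below is a sketch of the kind of parsing core a `useStream`-style hook would wrap — not the project's actual code, just the standard technique (per the SSE wire format, events are terminated by a blank line and `data:` fields are joined with newlines):

```typescript
// Sketch of the buffering core of a useStream-style hook. Incoming chunks
// are accumulated in a carry buffer; complete events (terminated by a
// blank line) are parsed out and their data payloads returned.
class SSEBuffer {
  private buffer = "";

  // Feed one raw chunk; returns the payloads of any completed events.
  push(chunk: string): string[] {
    this.buffer += chunk;
    const events: string[] = [];
    let sep: number;
    while ((sep = this.buffer.indexOf("\n\n")) !== -1) {
      const raw = this.buffer.slice(0, sep);
      this.buffer = this.buffer.slice(sep + 2); // keep the remainder
      const data = raw
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).replace(/^ /, "")) // strip one leading space
        .join("\n"); // multi-line data fields join with "\n" per the SSE format
      if (data) events.push(data);
    }
    return events;
  }
}
```

In the hook itself, this sits inside a read loop over the response stream, with the connection-lifecycle and reconnection logic wrapped around it.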
Weeks 9-10 were hardening. Error boundaries on every route segment, Sentry for runtime errors, SEO basics, rate-limiting on API routes, and a launch checklist covering security headers and environment hygiene.
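The rate limiting mentioned above can be sketched as a token bucket keyed per client. This is an illustrative minimal version, not the project's implementation — and since it keeps state in process memory, a real Vercel deployment would back it with a shared store (e.g. Redis), because each serverless instance has its own memory:

```typescript
// Minimal in-memory token-bucket rate limiter, keyed per client.
// Sketch only: production serverless use needs a shared backing store.
interface Bucket {
  tokens: number;
  last: number; // timestamp of last refill, in ms
}

const buckets = new Map<string, Bucket>();

function allowRequest(
  key: string,        // e.g. client IP or user id
  limit = 10,         // bucket capacity (burst size)
  refillPerSec = 1,   // sustained requests per second
  now = Date.now(),
): boolean {
  const b = buckets.get(key) ?? { tokens: limit, last: now };
  // Refill proportionally to elapsed time, capped at capacity.
  b.tokens = Math.min(limit, b.tokens + ((now - b.last) / 1000) * refillPerSec);
  b.last = now;
  const allowed = b.tokens >= 1;
  if (allowed) b.tokens -= 1;
  buckets.set(key, b);
  return allowed;
}
```

In an App Router route handler this runs first, returning a 429 response when `allowRequest` is false.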
The product launched on schedule. Zero incidents on launch day.
Tech Stack
Next.js (App Router), TypeScript, Tailwind CSS, shadcn/ui, TanStack Query, Zustand, Sentry, Vercel.
The Results
Empty repo to live product in exactly ten weeks. The startup closed its seed round shortly after.
- Lighthouse Performance: 94 at launch, with LCP under 1.8 s and CLS at 0. This mattered because investor demos were happening on the live product — first impressions counted.
- Commit-to-deploy: under 15 minutes — type-check, lint, test, build, and Vercel deployment. Multiple safe deploys per day from day one.
- Zero production incidents in the first 60 days.
The real validation came after handoff. The two ML engineers shipped their first frontend PRs within one week of me leaving. Six months later, the team hadn't needed to hire a frontend specialist — the conventions and documentation were enough. The useStream hook I wrote got reused for two more AI features without any architectural changes.