The Architect
12 min read

From Zero to Production: 115 hours of directing the slop

AI · Building in Public · Open Source · Startup · Report · Vibe coding · Mistakes
TL;DR:

What $750 actually bought vs. up to €110k in 2019. Mistakes, advice, and a formula to measure your vibe-coding efficiency.


The platform shipped. The mission's complete.

- The Founder

Hi there!

That quote is from GARRET. I have thoughts on it.

We're still investigating this with Viktor - he recently shipped a deep dive into the claude-cognitive tool.

Everything started from From Zero to Production: One Month, $750, and an AI Co-Founder.

You don't need to read 25 minutes of fiction to follow this post. Just know these pieces exist - they're context, not prerequisites.


It's not only a blog

I'm not advertising our posts. I'm showing you what kind of platform we're building.

In three weeks we published: a fiction piece about AI context management, an investigation into token efficiency, and a stats report on the $750 build. They all reference each other. Already forming a web.

Now look at what's missing:

December 2025 usage chart

We need something like Obsidian Graph. The connections exist - we just can't visualize them yet.

My point here: It's a platform that grows and evolves. We're documenting challenges as they appear - and how a human resolves them with AI assistance.


The topic

What we actually got for $750 - and whether it was worth it.

We'll cover:

  • Why it cost that much (mistakes included)
  • Why it was actually cheap (2019 would've been €24,000–€110,000)
  • The Vibe Efficiency Ratio - putting math behind the vibes
  • 98.4% acceptance rate that's actually 33.9% useful output
  • And yes: is it slop?

Enjoy.


Why so expensive?

We're architecting something bigger than it looks right now. For me, this is a foundation for at least the next 18 months. In that context, $750 is a drop in the ocean.

But speaking technically - there were a lot of mistakes.

I've been architecting features for 10 years, applications for about 4 years (at a serious level), and I've been using Claude Code for only half a year. All the documentation is public - you can observe what the tokens were burning on and when. That's part of the sbozh.me project's purpose.

And yes, keeping everything public makes it more expensive to deliver.

Despite all the justifications I could make, here's what I actually learned about using Claude Code:


Advice (Claude Code specific)

/compact over /clear

After completely switching tasks, use /compact rather than /clear. Maybe obvious to some - but I doubt everything I read. Had to test it myself. /compact is much cheaper.

Specs first

Planning key iterations before executing - that's what kept it under $1000. Without specs, no way. But I don't think there's one "right" way to plan. Everyone's built different.

There is no one correct way to use Claude Code: we intentionally build it in a way that you can use it, customize it, and hack it however you like. Each person on the Claude Code team uses it very differently.

- Boris Cherny (creator of Claude Code)

Trust, then verify

I trust Claude Code as a developer. Most of the time I act as QA - reviewing for hallucinations first, then checking the technical side. Feels cheaper. I didn't set up an experiment to confirm.

When I spot problems, I pay attention to what's actually happening in the code and point Claude in the right direction. I stay Architect - I don't rewrite, I redirect.

Example: Claude built a star animation using React state. Worked, but lagged. I noticed the issue, suggested native DOM instead - Claude implemented it. Claude also documented why it mattered. I reviewed.
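The pattern behind that fix can be sketched. This is a reconstruction, not the actual project code: the `Star` type, `step` function, and the commented React usage are my assumptions about what "React state vs. native DOM" looked like here. The idea is to keep per-frame positions in a mutable array and write transforms directly to DOM nodes, instead of calling setState (and re-rendering) sixty times a second.

```typescript
// Hypothetical sketch of the star animation, assuming falling stars
// drawn as absolutely-positioned elements.

type Star = { x: number; y: number; vy: number };

// Pure per-frame update: advance each star, wrap it back to the top
// when it falls past the bottom of the viewport.
export function step(stars: Star[], dt: number, height: number): void {
  for (const s of stars) {
    s.y += s.vy * dt;
    if (s.y > height) s.y -= height;
  }
}

// In a React component, this would run inside requestAnimationFrame,
// mutating element styles via refs - no state updates, no re-renders:
//
//   const refs = useRef<HTMLDivElement[]>([]);
//   useEffect(() => {
//     let id: number;
//     const tick = () => {
//       step(stars, 1 / 60, window.innerHeight);
//       stars.forEach((s, i) => {
//         refs.current[i].style.transform = `translate(${s.x}px, ${s.y}px)`;
//       });
//       id = requestAnimationFrame(tick);
//     };
//     id = requestAnimationFrame(tick);
//     return () => cancelAnimationFrame(id);
//   }, []);
```

The design point is the split: the frame logic stays a pure, testable function, while the DOM writes bypass React's render cycle entirely.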

Be product focused

I'm not reviewing every piece, only key moments. What's important - that's what AI can't define for you. No one can. Only your own product vision will help you focus on what matters.

Code will never be perfect, decisions will never be perfect. I've been working 10 years in the IT industry, hired by 15 different companies. I learned deeply from 15 different codebases. They're all chaotic. Most of them are held together by "bus factor risk".

The only difference - you can't blame AI if you made a mistake and now your DB is public because of an obvious breach. Never happens to humans, right?

AI is Magic. Understand it.

Clickbait. It is magic, but mathematical. Important to remember this throughout any AI-augmented development session.

This is a pattern recognition machine. Nothing more, nothing less. At least for now.

Today's brightest minds pooled the world's knowledge and delivered it to us. Now the author of this website - a self-taught "engineer" - is trying to advocate for the product. Honestly, I have very little understanding of how it actually works.

I tried to understand transformer architecture and the math behind it. Didn't get it. But what I do get: it's math, not an entity speaking to you. It just feels like magic - real results from sources I can't fully grasp.

If you want to understand LLMs better and don't know where to start - my suggestion: Transformers, the tech behind LLMs


The 98.4% that's actually 33.9%

We have a ~98.4% acceptance rate. Doesn't mean I'm agreeing with everything.

At that point we had:

  • 9,906 lines of code
  • 7,530 lines of markdown (public planning docs)
  • ~20,000 lines total (including package.json and misc)

So the actual useful acceptance rate is 33.9%. The other 66.1% is development margin - the cost of iteration, mistakes, and learning.


$287 in 48 hours

The deployment sprint. Every error, bug, and misalignment across the whole project surfaces at once. Context management becomes critical.

We're currently investigating this with Viktor. (Yes, that Viktor.)

What went wrong:

  • /compact? Skipped.
  • Planning? None.
  • Staged reviews? Didn't happen.
  • Deployed monitoring that failed on the first attempt - for a launch with zero traffic, where monitoring wasn't even needed yet.

The pattern: pushing forward without stepping back. Treating AI as something that will "just ship it" if you keep pressing enter. That's not how it works.

$287 in 48 hours. Lesson noted.


Why so cheap?

59k lines. 20k if you strip the noise. Here's what that actually delivers:

A complete GDPR-compliant, self-hosted solution. Blog. CV builder. Project layouts. Ready for extension - documented every step by design since day 1. Self-hosted analytics, error tracking, CMS. Deploy in ~15 minutes. 90% test coverage.

$750.

Let's decompose.


WordPress exists

Yes. I know WordPress exists.

Right now this website is just a blog. We're writing a blog about building a blog. I get it.

My take: would it be this interesting if I just installed WordPress? I won't claim it's interesting even without WordPress - but I hope the point lands.

This platform won't stay just a blog. We started there, and the project is only one month past release. The main point: this code is AI-generated. AI works better with its own abstractions - same as any developer understands their own code faster.

I personally think everyone's noticed AI struggles with legacy projects. Hallucinations on every corner. This project has an 18-month roadmap. Fresh start.


Back to 2019

April 2019. The first black hole image just dropped. You decide to build this platform to the state it reached December 29th, 2025.

You would need:

  • Designer
  • Backend Engineer
  • Frontend Engineer
  • Product Manager
  • Project Manager
  • DevOps

Six roles. Coordination overhead. Context switching. Meetings.

AI writes ridiculous numbers. Try the prompt yourself:

Write price estimations for a complete GDPR-compliant self-hosted solution. Blog, CV builder, project layouts - ready for extension, documented from day 1. Self-hosted analytics, error tracking, CMS. ~15 minute deployment. 90% test coverage. Three tiers: solo (one person, all roles), small team (one dev + managers), full team (every role separate).

My results:

Tier        Team structure               Cost                  Timeline
Solo        1 person (all roles)         €24,000 – €28,000     10–14 weeks
Small Team  1 dev + PM + part-time QA    €48,000 – €55,000     8–10 weeks
Full Team   9 specialized roles          €95,000 – €110,000    6–8 weeks

For America: multiply by 2, change € to $. For Asia: multiply by 0.5, change € to $.


The price in 2026

Who would need this platform built by hand in 2026?

I don't know. But I'm ready to prove the point: Vue 3, no AI, publicly streamed. 500 hours. €50,000.

Makes no sense. You can try finding it cheaper.

Fair price today: $750.

That's the frightening truth.


The Vibe Efficiency Ratio

Putting math behind the vibes.

I spent time thinking about this. How do you measure AI-assisted development efficiency? I wanted a metric that's universal and simple - usable by anyone who actually documents their features.

I don't know if it's useful yet - I have exactly one month of data. But if you're tracking your own numbers and want to compare, share your results in our private Discord. We need data. We need discussion. That's how this formula becomes useful.

Here's what I came up with.


The Formula

VER = (F × 100,000) / (C × H)
Variable   What it means
F          Features shipped (minor releases)
C          AI cost in USD
H          Development hours

Interpretation

VER        Rating
< 5        Struggling
5–15       Building
15–30      Shipping
30–50      Optimized
50–100     Exceptional
> 100      Verify methodology

December 2025

F = 14 features
C = $750
H = 115 hours

VER = (14 × 100,000) / (750 × 115) = 16.2

Rating: Shipping.

Foundation month. Infrastructure, design system, CI/CD, blog, CV builder. Not optimized — but shipping.

Reaching "Optimized" would've been hard - I wasn't thinking about optimization at all. I was building

But indeed - it shipped. I think this formula works for most vibe-coded projects.

Monthly Digest

Every month, The Founder will publish a digest: what shipped, what it cost, VER result.

The formula details - methodology, validation flags, edge cases - will live in a separate post soon. This is just the introduction.

Track along. Compare your own numbers.


Is it Slop?

My honest answer: No. One rule I follow: criticism without solutions is just noise.

So let me check. Open any file in the repo. It's clear what each file does. I do this exercise often - never hit a dead end. Sure, there's already some tech debt. That's normal for any project.

Not slop. Developers used to write "optimized" code for maintainability - so they could jump into the code and apply changes quickly. Once your project has tests, that concern largely disappears.

As for performance - most of the time you only need to review and double-check what's happening. Or even write some parts on your own...

Industry is changing. Stop recording music on vinyl while CDs are everywhere. If you work at NASA or in healthcare - write code by hand. If you're building your own platform, or a product without life-or-death stakes - use agentic coding.

Technical take

Code is absolutely maintainable. Remove all the AI tooling and I can continue this project. Not easily, but it feels like joining a new company. I've done that 15 times. I know what I'm talking about.

Philosophical take

You don't like it? Improve it. Until music exists, vinyl will exist too. Less popular but still needed. Same story with coding now. CDs were easier to use, but now we have Spotify - the essence of what most people actually need from music. AI research is moving in the same direction. At least that's what I've convinced myself of.

If you don't know how to improve it - give me feedback on anything at sbozh.me. That's how we make AI workflows better.


Conclusion

These 115 hours were something I would repeat with no doubt. And spoilers - I will. In this post, I even created a metric to measure my results better.

Architecting an app with AI assistance is pure fun. Exhausting - but fun. All tradeoffs are worth it. There's only one big concern I have - where it leads. But that's material for one of the next posts.

This journey will be incredible.

I hope I got you sbozhed and now you'll start documenting your project the same way I do. Metrics are key to calculating your success - or measuring your failure.

Let's reach our goal wisely and architect it end-to-end.

- The Architect 🦆
