Very easy now.

Setup used to be the expensive part of building software. Now it's cheap, and the real questions come faster.

What used to be hard is now very easy. So easy that I sometimes forget what the old cost felt like.

I started an experiment called LoopLad with one simple loop:

  • Start from zero
  • Pick one goal
  • Analyze performance
  • Update instructions and docs
  • Run again every 2 hours

The goal was intentionally narrow: keep an autonomous art project shipping itself on a fixed cadence.

I wrote the idea as a GitHub issue while I was at the gym. That night, a basic version was already running. I connected Vercel, bought a domain, fixed a few tokens, cleaned up the schedule, and moved on.

Total effort over two days was maybe 30 minutes of direct attention.

That sentence still sounds fake to me, but it's true.

Not because the idea was trivial. Because the setup tax collapsed.

Why this used to be hard

LoopLad looks simple from far away, but it's multiple systems stitched together:

  • Instruction files for behavior and decision criteria
  • Workflow files for scheduling and automation
  • Repository rules for how updates are proposed and merged
  • Deployment wiring so every accepted change is live
  • Basic measurement so the loop has something to react to

Before AI, this kind of setup meant hours of docs, trial-and-error YAML, auth mistakes, timing bugs, and small integration failures across tools that don't naturally cooperate.

None of it is impossible. It's just expensive.

What changed

AI collapsed the cost of "known but tedious" work:

  • First draft workflow files
  • Scaffolding and glue code
  • Refactors across multiple files
  • Test generation for predictable logic
  • Clear handoff notes for the next pass

This is exactly the work I talked about in Get to the hard problems fast: execution got cheaper, but judgment didn't.

The hard part is still hard:

  • Is this experiment producing real insight or just noise?
  • Is the loop measuring something meaningful?
  • Are the updates improving reliability or just creating activity?
  • Should this keep running, be redesigned, or be shut down?

AI didn't answer those questions for me. It just got me to them in days instead of weeks.

That time compression matters more than convenience. It changes the economics of experimentation. More ideas survive first contact with reality because the price of a serious attempt is now low.

The "easy" trap

There's one danger in this new world: confusing speed with truth.

When setup is cheap, you can ship a lot of nonsense very quickly. You can generate motion, commits, and dashboards that look alive while you learn nothing.

So I now use a simple rule:

Every automation loop must have a human-readable success condition.

For LoopLad, that condition is operational, not commercial. The loop keeps running safely. Pull requests merge cleanly, deployments go live, and the cadence holds without babysitting.

No reliable loop, no celebration.
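That operational condition is simple enough to encode as a trivial check. A sketch, with hypothetical field names standing in for whatever the real loop records per run:

```python
from dataclasses import dataclass

@dataclass
class LoopRun:
    pr_merged_cleanly: bool  # the proposed update merged without conflicts
    deployed: bool           # the accepted change went live
    on_schedule: bool        # the run fired at its cadence, no babysitting

def loop_is_healthy(runs):
    """Human-readable success condition: every recent run merged cleanly,
    deployed, and held the cadence without intervention."""
    return bool(runs) and all(
        r.pr_merged_cleanly and r.deployed and r.on_schedule for r in runs
    )
```

The point of writing it down, in code or in prose, is that a human can read the condition and say yes or no without squinting at a dashboard.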

A practical way to use this

The point isn't that every idea needs an automation loop. The point is that almost any idea can become real fast now.

If you want to use this shift well, keep your first pass brutally small:

  1. Write one sentence for the idea.
  2. Define the smallest usable version.
  3. Build that version in hours or days, not weeks.
  4. Put it in front of real people, or use it yourself for a week.
  5. Decide with evidence: expand, revise, or stop.

Planning and design still matter. They just no longer need to block first contact with reality.

Don't start with architecture. Start with a usable thing.

LoopLad isn't a mature product. It might never be one. It's a small experiment I probably wouldn't have built in the old setup economy. But the overhead is now small enough that ideas move from notes to working software quickly. Not mockups. Not planning docs. Real things you can run and evaluate.

That's the real change. The cost of trying serious ideas has collapsed. The bottleneck isn't "can I build it?" It's taste, focus, and honesty about what's working.