Chapter 1: The Origin Story
Seven years ago, I worked for the city.
Government job. Union contract. Benefits, pension, the whole deal. The kind of job your parents tell you to be grateful for. And I was — for about six months.
Then I started counting.
Seven hours. That was my workday. Not eight. Seven. The union had negotiated it down, and nobody questioned it. Show up at 8, leave at 3, take an hour for lunch. Six hours of actual availability, maybe four hours of real work on a busy day. Most days, less.
I'd finish my tasks by noon and spend the afternoon looking busy. Alt-tabbing between spreadsheets when someone walked by. Taking long walks to the printer. Having conversations about conversations we'd already had.
The work wasn't hard. The work wasn't even bad. The work just... wasn't enough.
Here's what nobody tells you about government work: the ceiling isn't your salary. The ceiling is your output. There's a maximum amount of impact you're allowed to have, and it's enforced not by policy but by culture. Moving fast makes people uncomfortable. Suggesting improvements gets you labeled as "not a team player." Finishing early means you get assigned to help someone who's behind — which teaches you to never finish early.
I watched smart, capable people slow themselves down to match the pace of the institution. And the institution's pace was glacial.
Not because the people were lazy. Because the system was designed for stability, not speed. For consistency, not innovation. For process, not outcome.
And I realized: I was building someone else's thing, at someone else's pace, inside someone else's constraints.
Those aren't constraints that make you better. Those are constraints that make you smaller.
I didn't quit dramatically. No manifesto. No middle finger on the way out. I just started building on the side.
First it was freelance work. Then small products. Then slightly less small products. Each one taught me something the government job never could: what happens when you remove the artificial speed limit.
Turns out, one person moving fast can outproduce a team of ten moving at institutional speed. Not because that person is ten times smarter. Because they don't have nine other people to coordinate with.
No standup meetings. No sprint planning. No "let's circle back on this." No design committee. No approval chains. No consensus-building exercises disguised as collaboration.
Just: see the problem, build the solution, ship it.
Fast forward to today.
I run multiple companies. Different industries, different products, different revenue models. I have zero employees. Zero contractors. My entire "engineering team" costs less than a thousand dollars a month.
That's not a typo. Less than a thousand dollars a month.
The companies aren't side projects. They're real businesses with real users, real revenue, and real infrastructure. One is a SaaS analytics platform. One is a comparison site in a niche financial vertical. One is an AI-powered site builder for local businesses. I have others in various stages.
Each one would traditionally require a team. A developer or three. A designer. Someone handling customer support. Someone doing marketing. Someone managing the others.
Instead, there's me. And the machines.
This book isn't a manifesto about how everyone should fire their team and go solo. It's not a polemic against hiring. Some companies need people. Some problems require human collaboration at scale.
But most don't. Most companies hire because that's what companies do. Because "we need to grow the team" feels like progress even when it's not. Because founders mistake headcount for capability.
What I've learned — what I'm still learning — is that the constraints you choose define the company you build. And choosing to stay small, to stay solo, to replace headcount with technology... that's not a limitation.
It's an advantage.
The biggest advantage I've ever had.
This is the story of how I got here. Not a framework. Not a formula. Just the honest, sometimes messy reality of running multiple companies with zero employees, less than a thousand dollars a month in AI costs, and a deep conviction that the way we've been building companies is about to change forever.
It already has. Most people just haven't noticed yet.
Chapter 2: The $0 Engineering Team
Let me tell you about my engineering team.
One of them monitors my production servers 24 hours a day. If something breaks at 3 AM, it diagnoses the issue, writes a fix, deploys it, verifies it's working, and leaves me a summary. I wake up to a message that says "fixed" with a timestamp.
Another one handles customer support emails. It reads every incoming message, triages urgency, drafts responses, and sends the routine ones automatically. The ones that need my judgment get flagged with context so I can respond in thirty seconds instead of ten minutes.
Another manages my social media content. It generates posts from my product data, formats them for each platform, and schedules them across accounts. Different voice for each brand. Different strategy for each channel.
Another writes code. Not autocomplete suggestions — actual features. It reads the codebase, understands the architecture, writes the implementation, runs the tests, and commits the changes. I review and merge.
None of them take vacations. None of them need health insurance. None of them have opinions about the office thermostat.
They're AI agents. And they cost me less than a thousand dollars a month combined.
I need to be precise here, because the term "AI agent" has been abused to the point of meaninglessness. Half of Silicon Valley is slapping the word "agent" on what is essentially a chatbot with a to-do list.
An AI agent, the way I use them, is an autonomous system that:
1. Receives a goal or trigger
2. Decides what actions to take
3. Takes those actions (reading files, running commands, calling APIs, sending messages)
4. Evaluates the result
5. Iterates until the goal is met or it hits a boundary
That last part matters. Boundaries. My agents have hard limits on what they can do autonomously and what requires my approval. They can read anything, fix anything, build anything — inside the workspace. They cannot send emails to clients, publish content, or spend money without my sign-off.
This isn't about trust. It's about architecture. You design the system so that autonomous action handles the 90% that's routine, and human judgment handles the 10% that's consequential.
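The five steps and the approval boundary can be sketched in a few lines of Python. This is illustrative only, not my actual stack: `scripted_model` stands in for a real language model, and the tool and action names are invented.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    args: dict

# Guardrail: the consequential actions that always wait for a human.
APPROVAL_REQUIRED = {"send_email", "publish", "spend_money"}

def run_agent(goal, model, tools, max_steps=20):
    history = []                       # per-run memory: what was tried, what happened
    for _ in range(max_steps):
        action = model(goal, history)  # decide the next action from goal + history
        if action.name == "done":
            return "done", history
        if action.name in APPROVAL_REQUIRED:
            return "needs_approval", history        # boundary: stop and wait
        result = tools[action.name](**action.args)  # take the action
        history.append((action.name, result))       # evaluate, remember, iterate
    return "step_limit", history       # hit the boundary without finishing

# Scripted stand-in "model": read the logs, then try to email the client.
def scripted_model(goal, history):
    if not history:
        return Action("read_logs", {"path": "/var/log/app.log"})
    return Action("send_email", {"to": "client"})

tools = {"read_logs": lambda path: f"read {path}"}
status, history = run_agent("diagnose the outage", scripted_model, tools)
print(status)  # needs_approval — the email action trips the boundary
```

The routine 90% (reading logs) runs unattended; the consequential 10% (contacting a client) stops the loop and waits for sign-off.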
Here's what the org chart looks like for a traditional startup doing what I do:
CEO/Founder — that's me in both scenarios
CTO — $180K salary + equity
2 Senior Engineers — $320K combined
1 Junior Engineer — $90K
Designer — $120K
DevOps — $140K
Customer Support (2) — $100K combined
Marketing — $110K
Operations/Admin — $85K
Total: roughly $1.15 million in salary alone. Add benefits (20%), office space, equipment, software licenses, recruiting, onboarding, management overhead... you're looking at $1.8-2.3 million per year before the company earns a single dollar.
My version:
Me — same as above
AI Agents — <$1,000/month
Infrastructure — ~$200/month (servers, databases, CDN)
Total: roughly $15,000 per year.
That's not a rounding error. That's a 99.3% reduction in operating cost.
"But the AI can't do everything a real team does."
Correct. And I'll be honest about that in Chapter 8. There are things AI agents genuinely cannot do yet, and pretending otherwise is how you end up with a broken product and no customers.
But here's what most people get wrong: they compare the AI to the *ideal* version of a human team. The team where everyone's an A-player, communication is frictionless, nobody quits, nobody has a bad quarter, and the sprint velocity never drops.
That team doesn't exist.
Real teams have coordination costs. Real teams have the person who's quiet-quitting but hasn't told anyone yet. Real teams have the dependency chain where Alice is blocked on Bob who's blocked on Carol who's on PTO. Real teams have the standup meeting that takes 45 minutes because Dave wants to relitigate yesterday's architecture decision.
AI agents don't have any of that. They just work. Consistently. Every day. At whatever hour you need them.
Is the output always perfect? No. But it's consistent, it's fast, and it's available. Three things that are surprisingly hard to get from a human team of any size.
The shift in my thinking happened gradually, then all at once.
First, I used AI to write boilerplate code. Saved me an hour here and there.
Then I used it to generate first drafts of features. Saved me a day.
Then I connected it to my codebase, my servers, my databases, my email, my calendar. Gave it tools. Gave it context. Gave it *agency.*
And suddenly I wasn't using AI to help me do my job. I was using AI to *do* jobs I would have hired people for.
The first time my agent handled a production issue while I was asleep — actually detected it, diagnosed it, fixed it, deployed the fix, and verified it — I stared at the log the next morning for a full minute.
Not because it was surprising that AI could do it. Because it was surprising how *normal* it felt. How quickly "my agent handled it" became a sentence I said without thinking.
The economics are almost unfair.
A senior engineer costs $160K+ per year and can focus on one project at a time. They need context-switching time. They need code review from peers. They need to attend meetings. Realistic productive output: maybe 5-6 hours of actual coding per day, on a good day.
An AI agent costs pennies per task and can work on multiple projects simultaneously. It doesn't context-switch — it just runs another instance. It doesn't need code review from peers — it *is* the reviewer if you want it to be. It doesn't attend meetings.
Realistic productive output: as many hours as you have tasks. Twenty-four hours a day, seven days a week.
I'm not saying an AI agent is as *good* as a senior engineer at every task. For novel architecture decisions, creative problem-solving, and judgment calls in ambiguous situations — a great human engineer wins every time.
But for the other 80% of engineering work? The CRUD endpoints, the bug fixes, the test coverage, the documentation, the deployment scripts, the monitoring setup, the dependency updates, the CSS tweaks, the data migrations?
The agent is faster, cheaper, and more consistent. And it doesn't give two weeks' notice right before your product launch.
This chapter isn't about replacing people. It's about questioning the assumption that people are the default answer to every business problem.
For seven years, I've been running the experiment. Zero employees. AI agents handling everything from code to customer support to content.
The experiment is working. The companies are growing. The costs are absurd — absurdly low.
And I'm just getting started.
Chapter 3: Big Teams Are a Bug
There's a famous equation in project management that nobody wants to talk about.
The number of communication channels in a team is n(n-1)/2, where n is the number of people.
- 2 people: 1 channel
- 5 people: 10 channels
- 10 people: 45 channels
- 20 people: 190 channels
- 50 people: 1,225 channels
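The formula is one line of code; a small helper just to make the table above reproducible:

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 20, 50):
    print(n, channels(n))  # 1, 10, 45, 190, 1225
```

The growth is quadratic: doubling the team roughly quadruples the channels.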
This isn't theory. This is physics. Every person you add to a team creates new communication pathways that must be maintained, synchronized, and managed. Every pathway is a potential source of misunderstanding, delay, and conflict.
A 10-person startup has 45 communication channels. Not Slack channels — *relationship channels.* Forty-five pairs of people who need to stay aligned on priorities, context, and decisions.
You know what has zero communication channels? One person with AI agents.
Zero.
I've worked on teams of every size. Government teams. Startup teams. Agency teams. Enterprise teams.
The pattern is always the same:
Phase 1 (2-3 people): Everything is fast. Decisions happen in minutes. Everyone knows everything. Shipping is effortless.
Phase 2 (5-8 people): Meetings start appearing. Someone creates a "communication protocol." There's a weekly sync that nobody thinks is necessary but everyone attends. Shipping slows down, but you blame it on "growing complexity."
Phase 3 (10-15 people): You now have managers. The people who were doing the work are now managing the people who are doing the work. You have a project manager whose job is to coordinate between teams that didn't exist six months ago. Shipping requires approval from three people, one of whom is on vacation.
Phase 4 (20+ people): You have meetings about meetings. Someone's job title is "Director of Engineering" and they haven't written code in a year. There's an "alignment session" every quarter that costs $50,000 in combined salary-hours and produces a document nobody reads. You've hired a recruiter to help you hire more people to do the work that three people used to do.
Each phase feels necessary from the inside. "We need more people because we're growing." But the growth itself is causing the need for more people. It's circular.
You're not scaling. You're inflating.
Fred Brooks wrote about this in 1975. *The Mythical Man-Month.* His core insight: adding people to a late project makes it later. The communication overhead of each new person outweighs their productive contribution.
That was fifty years ago. The book is still relevant because the fundamental problem hasn't changed. Human communication is expensive. Coordination is expensive. Alignment is expensive.
What *has* changed is that we now have an alternative.
AI agents don't need to communicate with each other through meetings. They share state through databases and files. They don't need to "align on priorities" because they have one priority: whatever you told them to do. They don't have opinions about the roadmap. They don't politic for resources. They don't form factions.
When I need to build a feature, I don't schedule a meeting to discuss the feature, then a sprint planning session to size the feature, then assign the feature to an engineer who needs two days of context before starting, then a code review meeting when they're done, then a QA cycle, then a deployment approval.
I describe the feature. The agent builds it. I review it. It ships.
Start to finish: hours, not weeks.
"But what about complex problems that need multiple perspectives?"
Good question. And it reveals a real advantage of human teams — diverse viewpoints catch blind spots.
But I'd argue that most of the "multiple perspectives" in team settings are *redundant* perspectives. Eight engineers in a room debating whether to use PostgreSQL or MongoDB aren't bringing eight unique worldviews. They're having the same argument that's been had ten thousand times, and the decision will be made based on whoever is most stubborn or most senior, not on the merits.
The actually valuable perspectives — "this approach has a security flaw," "this won't scale past 10K users," "the customer doesn't actually want this" — don't require eight people. They require *one* person who's thought carefully about it.
I get my "multiple perspectives" from AI. I ask it to argue against my approach. I ask it to find flaws. I ask it to consider edge cases I'm missing. I get 90% of the value of a team brainstorm in 5% of the time, with 0% of the politics.
Here's the math that made me a believer.
In 2023, Basecamp — the company that literally wrote the book on small teams — had about 75 employees running a business with millions of users.

In 2014, WhatsApp was acquired for $19 billion with about 55 employees.
Instagram had 13 employees when Facebook bought it for $1 billion.
These are extreme examples, but they prove the point: headcount and output are not correlated the way people assume.
The companies that change industries are almost always small teams that stayed small longer than anyone thought possible. They achieved this by being disciplined about what work actually needs to happen versus what work feels productive.
Most work in most companies is coordination work. It's work about work. Meetings about what to build, documents about what was decided, updates about what's in progress, reviews of what was done.
Take all that away and what's left? The actual building.
That's what a solo operator with AI agents has. Just the building. No overhead. No drag. No organizational theater.
I'm not naive about this. There are companies that genuinely need large teams. If you're building a physical product, you need manufacturing. If you're in healthcare, you need licensed professionals. If you're SpaceX, you need rocket engineers and you can't automate the welding (yet).
But for software companies? Digital products? Content businesses? Professional services?
The number of people you actually *need* is almost certainly smaller than the number of people you *have.* And for a surprising number of these businesses, the right number might be one.
One person who's clear about what they're building and why.
One person with the right tools.
One person who treats headcount as a last resort, not a first instinct.
Big teams aren't a feature. They're a bug. A bug we've been shipping for decades because we didn't have a better option.
Now we do.
Chapter 4: Constraints I Choose
At some point the narrative shifted.
For the first couple years of running solo, I described my situation apologetically. "It's just me right now." "I'm bootstrapped so I don't have a team yet." "I'm planning to hire when the revenue is there." All of it was true. None of it was the real story.
The real story is that I stopped wanting to hire before I was able to hire.
That's the shift nobody talks about. Not "I can't afford a team" but "I've thought about it carefully and a team would make me slower, more expensive, and harder to maneuver." The constraint stopped being external. I started choosing it.
There's a concept in architecture called constraint-driven design. The best bridges aren't built despite their constraints — they're built *because* of them. The span you need to cross, the load you need to carry, the materials you have available — these constraints don't limit the design. They *generate* the design. Remove the constraints and you don't get a better bridge. You get a blob of indecision.
I've watched dozens of founders celebrate funding rounds. They post on LinkedIn about the "exciting chapter ahead" and the team they're about to build. Two years later, half of them are managing the team instead of building the product. They're having conversations about culture and process and organizational structure. The thing they were building got slower.
The funding removed their constraints. And without constraints, they built a company in the shape of every other company.
My constraints are:
No employees. Ever.
No office. If I'm paying rent for a space that exists to house people, something went wrong.
No venture funding. I own everything or I own nothing.
No meetings. Every decision that requires a meeting is a decision that should be made differently.
AI cost cap. Less than $1,000 per month, across all companies. If I'm spending more, I'm either building the wrong thing or using AI inefficiently.
Some of these I never broke. Some I've tested and come back to. The point isn't that these are the right constraints for everyone. The point is that having explicit constraints — ones you chose, that you can articulate and defend — forces rigor that growth and abundance don't.
"Less than $1,000 a month in AI" is the constraint that sharpens everything else.
When you have a budget, you allocate. When you allocate, you prioritize. When you prioritize, you understand your actual business better.
A company with unlimited AI spend becomes sloppy. Spinning up agents for every half-baked idea. Running expensive models on tasks that a cheap model handles just fine. Using AI to feel busy rather than to accomplish something specific.
A company with a tight AI budget asks: *what does this machine actually need to do?* And then it designs the machine precisely. No fat. No waste. Every dollar pointed at something that produces value.
My monthly AI bill is predictable because I know exactly what each agent does, why it exists, and what it would cost to not have it. Most founders can't tell you what their team is doing in that level of detail. I can tell you what my AI is doing task by task.
The counter-argument I hear most: "You're leaving money on the table by not scaling."
Maybe. Probably. I'm sure there are opportunities I've missed because I didn't have three salespeople or a growth team or a VP of partnerships.
But here's what I also know: companies that scale too fast die of complexity more often than they die of missed opportunity. The startup graveyard is full of companies that raised too much, hired too many, and collapsed under the weight of their own overhead before the product was ready.
I'd rather be a small, profitable, low-overhead company that's still here in ten years than a well-funded, 50-person operation that ran out of runway because the costs scaled faster than the revenue.
Constraints aren't a consolation prize for not raising money. They're a survival mechanism. They're a decision-making framework. They're the thing that keeps you from building a company that needs a Director of Operations to manage the person who schedules the meeting that updates the spreadsheet tracking the progress of the thing you actually care about.
There's one more constraint I haven't mentioned yet, because it's the hardest one to talk about: ego.
Founders get their identity wrapped up in headcount. "We're a 12-person team" feels like an accomplishment in a way that "it's just me" doesn't. There's status in the org chart. There's something that feels real about having a team.
I had to let that go.
Not the ambition. Not the drive. Not the desire to build something real. Just the idea that the size of the team is evidence of the quality of the work.
It isn't. It never was. The best software companies in the world — the ones that actually changed how people live — were almost all built by embarrassingly small teams. The org chart is the thing you build *after* you've built something worth having.
Choose your constraints before the world chooses them for you. Because the ones you choose will make you sharper. The ones the world chooses will just make you smaller.
Chapter 5: The Uncomfortable Truth About Hiring
Most startups hire to solve problems they haven't diagnosed.
The product isn't growing fast enough. So they hire a growth person. The code is buggy. So they hire another engineer. The customers keep complaining. So they hire someone for support. The revenue isn't where it should be. So they hire a salesperson.
None of these are wrong, exactly. But they're all second-order responses to first-order problems.
Why isn't the product growing? Usually not because there aren't enough people working on growth. Usually because the product isn't good enough yet, or the market fit is wrong, or the positioning is off. Hiring a growth person to spray traffic at a leaky bucket just increases the water bill.
Why is the code buggy? Usually not because there aren't enough engineers. Usually because the architecture was rushed, or the testing culture is poor, or the senior engineer who understood the tradeoffs left six months ago and took the context with them. Adding another engineer without fixing those things just means more bugs written faster.
Hiring is how startups avoid doing the hard thinking.
I'm going to say something that will make some people uncomfortable: most early-stage startups hire for emotional reasons.
Not financial reasons. Emotional ones.
There's the *loneliness hire* — the founder who's been working solo for a year and needs someone to talk to, so they hire a "co-founder" or a "head of product" who's really just a companion.
There's the *credibility hire* — the startup that brings on a notable name to put in the pitch deck, even though that person will never do meaningful work.
There's the *hope hire* — the sales leader who's going to "unlock the enterprise market" based on a resume that describes a very different company in a very different stage.
There's the *anxiety hire* — the founder who's terrified of a specific risk, so they hire someone to manage that risk, even though what they actually needed was to fix the thing that created the risk.
None of these are malicious. They're human. Building a company is terrifying and exhausting and isolating, and hiring feels like progress. The headcount goes up, the bank balance goes down, and for a few weeks, the founder feels like they're doing something.
Then the new hire needs onboarding. And context. And management. And suddenly the founder is spending twenty hours a week supporting the people they hired to take work off their plate.
The uncomfortable truth is this: hiring is often an admission that you haven't figured out the problem yet.
When you're forced to solve problems with fewer people — because you can't afford to hire, or because you've decided not to hire, or because you're running the experiment I've been running for seven years — you solve them differently.
You automate. You simplify. You eliminate.
You ask: does this task actually need to happen? And if it does, does a human need to do it? And if a human needs to do it, does it need to happen now?
Most tasks fail at least one of these tests. Most of the work that companies hire people to do is work the company invented for itself.
Customer support tickets that wouldn't exist if the product were clearer. Meeting notes that exist because the decisions aren't being made clearly. Status updates that exist because nobody trusts the system to surface what needs attention. Process documentation that exists because the processes are too complicated for anyone to remember.
Fix the product, fix the decisions, fix the system, simplify the processes — and the headcount you thought you needed never materializes.
I have a rule: before I consider whether a person could help, I ask whether better tooling or a better process would solve the same problem.
Nine times out of ten, it would.
The tenth time, it's worth asking: is this actually a problem I need to solve, or is it a symptom of a larger problem I'm not seeing?
I will say this clearly: there are companies where hiring is the right answer. If you're building a product that requires human judgment at scale — consulting, therapy, sales of complex products, physical services — you need people. AI can assist, but it can't be the product.
But if you're building software, digital content, or data products? The assumption that growth requires proportional headcount is a legacy belief from a world where that was true. It was true when the tools were worse. It was true when automation required custom engineering. It was true before AI agents.
It's not true anymore.
Here's the test I use before any hire — real or hypothetical.
The $200K Question: If I had this person's salary to spend on tooling, automation, or product improvement instead, what could I build? Would that $200K in product improvement generate more value than the person would?
Most of the time, yes.
Not because people aren't valuable. Because the leverage of good tooling compounds in ways that salary doesn't. The $200K system runs while you sleep, doesn't take vacations, doesn't need a performance review, and can be replicated without a recruiting process.
The startup ecosystem has convinced founders that the path to success runs through headcount. Investors ask "what's your hiring plan?" as though more people is an obvious sign of ambition. Press covers "so-and-so startup raises $X, plans to double team" as though doubling the team is the thing worth writing about.
It's not. The thing worth writing about is whether you're solving the problem.
I've seen $5M teams lose to $200K one-person companies because the one person was moving faster, staying leaner, and wasn't burning 40% of their runway on salaries before the product had proven itself.
You can always hire later. You can't easily un-hire. You can't easily unwind the culture, the processes, the communication overhead, the management layers that accumulate once you start building a team.
Stay small longer than feels comfortable. The leverage you're protecting is more valuable than the help you think you need.
Most early hires are a hedge against discomfort, not a genuine unlock for the business. Learn to sit with the discomfort instead. That's where the real problem-solving happens.
Chapter 6: What AI Agents Actually Are
The word "agent" has been stretched so far it's nearly meaningless.
Your bank's chatbot that says "I'm sorry, I didn't understand that. Did you mean: check balance?" — not an agent. The auto-responder on a support ticket that says "We've received your message and will respond in 24 hours" — not an agent. The autocomplete that finishes your sentences in Gmail — definitely not an agent.
But marketing teams love the word. Every product with a language model bolted on is suddenly an "AI agent" and every company is breathlessly explaining their "agentic AI strategy." The result is that when you actually want to describe what a real AI agent does, you have to spend five minutes clearing out the noise first.
So here's the clear version.
An AI agent is a system that takes goal-directed autonomous action using real tools in the real world.
Not "answers questions." Takes action.
Not "generates text." Does things.
When my agent monitors my servers and a problem appears, it's not composing a summary of what's wrong. It's SSHing into the server, reading the logs, running diagnostics, writing a patch, testing it, deploying it, and verifying the fix. Then it sends me a summary.
The difference between a chatbot and an agent is the same as the difference between a consultant and an employee. The consultant gives advice. The employee does the thing. Most "AI agents" on the market today are consultants. Mine are employees.
Let me break down what an actual agent needs to function:
A model. The language model that understands the goal, reasons about how to achieve it, and decides what to do next. This is the brain. Different models have different capabilities and costs — I'll cover the specific stack in Chapter 11.
Memory. The agent needs to know things: the current state of the task, facts about the environment, what it's already tried. This can be a file it can read and write, a database, or conversation history. Without memory, every agent call starts from zero. With memory, it learns the landscape over time.
Tools. The ability to take action. Read a file. Write a file. Run a command. Call an API. Send an email. Make a payment. Query a database. The more tools an agent has, the more it can do. Tools are the agent's hands.
A runtime. The loop that runs the agent: give it the goal, let it think, watch it take action, evaluate the result, let it think again, take another action. The runtime is what makes it *autonomous* rather than a one-shot query.
Guardrails. What it's not allowed to do. This is the one most people forget to design carefully, and it's the one that matters most. A powerful agent without good guardrails is a liability. A powerful agent with thoughtful guardrails is a force multiplier.
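Those five pieces can be written down as one structure. A minimal sketch, assuming nothing about any real agent framework; every name here is illustrative:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentSpec:
    model: Callable                               # the brain: (goal, memory) -> action
    memory: dict = field(default_factory=dict)    # state that survives between runs
    tools: dict = field(default_factory=dict)     # name -> function: the hands
    max_steps: int = 20                           # runtime boundary on the loop
    requires_approval: set = field(default_factory=set)  # guardrails

# A hypothetical monitoring agent, wired with a trivial stand-in model.
spec = AgentSpec(
    model=lambda goal, mem: ("done", {}),
    tools={"read_file": open},
    requires_approval={"send_email", "spend_money"},
)
print("send_email" in spec.requires_approval)  # True
```

Notice that the guardrails are part of the spec, not an afterthought bolted on later — which is the design point the paragraph above is making.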
The question I get most often: "How do you trust it?"
The honest answer: carefully, incrementally, with verification.
I didn't wake up one morning and hand an AI agent the keys to my production databases, my email accounts, and my Stripe account. I built trust the same way you'd build trust with a new employee — gradually, with oversight, starting with low-stakes tasks.
The first agent I ran had one job: read my emails and summarize the important ones. No sending, no deleting, no replying. Just read and report. I could verify every output. I could catch every error. Over weeks, I built confidence in how it handled different types of content.
Then I added a tool: draft a reply. Still no sending. I'd read the draft, fix anything wrong, and send it myself. The agent learned my style through corrections. I learned its failure modes through observation.
Then I let it send routine replies autonomously, with a rule: flag anything that involves money, commitments, or anything I haven't explicitly pre-approved.
That progression — observe, assist, automate — is how you build a trustworthy agent stack. Not by giving it everything at once and hoping for the best.
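The observe, assist, automate progression can be made explicit in code rather than left as habit. A sketch, with tier names and flag rules of my own invention:

```python
# Sketch of the observe -> assist -> automate trust progression as
# explicit permission tiers. The tier names and the flag conditions
# (money, commitments) are illustrative, not a real product's API.

OBSERVE, ASSIST, AUTOMATE = 0, 1, 2

def handle_email(tier, involves_money=False, makes_commitment=False):
    """Return what the agent may do with an incoming email at each tier."""
    if tier == OBSERVE:
        return "summarize_only"      # read and report; never act
    if tier == ASSIST:
        return "draft_for_review"    # human reviews and sends every draft
    # AUTOMATE tier: send routine replies, but flag the exceptions.
    if involves_money or makes_commitment:
        return "flag_for_human"
    return "send"
```

Promoting the agent is then a one-line change of tier, and the exceptions stay enforced in code rather than remembered.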
There are three patterns I use for most of my agents:
The Monitor. Watches something continuously and alerts or acts when a condition is met. My server monitoring agent is a monitor. My email triage agent is a monitor. The trigger is an external event; the agent decides what that event means and what to do about it.
The Worker. Given a goal, works until it's done. My coding agent is a worker. I describe a feature; it builds the feature. It might take 20 tool calls to get there — reading files, writing code, running tests, fixing failures — but it runs autonomously until the job is complete.
The Scheduler. Runs on a clock. My content generation agent is a scheduler. Every morning at 7 AM, it checks what needs to be posted today, generates the content, and queues it. No trigger from me. No initiation required.
Most real-world use cases are combinations of these patterns. A monitor that detects a condition, triggers a worker to handle it, and then the worker reports back on a schedule. You build these like Lego.
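The Lego analogy holds at the code level. A sketch of the three patterns as plain functions, with placeholder conditions and goals:

```python
# Sketch of the three agent patterns as composable functions.
# The condition/goal/clock details are placeholders, not a real API.

def monitor(check_condition, on_trigger):
    """Monitor: watch for a condition, hand off when it fires."""
    event = check_condition()
    if event is not None:
        return on_trigger(event)

def worker(goal, step, is_done, max_steps=20):
    """Worker: given a goal, run steps until the job is complete."""
    state = {"goal": goal, "log": []}
    for _ in range(max_steps):
        if is_done(state):
            return state
        state["log"].append(step(state))
    return state

def scheduler(due_now, task):
    """Scheduler: run on a clock, no human trigger required."""
    if due_now():
        return task()
```

A monitor-triggers-worker combination is then just `monitor(detect, lambda event: worker(...))` — each pattern stays small because composition does the rest.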
The thing that surprises people most when they see this in action isn't the sophistication. It's the mundanity.
AI agents doing real business work mostly look boring. Log in, read the data, write the report, send the message. It's not dramatic. There's no moment where the AI reveals some insight that changes everything. It's just... the work, getting done, while you're somewhere else.
That mundanity is the point. The 70% of business operations that are routine, repetitive, and rule-following — that's what agents are for. Not the inspiring 30% that requires real judgment. The boring 70% that eats your calendar, drains your energy, and keeps you from doing the work that actually matters.
Free up the 70%, and the 30% becomes your whole job. That's what I've been building toward for seven years. Not AI that's smarter than me. AI that handles the parts I shouldn't be spending my time on anyway.
Chapter 7: The Real Cost Math
Let's do the math that most startup founders never sit down and actually do.
Not the revenue math. The cost math. Specifically: what does a person actually cost versus what they produce versus what an agent could produce instead?
This isn't a rhetorical exercise. These are real numbers from real businesses, and I want you to run them against your own situation.
The True Cost of a 10-Person Startup
Most founders think in salary. "We have a $1.2M payroll." But salary is maybe 65% of the actual cost of an employee. Here's what you're actually paying:
Salaries (illustrative 10-person team):
- 1 CEO/Founder: $130K (below market, you're "committed")
- 1 CTO: $180K
- 2 Senior Engineers: $160K each = $320K
- 1 Junior Engineer: $90K
- 1 Designer: $120K
- 1 DevOps/Infra: $140K
- 2 Customer Support: $55K each = $110K
- 1 Marketing: $110K
Subtotal, salary: $1,200,000
Benefits (US employer, typically 20-30% of salary):
- Health insurance: ~$12,000/year per employee = $120,000
- Payroll taxes (FICA, FUTA, SUTA): ~8% = $96,000
- 401(k) match (3%): $36,000
- Workers comp, unemployment: ~$15,000
Subtotal, benefits: ~$267,000
Equipment and tools:
- Laptops ($2,500 each × 10, replaced every 3 years): $8,333/year
- Software licenses (Figma, GitHub, Linear, Slack, Notion, etc.): $30,000/year
- Equipment for remote workers (monitors, peripherals): $3,000
Subtotal, equipment/tools: ~$41,000
Real estate (NYC/SF, even partial):
- Desks in a co-working space: $800/desk/month x 10 = $96,000/year
- Or: leased office space at $50/sqft x 2,000 sqft = $100,000/year
Subtotal, space: ~$96,000-100,000
HR and recruiting:
- Recruiting fees (typically 15-20% of first-year salary per hire): if you hired half your team this year, that's $90,000-120,000
- HR software (Gusto, Rippling, etc.): $6,000/year
- Employer of record services, legal, compliance: $10,000/year
Subtotal, HR: ~$116,000
Management overhead (often invisible):
- Every manager spends 30-50% of their time on people management rather than individual contribution
- Your CTO is spending close to half the week managing instead of building
- That's ~$85,000/year of a $180K salary going to coordination, not output
Subtotal, management overhead: ~$85,000+
Total annual cost of a 10-person startup team:
| Category | Annual Cost |
|---|---|
| Salaries | $1,200,000 |
| Benefits | $267,000 |
| Equipment & tools | $41,000 |
| Office space | $96,000 |
| HR & recruiting | $116,000 |
| Management overhead | $85,000 |
| Total | $1,805,000 |
Round up for the inevitable surprises, and you're at $1.8-2.3 million per year.
Before you've written a line of code. Before you've acquired a customer. Before you've made a single dollar.
That is the cost you are betting against when you build a team.
The AI Alternative
Here's my actual monthly bill across all companies:
Language model API costs: ~$400/month
- Primary model (Claude): $280/month
- Secondary/cheaper models for high-volume tasks: $90/month
- Specialized models (image, audio): $30/month
Infrastructure for agents (servers, not counting product infra): ~$150/month
- Agent runtime server: $50/month
- Databases agents use: $40/month
- Storage, networking: $60/month
Agent platform and tooling: ~$200/month
- OpenClaw runtime: $79/month
- Workflow orchestration: $49/month
- Monitoring and logging: $40/month
- Misc. SaaS (no longer need per-seat pricing): $32/month
Total AI operating cost: ~$750/month, or $9,000/year.
This isn't aspirational. This is what I actually spend across multiple operating companies.
The difference: $1,805,000 vs $9,000.
The AI alternative costs 0.5% of the team alternative. One half of one percent.
"But you're not doing as much as a 10-person team."
This is where it gets interesting.
Let me describe what my agents handle in a typical week:
- 3-5 production deploys across 4 companies
- ~50 customer support emails triaged and responded to
- 30+ social media posts published across multiple brands
- Weekly analytics reports generated and distributed
- Database backups, monitoring, alerting
- Code reviews and automated refactors
- Invoice tracking and payment follow-up
- Content calendar management
- SEO performance monitoring
The equivalent human cost for this workload in the US market is roughly $400,000-600,000/year in salary for the roles that touch each of these areas.
I'm doing it for $9,000/year in AI costs plus my own time in the 30% that genuinely needs me.
There's one more number I want you to internalize.
Breakeven math:
If my AI costs $9,000/year and a comparable team costs $1.8 million/year, I need to produce just *0.5%* of what that team produces to break even on cost.
In other words: I could be radically less productive than a 10-person team and still be more cost-efficient. Which means as long as I'm competitive on output — not identical, just competitive — I'm winning on unit economics by a factor most businesses can't match.
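The breakeven arithmetic is worth spelling out, using the totals from the table above:

```python
# The breakeven math, spelled out with the numbers from this chapter.
team_cost = 1_805_000   # annual fully-loaded cost of the 10-person team
ai_cost = 9_000         # annual AI operating cost

# Fraction of the team's output the solo operator must match
# to break even on cost.
breakeven_fraction = ai_cost / team_cost
print(f"{breakeven_fraction:.2%}")  # about half of one percent
```

Any output beyond that fraction is pure unit-economics advantage.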
None of this means a 10-person team isn't better at certain things. A well-run team with great people can do things I genuinely can't do alone, even with agents. Speed at true scale. Human relationships with enterprise clients. Physical presence. Deep domain expertise across many specialties simultaneously.
But for the majority of digital businesses at the seed to Series A stage — where you're still figuring out what you're building and why — the cost comparison is devastating. You're paying $1.8 million per year to run an experiment that a solo operator with agents can run for $9,000.
That's not a rounding error. That's a structural advantage.
Use it.
Chapter 8: Things AI Can't Do (Yet)
I want to earn your trust by being honest about the limits.
Anyone who tells you AI agents can do everything is either selling you something or hasn't actually tried to run a business with them. There are real gaps. Significant ones. And pretending they don't exist is how you make decisions you regret.
Here's what AI genuinely can't do well — yet.
1. Build Real Relationships
Sales is the most important skill in business, and the most irreplaceable one.
I don't mean "send emails and book calls." I mean the actual human art of understanding what someone needs, why they need it, what their politics are, what they're afraid of, what they're trying to prove — and building trust across months and years.
AI can draft the cold email. It can research the prospect. It can generate objection-handling scripts and follow-up sequences. But it cannot be in the room when the deal is being made. It cannot feel the hesitation in someone's voice and know to stop talking. It cannot build the kind of relationship where a client calls you first because they trust you more than anyone else they know.
For B2C products at scale, this matters less. For enterprise sales, for high-touch services, for anything where the relationship is the product — you still need humans.
2. Make Novel Strategic Decisions
AI is extraordinarily good at reasoning within established frameworks. It's surprisingly bad at deciding that the framework is wrong.
"Given these assumptions, what's the best pricing strategy?" — AI handles this well. It can synthesize competitive data, consider positioning, model revenue scenarios.
"Are we building the right product for the right market?" — AI struggles here. Because answering that question requires reconsidering the assumptions that everything else is built on, and AI systems are biased toward optimizing within existing frames rather than challenging them.
The decisions that change the direction of a company — pivot, kill a product line, enter a new market, bet on a technology that looks insane today — require genuine strategic creativity. The ability to look at the evidence and believe something different from what the evidence seems to say. That's still a human capability.
Use AI for analysis. Make the consequential decisions yourself.
3. Exercise Taste
Taste is the judgment that something is good *before* you have data to prove it.
Good product taste. Design taste. Writing taste. The sense that this copy is off even though you can't explain exactly why. The intuition that this feature will resonate even though the market research doesn't support it.
AI can produce work that meets specifications. It can generate competent design, readable copy, functional product flows. It can learn your preferences through feedback and get better over time.
But taste — the thing that separates good from memorable — is still human. The founders who build products people love aren't just good at execution. They have strong opinions about what the right experience feels like. That opinion can't yet be fully delegated.
Where this gets complicated: most people overestimate their taste. They think they have strong intuitions when they have weak preferences. If you're honest with yourself about which category you're in, you'll know whether to trust your taste or lean more on data.
4. Navigate Truly Ambiguous Judgment Calls
When the situation is genuinely unprecedented and the stakes are high, AI becomes less reliable.
"Should I accept this acquisition offer?" No. That involves your life goals, your risk tolerance, your read on the acquiring team's culture, your attachment to the thing you built, your financial situation, your sense of what you still want to prove. AI can model scenarios. It cannot make the call.
"Is this the right time to fire my only contractor?" Again — involves relationships, timing, reputational considerations, your gut on whether they can turn it around, what message it sends to the market. AI can list pros and cons. It cannot weight them for your specific situation.
The closer a decision is to "irreversible, high stakes, uniquely human" — the more you want a human making it, with AI as a research assistant, not the decision-maker.
5. Do Novel Physical or Sensory Work
Obvious, but worth stating. AI agents can't taste your product, experience your retail store, hear the tone of a live customer conversation, or assess whether the hardware prototype feels right in the hand.
If your business has a physical component that requires sensory judgment, you need humans for that component. AI can help with everything around it.
Where This Leaves You
The honest picture: AI agents today handle roughly 70% of what a small business needs done, at dramatically lower cost and higher consistency than humans.
The remaining 30% — strategic judgment, relationship-building, taste, irreversible decisions, physical presence — still needs you.
That's not a limitation. That's a feature.
The work that's left is the interesting work. The stuff worth doing. The part that requires what makes you specifically good at this.
What AI has done is eliminate the 70% that was eating your time and convincing you that you were busy when you were actually just occupied. Now you can spend your capacity on the 30% that only you can do.
That's not a trade-off. That's a gift.
One more thing: the 70/30 split is moving.
Every six months, the 70 gets bigger. Things AI couldn't do a year ago, it does routinely today. The capability gap on relationship-building is narrowing. On strategic reasoning, it's improving. On taste, there are cases where AI judgment is now indistinguishable from expert human judgment on specific tasks.
I'm not saying AI will replace everything. I don't know that. Nobody does.
What I do know is that the capabilities are increasing faster than most people are adjusting for. And the founders who build habits and systems that can absorb those improvements — rather than treating AI as a fixed tool — will compound their advantage over the next five years in ways their peers won't be able to keep up with.
Build for a moving target. Assume the constraints of today won't be the constraints of tomorrow.
Chapter 9: The 3 AM Deploy
I have never been on-call.
That's a strange thing to say when you're the only person running your company's infrastructure. Traditional logic says the opposite should be true — if it's all on you, then *you're* always on call. If the server goes down at 3 AM, you're the one who wakes up.
That's how it was for the first few years. My phone was always on. Every Slack alert was a potential 2 AM situation. I'd wake up, open the laptop, squint at the logs. Sometimes it was nothing. Sometimes it was real. Either way, sleep was optional in a way that slowly grinds you down.
Then I changed how I built the system. And the 3 AM calls stopped.
Not because the incidents stopped. Because someone else started handling them.
The first time it happened, I almost didn't believe the log.
I woke up to a message from my monitoring agent. The timestamp was 3:17 AM. The message was:
*"Production database hit connection pool limit at 02:58 UTC. Identified root cause: batch job in analytics service running uncapped queries during peak traffic window. Killed the batch job, connection pool cleared. Rescheduled batch for 05:00 UTC. Stack trace and fix committed to branch fix/analytics-batch-throttle. Recommend merging and deploying when you're awake. Services healthy, no user-facing impact."*
The incident started, got diagnosed, got fixed, and got documented while I was asleep. I read about it over coffee.
I reviewed the fix. It was correct. I merged and deployed it. The whole review took four minutes.
For the next year, I kept a document of every production incident that got handled autonomously while I wasn't working. It became one of the most compelling records I have of why this approach works.
What makes autonomous incident response possible isn't magic. It's access.
The agent has access to:
- Server logs (SSH, cloud provider APIs)
- Application metrics (error rates, latency, request volume)
- Database connections (read access for diagnosis, write access within defined parameters)
- The codebase (to understand what's happening and write fixes)
- Deployment pipelines (to deploy changes when they meet safety criteria)
- Communication channels (to notify me and write summaries)
With those tools and a well-defined playbook, most production incidents follow predictable patterns. Connection pool exceeded. Memory leak from runaway process. Failed third-party API causing cascading timeouts. Configuration drift. Database query without an index.
An experienced engineer handles these in their sleep — sometimes literally. The agent does the same thing, except it actually never sleeps, never gets frustrated, and documents every step.
There's a mental shift that happens when you stop being the person who handles the 3 AM problem.
You stop designing systems with the implicit assumption that you'll catch problems manually. You start designing systems where detection and response are built in from the start. You invest more in observability because the agent needs instrumentation to diagnose problems you won't be around to see. You invest more in runbooks because the agent needs clear procedures for situations outside its judgment.
In other words, you build better systems because you've created a forcing function to define "better" precisely.
When you're going to be the one responding to incidents, you tolerate ambiguous alerting because you'll figure it out when it happens. When an agent is responding, you need alerts to be specific. You need metrics to be meaningful. You need the system to surface the right information at the right time.
The discipline required to delegate incident response to an agent makes the underlying system more robust. The tail wags the dog in the best possible way.
Not everything gets handled automatically. I have clear criteria for what the agent can resolve independently and what requires me.
Agent handles autonomously:
- Restarting crashed services
- Clearing connection pools
- Killing runaway processes
- Scaling resources within pre-approved limits
- Deploying pre-approved hotfix types (config changes, environment variables)
- Routing around failed dependencies with fallback logic
Agent wakes me up:
- Data loss risk (anything involving deletion or corruption)
- Security incidents
- Anything that would require customer communication
- Novel failures outside known patterns
- Any action that would cost more than $X or change billing
Agent only notifies, no action taken:
- External service outages I can't control
- Metrics outside normal range but not critical
- Anomalies that need my judgment
The boundary matters. An agent with too much autonomy in an incident is dangerous — it might make a decision that makes the situation worse before I can intervene. An agent with too little autonomy isn't useful for the thing I actually need: to not be woken up.
Calibrate the boundary based on your risk tolerance and your trust in the system. Move it gradually as confidence builds.
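That boundary works best when it's a policy table in code, not a judgment the agent makes in the moment. A sketch — the incident categories and the cost threshold are illustrative, not my production values:

```python
# Sketch of the incident-response boundary as an explicit policy table.
# Categories and the cost threshold are illustrative placeholders.

AUTONOMOUS = {"service_crash", "connection_pool_full", "runaway_process",
              "scale_within_limits", "preapproved_hotfix", "failed_dependency"}
WAKE_HUMAN = {"data_loss_risk", "security_incident",
              "customer_communication_needed", "novel_failure"}
NOTIFY_ONLY = {"external_outage", "metric_out_of_range", "anomaly"}

COST_LIMIT_USD = 100  # hypothetical: anything pricier wakes a human

def route_incident(kind, estimated_cost_usd=0):
    """Decide whether the agent acts, wakes a human, or just notifies."""
    if kind in WAKE_HUMAN or estimated_cost_usd > COST_LIMIT_USD:
        return "wake_human"
    if kind in AUTONOMOUS:
        return "act_autonomously"
    if kind in NOTIFY_ONLY:
        return "notify_only"
    return "wake_human"  # unknown pattern: default to the safe side
```

Note the last line: anything outside the known patterns defaults to waking a human, which is what keeps an ambitious agent from improvising during a novel failure.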
The 3 AM deploy is a metaphor for something larger.
Every business has its 3 AM deploys. The customer complaint that comes in on a Sunday. The invoice that needs follow-up when you're traveling. The social media mention that requires a response during a family dinner. The competitive move that needs analysis on a holiday.
Business doesn't respect your schedule. It operates in all directions simultaneously, at any hour, demanding attention.
A solo operator without agents is constantly pulled toward the reactive. You manage the crisis, then the next crisis, then the next. The work that builds long-term value — the strategic thinking, the product decisions, the relationship building — gets squeezed into the margins.
A solo operator with agents handles the reactive automatically. The agents manage the incidents, the emails, the monitoring, the routine. You get to work in the margins — which is actually where the value is built.
I sleep better now. Not because my companies have fewer problems. Because I built a system where most problems don't need me to solve them.
That's the goal: not to be indispensable to the routine. To be indispensable to the decisions that actually matter.
Chapter 10: How I'd Start a Company in 2026
If I were starting from scratch today, I'd do almost nothing the same as I would have in 2019.
Not because the fundamentals of business have changed. They haven't. You still need a problem worth solving, a customer willing to pay, and a way to reach them. Those don't change.
Everything around those fundamentals has changed. The tools, the cost structure, the speed at which you can test and learn, the realistic scope of what one person can operate — all of it is different. And the playbook that made sense in 2019 leaves significant leverage on the table in 2026.
Here's what I'd do.
Month 0: Define the Constraint First
Before writing a line of code or talking to a customer, I'd write down my operating constraints.
Not my goals. My constraints.
- Maximum monthly burn (what can I run this for indefinitely?)
- Maximum team size (mine is zero, yours might be one or two)
- Maximum time I'm willing to commit before assessing whether to continue
- Categories of work I'm not willing to do myself
The constraints come first because they define the architecture. A company designed to run on $2,000/month looks radically different from one designed to run on $50,000/month. Both can be profitable. But you can't reverse-engineer the constraints from the company — you have to build the company from the constraints.
Month 0-1: Validate Without Building
The biggest mistake first-time founders make is building before they've confirmed someone will pay.
In 2026, validation costs almost nothing:
1. Write the sales page first. Describe the product as if it exists. Force yourself to articulate the specific value for a specific person. If you can't write a compelling page, the product isn't ready — not because it isn't built, but because you don't understand it yet.
2. Run $100 in paid traffic to it. Not to your waitlist. To a buy button. A "pay now, get access in 30 days" page. See if anyone clicks.
3. Have 10 conversations. Not surveys. Conversations. Find the specific type of person you're building for and ask them to describe their current solution and its frustrations. Don't pitch. Listen.
If you can't get strangers to click a buy button and can't find ten people with the problem you're solving, the idea needs to change before any code gets written.
AI can help with almost all of this. It can draft the sales page in twenty minutes. It can research who has this problem and where they hang out. It can generate interview question frameworks. It can synthesize the notes from your ten conversations.
Month 1-3: Build the Smallest Possible Version
Once you have signal — people who clicked, or people who said "I would pay for that" with enough enthusiasm that you believe them — build.
But build the smallest thing that delivers the core value. Not the full vision. Not the thing you'd be proud to show your former colleagues. The thing that solves the specific problem for the specific person who told you they'd pay.
In 2026, a solo founder with a good AI coding agent can build a SaaS product in two to four weeks that would have taken three engineers three months in 2020. Not because AI has replaced engineering judgment — it hasn't — but because AI has eliminated the mechanical parts of implementation.
The mechanical parts used to eat 60% of engineering time. CRUD endpoints, authentication, payment integration, deployment configuration, testing boilerplate, database migrations. An AI agent handles all of this. You focus on the 40% that requires genuine design decisions.
Ship in weeks, not quarters.
Month 2-4: Sell Before You Optimize
The instinct after shipping is to improve the product. Add features, polish the UI, fix the annoyances.
The correct move is to sell.
Sales solves every other problem. If you're selling, you learn what's actually wrong with the product (from real customers who paid money). If you're selling, you have revenue to invest in improvements. If you're selling, you have proof that the thing works.
If you optimize before you sell, you optimize for your assumptions about what customers want. Sometimes you're right. Often you're not.
In 2026, sales without a sales team looks like:
- Content that attracts the right people (blog posts, videos, social threads on topics your customers search for)
- Community where your customers already gather (forums, Slack groups, subreddits, Discord servers)
- Direct outreach to the specific people who fit your ideal customer profile
- Partnerships with adjacent tools or communities where your customers already are
All of this can be assisted by AI. Content generation, research, outreach drafting, follow-up sequences. But the judgment — what to say, to whom, when — is still yours.
Month 3-6: Build the Agent Stack
Once you have paying customers and a validated product, this is when you build the operational layer that lets you scale without hiring.
Not before. Before you know what you're operating, you don't know what to automate. Automating a process you're going to change wastes effort. Automating a process that's been validated and stable is pure leverage.
The agent stack I'd build in order:
1. Customer support automation — this is the highest leverage starting point. Every ticket you handle manually is a template for an agent to handle next time.
2. Monitoring and alerting — you want to stop being the only thing standing between problems and impact.
3. Content and distribution — regular publishing to a regular schedule, handled by an agent that knows your brand and your calendar.
4. Financial tracking — invoices, payment follow-up, reconciliation. Not complex, but time-consuming and important.
5. Growth and outreach — prospecting, follow-up sequences, partnership outreach.
Build them in sequence, not all at once. Each one takes a few weeks to tune. Give each one time to prove itself before adding the next.
What's Different in 2026 vs 2020
Speed: A solo founder can now ship in weeks what used to take quarters. The timeline compression is real and it's still accelerating.
Cost: The infrastructure and tooling to run a software business has gotten dramatically cheaper. SaaS products that cost $20K/year to build and operate in 2020 cost $3-5K today.
Scope: One person can realistically operate more product surface area than ever before. I run four companies. That was not possible six years ago.
Moat: The traditional technical moat — "we have engineers and you don't" — is weaker than it's ever been. The new moat is distribution, brand, and relationships. These are still human things.
Competition: Because it's easier to build, more things get built. Standing out requires better judgment about what's worth building, not just better execution of building it.
The unfair advantage available to solo operators today isn't that AI does everything. It's that AI removes the operational floor that used to require a team. You can compete with companies ten times your size because your cost structure is one-tenth of theirs, and the leverage available per person is ten times what it was.
That gap won't stay this wide forever. But right now, it's there.
Use it before everyone else figures out how.
Chapter 11: The Stack
No affiliate links. No sponsored picks. No tools I'm paid to recommend.
This is what I actually use, what I actually pay, and why. It changes over time as better options appear — I'll note the date I last updated this list — but the categories stay constant.
*Last updated: March 2026*
The Foundation
### Language Models
The AI itself. I use multiple models for different tasks because price-to-performance varies significantly by task type.
Anthropic Claude (claude.ai / API) — $50-280/month depending on usage
My primary model for complex reasoning, code generation, and writing tasks that require nuance. The most capable model I use. More expensive than alternatives, so I route only the tasks that justify it.
Cheaper models for high-volume tasks — $50-100/month
For tasks where I'm running thousands of calls (email triage, content formatting, data extraction), the expensive models are overkill. I use smaller, faster, cheaper models for high-throughput pipelines.
Rule of thumb: use the smartest model for creative and reasoning tasks, the cheapest capable model for repetitive tasks with clear rules. Most people default to the best model for everything and spend 5x what they need to.
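That routing rule is simple enough to write down. A sketch — the model names, prices, and task taxonomy here are placeholders, not a real provider's API:

```python
# Sketch of the price/performance routing rule of thumb.
# Model names, prices, and task categories are illustrative.

MODELS = {
    "frontier": {"cost_per_1k_calls": 15.00},  # smartest: reasoning, creative work
    "small":    {"cost_per_1k_calls": 0.50},   # cheapest capable: rote, rule-following
}

HIGH_VOLUME_RULE_FOLLOWING = {"email_triage", "content_formatting", "data_extraction"}

def pick_model(task_type):
    """Route repetitive rule-following tasks to the cheap model."""
    if task_type in HIGH_VOLUME_RULE_FOLLOWING:
        return "small"
    return "frontier"
```

At the placeholder prices above, the cheap model is 30x cheaper per call, which is why routing by task type rather than defaulting to the best model is where most of the savings come from.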
### Agent Runtime
OpenClaw — $79/month
The system that runs my agents. Handles scheduling, memory, tool access, inter-agent communication, and deployment. This is the operating system my agent team runs on. Chapter 12 covers this in detail.
### Infrastructure
AWS / Cloud hosting — $150-250/month total for agent infrastructure
Not counting product infrastructure (the servers my actual products run on), the agent support system costs roughly $150-250/month. One server for the agent runtime, a database for agent memory and logs, storage for outputs.
Bunny CDN — $20-40/month
For static sites and content delivery. Fast, cheap, global. Every static site I build goes through Bunny.
Neon (serverless Postgres) — $20-50/month
Managed Postgres for the few cases where I need relational data in my agent stack. Scales to zero when not in use.
Communication
Gmail (personal and business) — $12/user/month (Google Workspace)
I know. Everyone uses it. The advantage is that it's deeply integrated with everything and my agents can access it programmatically.
Discord — Free
My primary async communication channel with myself and with any external collaborators. Channels organized by company and function. Agents post updates here. I check it when I'm ready, not in real-time.
Payments and Finance
Mercury — Free for basic, percentage on wires
Business banking for all entities. Clean API, agent-accessible, solid for startup-stage companies.
Stripe — 2.9% + $0.30 per transaction
Standard payment processing. No monthly fee, pay per transaction. Every product that takes money uses Stripe.
Lemon Squeezy — Similar percentage to Stripe
For digital products where I want built-in tax handling and simpler global distribution. Less setup than Stripe for simple products.
Code and Deployment
GitHub — $4/month (Pro)
All code, all companies. Private repos, GitHub Actions for CI/CD.
GitHub Actions — Included in GitHub
Every deploy is automated through Actions. No manual deploys. Ever. The agent pushes code, Actions runs tests, Actions deploys to production.
This is a hard rule: if I can't automate the deploy, I haven't finished building the deploy system.
Domains and DNS
Porkbun — $9-15/year per domain
Cheap, simple, good UI. I buy all domains here.
Bunny DNS — Free
All DNS management through Bunny since it integrates with the CDN. Fast propagation, simple API.
Analytics
Measure.events — Built it, use it free
Privacy-friendly analytics I built myself. Simple pageview and event tracking without cookies or consent banners. This isn't an option for most people reading this — see below.
Plausible — $9-19/month
If I hadn't built Measure, I'd use Plausible. Same privacy-first approach, simple setup, fast. Not Google Analytics.
I don't use Google Analytics on any property I care about. The privacy overhead and complexity isn't worth it for the data I actually look at, which is: are people coming, are they staying, what are they doing.
Content and Distribution
Postpone — Varies
Social media scheduling. Connects to Twitter/X, Instagram, LinkedIn, TikTok. My content agent generates posts, Postpone publishes them.
Beehiiv — $39-99/month
Newsletter platform for the properties that have email lists. Clean, simple, good deliverability.
Project Management
Linear — $8/user/month
For tracking work that's in progress across companies. My agents can create issues, update status, and close tickets. I review the board once a day.
I don't use Jira. I've never used Jira and I never will.
The Full Monthly Breakdown
| Category | Tool | Monthly Cost |
|---|---|---|
| AI models | Anthropic + secondary | $280-380 |
| Agent runtime | OpenClaw | $79 |
| Cloud infra | AWS | $180 |
| CDN | Bunny | $30 |
| Database | Neon | $25 |
| Banking | Mercury | $0 |
| Email | Google Workspace | $12 |
| Code/deploy | GitHub | $4 |
| Analytics | Measure.events | $0 |
| Social scheduling | Postpone | $30 |
| Email marketing | Beehiiv | $49 |
| Project mgmt | Linear | $8 |
| Misc SaaS | Various | $50 |
| Total | | ~$747/month |
That's my full operating stack across four companies, for less than $750/month.
Not counting the cost to run the products themselves (servers, databases for the actual products, third-party APIs the products use) — just the agent and operator infrastructure.
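As a sanity check, the low end of each line in the table sums to the quoted total. The figures are the table's own; the code is just the arithmetic:

```python
# Low-end monthly cost per category, taken from the table above
costs = {
    "AI models": 280, "Agent runtime": 79, "Cloud infra": 180,
    "CDN": 30, "Database": 25, "Banking": 0, "Email": 12,
    "Code/deploy": 4, "Analytics": 0, "Social scheduling": 30,
    "Email marketing": 49, "Project mgmt": 8, "Misc SaaS": 50,
}
total = sum(costs.values())
print(total)  # 747
```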
What I'd Cut If Budget Were Tighter
If I were just starting out and needed to run lean:
- OpenClaw: could use cheaper or open-source alternative early on, add it when the leverage is obvious
- Beehiiv: skip until you have a real email list
- Postpone: post manually until you're producing enough content to justify automation
- Linear: use GitHub Issues or a Notion table until you have enough work to need real tracking
Get the core working — language models, some hosting, GitHub — and you can start for under $200/month.
What I'd Never Cut
- Good language model access. This is your leverage multiplier. Don't cheap out.
- GitHub + automated deploys. Manual deploys are how mistakes happen at 11 PM.
- Reliable hosting. Not the cheapest hosting — reliable hosting. Downtime costs more than the savings.
The rest is optimization. Get the core right, then optimize.
Chapter 12: OpenClaw — Your AI Operating System
There's a difference between having AI tools and having an AI operating system.
Tools are things you pick up and use. You open ChatGPT, type a question, get an answer, close the tab. You run a coding assistant, accept a suggestion, move on. The AI is a utility you invoke. You're still doing all the work — just with slightly better utilities.
An operating system is something that runs. It maintains state. It coordinates multiple processes. It handles things while you're not watching. It's the substrate on which everything else runs.
For the first few years of running companies with AI, I was using tools. Helpful, but limited. The leverage was real but bounded. I was still the coordinator. I still had to initiate everything.
When I switched to running an AI operating system, the leverage changed fundamentally. Not "this helps me do my job" but "this runs while I'm not here."
OpenClaw is the operating system I settled on. Here's what it is and how I actually use it.
What OpenClaw Does
At its core, OpenClaw is a runtime for AI agents. It handles:
Persistent agents. Agents that have memory across sessions. They know the history of what they've done, what the business state is, what you've told them matters. You're not starting from scratch every conversation.
Scheduled execution. Agents that run on a clock without you initiating anything. My morning briefing agent runs at 7 AM whether I'm awake or not. My monitoring agents run every 15 minutes. My content agents run on whatever schedule I defined.
Tool access. File system, terminal, email, databases, HTTP APIs — all available to agents within defined permissions. This is what makes agents useful for actual business tasks.
Multi-channel connectivity. My agents live in Discord. They send me messages. I send them messages. They respond, take action, report back. My phone is the dashboard.
Memory that persists. The agents maintain files that capture context — what's happened, what they've learned, what they should remember. When I tell an agent something once, it remembers it. Not just in the current session — permanently.
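The scheduling idea above doesn't depend on OpenClaw internals; a bare-bones version can be sketched in a few lines of Python. The agent names and intervals are hypothetical, and a real runtime would add persistence and error handling. This is a sketch of the concept, not OpenClaw's implementation:

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScheduledAgent:
    name: str
    interval_s: int              # how often the agent runs, in seconds
    run: Callable[[], None]      # the work the agent does each run
    next_due: float = 0.0        # unix timestamp of the next run

def tick(agents: list[ScheduledAgent], now: float) -> list[str]:
    """Run every agent whose schedule has come due; return the names run."""
    ran = []
    for agent in agents:
        if now >= agent.next_due:
            agent.run()
            agent.next_due = now + agent.interval_s
            ran.append(agent.name)
    return ran

# Hypothetical agents: a 15-minute monitor and a daily briefing
agents = [
    ScheduledAgent("monitor", 15 * 60, lambda: None),
    ScheduledAgent("briefing", 24 * 3600, lambda: None),
]
print(tick(agents, time.time()))  # both due on the first tick
```

Call `tick` in a loop (or from cron) and you have "agents that run on a clock without you initiating anything."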
How I Have It Set Up
I have one main OpenClaw instance running on a server. Connected to:
- My Discord (primary interface)
- My email accounts (read access with write guardrails)
- My production servers (via SSH with defined permissions)
- My code repositories (GitHub)
- My business databases (read/write within defined schemas)
- My scheduling and task systems (Linear, calendar)
- My payment platforms (Stripe, Mercury — read access)
The agents live in this system. When I open Discord in the morning, I'm checking in with the operating system. It's already been running for hours. It has updates for me.
The Daily Interface
My typical morning interaction with OpenClaw takes about 15 minutes.
I read through what happened overnight:
- Any production incidents and how they were handled
- Email summary: what came in, what was auto-handled, what needs me
- Metrics that changed significantly
- Any scheduled tasks that ran and what they produced
I make any decisions that are waiting:
- Review content drafts that need my approval before publishing
- Look at code changes that need my merge
- Respond to anything the agent flagged as requiring my judgment
I give any new instructions:
- "Write a blog post about X"
- "Follow up with the prospect I emailed last week"
- "Check why our conversion rate dropped yesterday"
Then I close the tab and go work on something else. The agents execute throughout the day, update me as needed, and handle anything that fits within their defined parameters.
The interface is conversational but the execution is autonomous. That's the key. I'm not using it like a chatbot — I'm giving it direction and trusting it to carry that direction forward.
Setting It Up: The Basics
If you want to get OpenClaw running for your own operation, the setup is less technical than it sounds. You don't need to be a developer.
What you need:
- A server to run it on (a $10-20/month VPS works fine to start)
- A Discord account and server (your interface)
- Anthropic API access (the AI it runs on)
- The OpenClaw CLI to install and manage it
The setup sequence:
1. Install OpenClaw on your server (5 minutes, it's a single command)
2. Connect your Discord server
3. Give it context about your business — who you are, what you do, what matters, what it should never do without asking
4. Add tools one at a time (email first, then code/servers, then payments)
5. Let it run, observe, correct, expand trust over time
The hard part isn't the setup. The hard part is the context. The more clearly you can describe your business, your preferences, your rules, and your goals to the system, the more effectively it operates.
Think of it like onboarding an employee. You wouldn't hand a new hire full system access on day one. You'd explain what the company does, what matters, what the rules are, then watch how they operate before expanding access.
The Memory System
OpenClaw's memory is file-based. There are a few key files that the agent reads at the start of every session:
SOUL.md — who the agent is, its personality, its operating principles
USER.md — who you are, your preferences, how you like to communicate
MEMORY.md — long-term memory: decisions made, things to remember, context that persists
Daily memory files — what happened today and yesterday
This design means the agent is never starting from zero. It knows the history. It knows what you've asked for before. It knows what worked and what didn't.
Over time, this memory compounds. An agent that's been running for three months has three months of context. It knows your voice, your business, your preferences, your red lines. It's not an assistant you have to re-train every session. It's a colleague who's been paying attention.
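The file names come from the section above; the loader below is my own illustrative sketch of the pattern, not OpenClaw's code. The core idea is simply: read the persistent files at session start and prepend them as context.

```python
import tempfile
from pathlib import Path

# File names from the memory system described above
MEMORY_FILES = ["SOUL.md", "USER.md", "MEMORY.md"]

def load_context(workdir: str) -> str:
    """Concatenate the persistent memory files into one session preamble.
    Missing files are skipped: a fresh agent just starts with less context."""
    parts = []
    for name in MEMORY_FILES:
        path = Path(workdir) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)

# Demo with a throwaway directory (a real setup points at the agent's workdir)
demo = tempfile.mkdtemp()
(Path(demo) / "SOUL.md").write_text("operating principles")
(Path(demo) / "MEMORY.md").write_text("remember: ship daily")
context = load_context(demo)
```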
The Limits (and Why They Matter)
OpenClaw, like any agent system, is only as good as its guardrails.
The worst thing you can do is give an agent too much autonomy too fast. Not because the AI is malicious but because the AI doesn't know what it doesn't know. A confident wrong answer with write access to your database is more dangerous than a confident wrong answer in a chat window.
Build in layers:
- Read access before write access
- Internal systems before external communications
- Low-stakes tasks before high-stakes ones
- Supervised automation before unsupervised automation
The agents I trust to act fully autonomously today took three to six months to reach that level. The trust was earned through track record, not granted by default.
That patience is worth it. An agent that's been proven over time is genuinely reliable. An agent given full access on day one is a liability waiting to materialize.
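The layered model above can be expressed as a simple permission gate. The tier and action names here are my illustrative labels, not OpenClaw's configuration; the shape is what matters: an agent can only perform actions at or below the trust tier it has earned.

```python
# Trust tiers, lowest to highest (labels are illustrative)
TIERS = ["read", "internal_write", "external_comms", "autonomous"]

# Which tier each action requires (hypothetical actions)
ACTION_TIER = {
    "read_db": "read",
    "update_ticket": "internal_write",
    "send_customer_email": "external_comms",
    "deploy_to_prod": "autonomous",
}

def allowed(action: str, agent_tier: str) -> bool:
    """True if the action's required tier is within the agent's earned tier."""
    return TIERS.index(ACTION_TIER[action]) <= TIERS.index(agent_tier)
```

Promoting an agent is then a one-line change to its tier, which makes "expand trust over time" an explicit, auditable act rather than a vague feeling.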
The point of an operating system isn't to do one thing well. It's to create the environment where everything else runs reliably.
OpenClaw is the environment where my agents live and operate. The agents are what do the work. But without the operating system underneath them — the memory, the scheduling, the tool access, the communication layer — each agent would be a one-off experiment rather than part of an integrated system.
That integration is what turns AI tools into an AI team.
Chapter 13: Prompts for Business Owners
The prompts in this chapter are starting points, not finished products. Copy them, adapt them, make them yours. The specific details of your business — your tone, your customers, your rules — are what turn a generic prompt into a reliable agent.
For each category, I've included a base prompt and notes on how to customize it.
Customer Support Automation
### Email Triage and Response
Use this to have an agent read incoming support emails and decide how to handle them.
You are a customer support agent for [COMPANY NAME], a [BRIEF DESCRIPTION].
When you receive a support email, follow this process:
1. CATEGORIZE the email as one of:
- BILLING: questions about charges, refunds, subscriptions
- TECHNICAL: product not working, bugs, errors
- FEATURE REQUEST: asking for something we don't have
- GENERAL INQUIRY: questions about pricing, how the product works
- SPAM/IRRELEVANT: not a real customer inquiry
2. For BILLING issues: draft a response that acknowledges the issue,
explains our policy ([INSERT YOUR REFUND/BILLING POLICY]), and
offers to resolve. Flag for human review before sending.
3. For TECHNICAL issues: check if the issue matches any known issue
in [YOUR KNOWLEDGE BASE]. If yes, draft a response with the
solution. If no, acknowledge receipt, tell them we're investigating,
and escalate to [YOUR ESCALATION CHANNEL].
4. For FEATURE REQUESTS: thank them, log the request in [YOUR SYSTEM],
and send a canned response: [INSERT FEATURE REQUEST RESPONSE].
5. For GENERAL INQUIRY: draft a response using our product documentation.
Do not make up features or pricing. Only state what's documented.
6. Never promise refunds over [AMOUNT] without human approval.
7. Never make commitments about release dates or feature timelines.
8. Always sign emails as [SUPPORT NAME OR TEAM NAME].
For every email, output:
- CATEGORY: [category]
- PROPOSED ACTION: [what you'll do]
- DRAFT RESPONSE: [the email text]
- FLAG FOR HUMAN: [yes/no] and why if yes
Customization notes: The key is the escalation rules. Be explicit about what requires human judgment. Err on the side of flagging more early on; narrow the escalation criteria as you build confidence.
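The prompt's fixed output format (CATEGORY, PROPOSED ACTION, DRAFT RESPONSE, FLAG FOR HUMAN) is what makes the response machine-routable. As a sketch, here is a minimal parser for that format, assuming the model followed it; in a real pipeline, anything that fails to parse should be flagged for a human rather than acted on:

```python
def parse_triage(output: str) -> dict:
    """Parse the '- FIELD: value' lines the triage prompt asks for."""
    keys = ("CATEGORY", "PROPOSED ACTION", "DRAFT RESPONSE", "FLAG FOR HUMAN")
    fields = {}
    current = None
    for line in output.splitlines():
        stripped = line.strip().lstrip("- ")
        for key in keys:
            if stripped.upper().startswith(key + ":"):
                current = key
                fields[key] = stripped[len(key) + 1:].strip()
                break
        else:
            if current:  # continuation line (e.g., a multi-line draft)
                fields[current] += "\n" + line
    return fields

sample = """- CATEGORY: BILLING
- PROPOSED ACTION: Draft refund response, flag for review
- DRAFT RESPONSE: Hi, thanks for reaching out...
- FLAG FOR HUMAN: yes, refund over limit"""
parsed = parse_triage(sample)
print(parsed["CATEGORY"])  # BILLING
```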
Invoice Management
### Payment Follow-Up Sequence
You are managing outstanding invoices for [COMPANY NAME].
I will provide you with a list of overdue invoices. For each invoice,
send a follow-up email according to this schedule:
- 1-7 days overdue: Friendly reminder. Assume it was an oversight.
Tone: warm, no pressure.
- 8-14 days overdue: Second notice. Slightly more direct.
Reference the specific invoice number and amount.
Include payment link: [PAYMENT LINK]
- 15-30 days overdue: Formal notice. Mention that continued non-payment
may affect service. CC [YOUR EMAIL] on this one and flag for my review.
- 30+ days overdue: Do not send automatically. Draft the email and flag
for my review with a recommendation (payment plan offer, collection,
service suspension, etc.).
For every email sent:
- Log the date, invoice ID, and action taken in [YOUR LOG SYSTEM]
- Update the invoice status in [YOUR SYSTEM]
- If any reply comes back, route to me immediately
Never threaten legal action without my explicit instruction.
Our standard payment terms are [NET 30/NET 15/etc.].
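The escalation schedule above is a straightforward mapping from days overdue to a follow-up stage, which is easy to verify in code before trusting an agent with it. A sketch, using the same tiers as the prompt:

```python
def followup_tier(days_overdue: int) -> str:
    """Map days overdue to the follow-up stage in the schedule above.
    The 30+ stage is draft-only: it is never auto-sent."""
    if days_overdue <= 0:
        return "none"
    if days_overdue <= 7:
        return "friendly_reminder"
    if days_overdue <= 14:
        return "second_notice"
    if days_overdue <= 30:
        return "formal_notice"      # CC the owner, flag for review
    return "draft_for_review_only"  # 30+: a human decides
```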
Email Triage (General Inbox)
You are managing my email inbox for [YOUR BUSINESS].
When new emails arrive, categorize each one and recommend an action:
URGENT (respond today):
- Client complaints about service outages or failures
- Payment issues from customers
- Legal or compliance matters
- Time-sensitive business opportunities with a clear deadline
IMPORTANT (respond within 48 hours):
- Client questions about their accounts or usage
- Partnership inquiries from relevant companies
- Media requests
- Vendor issues that could affect operations
ROUTINE (batch respond 2x per week):
- Newsletter subscriptions and marketing
- Non-urgent vendor communications
- General questions that can wait
- Social notifications
ARCHIVE (no action needed):
- Receipts and confirmation emails
- Automated reports I receive regularly
- Social media notifications
- Newsletters I'm subscribed to but rarely read
For URGENT emails: draft a response for my review.
For IMPORTANT emails: draft a response and flag for my review.
For ROUTINE emails: draft responses in batches twice weekly.
For ARCHIVE emails: mark as read and archive.
Context about my business:
- [BRIEF DESCRIPTION OF YOUR COMPANY AND ROLE]
- Key clients to always flag: [CLIENT NAMES]
- Topics that are always urgent for me: [YOUR TOPICS]
- My approximate email response style: [FORMAL/CASUAL/BRIEF/DETAILED]
Social Media Content Generation
### Brand Voice Template
You create social media content for [BRAND NAME].
Brand voice: [DESCRIBE IN 2-3 SENTENCES — e.g., "Direct and data-driven.
No fluff. We make bold claims we can back up with numbers. Occasional dry
humor but never forced."]
Content pillars (topics we post about):
1. [TOPIC 1 — e.g., "Operator stories — real examples of solo founders
doing more with less"]
2. [TOPIC 2]
3. [TOPIC 3]
Platform rules:
- Twitter/X: punchy, under 280 characters, hooks that make people stop
scrolling. Strong opinion or surprising fact as the opener.
- LinkedIn: slightly longer, more context, professional but not corporate.
Start with a bold statement, not a greeting.
- Instagram: visual-first, caption adds context, ends with a question
or call to action.
What to avoid:
- Engagement bait ("Comment YES if you agree!")
- Vague inspiration ("Success takes hard work!")
- Self-congratulation without substance
For each post, generate:
1. The post text
2. Recommended posting time
3. Relevant hashtags (3-5 max, only if they're genuinely relevant)
Generate [NUMBER] posts on the topic of: [TODAY'S TOPIC]
Bookkeeping and Expense Tracking
You are reviewing my financial transactions for [MONTH/PERIOD].
I will provide you with a list of transactions from [BANK/CARD SOURCE].
For each transaction:
1. Categorize it as one of: [YOUR CATEGORIES — e.g., Software/SaaS,
Infrastructure, Marketing, Contractors, Misc. Operating]
2. Flag any that look unusual, duplicated, or unexpected
3. Identify subscriptions that have increased in price since last month
4. Flag any recurring charges for tools I might have stopped using
(look for charges from tools not mentioned in my active stack)
Context:
- My known recurring expenses: [LIST YOUR KNOWN MONTHLY CHARGES]
- Business entities: [YOUR ENTITY NAMES]
- Anything over $[AMOUNT] should be flagged for my review
Output:
- Categorized expense table
- Flagged items with explanation
- Total by category
- Month-over-month comparison if I've provided previous months
- Recommendations for any subscriptions to cancel or review
Scheduling and Calendar Management
You are managing my calendar for [YOUR NAME].
My priorities in order:
1. [YOUR TOP PRIORITY — e.g., "Deep work on product: I need 3 uninterrupted
hours per day, ideally morning"]
2. [SECOND PRIORITY]
3. [THIRD PRIORITY]
My rules:
- No meetings before [TIME] or after [TIME]
- No back-to-back meetings without a 30-minute buffer
- [DAY] is protected: no meetings, deep work only
- I need [TIME] for async check-ins each morning
When someone requests a meeting:
- Check if it fits my rules above
- If it does, propose [NUMBER OF TIME SLOTS] options in my available windows
- If it requires more than [DURATION], flag for my approval first
- Always confirm with me before committing to anything over [DURATION] hours
Context I give anyone I'm meeting: [BRIEF BIO OR CONTEXT IF RELEVANT]
Scheduling link format: [YOUR PREFERRED SCHEDULING METHOD]
Vendor Negotiation
I need to negotiate with [VENDOR NAME] regarding [ISSUE — e.g., pricing,
contract renewal, service level, billing dispute].
Context:
- Current arrangement: [WHAT WE PAY/RECEIVE]
- What we want: [YOUR GOAL]
- Our leverage: [WHAT GIVES US NEGOTIATING POWER — e.g., volume,
long tenure, competitive alternatives]
- Our alternatives: [WHAT WE'D DO IF THEY SAY NO]
- Minimum acceptable outcome: [YOUR FLOOR]
- Relationship importance: [HOW IMPORTANT IS THIS VENDOR LONG-TERM]
Draft a [EMAIL/LETTER/TALKING POINTS] for this negotiation.
Tone: [FIRM BUT PROFESSIONAL/COLLABORATIVE/WHATEVER FITS YOUR STYLE]
Do not:
- Make commitments I haven't authorized
- Reveal our alternatives unless strategically useful
- Accept worse terms than our current arrangement as a concession
Tips for All Business Prompts
Be explicit about escalation. Every automated process needs clear rules for "when to stop and get a human." Define these in every prompt.
Include negative instructions. "Don't do X" is often more valuable than "do Y." Specify what you never want to happen automatically.
Give it your voice. Paste in examples of how you actually communicate. The agent will learn your style from examples faster than from descriptions of your style.
Start with outputs, not actions. For new automations, have the agent produce a draft or recommendation rather than taking action. Review the outputs for a week or two before enabling autonomous execution.
Log everything. Require the agent to log what it did, what it decided, and why. The log is how you audit, improve, and catch problems before they compound.
Chapter 14: Prompts for Developers
If you're a developer using AI agents in your workflow, the leverage multiplier is even larger than for business operators. You can give agents direct access to your code, your infrastructure, and your deployment pipelines — and that access, used correctly, eliminates most of the mechanical work in engineering.
These are the prompts and patterns I use for my development workflow. Adapt them to your stack.
Code Review and PR Analysis
You are performing a code review for a pull request.
Repository context:
- Stack: [YOUR STACK — e.g., "Ruby on Rails API, React frontend, PostgreSQL"]
- Coding standards: [LINK TO YOUR STYLE GUIDE OR BRIEF DESCRIPTION]
- Key patterns we follow: [e.g., "Service objects for business logic,
no logic in controllers, test coverage required for all new methods"]
- What we care most about: [e.g., "Security, data integrity,
performance on queries touching the orders table"]
Review this PR with the following lens:
1. CORRECTNESS: Does this code do what it claims? Are there logic errors?
Edge cases not handled? Off-by-one errors? Null pointer risks?
2. SECURITY: Any injection risks? Authentication/authorization bypassed?
Secrets hardcoded? Input not validated?
3. PERFORMANCE: N+1 queries? Missing indexes? Expensive operations in loops?
Unoptimized database queries on large tables?
4. MAINTAINABILITY: Is this readable? Well-named? Appropriately commented?
Does it follow our patterns?
5. TEST COVERAGE: Are the new behaviors tested? Are edge cases covered?
Any obvious missing test cases?
For each issue found:
- Severity: BLOCKING / SHOULD FIX / SUGGESTION
- Location: file and line number
- Explanation: why this is a concern
- Recommendation: what to do instead
Output a summary at the top: APPROVE / APPROVE WITH COMMENTS / REQUEST CHANGES
Deployment Pipelines
### CI/CD Health Check
You are monitoring our deployment pipeline in [CI/CD SYSTEM — GitHub Actions/etc.].
After each deployment, verify:
1. The deployment completed without errors (exit code 0)
2. All post-deploy smoke tests passed
3. Error rate in [MONITORING TOOL] is within normal range
(define normal: [YOUR BASELINE])
4. Response time is within acceptable range (define acceptable: [YOUR THRESHOLD])
5. No new error patterns in logs in the first [TIME PERIOD] after deploy
If everything is healthy: log "Deploy [VERSION] healthy at [TIMESTAMP]"
and notify [CHANNEL] with a green checkmark.
If any check fails:
- Immediately notify [ONCALL CHANNEL] with details
- Do NOT roll back automatically — flag for human decision
- Include: what failed, what the current state is, and what the rollback
  command would be (for a human to run if they choose)
Context:
- Our deploy process: [BRIEF DESCRIPTION]
- Critical services to monitor: [LIST]
- Acceptable error rate: [PERCENTAGE]
- Acceptable p95 latency: [MS]
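The checks in this prompt reduce to a pass/fail with a list of failures, which is worth having in code as well as in the prompt so the agent and your scripts agree on "healthy." A sketch, with placeholder thresholds you should replace with your own baselines:

```python
def deploy_health(exit_code: int, smoke_tests_passed: bool,
                  error_rate: float, p95_ms: float,
                  max_error_rate: float = 0.01,
                  max_p95_ms: float = 500.0) -> list[str]:
    """Return the list of failed checks; an empty list means healthy.
    Thresholds are placeholders, not real baselines."""
    failures = []
    if exit_code != 0:
        failures.append("deploy exited non-zero")
    if not smoke_tests_passed:
        failures.append("smoke tests failed")
    if error_rate > max_error_rate:
        failures.append(f"error rate {error_rate:.2%} above baseline")
    if p95_ms > max_p95_ms:
        failures.append(f"p95 latency {p95_ms:.0f}ms above threshold")
    return failures
```

Note the function only reports; consistent with the prompt, the rollback decision stays with a human.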
Production Monitoring and Alerting
You are the production monitoring agent for [YOUR SYSTEM].
Monitor the following at [INTERVAL — e.g., every 5 minutes]:
1. Server health: CPU, memory, disk usage
- Alert if CPU > 80% for 10+ minutes
- Alert if memory > 85%
- Alert if disk > 75% (serious), > 85% (critical)
2. Application health: [YOUR HEALTH CHECK ENDPOINT]
- Alert if health check fails
- Alert if response time > [THRESHOLD]
3. Error rates: [YOUR LOG SOURCE]
- Alert if error rate exceeds [BASELINE]%
- Alert if any new error type appears that wasn't in yesterday's logs
- Specifically watch for: [YOUR CRITICAL ERROR PATTERNS]
4. Database health:
- Connection pool usage
- Slow queries (> [THRESHOLD] ms)
- Replication lag if applicable
For each alert:
- Severity: P1 (wake me up) / P2 (notify immediately) / P3 (batch notification)
- Include: what's happening, when it started, what the impact likely is
- For known issues: include the standard remediation steps
- For unknown issues: include raw data and your diagnosis attempt
P1 always means: send to [YOUR EMERGENCY CONTACT METHOD].
P2 means: post to [YOUR OPS CHANNEL].
P3 means: include in daily summary.
Known issues and resolutions:
- [KNOWN ISSUE 1]: [HOW TO RESOLVE]
- [KNOWN ISSUE 2]: [HOW TO RESOLVE]
Debugging Workflows
I need help debugging [BRIEF DESCRIPTION OF PROBLEM].
Environment: [PRODUCTION/STAGING/LOCAL]
Stack: [YOUR STACK]
When it started: [TIMESTAMP OR EVENT THAT TRIGGERED IT]
Here is what I know:
- Symptoms: [WHAT'S OBSERVABLE — error messages, wrong output,
performance degradation, etc.]
- What changed recently: [ANY RECENT DEPLOYS, CONFIG CHANGES, TRAFFIC SPIKES]
- What I've already tried: [IF ANYTHING]
Relevant logs:
[PASTE LOGS]
Relevant code:
[PASTE CODE OR DESCRIBE WHERE TO LOOK]
Please:
1. Identify the most likely root cause based on the evidence
2. List any other hypotheses worth investigating (in priority order)
3. Suggest the next diagnostic step to confirm or rule out the most
likely cause
4. If you can identify the fix, provide it with an explanation of why
it resolves the issue
Do not guess. If you're uncertain, say so and explain what additional
information would help narrow it down.
Documentation Generation
Generate documentation for [CODE COMPONENT/API/MODULE].
I want:
1. OVERVIEW (2-3 sentences): What does this do? Why does it exist?
2. USAGE: How do you use it? Show a simple example.
3. API/INTERFACE: If applicable, document all public methods/endpoints:
- Name/path
- Parameters with types and descriptions
- Return value or response structure
- Errors/exceptions that can be thrown
4. EXAMPLES: 2-3 real-world usage examples for the most common use cases
5. NOTES AND GOTCHAS: Non-obvious behavior, performance characteristics,
things to watch out for, common mistakes
Format: Markdown
Audience: A developer who's competent but hasn't seen this code before
Level of detail: Enough that they can use it correctly without reading
the source
Here's the code:
[PASTE CODE]
Database Management
### Schema Review
Review this database schema for potential issues.
Database: [YOUR DB TYPE]
Context: [BRIEF DESCRIPTION OF WHAT THIS DATA REPRESENTS]
Look for:
1. Missing indexes: any foreign keys without indexes? Any columns that
will be frequently queried in WHERE clauses that aren't indexed?
2. Data type mismatches: are data types appropriate for what's being stored?
Anything stored as VARCHAR that should be a more specific type?
3. NULL constraints: are columns that should be required actually
   constrained as NOT NULL?
4. Naming conventions: are they consistent?
5. Normalization issues: obvious duplication that should be refactored?
6. Performance risks: any obvious table structures that will cause problems
at scale?
Here's the schema:
[PASTE SCHEMA]
For each issue:
- Severity: CRITICAL / SHOULD FIX / CONSIDER
- Explanation of the problem
- Suggested fix
API Design and Testing
### API Test Coverage
Generate a comprehensive test suite for this API endpoint.
Stack: [YOUR TESTING FRAMEWORK — RSpec, Jest, pytest, etc.]
Endpoint: [METHOD] [PATH]
What it does: [BRIEF DESCRIPTION]
Cover:
1. Happy path tests:
- Successful request with all required parameters
- Successful request with optional parameters present
- Each valid value for enum/constrained fields
2. Authentication/authorization:
- Request with no auth token
- Request with invalid auth token
- Request with valid token but insufficient permissions
- Request from the correct user/role
3. Input validation:
- Missing required parameters
- Invalid types for each parameter
- Out-of-range values for numeric parameters
- Malformed string formats (bad email, bad UUID, etc.)
- Boundary values (empty string, max length, zero, negative numbers)
4. Edge cases:
- Resource not found
- Duplicate request (idempotency if applicable)
- Concurrent requests if there are race condition risks
5. Response format:
- All expected fields present in success response
- Error response format matches your standard structure
Here's the controller/handler code and the route definition:
[PASTE CODE]
And any existing tests to avoid duplicating:
[PASTE EXISTING TESTS OR "none"]
Security Audits
Perform a security review of this code.
Context:
- What this code does: [DESCRIPTION]
- Data it handles: [e.g., "user PII, payment data, authentication tokens"]
- Trust level of inputs: [e.g., "user-supplied input from a public API"]
Check for:
1. INJECTION RISKS: SQL injection, command injection, XSS
2. AUTHENTICATION BYPASS: Can auth be skipped or spoofed?
3. AUTHORIZATION GAPS: Can a user access another user's data?
4. INSECURE DATA HANDLING: Secrets in logs, unencrypted sensitive data,
tokens in URLs
5. DEPENDENCY VULNERABILITIES: Known CVEs in dependencies used
6. RATE LIMITING: Is this endpoint susceptible to abuse without limiting?
7. INPUT VALIDATION: Is all external input validated before use?
8. ERROR EXPOSURE: Do errors leak internal details to callers?
For each finding:
- Severity: CRITICAL / HIGH / MEDIUM / LOW
- Attack vector: how would this be exploited?
- Impact: what happens if exploited?
- Fix: specific code change to remediate
Here's the code:
[PASTE CODE]
Tips for Developer Prompts
Give the agent your full context. The more it knows about your stack, your patterns, and your standards, the more relevant its output. Paste in your style guides, your architecture docs, your conventions. Time spent on context is time saved on corrections.
Treat the agent as a junior engineer. Its output needs review. Not because it's bad — it's often quite good — but because it doesn't have full context on your system and will sometimes make assumptions that are wrong for your specific case. Review everything before merging.
Iterate in small steps. For complex debugging or refactoring tasks, break them into steps and verify each one before proceeding. "Find the bug, don't fix it yet" then "fix just this one thing." Smaller iterations catch errors before they compound.
The agent is great at the second draft. Write the first version of a function yourself so you understand what it's supposed to do, then have the agent refactor it, optimize it, or generate tests for it. You'll catch errors the agent makes more reliably when you wrote the code first.
Chapter 15: Prompts for Creators
If your business runs on content — blog posts, newsletters, social media, video scripts — AI agents shift you from "producing everything by hand" to "directing a content machine that runs on your ideas."
The key insight for creators: the agent handles the production. You provide the thinking.
Content Calendar Planning
You are planning my content calendar for [MONTH/QUARTER].
My content strategy:
- Primary platform: [e.g., "Twitter/X — daily"]
- Secondary platforms: [e.g., "LinkedIn — 3x/week, Newsletter — weekly"]
- Content pillars (the 3-4 topics I post about):
1. [TOPIC 1]
2. [TOPIC 2]
3. [TOPIC 3]
4. [TOPIC 4]
Goals for this period:
- [e.g., "Drive traffic to product launch on March 24"]
- [e.g., "Establish authority on topic X"]
- [e.g., "Grow email list by 500 subscribers"]
Build a content calendar with:
- Each day mapped to a content pillar (rotate evenly)
- Specific post topic for each slot (not just the pillar — the actual angle)
- Platform-specific format notes (thread vs. single post, image vs. text)
- Any seasonal/timely hooks (industry events, trending topics, holidays)
- One "big content piece" per week (long thread, article, or video)
supported by smaller posts throughout the week
Format: table with columns for Date, Platform, Pillar, Topic/Angle, Format, Notes
My past high-performing posts for reference (use these to calibrate what resonates):
[PASTE 3-5 OF YOUR BEST-PERFORMING POSTS]
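"Rotate evenly" through pillars is the one mechanical part of calendar planning, and it's trivial to generate yourself before handing the creative angles to the agent. A sketch with hypothetical pillar names and a hypothetical start date:

```python
from itertools import cycle
from datetime import date, timedelta

def rotate_pillars(pillars: list[str], start: date,
                   days: int) -> list[tuple[date, str]]:
    """Assign one pillar per day, cycling evenly through the list."""
    pool = cycle(pillars)
    return [(start + timedelta(days=i), next(pool)) for i in range(days)]

# Hypothetical: three pillars over the first six days of a month
plan = rotate_pillars(["Pillar 1", "Pillar 2", "Pillar 3"],
                      date(2025, 3, 1), 6)
```

The agent's real job is the Topic/Angle column; the rotation is just scaffolding it fills in.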
Blog/Article Writing Workflows
### Research and Outline
I want to write an article about [TOPIC].
Target audience: [WHO IS THIS FOR]
Goal of the article: [WHAT SHOULD THE READER DO/THINK/FEEL AFTER READING]
Word count target: [RANGE — e.g., "1,200-1,800 words"]
Tone: [e.g., "Direct, data-driven, opinionated but fair"]
SEO targets: [PRIMARY KEYWORD, SECONDARY KEYWORDS]
Before writing:
1. Research the top 5 articles currently ranking for "[PRIMARY KEYWORD]"
2. Identify what they cover and what they miss
3. Find a specific angle or argument that differentiates my article
4. Create an outline with:
- Hook (opening that makes someone keep reading)
- 4-6 main sections with headers
- Key data points or examples for each section
- Conclusion with clear takeaway
Do not write the full article yet. Give me the outline for review first.
My writing style for reference:
[PASTE A PARAGRAPH OR TWO FROM YOUR BEST PREVIOUS WRITING]
### Full Draft
Write the full article from this approved outline:
[PASTE APPROVED OUTLINE]
Writing rules:
- Open with a statement, not a question
- No filler phrases ("In today's fast-paced world...", "It's no secret that...")
- Every paragraph must advance the argument or provide evidence
- Use specific numbers and examples over generalizations
- Break up text with headers every 200-300 words
- End sections with a transition that pulls the reader forward
- Conclusion should be a single, memorable takeaway — not a summary
of everything above
SEO requirements:
- Include [PRIMARY KEYWORD] in the title, H1, first paragraph, and
at least 2 subheadings
- Include [SECONDARY KEYWORDS] naturally (not forced) at least once each
- Meta description (under 160 characters): compelling, includes primary keyword
- Internal links to: [YOUR RELEVANT PAGES — if applicable]
Format: Markdown with frontmatter (title, description, date, tags)
Video Script Generation
Write a video script for [PLATFORM — YouTube/TikTok/Instagram Reels].
Topic: [WHAT THE VIDEO IS ABOUT]
Target length: [e.g., "90 seconds" or "8-10 minutes"]
Style: [e.g., "Talking head with screen recordings" or "Fast-paced with text overlays"]
Script structure:
HOOK (first 3-5 seconds):
- The single sentence that stops someone from scrolling
- Must create curiosity or state something unexpected
- No introductions, no "hey guys," no throat-clearing
BODY:
- Main point broken into [3-5] clear sections
- Each section: claim → evidence → transition
- Include specific visual/screen cues: [SHOW: ...] [CUT TO: ...]
- Pacing notes: where to speed up, where to pause for emphasis
CTA (last 10-15 seconds):
- One clear action: [SUBSCRIBE/LINK IN BIO/CHECK DESCRIPTION]
- Tie the CTA to the value they just received
Format the script as:
[VISUAL CUE]
"Spoken text goes here."
[NEXT VISUAL CUE]
"Next spoken text."
My on-camera style: [e.g., "Casual, slightly sarcastic, moves fast. No um's or filler. Cuts between points."]
Newsletter Management
You are writing my weekly newsletter for [NEWSLETTER NAME].
Newsletter details:
- Audience: [WHO SUBSCRIBES AND WHY]
- Frequency: [WEEKLY/BIWEEKLY]
- Typical length: [e.g., "800-1200 words"]
- Sections: [LIST YOUR STANDARD SECTIONS — e.g., "Main essay, 3 interesting links, one product update, one personal note"]
- Tone: [DESCRIBE — e.g., "Like an email from a smart friend who's been paying attention to what you care about"]
This week's inputs:
- Main topic I want to write about: [TOPIC OR ROUGH IDEA]
- Any product updates to include: [UPDATES]
- Interesting things I read/saw this week: [LIST WITH LINKS]
- Personal note (optional): [ANYTHING PERSONAL TO SHARE]
Write the full newsletter draft.
Rules:
- Subject line: compelling, under 50 characters, no clickbait
- Preview text: first 90 characters should hook the reader
- Main essay: make a single argument, support it, conclude it
- Links section: each link gets 2-3 sentences of context (why should they click? what's the insight?)
- Sign off as: [YOUR NAME/SIGN-OFF]
- Include unsubscribe footer: [YOUR STANDARD FOOTER]
For reference, here's a past newsletter that performed well:
[PASTE EXAMPLE]
Social Media Scheduling
### Content Repurposing
I have a [LONG-FORM PIECE — blog post/newsletter/podcast transcript].
Repurpose it into:
1. Twitter/X thread (5-8 tweets):
- Tweet 1 is a standalone hook — it must work without context
- Each subsequent tweet makes one clear point
- Final tweet: CTA with link to original
- No hashtags in the thread
2. LinkedIn post (1 post, 150-300 words):
- Open with a bold statement or contrarian take
- Tell a brief story or share one key insight
- End with a question that invites discussion
- No "I'm humbled to announce" energy
3. Instagram caption (1 post, under 2,200 characters):
- The hook goes first (before "more...")
- Value-dense, skimmable
- End with a question or CTA
- 5-8 relevant hashtags at the end
4. Short-form video hook (for TikTok/Reels, 15-30 seconds):
- Opening line that creates instant curiosity
- One surprising fact or claim from the piece
- CTA: "Full breakdown in the link"
Here's the original piece:
[PASTE FULL CONTENT]
My brand voice: [DESCRIBE BRIEFLY]
SEO Optimization
Review and optimize this content for search engines.
Target keyword: [PRIMARY KEYWORD]
Secondary keywords: [LIST]
Current content:
[PASTE CONTENT]
Check and improve:
1. TITLE: Does it include the primary keyword? Is it compelling enough to click? Under 60 characters?
2. META DESCRIPTION: Under 160 characters? Includes primary keyword? Gives a reason to click?
3. HEADERS (H2, H3): Do they include secondary keywords naturally? Are they descriptive enough to serve as a table of contents?
4. KEYWORD DENSITY: Is the primary keyword present in the first paragraph, at least 2 headers, and the conclusion? Is it natural or forced?
5. INTERNAL LINKS: Are there natural places to link to [YOUR OTHER PAGES]?
6. READABILITY: Short paragraphs? Clear language? Active voice?
7. CONTENT GAPS: Based on what's currently ranking for [PRIMARY KEYWORD], what topics or questions does this article miss that it should cover?
Output: a revised version of the content with all optimizations applied, plus a summary of what changed and why.
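The mechanical parts of that checklist (items 1 through 4) can be pre-screened with a script before you even involve the AI. This is a rough sketch, not a real SEO audit; the thresholds come from the checklist, and the parsing assumes plain Markdown:

```python
import re

def seo_checks(markdown: str, title: str, meta: str, keyword: str) -> dict:
    """Rough pass at checklist items 1-4. A real audit still needs human judgment."""
    kw = keyword.lower()
    # H2/H3 headers, in Markdown ATX style
    headers = re.findall(r"^#{2,3}\s+(.*)$", markdown, flags=re.MULTILINE)
    # Paragraphs = blank-line-separated blocks that aren't headers
    paragraphs = [p for p in markdown.split("\n\n")
                  if p.strip() and not p.lstrip().startswith("#")]
    first_para = paragraphs[0].lower() if paragraphs else ""
    last_para = paragraphs[-1].lower() if paragraphs else ""
    return {
        "title_has_keyword": kw in title.lower(),
        "title_under_60": len(title) <= 60,
        "meta_under_160": len(meta) <= 160,
        "meta_has_keyword": kw in meta.lower(),
        "keyword_in_first_paragraph": kw in first_para,
        "keyword_in_2_headers": sum(kw in h.lower() for h in headers) >= 2,
        "keyword_in_conclusion": kw in last_para,
    }
```

Anything that comes back `False` goes into the prompt as a specific instruction, which gets better results than "optimize this."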
Audience Research
I'm building content for [YOUR NICHE/TOPIC].
Research where my target audience spends time online and what they care about:
Target audience: [DESCRIBE SPECIFICALLY — e.g., "Solo SaaS founders making $10K-100K MRR who don't have teams"]
Find:
1. Top 5 subreddits where this audience is active
2. Top 5 Twitter/X accounts they likely follow
3. Top 5 newsletters they probably subscribe to
4. Top 3 podcasts in this space
5. Common questions they ask (check Reddit, Quora, forum threads)
6. Common frustrations they express
7. Products/tools they mention frequently (potential competitors or partnership opportunities)
8. Language and phrases they use to describe their challenges (use their words, not industry jargon)
For each community/resource: include the URL, approximate size/reach, and a note on whether it's a good place for me to participate or just observe.
Tips for Creator Prompts
Feed it your best work. The single most effective way to improve AI content output is to show it examples of your writing that performed well. Pattern-match your wins, not your average.
Edit ruthlessly. The first draft from AI is a starting point. Your value is in the editing — cutting fluff, sharpening arguments, adding your specific experience and opinions. Nobody wants to read AI-generated content that reads like AI-generated content.
Separate ideation from production. Use AI for production: outlines, first drafts, repurposing. Do the ideation and the editing yourself. Those are where your voice and judgment live.
Batch your content. Instead of generating one post at a time, generate a week's worth. Review them all at once. The quality of your editing improves when you see the posts side by side and can cut the weakest ones.
Chapter 16: The Daily Operating System
Theory is nice. Here's what it actually looks like.
This is a real day. Not a perfect day — a real one. With interruptions, decisions that didn't go well, and the kind of friction that no amount of AI eliminates completely.
6:30 AM — Wake Up, Don't Open Anything
I don't check my phone first thing. Not because of some wellness guru advice. Because if I check it first, I'll start reacting to other people's priorities instead of setting my own.
I know there's a morning briefing waiting for me. It was generated at 6:00 AM by my agent. It's in Discord. It can wait 30 minutes.
Coffee. Think about what matters today. Not what's urgent — what matters.
7:00 AM — The Morning Briefing (15 minutes)
I open Discord on my laptop. My agent has posted the morning briefing in our channel:
Overnight summary:
- No production incidents. All services healthy.
- 3 new support emails: 2 auto-responded (routine), 1 flagged for me (billing dispute, $79).
- Measure.events had 847 unique visitors yesterday (up 12% from last Tuesday).
- PropFirmDeck: 3 new affiliate clicks overnight. No conversions.
- 1 email from a prospect I contacted last week — they want to talk.
Pending decisions:
- Blog post draft ready for review: "Why Privacy Analytics Will Win in 2026"
- Content calendar for next week needs my approval
- One invoice past due — agent recommends second follow-up
Today's scheduled tasks:
- Newsletter goes out at 2 PM (draft ready for review)
- Social posts scheduled: 3 across 2 brands
- Weekly analytics report will generate at 5 PM
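A briefing like this isn't magic; it's a template filled from a few data feeds. Here is a hypothetical sketch of the assembly step. The field names are invented for illustration and aren't my actual stack:

```python
def build_briefing(stats: dict) -> str:
    """Assemble a plain-text morning briefing from overnight stats.
    Field names here are illustrative; a real version pulls from
    monitoring, the support inbox, analytics, and Stripe."""
    lines = ["Overnight summary:"]
    if stats["incidents"]:
        lines.append(f"- {len(stats['incidents'])} production incident(s) need attention.")
    else:
        lines.append("- No production incidents. All services healthy.")
    auto = stats["support_auto"]
    flagged = stats["support_flagged"]
    lines.append(
        f"- {auto + len(flagged)} new support emails: "
        f"{auto} auto-responded, {len(flagged)} flagged for me."
    )
    for item in flagged:
        lines.append(f"  - {item}")
    lines.append("Pending decisions:")
    for decision in stats["pending_decisions"]:
        lines.append(f"- {decision}")
    return "\n".join(lines)
```

From there, posting the string to a Discord channel is a single webhook call on a 6:00 AM schedule.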
I handle the decisions:
- Read the blog post draft. It's 90% good. I fix two sentences where the tone is off and approve it for publishing.
- Glance at the content calendar. Looks fine. Approve.
- The overdue invoice: I read the agent's draft follow-up. It's appropriately firm. I approve sending it.
- The billing dispute: $79 is within my auto-refund threshold, but the customer's tone suggests they might churn regardless. I write a personal response offering to extend their trial instead.
- The prospect email: I draft a reply suggesting a 15-minute call Thursday.
Total time: 15 minutes. Five decisions made. Nothing fell through the cracks because the agent surfaced what needed attention.
7:15 AM — Deep Work Block (3 hours)
This is protected time. No meetings. No Discord. No email.
Today I'm working on the thing that matters most this week: redesigning the onboarding flow for Measure.events. User data shows 40% of signups don't complete setup. That's the bottleneck.
This is the work that only I can do. Not because it's technically complex — the agent could write the code. But because the *decision* about what the onboarding should feel like, what it should prioritize, what to cut — that's product judgment. That's taste.
I sketch the new flow on paper. Five steps instead of nine. Remove the steps that ask for information I can infer later. Add a "here's your first insight" moment within 60 seconds of signup.
Once the design is clear in my head, I describe it to my coding agent. I give it the current code, the new flow, and specific instructions. It starts building while I move on.
10:15 AM — Check-In (10 minutes)
Quick scan of what happened in the last 3 hours:
- The coding agent has a PR ready for the onboarding redesign. It's 80% of what I described. Two things need adjustment — I leave comments on the PR.
- A new support ticket came in: a customer can't reset their password. The agent resolved it automatically with a password reset link and a note explaining the process.
- My LinkedIn post from this morning got 23 likes and 4 comments. One comment asks a thoughtful question. I reply personally.
Back to work.
10:30 AM — Administrative Work (30 minutes)
The boring stuff that keeps the business alive:
- Review the weekly financial summary my agent prepared. Revenue, expenses, runway. Everything looks normal. One subscription I forgot to cancel — Figma team plan. I'm the only user. I tell the agent to downgrade it.
- Check on the coding agent's PR updates. It incorporated my feedback. I review the diff, run through the test coverage, merge and deploy. Deploy takes 4 minutes via GitHub Actions. No manual steps.
- Quick look at Stripe: two new Measure.events subscriptions this week. $58 MRR added. Small, but compounding.
11:00 AM — The Prospect Call (15 minutes)
The prospect who replied this morning wants to talk about Baseline (the AI site builder). They're a real estate agent who needs a website. The current one was built in 2019 and looks like it.
This is human work. Relationship, tone, trust. I listen more than I talk. Understand what they need. Quote $500 for the site build. They say yes.
After the call, I tell my agent: "New Baseline client. Real estate agent. Name: [name]. Email: [email]. They'll send me their content and photos. Set up the project and send them the intake form."
The agent creates the project in Linear, sends the intake email with our standard template, and logs the sale.
12:00 PM — Lunch. Actually Lunch.
No working lunch. The businesses are running. Nothing is on fire. The agents are handling the routine. I eat food and think about something that isn't work.
1:00 PM — Content and Outreach (1 hour)
- Review the newsletter draft one more time before the 2 PM send. Minor tweaks. Approve.
- Write one Twitter thread myself. This is content I want to own — an opinion piece about solo operators that I don't want to fully delegate. I write the thread, then use the agent to check it for typos and suggest improvements to the hook. I take one suggestion, ignore two.
- Review the social posts the agent has queued for the week. Kill one that feels forced. The rest are solid.
- Cold outreach: the agent prepared 5 personalized emails to potential Measure.events customers based on criteria I defined (SaaS companies, <50 employees, currently using Google Analytics, mentioned privacy on their marketing site). I review each email, personalize one sentence in each, and approve sending.
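Criteria like the ones in that last bullet only work because they're concrete enough to encode. A toy sketch of the filter, with hypothetical field names standing in for whatever your enrichment step actually produces:

```python
def qualifies(prospect: dict) -> bool:
    """Return True if a prospect matches the outreach criteria.
    Field names are hypothetical; real data comes from an enrichment step."""
    return (
        prospect["category"] == "saas"
        and prospect["employees"] < 50
        and "google analytics" in prospect["tech_stack"]
        and prospect["mentions_privacy"]
    )

prospects = [
    {"name": "Acme CRM", "category": "saas", "employees": 12,
     "tech_stack": ["google analytics"], "mentions_privacy": True},
    {"name": "BigCo", "category": "saas", "employees": 400,
     "tech_stack": ["google analytics"], "mentions_privacy": True},
]
shortlist = [p["name"] for p in prospects if qualifies(p)]  # -> ["Acme CRM"]
```

The point isn't the code; it's that vague criteria ("good-fit companies") can't be delegated, while testable criteria can.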
2:00 PM — Build Time (2 hours)
Second deep work block. Today it's PropFirmDeck work — adding a comparison feature that lets users compare two prop firms side by side. The data model exists; I just need the UI.
I describe the feature to the coding agent with a rough wireframe (photo of a napkin sketch, literally). It builds the component, writes tests, and has a PR ready in 45 minutes.
I review, tweak the styling (the agent always makes things slightly too padded — it has no taste for density), and merge. Deploy runs automatically.
The remaining time I spend on strategic thinking for Baseline: what would a $1,000/month plan look like? What services could justify it? I jot notes in a doc. No agent needed.
4:00 PM — Wrap-Up (30 minutes)
End of day check:
- Coding agent PRs: all merged, all deployed, all healthy
- Support tickets: 7 total today, 6 handled automatically, 1 handled by me this morning
- Revenue: +$558 today ($500 Baseline sale + $58 MRR from new subscriptions)
- Content: newsletter sent (42% open rate so far; full numbers tomorrow), 3 social posts published, 1 thread posted
- No production incidents
- Tomorrow's briefing will include: weekly analytics deep dive, follow-up on the cold outreach responses
I write a quick note in my daily memory file about the onboarding redesign decision and the Baseline pricing idea. Tomorrow-me will appreciate the context.
What This Day Represents
Total working time: approximately 7.5 hours.
Of that:
- Deep product work: 5 hours (the work that builds value)
- Decision-making and review: 1.5 hours (the work only I can do)
- Administrative: 30 minutes (the work that used to eat 3 hours)
- Sales call: 15 minutes (human relationship work)
What the agents handled without me:
- 6 support tickets resolved
- Morning briefing prepared
- Content published on schedule
- Financial summary generated
- Cold outreach emails drafted
- New client onboarded
- Code review feedback incorporated
- Deployments executed
- Production monitoring (24 hours)
This isn't a fantasy day. This is a Tuesday. Some days are better, some are worse. Some days a production issue needs real attention and eats two hours. Some days a client call runs long. Some days I don't feel productive and I only get 4 hours of real work done.
But the floor is higher than it used to be. Even on a bad day, the agents keep running. The emails get answered, the monitoring stays active, the content still publishes. The business doesn't stop moving just because I had an off day.
That's what an operating system does. It keeps running even when you don't.
Chapter 17: When to Break the Constraint
I've spent sixteen chapters arguing that you should stay small, stay solo, and let AI do the work that teams used to do. I believe every word.
Now I'm going to tell you when to ignore all of it.
Constraints are tools. They're not religion.
The moment a constraint stops making you sharper and starts making you smaller, it's time to revisit it. The whole point of choosing your constraints is that you can also choose to change them when the situation changes.
The danger isn't in breaking a constraint. The danger is in breaking it without noticing — sliding from "solo operator by choice" into "overwhelmed founder in denial" without ever making a conscious decision to evolve.
Here are the signals that it might be time.
Signal 1: You're the Bottleneck on the Thing That Matters Most
If the most important work in your business — the work that drives revenue, serves customers, builds the product — is consistently waiting on you, and you've already automated everything you can automate, you have a constraint problem.
This is different from being busy. Everyone's busy. The signal is specific: the *highest-value work* is delayed because you're doing lower-value work that can't be delegated to an agent.
Example: you're a developer-founder and the product needs a major architecture change. You have the skills. But you're spending 4 hours a day on sales calls because the business has grown to a point where inbound leads exceed what one person can handle, and AI can't close enterprise deals.
The architecture work — the strategic, high-leverage work — waits. Every week it waits, the technical debt compounds. The sales calls are important, but they're also the kind of work where a good human salesperson would have immediate impact.
That's the signal. Not "I'm overwhelmed" — that's normal. The signal is: "The work that only I should be doing is being crowded out by work that someone else could do well."
Signal 2: The Quality Floor Is Dropping
AI agents are consistent, but they have a quality ceiling. For most tasks, that ceiling is well above what you need. For some tasks, especially as your business matures and your standards rise, the ceiling isn't high enough.
If you notice:
- Customer support responses that technically answer the question but miss the emotional context
- Content that's competent but not distinctive
- Code that works but creates technical debt because it doesn't account for where the architecture is headed
- Design that's functional but lacks the refinement that differentiates your brand
...and these gaps are affecting how customers perceive your product or brand — that's a signal.
The question isn't "can the agent do this?" It's "is the agent's version good enough for where we are now?" Early on, good enough is great. Once you have paying customers with expectations, good enough might not be.
Signal 3: You're Turning Down Revenue
If you have more demand than you can serve — customers wanting to buy, projects you could take on, markets you could enter — and the limiting factor is your personal capacity after automation, the constraint is costing you money.
This is the clearest signal, and the hardest to act on, because it means admitting that the solo model has a ceiling.
It does. For some businesses, that ceiling is very high — a solo SaaS founder can serve thousands of customers without help. For others, especially service businesses, the ceiling is lower.
Run the math honestly:
- How much revenue are you leaving on the table?
- Would a hire generate more than they cost within 6 months?
- Is the revenue you're turning down recurring or one-time?
- Could you serve it with a contractor rather than a full hire?
If the numbers clearly say "hire," then hire. The constraint served its purpose — it kept you lean while you figured out what works. Now you know what works. Scale it.
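One way to make that math concrete is a simple payback model. The ramp assumption and every number below are placeholders, not benchmarks; plug in your own:

```python
def months_to_payback(monthly_cost: float, monthly_revenue: float,
                      ramp_months: int = 3, horizon: int = 24):
    """Months until a hire's cumulative added revenue covers cumulative cost.
    Assumes zero added revenue during the ramp and full revenue after,
    which is generous. Returns None if it never pays back within the horizon."""
    cum_cost = cum_rev = 0.0
    for month in range(1, horizon + 1):
        cum_cost += monthly_cost
        if month > ramp_months:
            cum_rev += monthly_revenue
        if cum_rev >= cum_cost:
            return month
    return None
```

With a $6K/month hire who unlocks $10K/month after a three-month ramp, payback lands at month 8, not month 4 — the ramp is what most founders forget to price in.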
Signal 4: You're Lonely in a Way That Hurts the Work
I'll be honest about this because nobody else in the "solo operator" space seems willing to.
Working alone is hard. Not the tactical stuff — the emotional stuff. There's no one to celebrate wins with. No one to share the weight of a bad week. No one who understands the specific context of what you're building.
AI agents are incredible collaborators. They are not companions. They don't care about your wins. They don't share your anxiety. They don't push back on your ideas because they believe something different.
If the isolation is affecting your judgment — if you're making worse decisions because you have no one to pressure-test them with, if you're losing motivation because the work feels solitary, if you're avoiding hard problems because there's no accountability — that's a signal.
The solution might not be hiring. It might be a co-founder, an advisor, a community of peers, a mastermind group. But if what you need is another human who's invested in the outcome, that's a legitimate need and not a failure of the solo model.
How to Break the Constraint Without Breaking What Works
If you decide to hire, do it differently than most founders:
Hire for the work AI can't do. Don't hire an engineer to write CRUD endpoints — the agent does that. Hire for the things the agent genuinely struggles with. Sales. Design taste. Strategic thinking. Customer relationships.
Hire one person, not a team. The jump from 1 to 2 is manageable. The jump from 1 to 5 is a different company. Add one person, integrate them fully, see how it changes your operating system, then decide whether to add another.
Give them the agent stack. Your first hire should be amplified by the same AI infrastructure you use. They shouldn't be doing work manually that agents handle. The hire should multiply your capacity, not replace your agents.
Keep the constraints that still serve you. "No employees" might change to "one employee." That doesn't mean you abandon "no meetings" or "no office" or "AI cost cap." Evaluate each constraint independently.
Hire slow, fire fast. The cliché exists because it's true. When you've been running solo, adding the wrong person is worse than adding no one. Take your time finding the right person. Move quickly if it's not working.
The Goal Was Never Purity
I want to be clear about something: the goal of this book was never to argue that everyone should be a solo operator forever. The goal was to show that the floor — the minimum viable team — is much lower than most people think.
You don't need 10 people to run a software company. You might need 1 or 2. You might need zero.
The constraint advantage isn't about never hiring. It's about never hiring by default. It's about making the conscious decision: "I am adding a person because I've exhausted what I can do alone with AI, and the specific gap I'm filling requires a human."
That's a fundamentally different hiring decision than "we need to grow the team."
What Comes Next
If you've read this far, you're either already running lean or thinking about it.
Either way, the opportunity window is real and it's now. The cost of AI is dropping. The capability is increasing. The infrastructure for solo operators is better than it's ever been. The stigma of "it's just me" is fading as more companies prove that small can be powerful.
Start with your constraints. Build your agent stack. Ship faster than your competitors. Stay lean longer than feels comfortable.
And when the time comes to break a constraint — break it on purpose, with clear eyes, knowing exactly why.
The constraint advantage isn't about staying small. It's about being intentional about how you grow.
That's the whole game.