
For many developers, collaborating with an AI coding agent is an exercise in hope over strategy. They give a single, vague instruction and cross their fingers—a process Ryan Carson calls “vibe coding” or “yoloing.” It’s a fun way to experiment, but as Carson notes, for “engineers that need to build real stuff,” it’s a recipe for frustration.
This isn’t a theoretical problem for Carson. As a serial founder, he’s experienced both ends of the startup spectrum. He built and sold DropSend as a solo founder, then co-founded Treehouse, a VC-backed behemoth that taught a million people to code. Now, he’s returning to his roots, building a new startup, Untangle, as a solo founder once again—but this time, supercharged by AI. His highly structured, three-file system for agentic development isn’t just a collection of clever prompts; it’s a professional methodology born from years of experience. This article shares the most impactful and counterintuitive takeaways from his battle-tested approach.
1. Slow Down to Speed Up: The Power of Deliberate Planning
The most striking part of Carson’s process is how much time is spent in structured planning before the AI writes a single line of code. In a live demo, this setup phase took a full 20 minutes. This deliberate planning is a direct refutation of the “prompt now, fix later” impulse that dominates amateur AI usage. Instead of a single vague request, the system first generates a detailed Product Requirements Document (PRD), then breaks that down into high-level “parent tasks,” and finally generates granular, atomic “subtasks” for each.
This methodical planning acts as a critical guardrail. It forces the developer to clarify their own thinking and provides the agent with a detailed, step-by-step roadmap. By investing time upfront, you prevent the AI from veering off-course, ultimately saving hours of debugging and rework. This isn’t a hack; it’s the discipline of an architect versus the impatience of a script-kiddie. It’s what professional, agent-driven software development actually looks like.
We’ve been talking for like 20 minutes, right? And, like, now it’s finally starting to code… This is actually the way real software development happens with agents.
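The PRD → parent tasks → subtasks hierarchy the system produces can be sketched as a simple data structure. This is an illustrative sketch, not Carson’s actual prompt output; the class names and the sample project are hypothetical.

```python
# Illustrative sketch of the plan hierarchy Carson's three-file system builds
# before any code is written: PRD -> parent tasks -> atomic subtasks.
# All names and the sample project below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Subtask:
    description: str
    done: bool = False

@dataclass
class ParentTask:
    title: str
    subtasks: list = field(default_factory=list)

@dataclass
class Plan:
    prd: str                                  # Product Requirements Document text
    parent_tasks: list = field(default_factory=list)

def build_plan(prd_text, parent_titles, subtasks_by_parent):
    """Assemble the PRD -> parent tasks -> subtasks hierarchy."""
    plan = Plan(prd=prd_text)
    for title in parent_titles:
        parent = ParentTask(title=title)
        parent.subtasks = [Subtask(s) for s in subtasks_by_parent.get(title, [])]
        plan.parent_tasks.append(parent)
    return plan

plan = build_plan(
    "Build a habit-tracking web app for busy parents.",
    ["Set up project scaffolding", "Implement habit CRUD"],
    {"Implement habit CRUD": ["Create data model", "Add create endpoint"]},
)
```

The point of the structure is that every unit of work the agent touches is small, explicit, and reviewable before execution begins.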
2. Treat Your AI Like a New Hire, Not a Magician
Carson’s core philosophy is to treat the AI agent like a very smart, but context-free, new engineer who just showed up on your doorstep. This simple analogy is a powerful forcing function that combats a developer’s natural tendency toward laziness when prompting. As interviewer Peter Yang admitted, “I become so lazy… I just hey go build this… this is forcing me to actually provide some more details.”
Carson’s system operationalizes this principle with its first file, create_prd.md. The prompt explicitly instructs the AI agent to begin by asking clarifying questions about the project’s goals, target users, and the specific problem being solved. This step is crucial for two reasons: it forces the developer to articulate their idea with precision, and it equips the AI with the essential context needed to generate a relevant and effective plan.
Imagine that you had a very smart engineer show up on your doorstep. They have no context, no background. You wouldn’t just tell a random new employee, “Make me a game that’s super fun to play,” and then expect them to succeed.
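The “ask clarifying questions first” gate can be sketched as a function that refuses to draft a PRD until every question has an answer. This is a minimal, hypothetical sketch of the behavior create_prd.md instructs; the specific questions and field names are examples, not the real prompt’s wording.

```python
# Hypothetical sketch of the clarifying-questions gate enforced by
# create_prd.md: no PRD is drafted until the developer supplies context.
# The questions and return shape are illustrative, not the actual prompt.
CLARIFYING_QUESTIONS = [
    "What problem does this solve, and for whom?",
    "Who is the target user?",
    "What does success look like for v1?",
]

def draft_prd(idea, answers):
    """Refuse to draft a PRD until every clarifying question is answered."""
    missing = [q for q in CLARIFYING_QUESTIONS if not answers.get(q, "").strip()]
    if missing:
        return {"status": "needs_input", "questions": missing}
    context = "\n".join(f"- {q} {a}" for q, a in answers.items())
    return {"status": "ready", "prd": f"# PRD: {idea}\n\n## Context\n{context}"}
```

The gate does double duty: it forces the developer to articulate the idea precisely, and it hands the agent the context it needs to plan well.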
3. Require Human Approval Before Every Major Step
A common fantasy is that AI agents will build entire applications autonomously while we sleep. Carson’s system is a practical rejection of this idea, building in explicit checkpoints that keep the human developer firmly in the driver’s seat. This “human-in-the-loop” approach is essential for guiding the agent and ensuring the project doesn’t veer off course.
The system enforces this in two key ways. First, the generate_tasks.md prompt instructs the AI to create a short list of high-level “parent tasks” and wait for user confirmation before generating detailed subtasks. Second, the process_task_list.md prompt forces the agent to ask for permission (a “yes” or “y”) before executing each individual subtask. However, this isn’t rigid dogma. As AI models improve, the system adapts. Carson notes that the need for constant supervision is already lessening with more advanced models.
I wouldn’t want the AI to run off and create 30 tasks. I would want it to create a high level… you know, give me five tasks, and then I want to approve those.
As he later reflected on the tight control loop:
I think, you know, when I shipped this we were on Sonnet 3.7, and I think with Sonnet 4 you really don’t need to handhold it quite as tightly.
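The per-subtask approval gate can be sketched as a loop that pauses for an explicit “y” before each step. This is a minimal sketch of the behavior process_task_list.md describes, not its actual implementation; the approval callback is injected so the gate can be exercised without a live terminal.

```python
# Minimal sketch of the human-in-the-loop gate from process_task_list.md:
# execute subtasks one at a time, pausing for explicit approval before each.
# `execute` and `approve` are hypothetical stand-ins for the agent's actions.
def run_subtasks(subtasks, execute,
                 approve=lambda t: input(f"Run '{t}'? (y/n) ").strip().lower() == "y"):
    """Execute subtasks sequentially, stopping at the first rejection."""
    completed = []
    for task in subtasks:
        if not approve(task):
            break                      # human stays in the driver's seat
        execute(task)
        completed.append(task)
    return completed
```

Because `approve` is a parameter, loosening the control loop for a stronger model (as Carson describes) is a one-line change: pass an auto-approver instead of prompting for every step.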
4. Make Your Test Suite the AI’s Real Co-Pilot
In a traditional workflow, Test-Driven Development (TDD) is a best practice. In an agentic workflow, it becomes the non-negotiable feedback mechanism that separates success from failure. Without tests, a developer is stuck in a frustrating, subjective loop of “vibe coding,” telling the agent, “Hey, this is not working, go fix this… it’s not working, it’s still not working.”
In Carson’s demo, when he noticed the initial plan lacked testing, he instructed the agent to add a Jest test after each functional change. This highlights the developer’s crucial role in refining the AI’s strategy. Tests provide the agent with a clear, automated, and objective signal of success or failure. This loop replaces subjective frustration with objective signals, forming the foundation of any reliable, professional AI development process.
The reason why you have to really care about test-driven development now is because it’s the loop that the agent needs to actually know if it’s doing things right.
5. Use Different Models for Different Kinds of Thinking
One of the most sophisticated techniques in Carson’s workflow is leveraging a portfolio of AI models for their unique strengths. His agent of choice, AMP, has an “Oracle” feature that demonstrates this perfectly. For most implementation tasks, the agent uses a faster, more cost-effective model like Claude Sonnet. For summarization, it might use Gemini Flash. But when a high-level strategic review is needed, Carson can invoke the Oracle.
This action makes a tool call to a more powerful, slower, and more expensive reasoning model—not to perform an action, but to review a plan. This is a subtle but critical distinction. He isn’t asking the powerful model to code; he’s asking it to think. As Carson puts it, “what you’re doing is saying I just want someone to double-check what I’m doing.” This is analogous to asking a senior architect for a second opinion on a blueprint before letting a junior engineer start building.
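The routing idea reduces to a small dispatch table: each kind of work goes to the model suited for it. This is a hypothetical illustration of the pattern, not AMP’s actual configuration; the model names are placeholders.

```python
# Hypothetical sketch of "different models for different kinds of thinking":
# route implementation, summarization, and plan review to different models.
# Model names are placeholders, not AMP's actual configuration.
ROUTES = {
    "implement": "fast-coding-model",       # cheap and quick for edits
    "summarize": "flash-model",             # high volume, low stakes
    "review":    "oracle-reasoning-model",  # slow, expensive, double-checks the plan
}

def pick_model(task_kind):
    """Return the model for a task kind, defaulting to the implementation model."""
    return ROUTES.get(task_kind, ROUTES["implement"])
```

The key constraint mirrors the Oracle’s role: the “review” route is only ever asked to critique a plan, never to execute it.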
Conclusion: The Operating System for the Solo Founder
Building production-grade software with AI requires a mental shift from coder to architect. But Carson’s system reveals a deeper truth: this disciplined, architectural mindset is not just a better way to code—it’s the operating system for a new kind of entrepreneur.
Carson is building Untangle to solve a painful, real-world problem for a niche audience, a business he calls a “pain pill, not a vitamin.” This is the classic solo founder playbook, but now enabled by an unprecedented level of leverage. His structured process is what makes it possible for one person to build, ship, and manage a complex application that once would have required a team. It transforms the developer from someone who merely writes code into someone who designs a system of collaboration between human insight and machine execution. This isn’t just about building apps anymore; it’s about building a one-person engine of value.