
Claude Code vs n8n for Solo Builders


Claude Code and n8n are not replacements for each other. They are two layers of a solo-builder operating system.

| Layer | Tool | Job |
| --- | --- | --- |
| Build and judgment | Claude Code | Read context, edit files, draft, review, implement |
| Trigger and routing | n8n | Detect events, gather inputs, retry, notify, route |
| Approval | Human | Protect quality, voice, brand, money, production |

If your workflow needs repo context, use Claude Code. If your workflow needs a recurring trigger, use n8n. If your workflow needs both, use both.

Why solo builders confuse them

Both can touch AI. Claude Code can run commands and make changes. n8n can call an LLM. So it is tempting to ask, "Which one should run the business?"

Wrong question. The better question is: which part of the workflow needs judgment, and which part needs reliability? Claude Code is for judgment. n8n is for reliability.

A practical example

Say you want to turn a Search Console export into a new search asset.

n8n should:

- detect the export
- save the file
- notify the system
- route the final output

Claude Code should:

- read the repo
- score opportunities
- create or update the page
- add internal links
- run the build

You should:

- approve before publishing

That is the Ship Lean pattern.

Start with the planner

Before building, use the Claude Code + n8n Workflow Planner. If the workflow is agent-heavy, use the n8n AI Agent Workflow Builder.

FAQ

Should solo builders use Claude Code or n8n?
Use Claude Code for codebase work, repo context, writing, and judgment. Use n8n for triggers, routing, integrations, retries, and schedules.

Can Claude Code and n8n work together?
Yes. n8n can detect the event and gather inputs; Claude Code can create the draft, plan, script, or diff; then n8n can route it for approval.
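The judgment-vs-reliability split above can be sketched as a tiny routing helper. This is purely illustrative: the flag names and tool labels are assumptions for the sketch, not a real API.

```javascript
// Illustrative sketch of the Ship Lean routing rule: judgment goes to
// Claude Code, reliable triggers go to n8n, and consequential steps
// always get a human approval layer.
function routeTask(task) {
  const owners = [];
  if (task.needsJudgment || task.touchesRepo) owners.push("claude-code");
  if (task.needsTrigger || task.needsRetries) owners.push("n8n");
  if (task.touchesProduction || task.touchesMoney) owners.push("human-approval");
  return owners;
}

// Example: the Search Console workflow from the post needs all three layers.
const gscWorkflow = {
  needsJudgment: true,     // score opportunities, create or update the page
  needsTrigger: true,      // detect the export when it lands
  touchesProduction: true, // publishing requires sign-off
};
console.log(routeTask(gscWorkflow)); // ["claude-code", "n8n", "human-approval"]
```

The point of the sketch: "use both" is not a cop-out, it is the normal output of the rule.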

Best AI Stack for Solo Founders: The Lean Version


The best AI stack for solo founders is not the biggest stack. It is the smallest stack that helps you build, remember, automate, and publish without turning your business into a SaaS subscription museum. For most solo builders, that means four layers:

| Layer | Tool | Job |
| --- | --- | --- |
| Build layer | Claude Code | Write code, inspect repos, create agents, ship pages |
| Automation layer | n8n | Move data between tools, run repeatable workflows, trigger approvals |
| Memory layer | Obsidian or Notion | Store decisions, prompts, workflow notes, content ideas |
| Distribution layer | Astro, MailerLite, YouTube, X | Turn build proof into searchable and social assets |

Here is the move: pick one tool per job, then connect those tools around a weekly shipping rhythm.

The lean stack I would start with

Start with Claude Code, self-hosted n8n, Obsidian, MailerLite, and a simple Astro site. That stack gives you:

- one place to build
- one place to automate
- one place to remember
- one place to publish
- one owned email channel

You can add tools later. But if you cannot explain what a tool does for revenue, distribution, or saved hours, it probably does not belong in the stack yet.

The stack should answer four boring questions

Before you add a tool, ask what question it answers:

| Question | Tool layer | Good answer |
| --- | --- | --- |
| What am I building? | Build | Claude Code can inspect the repo and make the change |
| What needs to happen again? | Automation | n8n can trigger, route, retry, and log the workflow |
| What did I already learn? | Memory | Obsidian or Notion stores decisions and reusable prompts |
| How does this become trust? | Distribution | The site, email list, and social channels make the proof public |

Most solo founders do this backward. They start with "what tool is hot?" and end up with seven disconnected dashboards. Start with the work instead.

Claude Code is the build layer

ChatGPT is useful for thinking. Claude Code is useful for operating inside a codebase. For solo builders, that distinction matters.
Claude Code can inspect files, update pages, create scripts, and work directly in the repo. That makes it better for repeatable systems work: site updates, content pipelines, workflow docs, and internal tools.

Use Claude Code when:

- the task touches files
- the system needs judgment
- context matters
- a page, script, workflow doc, or internal tool needs to change
- you need the agent to read before it writes

Do not use Claude Code as a recurring scheduler. That is not the job.

n8n is the automation layer

n8n is the automation layer. It should handle triggers, data movement, retries, and approvals. Use n8n for:

- RSS scans
- webhook intake
- Notion or Airtable status changes
- email list updates
- content routing
- scheduled checks

Do not use n8n as the brain for every task. Let Claude Code or an LLM handle judgment. Let n8n handle the pipes.

That split matters because recurring automation breaks in boring ways: expired tokens, changed fields, failed webhooks, missing approvals. n8n is better at making those failures visible.

Obsidian or Notion is the memory layer

Your AI stack gets weaker when every decision is trapped in chat history. You need a place for:

- reusable prompts
- workflow runbooks
- project decisions
- content ideas
- product notes
- bugs and fixes
- what you tried that did not work

Obsidian is great if you like local markdown and fast notes. Notion is great if your workflows already live in databases. Pick one. The expensive mistake is using both badly.

Distribution is part of the stack

A solo builder does not just need to build faster. You need to make the work visible. That means the stack needs a distribution layer:

- Astro site for search pages and tools
- MailerLite or ConvertKit for owned email
- YouTube for proof and discovery
- X or LinkedIn for fast feedback
- free tools for search and AI visibility

This is why I do not treat blogging as "content." A good search page is infrastructure. A useful calculator is infrastructure. A workflow page is infrastructure.
Where most solo founders go wrong: tools before workflow

They buy tools before they have a workflow. The better order is:

1. Do the task manually once.
2. Write down the steps.
3. Remove steps that should not exist.
4. Automate the repeatable pieces.
5. Keep a human approval step anywhere quality matters.

That order saves you from automating a mess.

The $100-ish version I would run

Use the AI stack cost calculator if you want to model your own number, but the lean version looks like this:

| Category | Example setup | Monthly range |
| --- | --- | --- |
| AI assistant | Claude/Claude Code | $20-$100 |
| Automation | self-hosted n8n or starter plan | $5-$30 |
| Site | Astro on Vercel | $0-$20 |
| Email | MailerLite/ConvertKit | $0-$30 |
| Notes | Obsidian or Notion | $0-$15 |

You can spend more. But the first goal is not a perfect stack. The first goal is a stack that ships proof every week.

A simple starter stack for solo builders

If you are starting today, use this:

- Claude Code for building and content ops
- n8n for workflow automation
- Obsidian for durable notes
- MailerLite for email
- Astro for the site
- YouTube/X/LinkedIn for distribution

That is enough to run a serious one-person AI business without drowning in SaaS subscriptions.

What I would automate first

Do not automate your whole business first. Automate the loop that creates distribution from work you already did:

1. Capture a build note, video transcript, or workflow run.
2. Save it to your memory layer.
3. Generate one search page idea.
4. Generate one newsletter draft.
5. Generate 2-3 social posts.
6. Route everything for human approval.
7. Publish only the pieces that are actually useful.

That gives you leverage without handing your brand to a content slot machine.

For prioritizing what to automate, use the automation priority audit. For content math, use the content flywheel ROI calculator.

FAQ

What is the cheapest AI stack for solo founders?
The cheapest useful stack is Claude Code, self-hosted n8n, Obsidian, and a static site. You can keep the recurring cost low while still getting serious leverage.
Should solo founders use Zapier or n8n?
Use Zapier for simple app-to-app workflows. Use n8n when you want self-hosting, lower marginal cost, and more control over multi-step automations.

Do I need a vector database?
Probably not at the start. Most solo builders need better files, cleaner workflows, and searchable notes before they need a custom RAG system.

What should I automate first?
Automate the repetitive workflow that directly supports revenue or distribution. Content repurposing, lead capture, research intake, and publishing approvals are good first candidates.

Is this stack enough to grow SEO traffic?
The stack is only the machine. Traffic comes from what the machine ships: useful tools, workflow pages, comparison posts, refreshed articles, internal links, and pages that answer real search questions better than the generic results.

Want help mapping the lean stack to your actual workflow? Start here.
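To sanity-check the $100-ish table above, the ranges sum like this. A quick sketch; the numbers are the example ranges from the post, not real quotes.

```javascript
// Sum the low and high ends of the example stack from the cost table.
const stack = [
  { category: "AI assistant", low: 20, high: 100 },
  { category: "Automation",   low: 5,  high: 30 },
  { category: "Site",         low: 0,  high: 20 },
  { category: "Email",        low: 0,  high: 30 },
  { category: "Notes",        low: 0,  high: 15 },
];

const low  = stack.reduce((sum, s) => sum + s.low, 0);
const high = stack.reduce((sum, s) => sum + s.high, 0);
console.log(`$${low}-$${high} per month`); // "$25-$195 per month"
```

The lean end is roughly a Netflix subscription; the heavy end is still under $200 a month, which is the point of the "lean version."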

Claude Code vs n8n: Which One Should Solo Builders Use?


Claude Code and n8n are not competitors. They are different parts of the same operating system. Use Claude Code when the task needs judgment, file edits, writing, reasoning, or codebase awareness. Use n8n when the task needs triggers, data movement, scheduled runs, retries, and integrations.

The boring answer is the useful answer: Claude Code builds and thinks. n8n runs and routes.

Quick comparison

| Use case | Claude Code | n8n |
| --- | --- | --- |
| Edit website files | Best | Weak |
| Build an internal script | Best | Possible |
| Trigger when a form is submitted | Possible | Best |
| Move data between tools | Possible | Best |
| Write content in your voice | Best | Needs LLM node |
| Schedule a daily workflow | Possible | Best |
| Inspect a repo and make changes | Best | Weak |
| Route content through approvals | Possible | Best |

The 10-second decision rule

Ask this: does the task need context and judgment, or does it need a reliable trigger?

If it needs context and judgment, use Claude Code. If it needs a reliable trigger, use n8n. If it needs both, use both.

That sounds too simple, but it prevents the common mistake: trying to make n8n think like an operator or trying to make Claude Code behave like a durable scheduler.

When to use Claude Code: messy work with context

Use Claude Code for work where the prompt is the product. Examples:

- writing a blog draft from a real build log
- refactoring a site page
- creating a new Astro page
- reviewing a workflow
- generating a script
- turning a messy idea into an implementation plan

Claude Code is strongest when it can read the surrounding context and make decisions.

Claude Code is especially strong for solo builders because your business context often lives in files:

- site copy
- product docs
- workflow notes
- analytics exports
- newsletter drafts
- messy markdown docs
- code and config

That is not a clean API problem. That is an "understand the room before touching things" problem.

When to use n8n: repeatable work with triggers

Use n8n for the plumbing.
Examples:

- when a YouTube video is uploaded, create content tasks
- when a Notion status changes, trigger a writing workflow
- when an RSS item matches a topic, save it for review
- every Friday, prepare the newsletter draft queue
- when a form is submitted, add the person to MailerLite

n8n is strongest when the workflow has a clear trigger and repeatable steps.

It also gives you visibility. When a workflow fails, you can inspect the run, find the bad node, fix the credential, retry the step, and keep moving. That matters once the workflow touches real business operations.

The best pattern: Claude Code plus n8n plus human approval

The clean pattern is:

1. n8n detects the event.
2. n8n gathers the inputs.
3. Claude handles the judgment-heavy step.
4. n8n saves the output.
5. A human approves.
6. n8n publishes or routes the result.

That is the Ship Lean pattern: automation for the boring parts, human review for the parts with consequences.

Here is what that looks like for content:

| Step | Owner | Job |
| --- | --- | --- |
| 1 | n8n | Detect new video, build log, or GSC CSV |
| 2 | n8n | Gather transcript, URL, notes, metadata |
| 3 | Claude Code | Create brief, draft, edit, and file diff |
| 4 | Human | Approve quality and positioning |
| 5 | n8n/GitHub | Route PR, deploy, notify |

That is the version I trust. Not "AI posts directly to production while you sleep." That sounds good until it publishes something stale, generic, or wrong.

What should solo builders choose first?

If your problem is "I need to build or improve the system," start with Claude Code.

If your problem is "I keep copying data between apps," start with n8n.

If your problem is "I shipped a thing and nobody knows it exists," use both. Claude Code turns the proof into assets. n8n routes and schedules them.

Common mistake: using n8n as the whole brain

n8n can call LLMs. That does not mean the whole system should live inside n8n. Once prompts, examples, brand rules, page templates, and content logic get serious, they become easier to maintain in a repo. That is where Claude Code shines.
1. Use n8n to collect inputs and trigger the run.
2. Use the repo for durable instructions.
3. Use Claude Code to operate on the repo.
4. Use n8n again to notify and route the result.

Common mistake: using Claude Code for recurring ops

Claude Code can write a script. It can run a command. It can help you publish. But recurring business operations need:

- schedules
- retries
- run history
- credential handling
- webhook triggers
- alerts
- handoff to other apps

That is n8n territory.

The Ship Lean setup I would run

For a solo builder trying to grow traffic:

1. Claude Code owns the content system in the repo.
2. n8n watches for inputs: Search Console exports, YouTube videos, build logs, and newsletter notes.
3. Claude Code creates the page/tool/workflow draft.
4. The editor skill checks for thinness, reader fit, and whether the page actually helps.
5. Visual skill generates a diagram or comparison asset.
6. A human approves.
7. GitHub/Vercel ships.

Want to estimate whether an automation is worth building? Run the automation priority audit. Want the stack cost? Use the AI stack cost calculator.

If your specific question is whether n8n should run an agent workflow, read what an n8n AI agent is and then map it with the n8n AI Agent Workflow Builder.

FAQ

Can n8n replace Claude Code?
No. n8n can call an LLM, but it does not replace a code-aware agent working inside your repo.

Can Claude Code replace n8n?
Sometimes, for small scripts. But for recurring workflows with integrations, triggers, and retries, n8n is cleaner.

What is the best first workflow?
A content repurposing workflow is usually a strong first build because it turns work you already did into distribution.

Should I learn n8n if I already use Claude Code?
Yes, if you want recurring workflows that touch multiple apps. Claude Code helps you build and maintain the system. n8n helps the system run on schedule.
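The human-approval step in the Ship Lean pattern above is ultimately just a gate in front of publishing. A minimal sketch, assuming a `status` field that a human sets during review (the field name and queue shape are illustrative, not from a real n8n node):

```javascript
// Only route drafts that a human has explicitly approved.
// Everything else is held back, never auto-published.
function splitForPublishing(drafts) {
  const publish = drafts.filter((d) => d.status === "approved");
  const hold = drafts.filter((d) => d.status !== "approved");
  return { publish, hold };
}

const queue = [
  { id: 1, type: "blog-draft", status: "approved" },
  { id: 2, type: "newsletter", status: "pending-review" },
  { id: 3, type: "x-thread",   status: "rejected" },
];
const { publish, hold } = splitForPublishing(queue);
console.log(publish.length, hold.length); // 1 2
```

The gate is deliberately dumb: the judgment happened earlier, with the human. The pipe only checks the flag.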

How to Turn One YouTube Video Into 13 Content Assets


One YouTube video should not stay one YouTube video. If you are a solo builder, every long-form video is proof. It can become Shorts, X threads, LinkedIn posts, a newsletter draft, and search pages. The baseline Ship Lean flywheel is:

| Output | Count |
| --- | --- |
| YouTube Shorts | 7 |
| LinkedIn posts | 3 |
| X threads/posts | 2 |
| Newsletter draft | 1 |
| Total | 13 |

The key is not "make more content." The key is to turn one real proof asset into multiple useful surfaces without flattening it into generic AI mush.

The workflow in one screen

| Stage | Input | Output |
| --- | --- | --- |
| Capture | YouTube video | transcript, timestamps, screenshots |
| Extract | transcript + notes | proof moments, claims, examples |
| Package | proof moments | Shorts, posts, newsletter, search page idea |
| Review | drafts | approved assets only |
| Publish | approved assets | social, email, site |
| Measure | analytics | next topics and refreshes |

That review step is not optional. It is what keeps the system from becoming an automated content landfill.

Step 1: Pull the transcript and receipts

Start with the transcript, not the video file. You need:

- the raw transcript
- timestamps
- title
- description
- any notes or screenshots from the build

The transcript becomes the source of truth. The screenshots and notes become the receipts. Do not skip the receipts. They are the difference between "here is some advice" and "here is what I actually built."

Step 2: Find the proof moments

Do not clip randomly. Find moments where something useful happens:

- a mistake gets fixed
- a tool choice is explained
- a cost is revealed
- a workflow is shown
- a before/after is obvious
- a decision is made

Those moments become Shorts and social posts. Use this filter:

| Moment type | Why it works | Asset fit |
| --- | --- | --- |
| Mistake | People trust honest friction | Short, X post |
| Decision | Helps builders choose faster | LinkedIn, comparison |
| Before/after | Shows concrete progress | Short, newsletter |
| Cost/time | Makes the system real | Short, SEO section |
| Workflow | Gives them something to steal | Blog, workflow page |

Step 3: Create the 7 Shorts from one idea each

Each Short needs one idea.
Good Short angles:

- "I tried X so you do not have to"
- "This saved me Y hours"
- "The mistake was not the tool"
- "Here is the stack"
- "Most builders skip this step"

Do not end every Short with a CTA. Often the strongest ending is the verdict.

Good Short structure:

1. First line names the pain or surprise.
2. Middle shows the proof moment.
3. Last line gives the verdict.

Example: I thought the tool was the bottleneck. It was not. The bottleneck was that I had no approval step, so every automation either stalled or published junk.

Step 4: Create 3 LinkedIn posts with different jobs

LinkedIn should not be a transcript summary. Use:

- one tactical post
- one lesson post
- one build-in-public post

The tactical post teaches the workflow. The lesson post explains what changed your mind. The build-in-public post shows what you shipped. Those are three different angles, not three rewrites of the same paragraph.

Step 5: Create 2 X posts or threads with sharper edges

X is the sharpest version. Use:

- one atomic takeaway
- one short thread with steps

If it does not have a strong first line, it will die. For X, cut the setup. Start at the tension:

- "Most content automation fails because it automates before it understands the workflow."
- "Claude Code should not replace n8n. It should make n8n less painful to build."
- "The best AI stack is usually the one with fewer tools and better handoffs."

Step 6: Create the newsletter draft as a field note

The newsletter should feel like a field note:

- What I built
- Why I built it
- What broke
- What worked
- What you can steal

That format matches builders because it respects their time.

Step 7: Create one search page or tool idea

Every video should create at least one searchable page idea. Examples:

- "Claude Code vs n8n"
- "Best AI stack for solo founders"
- "How to automate content repurposing"
- "How much does an AI content system cost?"

This is how your YouTube work becomes long-term search inventory.
Sometimes the search asset should be a tool instead of a post:

- cost calculator
- automation priority audit
- content flywheel ROI calculator
- workflow checklist
- stack selector

That is why I like this system. Social gives you feedback fast. Search tools and workflow pages compound slowly.

The semi-automated version

Here is the version I trust:

1. n8n detects a new YouTube video.
2. n8n saves the transcript, title, description, and URL.
3. Claude Code extracts proof moments and drafts assets.
4. The editor skill removes weak or generic assets.
5. A human approves the final pieces.
6. n8n routes approved assets to the scheduler/newsletter/site queue.

Fully automated publishing sounds attractive. Semi-automated publishing is how you keep quality while still moving fast.

Want to sanity-check the value of the workflow? Use the content flywheel ROI calculator. If you are deciding whether this should be your first automation, run the automation priority audit. Want help wiring this flywheel into your actual stack? Start here.

FAQ

How many assets should one YouTube video create?
Thirteen is a strong baseline: 7 Shorts, 3 LinkedIn posts, 2 X posts, and 1 newsletter draft.

Should AI fully automate content repurposing?
No. AI should draft, format, and route. A human should approve the final asset before publishing.

What tools do you need?
Claude Code for judgment-heavy drafting, n8n for routing and triggers, Notion or Obsidian for storage, and a scheduler for publishing.

Should every video become a blog post?
No. Every video should create a search idea, but not every idea deserves a full post. Some should become tools, workflow pages, refreshes, or internal notes.
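Because the 13-asset baseline above is a fixed fan-out, it is easy to turn into a review checklist that the semi-automated pipeline can fill in. A sketch; the field names are illustrative, and every slot starts as an unapproved draft:

```javascript
// Expand one video into the Ship Lean baseline:
// 7 Shorts + 3 LinkedIn posts + 2 X posts + 1 newsletter = 13 assets.
const BASELINE = { short: 7, linkedin: 3, x: 2, newsletter: 1 };

function assetPlan(videoTitle) {
  return Object.entries(BASELINE).flatMap(([type, count]) =>
    Array.from({ length: count }, (_, i) => ({
      type,
      slot: i + 1,
      source: videoTitle,
      status: "draft", // nothing publishes until a human approves it
    }))
  );
}

const plan = assetPlan("Claude Code vs n8n build log");
console.log(plan.length); // 13
```

Each slot is a prompt for review, not a publishing obligation: assets that come out weak get cut, and thirteen is the ceiling, not the quota.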

How to Build an n8n AI Agent (And Actually Make It Agentic)


Quick answer: An n8n AI agent is the AI Agent node plus tools (HTTP, database, code, APIs) that lets an LLM read context, call those tools, and pick the next step on its own. Without tools, it's just a chatbot in a workflow. The Ship Lean pattern: n8n handles triggers and routing, Claude Code or another LLM handles judgment, and a human approves anything that touches customers or money.

If you're trying to figure out whether you even need an agent, start with what an n8n AI agent is and n8n AI agent vs workflow automation. Short version: agents are for judgment calls, not every automation.

I built my first "agent" in n8n and felt very smart for about ten minutes. Then I realized I'd just made a fancy ChatGPT call. Input went in. Output came out. Nothing decided. Nothing checked. No tools.

That's the gap nobody flags in the tutorials: dropping the AI Agent node into a workflow doesn't make it agentic. It makes it an LLM with a trigger.

This post is the version I wish I'd had when I started: what an n8n AI agent actually is, when to use one instead of a normal workflow, and the pattern I use now that keeps me out of multi-agent spaghetti.

What Is an n8n AI Agent?

An n8n AI agent is a workflow built around the AI Agent node with tools attached: usually HTTP Request, a database, Airtable, code, or other n8n nodes. That lets the LLM do three things in a loop:

1. Read the input and current context
2. Decide whether to call a tool (and which one)
3. Use the tool's output to pick the next action or final answer

The "agentic" part is the loop. The model isn't just generating text. It's choosing actions based on what it finds. Without tools, the AI Agent node is a fancy LLM call. With tools, it can look things up, write to a database, hit an API, and reason about the result before answering.

n8n AI Agent vs Regular Workflow Automation: When to Use Which

I default to plain workflow automation.
Agents are the exception, not the rule.

| Situation | Use a regular workflow | Use an AI agent |
| --- | --- | --- |
| Inputs are predictable (form fields, structured webhook) | ✅ | |
| Logic fits a clean if-then tree | ✅ | |
| You need messy text classified or summarized | | ✅ |
| You need it to look something up before deciding | | ✅ |
| Output has to be structured every time, no surprises | ✅ | |
| Edge cases keep slipping through your filters | | ✅ |
| Cost per run matters and volume is high | ✅ | |

Rule of thumb I use: if I can write the rules in 10 minutes, it's a workflow. If I'd need 50 if-statements and still miss cases, it's an agent.

A workflow that classifies email tone with keyword matching will miss "I've been waiting three weeks and this is getting ridiculous." An agent reads it and routes it correctly. That's the kind of decision worth paying tokens for.

If the decision is "did the Stripe webhook fire? then send the receipt," don't put an LLM in the path.

The Ship Lean Agent Pattern

Here's the layout I use now. It's not clever. That's the point.

1. n8n handles the trigger and routing. Webhook, RSS, schedule, Airtable change: n8n is good at this. Don't make the LLM do it.

2. The LLM handles judgment. This is the AI Agent node (or a Claude Code call via HTTP). It reads context, calls tools, returns a structured decision. One agent, one job.

3. Tools are scoped tight. Read-only when possible. Pre-filtered queries, not "here's the whole database." Every tool is a surface area you have to trust.

4. A human approves anything that ships. Sends an email to a customer, charges a card, posts to a public account, deploys code: that goes to a Slack/Telegram approval step before it executes. The agent drafts; you click yes.

5. Claude Code does the building, n8n does the running. I draft prompts, tool definitions, and workflow logic in Claude Code. n8n runs the workflow on a schedule. GitHub holds the workflow JSON. Vercel hosts anything customer-facing. Each tool does what it's good at.

That's the whole stack. No swarm of sub-agents.
No "AI orchestrator" picking other agents. One agent, scoped tools, human in the loop where it matters.

What You Need Before Building

- An n8n instance. I self-host on Hostinger so I'm not paying per execution.
- An API key. I use Claude Sonnet for most agent work because the structured output behaves.
- A clear, single decision you want automated.
- Airtable or a database if your agent needs memory.

If n8n is new to you, run through the n8n tutorial for beginners first.

Use a manual trigger while you're building. You'll run the thing 30+ times tweaking prompts, and you don't want an RSS feed or webhook firing each time.

Step 1: Pick One Decision

Every agent needs one job. Not three. One.

Bad: "Read my inbox, write replies, schedule meetings, and update the CRM."

Good: "For each new RSS post, decide if it's worth sharing with my list. Output SHARE or SKIP and a one-line reason."

The narrower the scope, the easier it is to prompt, test, and trust. If you can't describe the agent's job in one sentence, the agent isn't ready to be built.

Step 2: Trigger and Input

For the example, we'll keep using the content filter: an RSS feed pulls new posts, and each post becomes input. The trigger's job is to give the agent enough context to make the call: title, link, full text, source. If your input is thin, the agent's decisions will be thin too.

Step 3: Add the AI Agent Node

Drop in the AI Agent node. Connect the trigger. Configure:

- Provider/model: Claude Sonnet is my default for judgment work
- System prompt: define the job, the criteria, and the output format

Example system prompt:

You are a content relevance filter for a newsletter aimed at solo AI builders who use Claude Code, n8n, and ship products on the side.

For each post, decide:
- Relevance: High / Medium / Low (does it help this audience build or ship?)
- Quality: High / Medium / Low (is it specific and actionable, or generic?)
- Decision: SHARE or SKIP
- Reason: one line, plain language

Default to SKIP when uncertain.
We'd rather miss a marginal post than share a weak one.

This alone is not an agent yet. It's an LLM with a prompt. It reads, it answers, that's it. The next step is what changes that.

Step 4: Attach Tools (This Is the Agentic Part)

Tools are how the agent does things instead of just saying things. In n8n, common tool options:

- HTTP Request: call any API
- Database / Airtable / Postgres: look up or write history
- Code: custom logic when needed
- Other n8n nodes: wrapped as tools

For the content filter, attach an Airtable tool pointing at a "Shared Posts" table. Update the prompt:

Before deciding, use the Airtable tool to check the "Shared Posts" table for posts shared in the last 30 days. If a similar topic was already covered, lean toward SKIP unless this post is meaningfully better or newer.

Now the agent isn't analyzing a post in a vacuum. It's checking history, comparing, and using that to decide. That's the loop.

You don't need n8n's sub-agent feature for this. I almost never reach for it. One agent plus a few tools handles most things I've thrown at it.

Step 5: Wire the Decision to Action

The agent returns something like:

Decision: SHARE
Reason: Concrete walkthrough of building a Claude Code subagent. Fits the audience.

Downstream, you don't need a 12-branch if-then. You need one router checking Decision === "SHARE". The complexity lives in the agent's reasoning, not in the canvas.

For anything that goes out the door, like a tweet, an email, or a published post, route it to a human approval step. A Slack message with Approve/Reject buttons works fine. The agent drafts. You ship.

Step 6: Test on Real Data, Not Your Imagination

Your first version will be wrong. That's fine. Plan for it.
What I run into most:

- Vague prompts: the agent makes inconsistent calls because the criteria are fuzzy
- Tool not actually wired: the agent "tries" the tool but the connection is broken
- Output drifts: sometimes structured, sometimes prose
- Real inputs are messier than your test inputs

The fix loop is always: tighten the prompt, add an example or two of correct output, narrow the tool's scope.

What I Got Wrong Early

My first n8n agent system was a faceless YouTube pipeline: Reddit scrape to script to 11Labs voiceover to Creatomate render. Took me a couple weeks. Had four agents where one would've done. It worked. The output wasn't great, but it ran.

The lesson wasn't "agents are powerful." It was: I built before I validated, and I overcomplicated every step. The rewrite was always the same: collapse to one agent, scope its tools, put a human at the publish step. That's the version I'd build today, and it's the version above.

Common Mistakes That Keep Your Agent Dumb

1. Using the AI Agent node with no tools. You built a chatbot. Tools = autonomy. No tools = no decisions worth calling agentic.

2. Multi-agent setups before you need them. Sub-agents and agent loops exist. Skip them until a single agent has clearly hit its ceiling. It usually hasn't.

3. Vague system prompts. "Make good decisions" isn't a prompt. Spell out criteria, output format, and what to do when uncertain.

4. No human approval on outbound actions. The first time an agent emails a customer something weird, you'll wish you had this. Add it before you need it.

5. Testing only on data you wrote. Real inputs break things synthetic ones don't. Test on actual feeds, actual emails, actual rows.

Where to Go From Here

Pick one decision you make repeatedly that's annoying because it requires reading something: inbox triage, lead scoring, content filtering, support routing. Build that. One agent, one tool, one decision.

Run it manually for a week. Watch where it gets confused. Tighten the prompt.
Once that's working, the second one takes half the time. The third feels normal.

For more patterns, see 7 n8n workflow examples and n8n AI agent vs workflow automation if you're still deciding which side of the line your use case sits on.

The AI Agent node is a building block, not the whole building. Tools are what turn it into something that decides. Keep the rest of the stack boring: n8n for plumbing, Claude Code for judgment, GitHub and Vercel for everything that ships. Then you can spend your time on the decisions, not the wiring.
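Steps 3-5 above hinge on the agent returning a structured decision, and the prompt's "default to SKIP when uncertain" rule belongs in the router too. A defensive sketch, assuming the Decision/Reason output format from the example prompt (the parsing rules are assumptions about how the model's output might drift, not n8n internals):

```javascript
// Parse the agent's "Decision: ... / Reason: ..." output into an object,
// defaulting to SKIP whenever the output drifts from the expected format.
function parseDecision(text) {
  const decision = /^\s*Decision:\s*(SHARE|SKIP)\s*$/im.exec(text);
  const reason = /^\s*Reason:\s*(.+)$/im.exec(text);
  return {
    decision: decision ? decision[1].toUpperCase() : "SKIP", // SKIP when uncertain
    reason: reason ? reason[1].trim() : "unparseable output",
  };
}

// The one-line router from Step 5: no 12-branch if-then on the canvas.
function shouldShare(agentOutput) {
  return parseDecision(agentOutput).decision === "SHARE";
}

const output = "Decision: SHARE\nReason: Concrete walkthrough. Fits the audience.";
console.log(shouldShare(output));             // true
console.log(shouldShare("some prose drift")); // false: drifted output is skipped
```

This is the "output drifts" fix from Step 6 made concrete: when the model returns prose instead of the format, the workflow fails closed (SKIP) rather than routing garbage downstream.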