Concrete Example · Planning → Parallel Work → QA → Fix → Ship

This page walks through a concrete CrewSwarm PM-loop run. One product request becomes planning artifacts, parallel specialist work, validation, fixes, and final shipped files, with different engines used where each fits best.

What a PM-loop run actually looks like

This is the concrete version of the thesis. One request comes in, the PM layer plans it, specialists move in parallel, QA and fixer close the loop, and the result lands as real files instead of one endless chat transcript.

Example input

The request

User asks: "Add JWT auth to the app. I need login, signup, protected routes, tests, and basic docs."

That is not one coding step. It is product framing, API design, implementation, tests, validation, and cleanup. A PM loop treats it that way from the start.

Step 1

Plan before typing

PM layer frames the work

The PM lane turns the request into a clearer outcome: auth endpoints, middleware, route protection, tests, docs, and acceptance criteria.

Artifacts get generated

Depending on the flow, CrewSwarm can write planning artifacts like a roadmap, technical spec, and validation criteria before workers touch the code.
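As a minimal sketch, a planning artifact for this request might look like the structure below. The field names and lane labels are illustrative assumptions, not CrewSwarm's actual artifact format:

```python
from dataclasses import dataclass

@dataclass
class PlanItem:
    lane: str        # which specialist lane owns this work
    task: str        # what the lane should produce
    acceptance: str  # how QA will judge it done

# The JWT-auth request, broken into lanes before any code is written.
plan = [
    PlanItem("backend", "login/signup endpoints + JWT helpers", "tokens issued and verified"),
    PlanItem("backend", "auth middleware for protected routes", "401 on missing/invalid token"),
    PlanItem("qa", "tests for the auth flows", "every acceptance check covered"),
    PlanItem("docs", "README auth section + route docs", "setup steps reproducible"),
]

for item in plan:
    print(f"[{item.lane}] {item.task} -> done when: {item.acceptance}")
```

The point of the artifact is that acceptance criteria exist before workers start, so QA has something concrete to validate against later.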

Step 2

Dispatch parallel lanes

Backend lane

crew-coder-back handles the auth routes, JWT helpers, middleware, and data flow.

QA lane

crew-qa prepares test expectations and reviews whether the implementation meets the acceptance criteria.

Docs lane

crew-copywriter or a docs lane updates the README, route docs, or setup notes in parallel with coding.

This is the key difference: the system keeps multiple useful lanes moving instead of forcing one assistant to do everything in sequence.
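The fan-out can be sketched with plain Python concurrency. The lane names come from this page; the dispatch function is a stand-in, since in a real run each lane would drive an actual engine:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-in for a specialist lane; a real lane would invoke an
# engine and write files, here it just reports what it would produce.
def run_lane(name: str, task: str) -> str:
    return f"{name}: {task} -> done"

lanes = {
    "crew-coder-back": "auth routes, JWT helpers, middleware",
    "crew-qa": "test expectations against acceptance criteria",
    "crew-copywriter": "README and route docs",
}

# All three lanes start at once instead of waiting on each other.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(run_lane, name, task) for name, task in lanes.items()]
    results = [f.result() for f in futures]

for line in sorted(results):
    print(line)
```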

Step 3

Choose the right engine per lane

Planning lane

Use a premium model for decomposition, validation, or architecture judgment when the task warrants it.

Execution lane

Use Claude Code, Codex, Cursor, Gemini, OpenCode, or crew-cli depending on codebase fit, availability, and cost.

Cheap/local lanes

Use local or cheaper hosted models for routing, summaries, glue work, and worker churn instead of burning premium tokens everywhere.
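Per-lane engine choice can be thought of as a routing table. The engine categories match the lanes above; the routing logic itself is a hypothetical sketch, not how CrewSwarm actually selects engines:

```python
# Hypothetical routing table: which tier of engine each lane type gets.
ROUTES = {
    "planning": "premium",   # decomposition, validation, architecture judgment
    "execution": "coding",   # Claude Code, Codex, Cursor, Gemini, OpenCode, crew-cli
    "glue": "cheap",         # routing, summaries, worker churn on local/cheap models
}

def pick_engine(lane_type: str, budget_left: bool = True) -> str:
    tier = ROUTES.get(lane_type, "cheap")
    # Degrade gracefully: if the premium budget is spent, fall back a tier
    # instead of burning premium tokens everywhere.
    if tier == "premium" and not budget_left:
        return "coding"
    return tier
```

The useful property is that the routing decision lives in one place, so cost and availability tradeoffs change without touching any lane's work.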

Step 4

Validate, fix, and re-route

Run the checks

Tests, lint, typecheck, or route-level validation catch what the first pass missed. The point is not to trust the first output. The point is to close the loop.

Use a fixer lane if needed

If tests fail or output is incomplete, the PM loop can assign a fixer lane, patch the result, and re-run validation instead of making the human manually restart the whole process.
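The validate-fix-revalidate cycle is a small loop at heart. In this sketch the check and fixer functions are stand-ins (a real fixer lane would patch code, not flip flags), but the control flow is the point:

```python
def run_checks(state: dict) -> list[str]:
    # Return the names of checks that are still failing.
    return [name for name, ok in state["checks"].items() if not ok]

def fixer_lane(state: dict, failing: list[str]) -> None:
    # Stand-in fixer: a real lane would patch files, then checks re-run.
    for name in failing:
        state["checks"][name] = True

def pm_loop(state: dict, max_attempts: int = 3) -> bool:
    for _ in range(max_attempts):
        failing = run_checks(state)
        if not failing:
            return True   # loop closed: every check is green
        fixer_lane(state, failing)
    return False          # escalate to the human instead of retrying forever

state = {"checks": {"tests": False, "lint": True, "typecheck": False}}
print(pm_loop(state))  # True once the fixer lane has cleared both failures
```

The bounded retry count matters: the loop re-routes work automatically, but it hands control back to the human rather than spinning indefinitely.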

Step 5

What lands at the end

Real outputs

By the end of the run, you have actual files on disk: auth routes, middleware, tests, docs, and whatever fixes were needed to make them hold together.

Human role stays high-leverage

The human reviews the result, adjusts acceptance criteria, and decides whether to ship. That is much closer to PM plus tech lead work than line-by-line typing.

Takeaway

This is why the PM loop matters

Single-agent chat tools are still useful. But once the work has planning, validation, retries, docs, and multiple implementation lanes, the bottleneck stops being typing and starts being coordination.

CrewSwarm is built around that reality. The PM loop keeps the queue full, the specialist lanes moving, and the human operating at the level of goals, tradeoffs, and judgment.

Compare the engine lanes · Read crew-cli · See Vibe