About

Why crewswarm exists


crewswarm started from a simple observation: one person driving one coding agent is still too sequential. Real software work needs planning, implementation, testing, fixes, and review moving together.

So we built the orchestration layer around that workflow. The human acts more like the PM, the agents act more like the engineers, and the whole system runs locally with your models, your keys, and your files.

What we are actually building

crewswarm is not just another chat wrapper. It is a local-first operating layer for AI engineering:

  • Parallel specialist agents: planners, coders, QA, fixers, docs, and more.
  • Multiple engine lanes: Claude Code, Cursor, Codex, Gemini, OpenCode, and crew-cli.
  • Provider flexibility: use your own API keys, local models, or both.
  • Local control: your code and runtime stay on your machine unless you choose otherwise.
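To make the "parallel specialist agents" idea concrete, here is a minimal sketch of the pattern in plain Python. The role names and the `run_crew` helper are hypothetical illustrations, not the actual crewswarm API: each specialist is just a function, and a thread pool runs them concurrently and collects their results.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical specialist roles -- crewswarm's real agent interface may differ.
def plan(task):
    return f"plan for {task}"

def implement(task):
    return f"code for {task}"

def review(task):
    return f"review of {task}"

# Run all specialists on the same task in parallel and gather their outputs,
# mirroring planning, implementation, and review moving together.
def run_crew(task, specialists):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, task) for name, fn in specialists.items()}
        return {name: f.result() for name, f in futures.items()}

results = run_crew("add login page", {"planner": plan, "coder": implement, "qa": review})
```

In a real orchestration layer each specialist would wrap an engine lane (Claude Code, crew-cli, etc.) instead of a toy function, but the shape is the same: independent roles, fanned out concurrently, with results merged by the coordinator.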

Local-first on purpose

We think the default for serious engineering should be local control, not blind dependence on a single hosted platform. crewswarm is designed so your files, keys, and execution surfaces stay under your control.

Open source and inspectable

Everything important is in the open on GitHub. The goal is not to hide magic prompts behind a black box, but to build a system developers can inspect, run, adapt, and trust.