Is crewswarm free?
Yes. crewswarm is fully open source under the MIT license. You bring your own API keys and pay providers directly — no subscription fees, no middleman markup, no per-seat pricing.
OpenClaw is good at being the front door: desktop apps, messaging, and a broad assistant surface.
crewswarm handles the heavier engineering workflow behind it: planning, delegation, coding, QA, fixes, and shipping.
OpenClaw is a mature personal AI assistant platform. It has real strengths — and crewswarm is designed to complement them, not replace them.
Native macOS, Windows, iOS, and Android apps. A polished UI that millions of users already know.
Telegram, WhatsApp, Discord, iMessage, Slack, Signal, and more. Talk to your AI from anywhere.
Large community, active development, and an established plugin ecosystem. A proven platform.
Extensible via plugins published to ClawHub or npm. One command to install new capabilities.
OpenClaw is a surface. crewswarm is the system behind the surface: specialist agents, multiple engine lanes, shared context, and a PM loop that plans, delegates, evaluates, and iterates.
Dedicated agents for coding, QA, security, frontend, backend, docs, git, architecture, ML, SEO, and more. Each with their own model and tools.
Claude Code, Cursor, Codex, Gemini CLI, OpenCode, and crew-cli. Agents pick the right engine for the task.
Describe a feature once. The PM agent plans it, dispatches to specialists, evaluates results, and iterates until it ships.
All agents share project state, decisions, and handoff context. No agent works blind. No repeated explanations.
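Shared handoff context like this can be pictured as an append-only log that every agent reads before acting. The record fields below are illustrative, not crewswarm's actual schema:

```python
import json, io

# Sketch of shared handoff context as an append-only JSONL log.
# Field names ("agent", "decision") are illustrative, not crewswarm's schema.
log = io.StringIO()

def record_handoff(agent, decision):
    # Each agent appends its decision so later agents never work blind.
    log.write(json.dumps({"agent": agent, "decision": decision}) + "\n")

def shared_context():
    # Any agent can replay the full history before starting its task.
    return [json.loads(line) for line in log.getvalue().splitlines()]

record_handoff("crew-architect", "use JWT middleware for auth")
record_handoff("crew-coder", "implemented middleware per architect decision")
history = shared_context()
```

Because the log is append-only, no agent has to re-explain prior decisions; it just reads them.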
Use OpenClaw as the front door — chat from your phone, desktop, or any channel. crewswarm does the heavy lifting behind the scenes: planning, coding, testing, and shipping with a full specialist crew.
```shell
openclaw plugins install crewswarm-openclaw-plugin
```
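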
OpenClaw is one of many surfaces. crewswarm runs independently with its own dashboard, Vibe IDE, crew-cli, Telegram, and WhatsApp — no OpenClaw required.
Web UI at localhost:4319
Monaco editor + agent chat
Terminal-first workflow
The installer automatically detects ~/.openclaw/openclaw.json and migrates your API keys into crewswarm. No manual config copying. OpenClaw stays installed — use it as a surface or uninstall it later.
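The migration step might look roughly like the sketch below. The `apiKeys` field of `openclaw.json` and the crewswarm config shape are assumptions for illustration, not the actual file formats:

```python
import json, os, tempfile

def migrate_keys(openclaw_path, crewswarm_path):
    # Hypothetical layouts: the real openclaw.json and crewswarm
    # config may use different field names.
    if not os.path.exists(openclaw_path):
        return {}                       # nothing to migrate
    with open(openclaw_path) as f:
        openclaw = json.load(f)
    keys = openclaw.get("apiKeys", {})  # assumed field name
    with open(crewswarm_path, "w") as f:
        json.dump({"providers": keys}, f, indent=2)
    return keys

# Demo against a temporary fake openclaw.json
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "openclaw.json")
dst = os.path.join(tmp, "crewswarm.json")
with open(src, "w") as f:
    json.dump({"apiKeys": {"anthropic": "sk-ant-...", "openai": "sk-..."}}, f)
migrated = migrate_keys(src, dst)
```

The original file is read, never modified, which is why OpenClaw keeps working afterward.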
Cloud AI orchestration platforms route your code through third-party servers, add subscription costs, and lock you into a single provider. There is a better way.
Your source code never leaves your machine. API keys are stored locally, not on any third-party server. Audit everything yourself.
Swap models and providers freely. Use Anthropic today, switch to Groq tomorrow, run Ollama offline. One config change, zero migration.
Bring your own API keys and pay providers directly. No middleman markup, no per-seat pricing, no surprise invoices.
Run Claude Code, Cursor, crew-cli, Gemini CLI, and OpenCode simultaneously. A PM loop plans and delegates across all engines.
Works without internet when paired with local models via Ollama or LM Studio. Ideal for air-gapped or privacy-sensitive environments.
MIT licensed. Read every line, fork it, self-host it, extend it. No telemetry, no tracking, no proprietary backend.
A side-by-side look at the three approaches to AI-assisted development.
| Capability | crewswarm | Cloud Platforms | Single-Tool CLIs |
|---|---|---|---|
| Code privacy | Local-first — code never uploaded | Code sent to third-party servers | Varies by tool |
| API key storage | Local only — your machine, your keys | Stored on provider servers | Local |
| Subscription cost | Free — bring your own API keys | $20–$200+/mo per seat | Free or pay-per-use |
| Vendor lock-in | None — swap providers any time | Tied to one platform | Tied to one model vendor |
| Multi-engine support | Claude, Cursor, crew-cli, Codex, Gemini, OpenCode | Usually one engine | Single engine only |
| Autonomous PM loop | ROADMAP.md → plan → delegate → ship → repeat | Manual task management | No orchestration |
| Agent count | 20+ specialized agents | Varies | 1 agent |
| Persistent memory | brain.md + shared memory + JSONL history | Cloud-managed context | Session-only |
| Offline mode | Yes — local models via Ollama / LM Studio | No — requires internet | Some support local models |
| Docker isolation | Optional secure sandbox (AppArmor + firewall) | Server-side isolation | No sandboxing |
| Dashboard UI | Full web dashboard + Vibe browser IDE | Web UI included | Terminal only |
| LLM providers | 12+ (OpenAI, Anthropic, Groq, Fireworks, OpenRouter, Cerebras, DeepSeek, xAI, etc.) | 1–3 typically | 1 |
| MCP integration | Cursor, Claude Code, OpenCode | Varies | Some support |
| Messaging bridges | Telegram + WhatsApp | Slack / Teams common | None |
| License | MIT — fully open source | Proprietary | Varies |
crewswarm replaces the manual cycle of prompting, reviewing, and re-prompting across scattered tools.
Write a requirement in plain language. The PM agent turns it into a roadmap.
Domain PMs delegate tasks to the best engine for each job — Claude for architecture, Codex for implementation, Gemini for breadth.
The judge agent evaluates output. Good work ships. Bad work gets retried. You stay in control.
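The plan → delegate → judge cycle above can be sketched as a simple loop. The agent-to-engine mapping, the judge, and the retry policy here are illustrative stand-ins, not crewswarm's actual API:

```python
# Minimal sketch of a plan -> delegate -> judge loop.
# Engine routing, scoring, and retry policy are illustrative only.

def plan(requirement):
    # A real PM agent would call an LLM; here we split into naive phases.
    return [f"design: {requirement}", f"implement: {requirement}", f"test: {requirement}"]

def delegate(task):
    # Route each task to the engine suited to it (hypothetical mapping).
    engine = {"design": "claude", "implement": "codex", "test": "gemini"}[task.split(":")[0]]
    return {"task": task, "engine": engine, "output": f"{engine} handled {task}"}

def judge(result):
    # A judge agent would score real output; this stub accepts everything.
    return True

def ship(requirement, max_retries=2):
    shipped = []
    for task in plan(requirement):
        for _ in range(max_retries + 1):
            result = delegate(task)
            if judge(result):           # good work ships
                shipped.append(result)
                break                   # otherwise the task is retried
    return shipped

results = ship("add auth middleware")
```

The bounded retry loop is the key design point: rejected work is re-dispatched a limited number of times rather than looping forever.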
Yes. Pair crewswarm with local models through Ollama or LM Studio and run agents entirely offline. This is ideal for air-gapped environments, sensitive codebases, or travel.
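Ollama serves an OpenAI-compatible API on localhost, so going offline amounts to pointing a provider entry at that endpoint. The config shape below is illustrative; only the Ollama URL is a real default:

```python
# Any OpenAI-compatible client can target Ollama's local server.
# The provider-entry shape here is illustrative, not crewswarm's config schema.
offline_provider = {
    "name": "ollama",
    "base_url": "http://localhost:11434/v1",  # Ollama's default OpenAI-compatible endpoint
    "api_key": "ollama",                       # Ollama does not check the key
    "model": "llama3",
}
cloud_provider = {
    "name": "anthropic",
    "base_url": "https://api.anthropic.com",
    "api_key": "sk-ant-...",
    "model": "claude-sonnet",
}
# Going offline is a one-entry swap:
active = offline_provider
```

Nothing else changes: the agents keep issuing the same chat-completion calls, just against a local server.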
Claude Code is one of the engines crewswarm can orchestrate. The difference is scope: Claude Code is a single coding agent, while crewswarm coordinates 20+ specialists across multiple engines.
Think of it as the difference between a single developer and a full engineering team.
Yes. crewswarm is local-first — your source code never leaves your machine. API keys are stored locally. For additional hardening, enable Docker isolation which adds read-only filesystem, AppArmor profiles, network firewall rules, and non-root execution. Read the full security guide.
OpenAI, Anthropic, Groq, Fireworks, OpenRouter, Cerebras, DeepSeek, xAI (Grok), Mistral, Perplexity, Google (Gemini), and any OpenAI-compatible API. Switch providers with a single config change.
That is the default. crewswarm runs entirely on your machine or your own server. Deploy to any cloud (AWS, GCP, DigitalOcean), bare metal, or localhost. No external dependencies, no telemetry, no phone-home behavior.
Most tools give you one model behind a subscription. crewswarm gives you multiple engines, 20+ specialist agents, shared project memory, and an autonomous PM loop, all with your own API keys and no subscription.
The official crewswarm plugin gives every OpenClaw agent access to your local crew. Dispatch tasks, poll results, and list agents — all without leaving OpenClaw.
crewswarm_dispatch — send a task to any crew agent and wait for the result.
crewswarm_agents — list all 22 available agents.
crewswarm_status — poll a running task.
```
/crewswarm crew-coder "Add auth middleware"
```
Works from any OpenClaw channel — Telegram, WhatsApp, Discord, iMessage, or the desktop app.
Programmatic access via crewswarm.dispatch, crewswarm.status, and crewswarm.agents RPC methods.
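A call to one of these RPC methods might be shaped like a JSON-RPC request. The method name comes from the list above; the envelope and parameter names are assumptions for illustration:

```python
import json

# Hypothetical JSON-RPC envelope for the crewswarm.dispatch method.
# The method name is from the plugin docs; params are assumed.
def make_dispatch_request(agent, task, request_id=1):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "crewswarm.dispatch",
        "params": {"agent": agent, "task": task},
    }

req = make_dispatch_request("crew-coder", "Add auth middleware")
payload = json.dumps(req)
```

A `crewswarm.status` call would follow the same shape, polling with the task id returned by dispatch.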
```shell
openclaw plugins install crewswarm-openclaw-plugin
```
No LLM keys are shared — crewswarm uses its own providers. The only shared secret is the RT auth token.
One command. No sign-up. No subscription. Just clone, install, and start building.
```shell
git clone https://github.com/crewswarm/crewswarm && cd crewswarm && bash install.sh
```
Open source · MIT license · Multi-engine · Local-first · No subscriptions