Chatbots reply. This one orchestrates.
Orchestration,
not just chat.
Visual workflows, identity verification, human approval handoffs, and bot-to-bot delegation — live on Discord, Telegram, WhatsApp, Slack, and the web.
Building AI bots today is painful.
Write separate integrations for every chat platform
Discord's slash commands ≠ Telegram's bot API ≠ WhatsApp's webhooks
Hard-code every tool call in handler functions
if user says "create ticket" → call Jira API. Repeat × 100.
No conversation memory across sessions
User comes back tomorrow. Bot has amnesia.
Pay for infrastructure you barely use
Idle servers waiting for messages that come once per hour.
Three generations of bots.
Only one can think.
MCP-Orchestrated Agents
Understands. Decides. Acts. Delegates.
The LLM sees every tool in your Aerostack workspace — databases, APIs, SaaS products, internal services — and autonomously decides what to call, in what order. Add identity verification (auth_gate), human approval handoffs, and bot-to-bot delegation. Edge-hosted, not a flow. A real agent.
What changes
What makes Aerostack bots different from Botpress or Voiceflow.
Multi-MCP tool orchestration across your entire stack. Edge-hosted on Cloudflare with sub-50ms cold starts. Human approval handoffs that pause workflows and notify reviewers via web, email, Telegram, or Discord. Bot-to-bot delegation for specialized agent teams. Identity verification via auth_gate before sensitive actions. A growing marketplace of MCP servers you can plug in instantly.
Three steps. Zero boilerplate.
POST /api/bots
{
"name": "DevOps Assistant",
"workspace_id": "ws_acme_devops",
"platform": "discord",
"status": "draft"
}
Already have an Aerostack workspace? Just paste the ID. Don't have one? Create it in 30 seconds from the dashboard.
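The `POST /api/bots` call above can be assembled like this. This is a minimal sketch: the base URL and the Bearer-token auth header are assumptions for illustration, not Aerostack's documented values — check your dashboard for the real ones.

```python
import json

API_BASE = "https://api.example.com"  # hypothetical base URL

def build_create_bot_request(name: str, workspace_id: str,
                             platform: str = "discord",
                             api_token: str = "YOUR_TOKEN"):
    """Assemble the URL, headers, and JSON body for bot creation."""
    url = f"{API_BASE}/api/bots"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_token}",  # assumed auth scheme
    }
    body = json.dumps({
        "name": name,
        "workspace_id": workspace_id,
        "platform": platform,
        "status": "draft",  # bots start as drafts, per the payload above
    })
    return url, headers, body

url, headers, body = build_create_bot_request("DevOps Assistant", "ws_acme_devops")
```

Send the result with any HTTP client; the bot stays in `draft` status until you publish it.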
Freestyle intelligence. Or structured control.
Let the LLM decide — or design every step. Switch anytime.
"What PRs need review in our repo?"
→ Calling github__list_pull_requests({ state: "open" })
→ Found 3 PRs. Calling github__get_reviews for #142...
→ Calling github__get_reviews for #138, #135...
You have 3 PRs awaiting review:
#142 Fix auth (ready) · #138 Dark mode (needs 1) · #135 Migration (conflicts)
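A trace like the one above comes from a loop of this shape: the LLM picks tools from the MCP registry until it can answer, bounded by an iteration cap and the 60-second deadline. This is a minimal sketch with a stubbed LLM; names like `run_freestyle` and the `llm` callable signature are illustrative, not Aerostack's actual API.

```python
import time

MAX_ITERATIONS = 10    # loop bound from the two-pass pipeline
DEADLINE_SECONDS = 60  # freestyle deadline

def run_freestyle(message, llm, tools):
    """llm(messages, tools) returns either a final answer string or a
    dict {"tool": name, "args": {...}} requesting a tool call."""
    deadline = time.monotonic() + DEADLINE_SECONDS
    messages = [{"role": "user", "content": message}]
    for _ in range(MAX_ITERATIONS):
        if time.monotonic() > deadline:
            return "Sorry, that took too long."
        step = llm(messages, tools)
        if isinstance(step, str):  # the LLM answered directly
            return step
        # Dispatch the requested tool via the MCP registry and feed the
        # result back so the next LLM turn can use it.
        result = tools[step["tool"]](**step["args"])
        messages.append({"role": "tool", "content": str(result)})
    return "Stopped after 10 tool calls."
```

The key property: the tool sequence is not hard-coded — the model chooses what to call next based on each result.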
How Freestyle Works
Dynamic tool selection · 60s deadline · Novel query handling
Native on every major platform.

| Platform | Verification | Char limit | Formatting | Webhook |
|---|---|---|---|---|
| Discord | Ed25519 public key | 2,000 | Plain text, auto-split | POST /api/bots/webhook/discord/{botId} |
| Telegram | Secret token header | 4,096 | Markdown (fallback: plain) | POST /api/bots/webhook/telegram/{botId} |
| WhatsApp | HMAC-SHA256 | 4,096 | Plain text | POST /api/bots/webhook/whatsapp/{botId} |
| Slack | HMAC-SHA256 + replay protection | 4,000 | mrkdwn (Slack markdown) | POST /api/bots/webhook/slack/{botId} |
| Custom | None (open) | Unlimited | Structured JSON | POST /api/bots/webhook/custom/{botId} |

Any model. Your key or ours.
Switch providers without redeploying. Bring your own API key for zero platform markup. Or use our pooled keys to get started instantly.
{
"llm_provider": "anthropic",
"llm_model": "claude-sonnet-4-20250514",
"llm_api_key": "sk-ant-..." // optional BYOK
}

| Provider | Input/1M | Output/1M | Fee |
|---|---|---|---|
| Anthropic | $3.00 | $15.00 | +20% pooled |
| OpenAI | $2.50 | $10.00 | +20% pooled |
| Gemini | $0.075 | $0.30 | +20% pooled |
| Groq | $0.59 | $0.79 | +20% pooled |
BYOK = Zero Fees
Bring your own API key and the 20% platform markup drops to zero. Your key is AES-256-GCM encrypted at rest — never stored in plain text.
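Using the rates in the table above, per-request cost works out like this — a small sketch showing how the 20% pooled markup compares to BYOK. The token counts are example inputs, not platform defaults.

```python
# (input $/1M tokens, output $/1M tokens), from the pricing table above
RATES = {
    "anthropic": (3.00, 15.00),
    "openai": (2.50, 10.00),
    "gemini": (0.075, 0.30),
    "groq": (0.59, 0.79),
}

def request_cost(provider, input_tokens, output_tokens, byok=False):
    """Dollar cost of one request; +20% fee on pooled keys, zero with BYOK."""
    inp, out = RATES[provider]
    base = (input_tokens * inp + output_tokens * out) / 1_000_000
    return base if byok else base * 1.20

# 1,000 input + 500 output tokens on Anthropic:
pooled = request_cost("anthropic", 1000, 500)            # 0.0126
byok = request_cost("anthropic", 1000, 500, byok=True)   # 0.0105
```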
Smart enough to save you money.
Most messages don't need tools. Our two-pass strategy skips tool schemas when the LLM can answer directly — saving 400+ tokens per message.
Pass 1: send the message to the LLM with:
- ✓ User message
- ✓ Tool names only (in system prompt)
- ✗ No tool schemas (saves ~400 tokens)
80% of messages are answered here, at ~200 tokens per message.
Pass 2: the LLM responded with [NEEDS_TOOLS]:
- → Resend with full tool schemas
- → LLM calls tools via MCP
- → Loop until done (max 10 iterations)
20% of messages need this pass, with a 60s max deadline.
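The two-pass dispatch can be sketched as follows. The `[NEEDS_TOOLS]` sentinel and the 400-token saving come from the description above; the `llm` callable and its keyword arguments are stand-ins, not Aerostack's real client.

```python
def answer(message, llm, tool_names, tool_schemas):
    """Two-pass strategy: skip tool schemas unless the model asks for them."""
    system = ("Available tools: " + ", ".join(tool_names) +
              ". Reply [NEEDS_TOOLS] if you need one.")
    # Pass 1: cheap -- tool names only, no schemas (~400 tokens saved).
    reply = llm(system=system, user=message, schemas=None)
    if reply != "[NEEDS_TOOLS]":
        return reply  # ~80% of messages end here
    # Pass 2: resend with full schemas so the model can call tools.
    return llm(system=system, user=message, schemas=tool_schemas)
```

Most chit-chat and knowledge questions never pay the schema cost; only tool-requiring messages trigger the second, heavier call.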
Token Usage Comparison
Memory that actually works.
Automatic Sessions
Conversations auto-expire after configurable TTL (default 24h). New session starts fresh. No stale context polluting responses.
Persistent Memory
Conversations survive crashes and server restarts. Your bot picks up exactly where it left off — even days later. Zero config.
Context Compression
When conversations exceed the token budget, older messages are summarized by the LLM automatically. Recent context stays intact.
Spending Caps
Set per-bot spending limits. Automatic pause when the cap is hit. Track costs per conversation. Never get a surprise bill.
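The context-compression behavior described above can be sketched like this. The budget, the number of recent messages kept, and the 4-characters-per-token estimate are illustrative assumptions; `summarize` is a stub standing in for the LLM summarization call.

```python
TOKEN_BUDGET = 4000  # assumed per-conversation budget
KEEP_RECENT = 6      # assumed number of recent messages kept verbatim

def estimate_tokens(messages):
    """Rough heuristic: ~4 characters per token."""
    return sum(len(m["content"]) // 4 for m in messages)

def compress(messages, summarize):
    """Replace older messages with an LLM-written summary when over budget."""
    if estimate_tokens(messages) <= TOKEN_BUDGET or len(messages) <= KEEP_RECENT:
        return messages  # under budget: nothing to do
    old, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    summary = summarize(old)  # one LLM call condenses the older turns
    return [{"role": "system", "content": f"Summary so far: {summary}"}] + recent
```

Recent context stays intact while the long tail collapses into one summary message, keeping every turn inside the budget.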
Start from a proven pattern.
Customer Support Bot
Answers questions from your knowledge base. Escalates to human when confidence is low. Creates tickets automatically.
Community Manager
Welcomes new members. Answers FAQs. Moderates content. Logs activity for your team.
Sales Assistant
Qualifies leads. Answers product questions. Books demos. Logs opportunities to your CRM.
Or start from scratch with a blank bot + your own system prompt.
The platform they can't match.
| | Botpress | Voiceflow | Obot | Aerostack |
|---|---|---|---|---|
| AI & Tools | ||||
| Multi-MCP tool orchestration | ||||
| Multi-LLM routing | ||||
| BYOK (your own API keys) | ||||
| Two-pass token optimization | ||||
| Orchestration & Workflows | ||||
| Visual workflow builder | ||||
| Freestyle agentic mode | ||||
| Identity verification (auth_gate) | ||||
| Human approval handoffs | ||||
| Bot-to-bot delegation | ||||
| Platform Support | ||||
| Discord + Telegram + WhatsApp + Slack + Web | ||||
| Persistent conversation memory | ||||
| Edge-hosted (sub-50ms cold starts) | ||||
| Business & Platform | ||||
| Per-conversation billing | ||||
| Marketplace monetization | ||||
| Full backend (DB + Auth + Storage) | ||||
Your tools are already connected.
If you have an Aerostack workspace, you're 60 seconds from a live bot.