Aerostack
Now in Beta — 5 platforms supported

Chatbots reply. This one orchestrates.

Orchestration,
not just chat.

Visual workflows, identity verification, human approval handoffs, and bot-to-bot delegation — live on Discord, Telegram, WhatsApp, Slack, and the web.

Database · Support · CRM · Monitoring · Payments · GitHub · + any MCP tool
💬 5 Platforms · 🧠 Any LLM · 🔧 MCP-Powered · 🔐 Auth Gate · 🤝 Human Handoffs · 💰 Pay-per-use
//The problem

Building AI bots today is painful.

Write separate integrations for every chat platform

Discord's slash commands ≠ Telegram's bot API ≠ WhatsApp's webhooks

Hard-code every tool call in handler functions

if user says "create ticket" → call Jira API. Repeat × 100.

No conversation memory across sessions

User comes back tomorrow. Bot has amnesia.

Pay for infrastructure you barely use

Idle servers waiting for messages that come once per hour.

//Architecture evolution

Three generations of bots.
Only one can think.

2025+

MCP-Orchestrated Agents

Understands. Decides. Acts. Delegates.

The LLM sees every tool in your Aerostack workspace — databases, APIs, SaaS products, internal services — and autonomously decides what to call, in what order. Add identity verification (auth_gate), human approval handoffs, and bot-to-bot delegation. Edge-hosted, not a flow. A real agent.

What changes

Multi-MCP orchestration — chains actions across any tool stack
Human approval handoffs — pause, notify reviewer, resume on approval
Bot-to-bot delegation — specialized bots hand off to each other
Auth gate — identity verification (OTP / magic link) before sensitive actions
Edge-hosted on Cloudflare — sub-50ms cold starts, no origin server
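
A human approval handoff can be pictured as a tiny state machine: the workflow pauses at a gate, a reviewer is notified, and the held action runs only on approval. The types and function names below are hypothetical stand-ins, not Aerostack's real workflow API — a minimal sketch of the pattern:

```typescript
// Minimal sketch of a human-approval handoff. All names here are
// hypothetical -- Aerostack's actual workflow engine API may differ.
type ApprovalState = "pending" | "approved" | "rejected";

interface ApprovalGate {
  id: string;
  state: ApprovalState;
  action: () => string; // the sensitive action held until approval
}

// Pause: record the gate. A real system would notify a reviewer here
// via web, email, Telegram, or Discord.
function requestApproval(id: string, action: () => string): ApprovalGate {
  return { id, state: "pending", action };
}

// Resume: run the held action only if the reviewer approved.
function resolve(gate: ApprovalGate, approved: boolean): string {
  gate.state = approved ? "approved" : "rejected";
  return gate.state === "approved" ? gate.action() : `gate ${gate.id} rejected`;
}

const refundGate = requestApproval("refund-42", () => "refund issued");
console.log(resolve(refundGate, true)); // refund issued
```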

What makes Aerostack bots different from Botpress or Voiceflow.

Multi-MCP tool orchestration across your entire stack. Edge-hosted on Cloudflare with sub-50ms cold starts. Human approval handoffs that pause workflows and notify reviewers via web, email, Telegram, or Discord. Bot-to-bot delegation for specialized agent teams. Identity verification via auth_gate before sensitive actions. A growing marketplace of MCP servers you can plug in instantly.

//How it works

Three steps. Zero boilerplate.

step-1.sh
POST /api/bots
{
  "name": "DevOps Assistant",
  "workspace_id": "ws_acme_devops",
  "platform": "discord",
  "status": "draft"
}

Already have an Aerostack workspace? Just paste the ID. Don't have one? Create it in 30 seconds from the dashboard.

//Two modes

Freestyle intelligence. Or structured control.

Let the LLM decide — or design every step. Switch anytime.

Live conversation
👤

"What PRs need review in our repo?"

→ Calling github__list_pull_requests({ state: "open" })

→ Found 3 PRs. Calling github__get_reviews for #142...

→ Calling github__get_reviews for #138, #135...

🤖

You have 3 PRs awaiting review:

#142 Fix auth (ready) · #138 Dark mode (needs 1) · #135 Migration (conflicts)

How Freestyle Works

1. LLM receives the message + all MCP tool schemas
2. LLM autonomously decides which tools to call
3. Calls tools, observes results, decides next action
4. Continues for up to 10 iterations or 60-second deadline
5. Returns final response when no more tools needed
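
The loop above can be sketched in a few lines. The `decide` callback stands in for the LLM and the tool map for MCP servers; both are hypothetical simplifications of the real system, but the control flow (observe, decide, act, with an iteration cap and a deadline) matches the five steps:

```typescript
// Sketch of the freestyle agent loop: iterate until the LLM produces a
// final reply, the iteration cap is hit, or the deadline passes.
type ToolCall = { name: string; args: unknown };
type Decision = { toolCall?: ToolCall; finalReply?: string };

function runFreestyle(
  decide: (observations: string[]) => Decision, // stand-in for the LLM
  tools: Record<string, (args: unknown) => string>, // stand-in for MCP tools
  maxIters = 10,
  deadlineMs = 60_000,
): string {
  const start = Date.now();
  const observations: string[] = [];
  for (let i = 0; i < maxIters; i++) {
    if (Date.now() - start > deadlineMs) return "Sorry, I ran out of time.";
    const d = decide(observations); // LLM picks a tool call or answers
    if (d.finalReply !== undefined) return d.finalReply; // no more tools needed
    if (d.toolCall) {
      observations.push(tools[d.toolCall.name](d.toolCall.args)); // observe result
    }
  }
  return "Sorry, I hit the iteration limit.";
}
```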

Dynamic tool selection · 60s deadline · Novel query handling

//Platform support

Native on every major platform.

Discord
Signature: Ed25519 public key
Max chars: 2,000
Format: Plain text, auto-split
Slash commands (/ask, /reset)
Deferred responses (3s timeout)
Button interactions
POST /api/bots/webhook/discord/{botId}

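
Auto-splitting a long reply to fit a platform limit (2,000 characters on Discord) is straightforward; preferring to break at a newline inside the window is an assumption about how a real implementation would keep chunks readable:

```typescript
// Sketch: split a reply into chunks no longer than maxChars,
// breaking at the last newline inside the window when possible.
function splitMessage(text: string, maxChars = 2000): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > maxChars) {
    let cut = rest.lastIndexOf("\n", maxChars); // prefer a newline break
    if (cut <= 0) cut = maxChars; // otherwise hard-cut at the limit
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, ""); // drop the consumed newline
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```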
Telegram
Signature: Secret token header
Max chars: 4,096
Format: Markdown (fallback: plain)
Programmatic webhook setup
Callback queries (buttons)
Inline keyboard support
POST /api/bots/webhook/telegram/{botId}

WhatsApp
Signature: HMAC-SHA256
Max chars: 4,096
Format: Plain text
Interactive buttons (3 max)
List menus (10 rows)
Read receipts
POST /api/bots/webhook/whatsapp/{botId}

Slack
Signature: HMAC-SHA256 + replay
Max chars: 4,000
Format: mrkdwn (Slack markdown)
Block Kit formatting
Thread replies
Mention/DM/Channel modes
POST /api/bots/webhook/slack/{botId}

Custom
Signature: None (open)
Max chars: Unlimited
Format: Structured JSON
Full response metadata
Tool call details
Token + cost breakdown
POST /api/bots/webhook/custom/{botId}

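
Slack's "HMAC-SHA256 + replay" signature scheme is a good example of what these webhook checks do. The sketch below follows Slack's documented `v0` format (HMAC over `v0:timestamp:body`, five-minute replay window) with Node's standard crypto module; it illustrates the check, not Aerostack's internal code:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of Slack-style webhook verification: HMAC-SHA256 over a
// versioned basestring, plus a timestamp check to reject replays.
function verifySlackSignature(
  signingSecret: string,
  timestamp: number, // X-Slack-Request-Timestamp header, in seconds
  body: string, // raw request body
  signature: string, // X-Slack-Signature header, "v0=<hex>"
  nowSeconds = Math.floor(Date.now() / 1000),
): boolean {
  if (Math.abs(nowSeconds - timestamp) > 60 * 5) return false; // replay guard
  const base = `v0:${timestamp}:${body}`;
  const expected = "v0=" + createHmac("sha256", signingSecret).update(base).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}
```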
//LLM routing

Any model. Your key or ours.

Switch providers without redeploying. Bring your own API key for zero platform markup. Or use our pooled keys to get started instantly.

bot-config.json
{
  "llm_provider": "anthropic",
  "llm_model": "claude-sonnet-4-20250514",
  "llm_api_key": "sk-ant-..."  // optional BYOK
}
Provider    Input/1M   Output/1M   Fee
Anthropic   $3.00      $15.00      +20% pooled
OpenAI      $2.50      $10.00      +20% pooled
Gemini      $0.075     $0.30       +20% pooled
Groq        $0.59      $0.79       +20% pooled

BYOK = Zero Fees

Bring your own API key and the 20% platform markup drops to zero. Your key is AES-256-GCM encrypted at rest — never stored in plain text.
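
To make the markup concrete, here is the arithmetic on a single message, using the Anthropic rates from the table above. The billing formula (per-token rates plus a flat 20% on pooled keys) is an assumption for illustration:

```typescript
// Sketch of per-message cost: token counts times per-million rates,
// with the 20% pooled-key markup that BYOK removes. The exact billing
// formula is an assumption, not Aerostack's published one.
function messageCost(
  inputTokens: number,
  outputTokens: number,
  inputPerM: number, // USD per 1M input tokens
  outputPerM: number, // USD per 1M output tokens
  byok: boolean,
): number {
  const base = (inputTokens / 1e6) * inputPerM + (outputTokens / 1e6) * outputPerM;
  return byok ? base : base * 1.2; // +20% only on pooled keys
}

// Anthropic rates from the table: $3.00 in / $15.00 out per 1M tokens.
const pooled = messageCost(800, 200, 3.0, 15.0, false); // ~$0.00648
const own = messageCost(800, 200, 3.0, 15.0, true); // ~$0.00540
```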

//Token optimization

Smart enough to save you money.

Most messages don't need tools. Our two-pass strategy skips tool schemas when the LLM can answer directly — saving 400+ tokens per message.

Pass 1: Quick Check

Send message to LLM with:

  • User message
  • Tool names only (in system prompt)
  • No tool schemas (saves ~400 tokens)

80% of messages answered here · ~200 tokens per message

Pass 2: Full Loop (only when needed)

LLM responded with [NEEDS_TOOLS]:

  • Resend with full tool schemas
  • LLM calls tools via MCP
  • Loop until done (max 10 iterations)

20% of messages need this · 60s max deadline

Token Usage Comparison

Naive (always send tool schemas): ~2,400 tokens/msg
Two-pass (Aerostack): ~800 tokens/msg
67% average token savings
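
The two-pass control flow is simple enough to sketch directly. The `Llm` callback and the `[NEEDS_TOOLS]` sentinel follow the description above; treating the sentinel as an exact-match reply is a simplifying assumption:

```typescript
// Sketch of the two-pass strategy: pass 1 sends only tool *names*;
// if the model answers "[NEEDS_TOOLS]", pass 2 resends with schemas.
type Llm = (prompt: string, includeSchemas: boolean) => string;

function twoPass(llm: Llm, message: string, toolNames: string[]): string {
  // Pass 1: cheap check -- schemas omitted, saving ~400 tokens.
  const quick = llm(`Tools available: ${toolNames.join(", ")}\n${message}`, false);
  if (quick !== "[NEEDS_TOOLS]") return quick; // 80% of messages end here
  // Pass 2: resend with full tool schemas and run the tool loop.
  return llm(message, true);
}
```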
//Conversations

Memory that actually works.

Automatic Sessions

Conversations auto-expire after configurable TTL (default 24h). New session starts fresh. No stale context polluting responses.

Persistent Memory

Conversations survive crashes and server restarts. Your bot picks up exactly where it left off — even days later. Zero config.

Context Compression

When conversations exceed the token budget, older messages are summarized by the LLM automatically. Recent context stays intact.

Spending Caps

Set per-bot spending limits. Automatic pause when the cap is hit. Track costs per conversation. Never get a surprise bill.
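
The TTL and spending-cap checks above reduce to two small predicates. Field names and the 24h default follow the text; everything else is a hypothetical sketch:

```typescript
// Sketch of session expiry and a per-bot spending cap. Field names
// are hypothetical; the 24h TTL is the default described above.
interface Session {
  lastActive: number; // ms since epoch
  spentUsd: number; // running cost for this bot
}

const TTL_MS = 24 * 60 * 60 * 1000; // default 24h

// Expired sessions start fresh -- no stale context carried over.
function isExpired(s: Session, nowMs: number): boolean {
  return nowMs - s.lastActive > TTL_MS;
}

// Pause the bot rather than exceed the cap.
function canSpend(s: Session, nextCostUsd: number, capUsd: number): boolean {
  return s.spentUsd + nextCostUsd <= capUsd;
}
```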

//Templates

Start from a proven pattern.

Customer Support Bot

Answers questions from your knowledge base. Escalates to human when confidence is low. Creates tickets automatically.

SupportRAGEscalation

Community Manager

Welcomes new members. Answers FAQs. Moderates content. Logs activity for your team.

CommunityModerationOnboarding

Sales Assistant

Qualifies leads. Answers product questions. Books demos. Logs opportunities to your CRM.

SalesLead GenCRM

Or start from scratch with a blank bot + your own system prompt.

//Why Aerostack

The platform they can't match.

Botpress · Voiceflow · Obot · Aerostack
AI & Tools
Multi-MCP tool orchestration
Multi-LLM routing
BYOK (your own API keys)
Two-pass token optimization
Orchestration & Workflows
Visual workflow builder
Freestyle agentic mode
Identity verification (auth_gate)
Human approval handoffs
Bot-to-bot delegation
Platform Support
Discord + Telegram + WhatsApp + Slack + Web
Persistent conversation memory
Edge-hosted (sub-50ms cold starts)
Business & Platform
Per-conversation billing
Marketplace monetization
Full backend (DB + Auth + Storage)

Your tools are already connected.

If you have an Aerostack workspace, you're 60 seconds from a live bot.