Your AI's toolbox.
One URL. Every tool.
Create a workspace. Add MCP servers and skills. Get one authenticated gateway URL. Namespacing, secrets, team access — automatic.
← replaces 20 separate MCP configs
The AI thinks it's one server.
It's actually all of them.
One workspace aggregates MCP servers + Skills + Edge Functions. The AI model sees one server with all tools auto-namespaced.
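The `{server}__{tool}` naming the workspace applies can be sketched in a few lines. This is illustrative only, not Aerostack's actual code; the helper names are hypothetical:

```typescript
// Hypothetical helpers illustrating the `{server}__{tool}` naming scheme.
export function namespaceTool(server: string, tool: string): string {
  return `${server}__${tool}`;
}

// Route a namespaced call back to its source server and original tool name.
export function splitNamespacedTool(name: string): { server: string; tool: string } {
  const i = name.indexOf("__");
  if (i < 0) throw new Error(`not a namespaced tool name: ${name}`);
  return { server: name.slice(0, i), tool: name.slice(i + 2) };
}
```

So a `search_repos` tool on a `github` server surfaces to the model as `github__search_repos`, and the gateway can split that name to route the call back.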
MCP Servers
Community or self-hosted MCP servers. Auto-namespaced in your workspace with `{server}__{tool}` naming.
Skills
Public, private, or team-scoped cross-model skills. Works with Claude, OpenAI, and Gemini out of the box.
Edge Functions
Function-backed skills with dynamic TypeScript logic. Runs at 300+ edge locations, sub-50ms globally.
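A function-backed skill could look something like the handler below. The request/response shape and field names here are assumptions for illustration, not Aerostack's real API:

```typescript
// Hypothetical shape of a function-backed skill: a plain TypeScript handler
// an edge runtime could invoke per tool call. Types and field names are assumed.
type ToolCall = { tool: string; args: Record<string, unknown> };
type ToolResult = { content: { type: "text"; text: string }[] };

export function handleCall(call: ToolCall): ToolResult {
  // Dynamic logic runs on every request, so no static tool output is baked in.
  switch (call.tool) {
    case "echo":
      return { content: [{ type: "text", text: String(call.args.message ?? "") }] };
    default:
      return { content: [{ type: "text", text: `unknown tool: ${call.tool}` }] };
  }
}
```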
Native format for every AI.
One workspace URL. Three auto-generated formats. Zero adapter code.
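All three formats describe the same tools. As a sketch of the idea (not Aerostack's actual generator), one MCP-style tool definition can be mechanically projected into the OpenAI and Gemini shapes:

```typescript
// Illustrative only: project one MCP-style tool definition into the OpenAI
// and Gemini function formats. Field names follow the examples on this page;
// the conversion code itself is hypothetical.
type McpTool = { name: string; description: string; inputSchema: object };

function toOpenAI(tools: McpTool[]) {
  return tools.map((t) => ({
    type: "function",
    function: { name: t.name, description: t.description, parameters: t.inputSchema },
  }));
}

function toGemini(tools: McpTool[]) {
  return {
    functionDeclarations: tools.map((t) => ({
      name: t.name,
      description: t.description,
      parameters: t.inputSchema,
    })),
  };
}

const tools: McpTool[] = [
  { name: "github__search_repos", description: "Search GitHub repos", inputSchema: { type: "object" } },
];
```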
MCP Native
SSE · stdio

// mcp.json
{
  "mcpServers": {
    "workspace": {
      "url": "https://gateway.aerostack.dev/ws/acme",
      "headers": {
        "Authorization": "Bearer mwt_..."
      }
    }
  }
}

OpenAI Format
/openai-tools

// Auto-generated
[
  {
    "type": "function",
    "function": {
      "name": "github__search_repos",
      "description": "Search GitHub repos",
      "parameters": { ... }
    }
  }
]

Gemini Format
/gemini-tools

// Auto-generated
{
  "functionDeclarations": [
    {
      "name": "github__search_repos",
      "description": "Search GitHub repos",
      "parameters": { ... }
    }
  ]
}

Host it. Or bring your own.
Stop running MCP servers on your laptop. Deploy to the edge — your colleagues, your team, and any agent worldwide connect to the same hosted URL.
You build
- MCP server using `@modelcontextprotocol/sdk`
- Define your tools.
- Your logic. Your data.
Aerostack deploys
- Global edge network
- 300+ locations worldwide
- Zero cold start
- Auto-scaled globally
Anyone connects
- Cursor
- Claude Desktop
- VS Code Copilot
- Any custom agent
Your hosted MCP URL — paste into any AI client
Sub-50ms globally · Zero cold start · Auto-scaled
Then add it to your workspace
Your hosted server becomes part of your one-URL stack. Team members get access with one token refresh.
See every tool call.
Debug in real time.
Every tool invocation is logged — method, latency, status, model, workspace. Stream live or query historical data. Rate limiting and quota enforcement built in.
- Servers in workspace: 5
- Rate limit: 120 req/min
- Quota: enforced per plan
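The logged fields (method, latency, status, model, workspace) make ad-hoc debugging straightforward once queried. A sketch, assuming a log-entry shape based on those fields, not Aerostack's real export schema:

```typescript
// Assumed log-entry shape based on the fields listed above;
// the real export schema may differ.
type ToolCallLog = {
  method: string;
  latencyMs: number;
  status: "ok" | "error";
  model: string;
  workspace: string;
};

// Surface slow or failing calls, e.g. before tuning a rate limit.
export function problemCalls(logs: ToolCallLog[], slowMs = 1000): ToolCallLog[] {
  return logs.filter((l) => l.status === "error" || l.latencyMs > slowMs);
}
```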
Need private MCPs for your team?
Create a private workspace — share MCPs with your whole team via one URL. Full access control and observability included.