Aerostack
Agent Endpoints are coming soon — the features below show our vision for this product.

LLMs talk. This one works.

Any prompt.
A production API.

Write a system prompt. Connect your Aerostack workspace. Get a REST endpoint that reasons, calls tools, and returns structured JSON — not just text.

MCP Tools · JSON Schema · Streaming SSE · API Key Auth · Run History
//The problem

You don't need another chat completion.

LLM APIs return text — your product needs structured data

You parse JSON yourself, handle failures, retry on malformed output. Every time.

No tool access — the AI can answer but can't act

It can tell you what to do but can't query your database, update your CRM, or create a ticket.

Building the agentic loop is a project in itself

Tool selection, result parsing, conversation management, timeout handling, retry logic — weeks of work.

Locked to one LLM provider

Switch from OpenAI to Anthropic? Rewrite your tool calling format, response parsing, and error handling.

//How it works

Three steps. One endpoint.

Step 1
POST /api/agent-endpoints
{
  "name": "Invoice Parser",
  "slug": "invoice-parser",
  "system_prompt": "Extract vendor, amount, due date,
    and line items from invoice text.
    Save to database using the db__insert tool.",
  "output_format": "json",
  "output_schema": {
    "vendor": "string",
    "amount": "number",
    "due_date": "string",
    "line_items": [{ "desc": "string", "price": "number" }]
  }
}

The system prompt defines behavior. The output schema enforces structure. The AI follows both — and calls MCP tools when the prompt says to.
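Once the endpoint exists, invoking it is a single authenticated POST. A minimal sketch of assembling that request — the `/run` path, the `Bearer` header, and the `{"input": ...}` body shape are assumptions for illustration, not confirmed Aerostack API details:

```python
import json

# Hypothetical invocation of the endpoint created above. The run URL,
# auth header, and request body shape are assumptions, not the
# documented Aerostack API.
API_KEY = "ak_live_..."  # placeholder workspace API key

def build_run_request(slug: str, input_text: str) -> dict:
    """Assemble the HTTP request for one agent run (sketch)."""
    return {
        "method": "POST",
        "url": f"https://your-workspace.example.com/api/agent-endpoints/{slug}/run",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        # The agent receives this text plus the system prompt above.
        "body": json.dumps({"input": input_text}),
    }

req = build_run_request("invoice-parser", "ACME Corp invoice #1042 ...")
print(req["method"], req["url"])
```

Because the output schema is enforced server-side, the response body should already be the structured JSON your product consumes — no client-side parsing or retry logic.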

//Why Aerostack

Not another wrapper API.

Side-by-side with the alternatives you've probably considered.

Compared: Raw LLM API · OpenAI Assistants · Aerostack

Capabilities
- Multi-step tool orchestration
- Open MCP tool ecosystem
- Structured JSON output (schema)
- Multi-provider (Claude, GPT, Gemini...)

Developer Experience
- Stateless — one request, one response
- Streaming SSE
- Your own URL / slug
- No SDK required (plain HTTP)

Business
- Per-run cost visibility (cents)
- Charge consumers per call
- Run history & audit trail
- BYOK — zero platform markup

Stateless — no threads to manage

OpenAI Assistants need Threads, Runs, and polling. Here: one POST, one response. The agent loop runs inside that single request.

MCP tools — not proprietary functions

Your tools use the open MCP standard. Same tools work in your bot, your webhook processor, and your gateway — not locked to one vendor.

Monetize each call

Set price_per_run_cents on your endpoint. Consumers pay per call, deducted from their wallet. Built-in billing — no Stripe integration needed.
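The billing math is simple deduction. A sketch under the model described above — `price_per_run_cents` comes from the endpoint config; the wallet balance and the insufficient-funds behavior here are illustrative assumptions:

```python
# Sketch of the per-run billing described above. The wallet values and
# the insufficient-funds behavior are illustrative assumptions.

def charge_run(wallet_cents: int, price_per_run_cents: int) -> int:
    """Deduct one run's price from a consumer wallet; refuse if short."""
    if wallet_cents < price_per_run_cents:
        raise ValueError("insufficient wallet balance")
    return wallet_cents - price_per_run_cents

balance = 500                      # consumer wallet: $5.00
balance = charge_run(balance, 12)  # one run priced at 12 cents
print(balance)                     # 488
```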

//Use cases

Pre-built templates. Ready to deploy.

Start from a proven pattern or build from scratch.

Extractor

Invoice Parser

Extract vendor, amount, due date, and line items from invoice text. Save to database.

Input:

Raw invoice text or PDF content

Output:

{ vendor, amount, due_date, line_items[] }

db__insert_invoice
Actor

Lead Enricher

Enrich a name + company with contact info, company size, and intent signals.

Input:

{ name, company }

Output:

{ email, role, company_size, intent_score }

crm__search · web__lookup
Extractor

PR Code Reviewer

Review a pull request diff for bugs, security issues, and style violations.

Input:

PR diff text

Output:

{ bugs[], security[], style[], verdict }

github__get_file
Extractor

Resume Screener

Score a resume against job requirements. Identify skills gaps.

Input:

Resume text + job description

Output:

{ match_score, skills_gap[], recommendation }

Pipeline

SQL Query Generator

Convert natural language to SQL. Validate against schema. Run and return results.

Input:

"Show me all users who signed up last week"

Output:

{ sql, explanation, results[] }

db__query · db__schema
Actor

Email Drafter

Draft professional emails from context, intent, and tone preferences.

Input:

{ context, intent, tone }

Output:

{ subject, body }

crm__get_contact
//Real-time

Watch the agent think.

Streaming SSE shows every step — tool calls, results, and the final output — as they happen.

SSE stream
thinking

Status updates — loading tools, calling LLM, iteration count. Shows the agent is working.

tool_call

The agent decided to use a tool. Shows the tool name and arguments before execution.

tool_result

Tool execution completed. Shows success/failure and latency — your audit trail.

done

Agent loop complete. Contains the final output, parsed JSON, tool call summary, and cost breakdown.

error

Something went wrong. Contains the error message. Stream ends.

Build real-time UIs

Show users a live progress feed while the agent works. No polling. Standard EventSource API.
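The event types above are plain SSE, so any client can consume them. A minimal parser run against a canned stream — the `event:`/`data:` line layout follows the SSE spec, but the payload fields inside each event are illustrative, not the exact Aerostack format:

```python
import json

# Minimal parser for the SSE event types listed above, fed a canned
# stream. Payload shapes are illustrative assumptions.
SAMPLE = """\
event: thinking
data: {"status": "loading tools"}

event: tool_call
data: {"tool": "db__insert", "args": {"vendor": "ACME"}}

event: tool_result
data: {"ok": true, "latency_ms": 84}

event: done
data: {"output": {"vendor": "ACME", "amount": 120.5}}

"""

def parse_sse(stream: str):
    """Yield (event, data) pairs from a raw SSE stream."""
    for block in stream.strip().split("\n\n"):
        event, data = None, None
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = json.loads(line[len("data:"):].strip())
        yield event, data

events = list(parse_sse(SAMPLE))
print([name for name, _ in events])
# ['thinking', 'tool_call', 'tool_result', 'done']
```

In a browser the same stream arrives through the standard EventSource API; this sketch just shows how little structure a consumer needs to handle.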

Your first agent endpoint.
60 seconds.

System prompt. Workspace. Done. You have a production API.