LLMs talk. This one works.
Any prompt.
A production API.
Write a system prompt. Connect your Aerostack workspace. Get a REST endpoint that reasons, calls tools, and returns structured JSON — not just text.
curl -X POST https://YOUR_ENDPOINT_URL \
  -H "Authorization: Bearer aek_7f3a..." \
  -d '{"input": "Invoice #4821 from Acme Corp..."}'
You don't need another chat completion.
LLM APIs return text — your product needs structured data
You parse JSON yourself, handle failures, retry on malformed output. Every time.
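This is the glue code the page says you end up writing yourself. A minimal sketch, with a hypothetical `parse_llm_output` helper (the fence-stripping heuristic is illustrative, not from the page):

```python
import json

def parse_llm_output(raw: str, retries_left: int = 2):
    """Try to parse an LLM reply as JSON, stripping common
    markdown wrappers before giving up."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models often wrap JSON in ```json fences; strip and retry once.
        cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
        if cleaned != raw and retries_left > 0:
            return parse_llm_output(cleaned, retries_left - 1)
        raise  # caller must re-prompt the model and try again

print(parse_llm_output('```json\n{"vendor": "Acme Corp"}\n```'))
```

And that still only covers parsing; retry-on-failure and re-prompting are separate work on top.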
No tool access — the AI can answer but can't act
It can tell you what to do but can't query your database, update your CRM, or create a ticket.
Building the agentic loop is a project in itself
Tool selection, result parsing, conversation management, timeout handling, retry logic — weeks of work.
Locked to one LLM provider
Switch from OpenAI to Anthropic? Rewrite your tool calling format, response parsing, and error handling.
Three steps. One endpoint.
POST /api/agent-endpoints
{
"name": "Invoice Parser",
"slug": "invoice-parser",
  "system_prompt": "Extract vendor, amount, due date, and line items from invoice text. Save to database using the db__insert tool.",
"output_format": "json",
"output_schema": {
"vendor": "string",
"amount": "number",
"due_date": "string",
"line_items": [{ "desc": "string", "price": "number" }]
}
}

The system prompt defines behavior. The output schema enforces structure. The AI follows both — and calls MCP tools when the prompt says to.
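The create request above can be sketched in any HTTP client. The `/api/agent-endpoints` path and field names come from the page; the base URL and API key in the comment are placeholders, not real values:

```python
import json

# Request body for POST /api/agent-endpoints, as shown above.
payload = {
    "name": "Invoice Parser",
    "slug": "invoice-parser",
    "system_prompt": (
        "Extract vendor, amount, due date, and line items from invoice "
        "text. Save to database using the db__insert tool."
    ),
    "output_format": "json",
    "output_schema": {
        "vendor": "string",
        "amount": "number",
        "due_date": "string",
        "line_items": [{"desc": "string", "price": "number"}],
    },
}

# Send it with any HTTP client, e.g.:
#   requests.post(f"{BASE_URL}/api/agent-endpoints",
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 json=payload)
print(json.dumps(payload, indent=2))
```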
Not another wrapper API.
Side-by-side with the alternatives you've probably considered.
| | Raw LLM API | OpenAI Assistants | Aerostack |
|---|---|---|---|
| Capabilities | | | |
| Multi-step tool orchestration | | | |
| Open MCP tool ecosystem | | | |
| Structured JSON output (schema) | | | |
| Multi-provider (Claude, GPT, Gemini...) | | | |
| Developer Experience | | | |
| Stateless — one request, one response | | | |
| Streaming SSE | | | |
| Your own URL / slug | | | |
| No SDK required (plain HTTP) | | | |
| Business | | | |
| Per-run cost visibility (cents) | | | |
| Charge consumers per call | | | |
| Run history & audit trail | | | |
| BYOK — zero platform markup | | | |
Stateless — no threads to manage
OpenAI Assistants need Threads, Runs, and polling. Here: one POST, one response. The agent loop runs inside that single request.
MCP tools — not proprietary functions
Your tools use the open MCP standard. Same tools work in your bot, your webhook processor, and your gateway — not locked to one vendor.
Monetize each call
Set price_per_run_cents on your endpoint. Consumers pay per call, deducted from their wallet. Built-in billing — no Stripe integration needed.
Pre-built templates. Ready to deploy.
Start from a proven pattern or build from scratch.
Invoice Parser
Extract vendor, amount, due date, and line items from invoice text. Save to database.
Input: raw invoice text or PDF content
Output: { vendor, amount, due_date, line_items[] }
Lead Enricher
Enrich a name + company with contact info, company size, and intent signals.
Input: { name, company }
Output: { email, role, company_size, intent_score }
PR Code Reviewer
Review a pull request diff for bugs, security issues, and style violations.
Input: PR diff text
Output: { bugs[], security[], style[], verdict }
Resume Screener
Score a resume against job requirements. Identify skills gaps.
Input: resume text + job description
Output: { match_score, skills_gap[], recommendation }
SQL Query Generator
Convert natural language to SQL. Validate against schema. Run and return results.
Input: "Show me all users who signed up last week"
Output: { sql, explanation, results[] }
Email Drafter
Draft professional emails from context, intent, and tone preferences.
Input: { context, intent, tone }
Output: { subject, body }
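Because each template declares an output schema, a response can be checked mechanically. A minimal sketch for the Invoice Parser shape; the sample values are illustrative, not real agent output:

```python
# A response matching the Invoice Parser template's advertised shape.
sample = {
    "vendor": "Acme Corp",
    "amount": 1249.50,
    "due_date": "2025-07-01",
    "line_items": [{"desc": "Consulting", "price": 1249.50}],
}

# Minimal local check that the structured output has every schema key.
expected_keys = {"vendor", "amount", "due_date", "line_items"}
assert expected_keys <= sample.keys()
assert all({"desc", "price"} <= item.keys() for item in sample["line_items"])
print("response matches the invoice-parser schema")
```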
Watch the agent think.
Streaming SSE shows every step — tool calls, results, and the final output — as they happen.
Status updates — loading tools, calling LLM, iteration count. Shows the agent is working.
The agent decided to use a tool. Shows the tool name and arguments before execution.
Tool execution completed. Shows success/failure and latency — your audit trail.
Agent loop complete. Contains the final output, parsed JSON, tool call summary, and cost breakdown.
Something went wrong. Contains the error message. Stream ends.
Build real-time UIs
Show users a live progress feed while the agent works. No polling. Standard EventSource API.
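Outside the browser, the same `text/event-stream` body is easy to parse by hand. A minimal sketch; the event names used in the sample (`status`, `complete`) are inferred from the descriptions above, not a documented wire format:

```python
import json

def parse_sse(raw: str):
    """Yield (event, data) pairs from a raw text/event-stream body."""
    event, data_lines = "message", []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:  # blank line terminates one event
            yield event, json.loads("\n".join(data_lines))
            event, data_lines = "message", []

stream = (
    'event: status\ndata: {"step": "loading tools"}\n\n'
    'event: complete\ndata: {"output": {"vendor": "Acme Corp"}}\n\n'
)
for name, payload in parse_sse(stream):
    print(name, payload)
```

In a browser, `new EventSource(url)` with `addEventListener` per event name does the same job with no parsing code.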
Your first agent endpoint.
60 seconds.
System prompt. Workspace. Done. You have a production API.