Run chat completions, generate embeddings, create images, and moderate content via the OpenAI API.
Use with AI Assistants (MCP)
Connect Claude, Cursor, or any MCP-compatible client — then call tools directly
① Add This MCP Server
Paste into your AI client config — then all its tools are available instantly.
```json
{
  "mcpServers": {
    "openai": {
      "url": "https://mcp.aerostack.dev/s/aerostack/mcp-openai",
      "headers": {
        "Authorization": "Bearer YOUR_AEROSTACK_TOKEN"
      }
    }
  }
}
```

Replace YOUR_AEROSTACK_TOKEN with your API token from the dashboard.
② Call a Tool
Ask your AI assistant to call a specific tool, or send raw JSON-RPC:
Natural Language Prompt
“Use the _ping tool to verify the OpenAI credentials by calling a lightweight read endpoint.” (The _ping tool is used internally by Aerostack to validate credentials.)
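As raw JSON-RPC, the same check might look like the following. This is a sketch assuming the standard MCP tools/call envelope shown in the curl example below, and that _ping takes no arguments:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "_ping",
    "arguments": {}
  }
}
```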
Using a Workspace?
Add this MCP to your Workspace — your team shares one token, secrets are stored securely, and every AI agent in the workspace can call it without per-user setup.
Access GPT chat completions, DALL-E image generation, embeddings, and moderation from your AI agents.
OpenAI's API powers the most widely-used AI capabilities in production: chat completions with GPT-4, image generation with DALL-E, text embeddings for semantic search, and content moderation. This MCP server wraps all of them in a single endpoint — letting your Aerostack agents use OpenAI's models as tools within multi-agent workflows, or expose them directly to end users.
Live endpoint: https://mcp.aerostack.dev/s/aerostack/mcp-openai
| Tool | Description |
|---|---|
| `chat_completion` | Create a chat completion using OpenAI models (GPT-4o, GPT-4, GPT-3.5, etc.) |
| `list_models` | List all available OpenAI models |
| `create_embedding` | Create text embeddings for semantic search or similarity |
| `create_image` | Generate an image using DALL-E 3 |
| `create_moderation` | Check text for policy violations |
| `list_files` | List uploaded files associated with the API key |
| `list_fine_tuning_jobs` | List fine-tuning jobs and their status |
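As an illustration, a tools/call request for create_embedding could look like this. The argument names `model` and `input` mirror the OpenAI embeddings API and are assumptions here; confirm the exact schema via a tools/list request:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "create_embedding",
    "arguments": {
      "model": "text-embedding-3-small",
      "input": "customer churn prediction"
    }
  }
}
```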
| Variable | Required | Description | How to Get |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes | OpenAI API key for all API calls | platform.openai.com/api-keys → Create new secret key → copy the key (shown once) |
Add OPENAI_API_KEY under Project → Secrets. Once added, every AI agent in your workspace can call OpenAI tools automatically, with no per-user setup needed.
"Use GPT-4o to write a product description for the following feature list..."
"Generate a DALL-E image of a futuristic dashboard UI with dark theme and neon accents"
"Create an embedding for this text and return the vector: 'customer churn prediction'"
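The image prompt above would translate into a tools/call payload along these lines. The `prompt` and `response_format` argument names follow the OpenAI Images API and are assumptions; check the tool's schema before relying on them:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "create_image",
    "arguments": {
      "prompt": "A futuristic dashboard UI with dark theme and neon accents",
      "response_format": "url"
    }
  }
}
```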
```bash
curl -X POST https://mcp.aerostack.dev/s/aerostack/mcp-openai \
  -H 'Content-Type: application/json' \
  -H 'X-Mcp-Secret-OPENAI-API-KEY: your-api-key' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"chat_completion","arguments":{"messages":[{"role":"user","content":"Say hello in one sentence."}]}}}'
```
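To discover the available tools and their input schemas first, the same endpoint should also accept a standard MCP tools/list request (assumed from the JSON-RPC shape above; send it with the same headers):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```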
License: MIT
Live Endpoint
https://mcp.aerostack.dev/s/aerostack/mcp-openai
Sub-50ms globally · Zero cold start
**Can I build semantic search with this MCP?**
Yes. `create_embedding` returns a float vector for any text. You'd store vectors externally (e.g. Supabase pgvector) and query them separately; the MCP handles the OpenAI API call, not vector storage. Pair it with a database MCP for end-to-end semantic search.

**What does `create_image` return, and can my agent use the result?**
`create_image` returns a URL or base64 string depending on the `response_format` you set. Claude can pass that URL to a downstream tool (e.g. upload to Supabase Storage or attach to a Slack message) within one multi-MCP workflow.

**Can I moderate content before it is sent?**
Yes. `create_moderation` returns category scores (hate, violence, self-harm, etc.) for any input text. Claude can call it as a pre-send gate: if any score exceeds your threshold, the message is blocked and a fallback is used instead.

**How do I find my fine-tuned models?**
`list_fine_tuning_jobs` shows all fine-tuning runs and their resulting model IDs. `list_models` returns all models, including fine-tuned ones prefixed with `ft:`. Claude can cross-reference both to find a successfully trained model ready for inference.
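The pre-send moderation gate described above would start with a call like the following sketch. The `input` argument name mirrors the OpenAI moderations API and is an assumption; the agent would then compare the returned category scores against its own threshold:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "create_moderation",
    "arguments": {
      "input": "draft message text to check before sending"
    }
  }
}
```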