Fireworks AI
MCP Server · Hosted · Public
Fast LLM inference via Fireworks AI — generate text, run chat completions, and access fine-tuned models at speed.
Use with AI Assistants (MCP)
Connect Claude, Cursor, or any MCP-compatible client — then call tools directly
① Add This MCP Server
Paste into your AI client config — then all its tools are available instantly.
{
  "mcpServers": {
    "fireworks-ai": {
      "url": "https://mcp.aerostack.dev/s/aerostack/mcp-fireworks-ai",
      "headers": {
        "Authorization": "Bearer YOUR_AEROSTACK_TOKEN"
      }
    }
  }
}
Replace YOUR_AEROSTACK_TOKEN with your API token from the dashboard.
② Call a Tool
Ask your AI assistant to call a specific tool, or send raw JSON-RPC:
Natural Language Prompt
“Use the chat_completion tool to run a fast chat completion via Fireworks AI. It supports Llama, Mixtral, and other open-source models through an OpenAI-compatible API. Default model: llama-v3p1-8b-instruct.”
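For the raw JSON-RPC route, a minimal sketch of the request an MCP client would POST to the endpoint is shown below. The `tools/call` method follows the MCP convention; the `chat_completion` tool name comes from the prompt above, while the `model` and `messages` argument shapes are assumptions based on the server's OpenAI-compatible description, not confirmed schema.

```python
import json

# Hosted endpoint from the config above.
ENDPOINT = "https://mcp.aerostack.dev/s/aerostack/mcp-fireworks-ai"

def build_tool_call(token: str, prompt: str) -> dict:
    """Build headers and a JSON-RPC 2.0 body for a tools/call request.

    The argument names ("model", "messages") are assumptions about this
    server's chat_completion schema, mirroring the OpenAI-style API it claims.
    """
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",  # your AeroStack token
    }
    body = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "chat_completion",
            "arguments": {
                "model": "llama-v3p1-8b-instruct",  # listed default model
                "messages": [{"role": "user", "content": prompt}],
            },
        },
    }
    return {"url": ENDPOINT, "headers": headers, "json": body}

request = build_tool_call("YOUR_AEROSTACK_TOKEN", "Say hello")
print(json.dumps(request["json"], indent=2))
```

Sending this body with any HTTP client (curl, requests, fetch) to the URL, with the two headers attached, is the equivalent of the natural-language prompt above.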
Using a Workspace?
Add this MCP to your Workspace — your team shares one token, secrets are stored securely, and every AI agent in the workspace can call it without per-user setup.
Details
Live Endpoint
https://mcp.aerostack.dev/s/aerostack/mcp-fireworks-ai
Sub-50ms globally · Zero cold start
Publisher
@aerostack
Pre-built functions for the most common MCP tool patterns. Clone, extend, and deploy.