OpenAI Proxy
Drop-in proxy for OpenAI chat completions
About
A drop-in proxy for OpenAI's chat completions API deployed on the edge. Full streaming passthrough, CORS headers, and zero-config deployment — point any OpenAI SDK client at your proxy URL and it just works.
Deploy free for personal use, or add a Gateway for consumer key auth, rate limiting, and token-based billing to resell access.
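Since the proxy is a drop-in replacement, repointing a client is just a base-URL swap. A minimal sketch using only the standard library, with a hypothetical proxy URL and a placeholder key — the payload is the same one the official SDK sends, which is why any OpenAI client works unchanged:

```python
import json
import urllib.request

# Hypothetical proxy URL -- replace with your deployment's URL.
PROXY_URL = "https://your-proxy.example.com/v1/chat/completions"

def build_chat_request(messages, model="gpt-4o-mini", api_key="sk-..."):
    """Build an OpenAI-compatible request aimed at the proxy.

    Only the host differs from a direct OpenAI call; the body and
    headers are identical, so existing client code needs no changes.
    """
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        PROXY_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            # With a Gateway attached, this would be a consumer key
            # rather than your OpenAI API key.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```

Sending the request (e.g. with `urllib.request.urlopen`) then behaves exactly as a direct OpenAI call would.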
API Endpoints
/v1/chat/completions
/health
How It Works
Request Reception
POST /v1/chat/completions — OpenAI-compatible request received at the edge.
Auth & Billing
If Gateway is attached, consumer key validated and token quota checked.
Proxy Forward
Request forwarded to OpenAI with your API key — streaming fully passed through.
Response Relay
OpenAI response streamed back to the client with CORS headers.
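The four steps above can be sketched as a single handler. The names here (`check_quota`, `forward_to_openai`) are illustrative stand-ins, not the template's real API; the point is the shape of the pipeline, including streaming passthrough via an iterator of chunks:

```python
# Typical permissive CORS headers; the template's exact set may differ.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "Authorization, Content-Type",
}

def handle(request, check_quota, forward_to_openai):
    # 1. Request reception: only the chat-completions route is proxied here.
    if request["path"] != "/v1/chat/completions":
        return {"status": 404, "headers": CORS_HEADERS, "body": iter(())}
    # 2. Auth & billing: quota check runs only when a Gateway is attached.
    if check_quota is not None and not check_quota(
        request["headers"].get("Authorization")
    ):
        return {"status": 429, "headers": CORS_HEADERS, "body": iter(())}
    # 3. Proxy forward: upstream returns an iterator of response chunks,
    #    so server-sent-event streams pass through untouched.
    upstream = forward_to_openai(request["body"])
    # 4. Response relay: chunks stream back with CORS headers added.
    return {"status": 200, "headers": CORS_HEADERS, "body": upstream}
```

Because the body is never buffered, streamed completions arrive at the client token by token, just as they would from OpenAI directly.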
Use Cases
API Reselling
Wrap OpenAI access with your own auth and billing to offer AI as a service.
CORS Proxy
Call OpenAI from browser-side code without exposing your API key.
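Browser calls work because the proxy answers the browser's `OPTIONS` preflight before the real `POST`. A minimal sketch — the header values are typical defaults, not necessarily the template's exact ones:

```python
def cors_preflight_response():
    """Reply to a browser's OPTIONS preflight before the real POST.

    The browser only sends the chat-completions request if this
    response permits the method and headers it intends to use.
    """
    return {
        "status": 204,  # no body needed for a preflight reply
        "headers": {
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Methods": "POST, OPTIONS",
            "Access-Control-Allow-Headers": "Authorization, Content-Type",
        },
    }
```

The browser-side code sends a consumer key in `Authorization`; your OpenAI API key never leaves the proxy.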
Rate Limit Buffer
Add your own rate limiting layer in front of OpenAI to manage burst traffic.
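One common shape for such a layer is a token bucket, sketched below. The template does not prescribe an algorithm, so the parameters here (refill rate, burst size) are assumptions for illustration:

```python
import time

class TokenBucket:
    """Illustrative token-bucket limiter for smoothing burst traffic."""

    def __init__(self, rate, burst, now=None):
        self.rate = rate      # tokens refilled per second
        self.burst = burst    # maximum bucket size
        self.tokens = burst   # start full
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Requests that fail `allow()` can be rejected with `429` (or queued) before they ever reach OpenAI, keeping you under your upstream limits.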
Logging & Analytics
Log all OpenAI requests through your proxy for usage analytics and debugging.
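A hypothetical logging wrapper: it records the model, request latency, and the `usage` block OpenAI returns, which is the same data token-based billing keys on. The `forward` callable stands in for the proxied OpenAI call:

```python
import json
import time

def logged_call(forward, payload, log=print):
    """Forward a chat-completions payload and emit one JSON log line."""
    start = time.monotonic()
    response = forward(payload)  # stand-in for the proxied OpenAI call
    log(json.dumps({
        "model": payload.get("model"),
        "latency_ms": round((time.monotonic() - start) * 1000, 1),
        # OpenAI's usage block: prompt/completion/total token counts.
        "usage": response.get("usage", {}),
    }))
    return response
```

Pointing `log` at your analytics sink instead of `print` gives per-request usage data without touching client code.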
What's Included
Pipeline
Billing Model
metered
Pay per token used. Free tier included.