RAG Q&A Bot
Answer questions about your documentation
About
Build a production-ready Q&A system that answers questions using your documentation. Upload PDFs, markdown, or plain text — the system automatically chunks, embeds, and indexes your content into a vector database.
When users ask questions, the RAG pipeline retrieves the most relevant passages, feeds them as context to an LLM, and returns grounded answers with source citations.
API Endpoints
/ingest
/chat
/search
/docs
/health
How It Works
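A minimal sketch of calling the /chat endpoint from Python, using only the standard library. The base URL and the JSON payload shape ({"question": ...}) are assumptions for illustration; the deployed service's /docs endpoint describes the actual schema.

```python
import json
import urllib.request

def build_chat_request(question, base_url="http://localhost:8000"):
    """Build a POST request for the /chat endpoint. The base URL and
    payload shape are assumptions; check the service's /docs."""
    body = json.dumps({"question": question}).encode()
    return urllib.request.Request(
        f"{base_url}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(question, base_url="http://localhost:8000"):
    # Send the request and decode the JSON response body.
    with urllib.request.urlopen(build_chat_request(question, base_url)) as resp:
        return json.load(resp)
```

Swap the base URL for your deployment's address once the template is running.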
Document Ingestion
POST /ingest — documents are chunked, embedded, and stored in a vector database.
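The exact chunking strategy isn't specified here; a common baseline is fixed-size chunks with overlap, so text cut at a boundary still appears intact in a neighboring chunk. A sketch under that assumption:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size chunks with `overlap` characters of
    shared context between consecutive chunks (sizes are illustrative)."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last chunk reached the end of the text
    return chunks
```

Each chunk would then be embedded and written to the vector database with its source metadata.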
Semantic Retrieval
Query is embedded and matched against the vector index (top-k nearest neighbors).
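Top-k nearest-neighbor matching can be illustrated with cosine similarity over a small in-memory index (a real vector database would use an approximate index, but the ranking logic is the same):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, index, k=3):
    """index: list of (doc_id, embedding) pairs.
    Returns the k doc ids most similar to the query embedding."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```
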
Context Assembly
Retrieved passages are assembled into a prompt together with the user question and system instructions.
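The prompt format below is illustrative, not the template's actual wording; the point is that passages, instructions, and the question are combined into one string with numbered passages that citations can refer back to:

```python
def build_prompt(question, passages,
                 system_instructions="Answer using only the provided context."):
    """Assemble retrieved passages, system instructions, and the user
    question into a single prompt string (format is an assumption)."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"{system_instructions}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```
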
LLM Generation
The assembled context is sent to the configured LLM for response generation.
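Since the LLM provider is configurable, the request shape depends on your choice; a chat-completions-style body is a common denominator. The model name and temperature here are placeholders, not the template's defaults:

```python
def build_llm_request(prompt, model="gpt-4o-mini", temperature=0.0):
    """Build a chat-completions-style request body. The model name is a
    placeholder; the template lets you configure the provider."""
    return {
        "model": model,
        "temperature": temperature,  # low temperature keeps answers grounded
        "messages": [{"role": "user", "content": prompt}],
    }
```
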
Citation Attachment
Source metadata from the matched documents is attached to the response for attribution.
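A sketch of the attachment step, assuming each vector-index match carries a "source" field in its metadata (the field name is an assumption):

```python
def attach_citations(answer, matches):
    """matches: list of dicts with a 'source' metadata field from the
    vector index. Returns the answer plus deduplicated citations,
    preserving retrieval order."""
    seen, citations = set(), []
    for m in matches:
        src = m["source"]
        if src not in seen:
            seen.add(src)
            citations.append(src)
    return {"answer": answer, "citations": citations}
```
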
Use Cases
Product Documentation Bot
Let customers search and ask questions about your API docs, guides, and changelogs.
Internal Knowledge Base
Index company wikis, runbooks, and SOPs so teams get instant answers.
Developer Portal Q&A
Embed in your developer portal to help users debug integration issues faster.
Onboarding Assistant
New hires ask questions about processes, tools, and policies grounded in real documents.
What's Included
Pipeline
Billing Model
Metered: pay per token used. Free tier included.