Content Moderator
AI content moderation with configurable rules
About
Classify user-generated content as safe or unsafe using AI-powered analysis. Detect spam, hate speech, NSFW content, and custom violation categories — all configurable per-deployment.
Batch moderation processes entire comment feeds in one request, and custom rules are stored persistently, so you can adjust sensitivity and blocked keywords without redeploying.
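As an illustration, a persisted rules document could look like the following sketch. The field names (sensitivity, blocked_keywords, categories) are assumptions for illustration, not the service's actual schema:

```python
# Hypothetical shape of a custom rules document stored via the rules endpoint.
# Field names are illustrative assumptions, not a documented schema.
custom_rules = {
    "sensitivity": 0.7,                              # 0.0 (lenient) .. 1.0 (strict)
    "blocked_keywords": ["free crypto", "spam-link.example"],
    "categories": ["spam", "hate_speech", "nsfw"],   # violation categories to enforce
}

# Sanity-check the document before persisting it.
assert 0.0 <= custom_rules["sensitivity"] <= 1.0
assert all(isinstance(k, str) for k in custom_rules["blocked_keywords"])
```

Because rules live in persistent storage, updating this document changes moderation behavior immediately, with no redeploy.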
API Endpoints
/moderate
/moderate/batch
/rules
/health
How It Works
Content Submission
POST /moderate — text submitted for AI classification.
Rule Check
Custom rules (keywords, sensitivity) are loaded from persistent storage and applied before the AI classification step.
AI Classification
LLM classifies content against safety categories with confidence scores.
Verdict Return
Returns safe/unsafe verdict with category labels and confidence scores.
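The steps above can be sketched as a small pipeline: custom keyword rules run first, and only content that passes them reaches the model. The function and field names below are illustrative assumptions, and the classifier is a stand-in for the real LLM call:

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    safe: bool
    # Category labels with confidence scores, e.g. [("spam", 0.92)].
    categories: list = field(default_factory=list)

def classify_with_llm(text: str) -> Verdict:
    # Placeholder for the AI classification step; a real deployment would
    # call the model here and return per-category confidence scores.
    return Verdict(safe=True)

def moderate(text: str, rules: dict) -> Verdict:
    # 1) Rule check: blocked keywords are applied before the model runs.
    lowered = text.lower()
    for keyword in rules.get("blocked_keywords", []):
        if keyword in lowered:
            return Verdict(safe=False, categories=[("keyword_block", 1.0)])
    # 2) AI classification for anything the rules did not catch.
    return classify_with_llm(text)

rules = {"blocked_keywords": ["free crypto"]}
print(moderate("Get FREE crypto now!!!", rules).safe)  # False: keyword rule fires
```

Running the rule check first keeps obvious violations from ever consuming model tokens, which matters under metered billing.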
Use Cases
Comment Moderation
Auto-moderate user comments on your blog, forum, or social platform.
UGC Safety
Screen user-generated content (bios, reviews, descriptions) before publishing.
Chat Safety
Filter messages in real-time chat applications for policy violations.
Bulk Review Queue
Process queued content in batches for efficient human-in-the-loop review.
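One way to feed a bulk review queue is to split queued items into fixed-size chunks and submit each chunk as one batch request. The payload shape below is an assumption for illustration, not the documented body of /moderate/batch:

```python
def chunk(items: list, size: int) -> list:
    """Split a review queue into fixed-size batches for batch submission."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Seven queued comments split into batches of three.
queue = [f"comment {n}" for n in range(7)]
batches = chunk(queue, size=3)

# Each batch becomes one hypothetical request body for /moderate/batch.
payloads = [{"items": batch} for batch in batches]
```

Batching keeps request counts low while a human reviewer works through the returned verdicts queue by queue.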
What's Included
Pipeline
Billing Model
metered
Pay per token used. Free tier included.