Legal beta
Moderation and reports
How AI moderation, user reports and the planned human appeal process operate during the beta.
Updated on May 10, 2026
AI moderation
Posts, comments and images may be analyzed by automatic moderation before publication. The system classifies content as SAFE, UNCERTAIN or UNSAFE.
UNSAFE content may be blocked before publication. UNCERTAIN content may be published with a warning or logged for later review.
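The decision flow above can be sketched as a small lookup from classifier verdict to publication outcome. This is an illustrative model only, assuming one outcome per verdict; the names `Verdict` and `gate` are hypothetical and not part of Socially's actual system.

```python
from enum import Enum

class Verdict(Enum):
    SAFE = "SAFE"
    UNCERTAIN = "UNCERTAIN"
    UNSAFE = "UNSAFE"

def gate(verdict: Verdict) -> dict:
    """Map a classifier verdict to a publication decision (sketch)."""
    if verdict is Verdict.SAFE:
        # Published normally, no follow-up needed
        return {"publish": True, "warning": False, "log_for_review": False}
    if verdict is Verdict.UNCERTAIN:
        # Published with a warning and logged for later human review
        return {"publish": True, "warning": True, "log_for_review": True}
    # UNSAFE: blocked before publication
    return {"publish": False, "warning": False, "log_for_review": True}
```

For example, `gate(Verdict.UNSAFE)` yields a decision that blocks publication, while `gate(Verdict.UNCERTAIN)` publishes with a warning and flags the content for review.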
V1 reports
The report button lets a signed-in user report a post to the moderation team. In this beta, reporting is intentionally simple and does not yet ask for a detailed reason.
Back-office and human review
A moderation back-office is planned so that moderators can review content reported by users or blocked by the AI, then confirm or correct the decision.
When content is definitively rejected after review, Socially may notify the user by email, stating the main reason.
Appeal
An appeal mechanism for AI decisions is planned before moderation is fully deployed. It will let users request human review when they believe their content was misclassified.
Contact
For a legal report, appeal or urgent request, contact hippolyte.devweb@gmail.com.