Reporting abuse, harassment, or rule violations

Camila has a real moderation team available 24/7. Reports are reviewed by humans, not just automated systems, and the most serious categories carry first-response SLAs of 30 minutes or less.

How to report

  • In chat: hover any message → Report. Choose a category, optionally add a note.
  • On a profile: three-dot menu → Report user.
  • In a room: the report button below the player flags the entire stream — useful when something on camera is the problem.
  • In DMs: the conversation header has a Report & block option that captures the message thread.
  • By email: abuse@camila.live — for things you can't report in-app or for follow-ups.

When you report, you're prompted to pick a category. The category matters: each one routes your report to its own moderator queue with its own SLA.

Categories and SLAs

Category                            | First-response SLA | What happens if confirmed
Anything involving someone under 18 | < 5 min            | Account permanently banned, content preserved, law enforcement notified
Doxxing or threats of violence      | < 30 min           | Content removed, account banned, model-trust escalation, law enforcement if applicable
Non-consensual content              | < 30 min           | Content removed, account banned, law enforcement notified
Trafficking indicators              | < 30 min           | Stream stopped, model contacted privately, anti-trafficking partners engaged
Impersonation                       | < 1 hr             | Impersonator account banned, real model notified
Off-platform funnels                | < 4 hr             | Warning, then escalating penalties per content rules
Harassment / chat abuse             | < 4 hr             | Mod action against the offender; repeat offences accumulate
Spam / bot accounts                 | < 24 hr            | Account flagged, often clustered with similar accounts and banned together
Tag misuse / fake thumbnail         | < 48 hr            | Profile review and corrective action
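
If you're curious how category-based routing might look under the hood, here's a minimal TypeScript sketch. The category keys, queue names, and SLA encoding are illustrative assumptions derived from the table above, not Camila's actual implementation.

```typescript
// Illustrative sketch of category -> moderator-queue routing.
// Every name and value here is an assumption based on the table above.

type Category =
  | "minor_safety"
  | "doxxing_or_threats"
  | "non_consensual_content"
  | "trafficking_indicators"
  | "impersonation"
  | "off_platform_funnel"
  | "harassment"
  | "spam"
  | "tag_misuse";

interface QueueRoute {
  queue: string;      // which moderator queue the report lands in
  slaMinutes: number; // first-response target, in minutes
}

const ROUTES: Record<Category, QueueRoute> = {
  minor_safety:           { queue: "critical-safety", slaMinutes: 5 },
  doxxing_or_threats:     { queue: "urgent-safety",   slaMinutes: 30 },
  non_consensual_content: { queue: "urgent-safety",   slaMinutes: 30 },
  trafficking_indicators: { queue: "urgent-safety",   slaMinutes: 30 },
  impersonation:          { queue: "identity",        slaMinutes: 60 },
  off_platform_funnel:    { queue: "content-rules",   slaMinutes: 240 },
  harassment:             { queue: "chat-abuse",      slaMinutes: 240 },
  spam:                   { queue: "spam-cluster",    slaMinutes: 1440 },
  tag_misuse:             { queue: "profile-review",  slaMinutes: 2880 },
};

// Picking a category is what selects the queue and starts the clock.
function route(category: Category): QueueRoute {
  return ROUTES[category];
}

console.log(route("harassment")); // { queue: "chat-abuse", slaMinutes: 240 }
```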

What we ask in a report

Most reports take 30 seconds. We ask:

  • The category (a dropdown)
  • An optional 1-line context note
  • Whether to also block the user (default yes)

We deliberately don't ask for proof — the moderator queue has full access to the chat, profile, room, or DM in question, including timestamps, screenshots, and audit logs. Your report is the trigger; we do the verification.
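
As a rough illustration of how little a report carries, the submitted payload could be as small as this. The field names are hypothetical, not Camila's real API; the id of the reported message, profile, room, or DM is attached automatically by the client.

```typescript
// Hypothetical shape of a report submission; these field names are
// illustrative, not Camila's real API.
interface ReportSubmission {
  category: string;           // chosen from the dropdown
  note?: string;              // optional one-line context
  blockReportedUser: boolean; // defaults to true in the UI
  subjectRef: string;         // id of the reported message/profile/room/DM,
                              // attached automatically; no proof required
}

const example: ReportSubmission = {
  category: "harassment",
  note: "kept posting my city after being asked to stop",
  blockReportedUser: true,
  subjectRef: "msg_12345", // placeholder id
};

console.log(JSON.stringify(example, null, 2));
```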

What we don't ask

  • We don't ask for your real identity to receive a report.
  • We don't share your identity, or the fact that you reported at all, with the reported user.

What we send back

If you reported in-app: you'll see a confirmation, and a brief follow-up in your notifications when a moderator reaches a decision. We don't disclose the specific action taken (moderation outcomes are private); we confirm only that the report was actioned or, in rare cases, that no rule was broken.

If you reported via email: we reply with a ticket number. Status updates flow there.

False reports

We don't penalise honest reports that turn out to be wrong. Pattern detection catches obviously bad-faith reporting (spam-reporting a rival, mass-reporting to suppress someone), and once a clear pattern is confirmed we reduce that account's reporting weight.
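
A toy sketch of what "reducing reporting weight" could mean in practice; the threshold and the curve are invented for illustration, not the actual mechanism.

```typescript
// Toy model of reporting weight. Honest-but-wrong reports cost nothing;
// a confirmed pattern of bad-faith reports shrinks how much each new
// report from that account counts. Thresholds here are invented.
function reportWeight(confirmedBadFaithReports: number): number {
  if (confirmedBadFaithReports < 3) return 1.0; // honest mistakes keep full weight
  return Math.max(0.1, 1 / confirmedBadFaithReports); // floored, never fully muted
}

console.log(reportWeight(0)); // 1
console.log(reportWeight(5)); // 0.2
```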

When to escalate beyond Camila

Some incidents need outside help:

  • Imminent threat of physical violence: contact local emergency services first.
  • Known illegal content involving minors: NCMEC CyberTipline (US) at https://report.cybertip.org, or your country's equivalent. We're a mandatory reporter for those, but a direct report from you adds context.
  • Evidence needed for a legal matter: tell us immediately so we can apply a legal-hold flag and stop any retention purge on those records (see the sketch after this list).
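
Here's a minimal sketch of how a legal-hold flag might interact with a retention purge. The 90-day window and all field names are assumptions for illustration, not our actual retention policy.

```typescript
// Hypothetical retention purge that honours a legal-hold flag.
// The retention window and field names are illustrative.
interface StoredRecord {
  id: string;
  ageDays: number;
  legalHold: boolean; // set when a legal matter is flagged to us
}

const RETENTION_DAYS = 90; // illustrative window, not Camila's actual policy

function purgeable(r: StoredRecord): boolean {
  // A legal hold exempts the record from purging, regardless of age.
  return r.ageDays > RETENTION_DAYS && !r.legalHold;
}

const records: StoredRecord[] = [
  { id: "a", ageDays: 200, legalHold: true },  // kept: under hold
  { id: "b", ageDays: 200, legalHold: false }, // purged
];
console.log(records.filter(purgeable).map(r => r.id)); // ["b"]
```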

We work with law enforcement under proper legal process. We don't share data outside that process.
