Internal Content Monitoring Policy
1. Purpose and Scope
This policy exists to ensure a safe, compliant, and positive user experience by protecting users from harmful, illegal, or unethical content. It applies to all content exchanged on the Utility3 Ltd. platform, including AI-generated text, audio, and images, and covers all users globally, free and paid alike, across every subscription tier.
2. Content Types and Interactions
Utility3 Ltd. facilitates various user interactions with AI characters through:
- AI-generated messages: Text-based exchanges between users and AI characters.
- AI-generated voice interactions: Audio messages and conversations between users and AI.
- AI-generated images: Visual content generated based on user prompts.
Uncensored content, including adult-themed conversations, is permitted under strict guidelines, ensuring the platform remains within legal boundaries. Guardrails are embedded within the AI’s prompting and response systems to prevent the generation of illegal, harmful, or otherwise inappropriate content.
3. Content Moderation Approach
Utility3 Ltd. employs a two-tiered approach to content moderation, combining automated monitoring with human review:
- Automated Monitoring: AI-based tools continuously monitor user interactions in real-time. These systems are designed to automatically flag any content that violates platform guidelines or could potentially be harmful or illegal.
- Flagging and Escalation: Flagged content is escalated to a human moderation team. This team assesses the severity of the breach and determines the most appropriate action (warning, suspension, ban, or no action).
- Manual Review: Human moderators review flagged interactions to ensure accuracy, fairness, and compliance with the platform’s guidelines. This provides an added layer of protection against false positives or ambiguous cases where context is important.
Moderators receive ongoing training on new content trends, AI behavior updates, and evolving regulatory requirements.
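The two-tiered flow described above can be sketched as a simple decision pipeline. This is purely illustrative: the function names, the keyword check standing in for the automated classifier, and the default outcome are assumptions for the sketch, not the platform's actual detection logic.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    WARNING = auto()
    SUSPENSION = auto()
    BAN = auto()

@dataclass
class Interaction:
    user_id: str
    content: str

def automated_flag(interaction: Interaction) -> bool:
    # Tier 1 stand-in: a trivial keyword match takes the place of the
    # platform's real-time detection models (hypothetical term list).
    prohibited_terms = {"terrorism", "grooming"}
    return any(term in interaction.content.lower() for term in prohibited_terms)

def human_review(interaction: Interaction) -> Action:
    # Tier 2 stand-in: severity assessment is performed by trained
    # moderators; a fixed outcome is returned here only for illustration.
    return Action.WARNING

def moderate(interaction: Interaction) -> Action:
    # Automated monitoring first; only flagged content is escalated
    # to the human moderation team.
    if not automated_flag(interaction):
        return Action.NO_ACTION
    return human_review(interaction)
```

The key property the sketch captures is that human judgment, not automation, determines the final enforcement outcome for flagged content.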
4. Harmful or Prohibited Content
Utility3 Ltd. prohibits all AI-generated content related to the following:
- Violent or abusive content: Including but not limited to murder, torture, physical violence, violent sex, rape, and suicide.
- Illegal or unethical activities: Such as kidnapping, drug trafficking, theft, terrorism, and fraud.
- Sexually explicit and harmful content: Including incest, paedophilia, bestiality, necrophilia, grooming, and other prohibited topics listed in the platform guidelines.
- Discriminatory content: Any form of racism, homophobia, racial slurs, hate speech, and genocide promotion.
- Other restricted topics: Such as abductions, animal cruelty, cannibalism, self-harm, mutilation, drunk sex, and other topics enumerated in the platform guidelines.
The AI system is programmed with strict prompting guidelines to prevent the generation of this content, ensuring it does not respond to requests or prompts that violate these restrictions.
5. User Transparency and Communication
Utility3 Ltd. values transparency in its content moderation approach:
- Terms of Service: The platform’s terms of service explicitly outline acceptable and prohibited content, ensuring users understand the rules governing their interactions.
- Appeal Process: Users can appeal moderation decisions. All appeals are reviewed by the moderation team, which may overturn, amend, or uphold the initial decision based on the findings. A clear communication process ensures users are informed of the outcomes.
To ensure clarity, these policies are communicated to users upon sign-up and are available for review at any time through the platform.
6. Internal Roles and Responsibilities
Utility3 Ltd. has designated the following roles within its content monitoring ecosystem:
- Automated Monitoring: AI tools perform the initial real-time monitoring and flagging of content.
- Human Moderation: An internal team is responsible for reviewing flagged content, assessing the severity of the breach, and applying the necessary enforcement measures.
- Training and Updates: The internal team is regularly trained on policy updates, AI advancements, and trends in user interactions to stay informed about emerging content risks and how best to address them.
Cross-departmental collaboration ensures legal, technical, and operational alignment to maintain the integrity of the platform.
7. Enforcement and Consequences
Utility3 Ltd. has a graduated enforcement system to handle content violations:
- Warnings: Issued for minor infractions, where the content breaches guidelines but does not pose significant risk.
- Temporary Suspensions: For repeated or moderate violations, users may be temporarily suspended, with the length of suspension determined based on the severity of the violation.
- Permanent Bans: For severe breaches or repeated violations, users may be permanently banned from the platform.
These enforcement actions are communicated clearly to users, and all actions are documented for internal review and compliance purposes.
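As an illustration only, the graduated ladder above might be expressed as a small decision function; the repeat-violation threshold shown is a hypothetical value, not one defined by this policy.

```python
def next_enforcement_action(prior_violations: int, severe: bool) -> str:
    """Illustrative escalation ladder: warning -> temporary suspension -> permanent ban."""
    if severe:
        # Severe breaches skip the ladder and result in a permanent ban.
        return "permanent ban"
    if prior_violations == 0:
        # First minor infraction: issue a warning.
        return "warning"
    if prior_violations < 3:  # threshold is an assumption for the sketch
        # Repeated or moderate violations: temporary suspension.
        return "temporary suspension"
    return "permanent ban"
```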
8. Monitoring Frequency and Reporting
Utility3 Ltd. employs the following monitoring and reporting practices:
- Real-time Monitoring: Automated systems monitor all content in real time, flagging potentially harmful interactions instantaneously.
- Reporting and Data Analysis: The platform logs all flagged content, which is analyzed periodically to identify patterns, trends, or areas requiring further policy adjustment or AI improvements. This ensures continuous optimization of the content monitoring system.
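A minimal sketch of the periodic analysis step, assuming flagged content is logged as category-tagged records; the record shape and category names are hypothetical:

```python
from collections import Counter

def analyze_flag_log(flag_log: list[dict]) -> Counter:
    """Aggregate flagged-content records by category to surface trends."""
    return Counter(record["category"] for record in flag_log)

# Hypothetical log entries as they might be collected by the flagging system.
log = [
    {"category": "hate speech"},
    {"category": "self-harm"},
    {"category": "hate speech"},
]
# Counter.most_common() then surfaces the categories that most need
# policy adjustment or AI-model improvement.
```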
9. Policy Review and Updates
Utility3 Ltd. commits to regularly reviewing and updating this internal content monitoring policy to adapt to changes in regulations, platform operations, or emerging user behaviors. This ensures the policy remains robust, relevant, and effective in maintaining the platform’s safety and integrity.