Alibaba Cloud Content Moderation
Content Moderation leverages deep learning and Alibaba's years of big-data analysis to accurately monitor images, video, text, and other multimedia content. It helps users reduce pornographic, violent, terrorist, drug-related, and other illegal or inappropriate content, and also minimizes spam advertising and other user-experience pain points. Automated moderation responds in under 0.1 seconds with an accuracy rate above 95 percent, and readily recognizes adverse images, videos, text, and audio depicting illicit behavior such as violence, terrorism, drugs, weapons, extremism, and profanity. The service handles billions of images, videos, text items, and audio clips daily using Alibaba's highly scalable deep learning technology. You can customize models to your specific requirements, and recognition continually improves as the system learns from new data.
Learn more
Utopia AI Moderator
Automation with Utopia AI Moderator increases quality, speeds up publishing, and reduces costs. Utopia AI Moderator is a fully automated moderation tool that protects your online community and your brand from abusive user-generated content, fraud, cyberbullying, and spam. It learns from the publishing decisions your human moderators have made previously, works in real time, and achieves higher accuracy than human moderators. It understands context, works in any language, and is especially strong on informal language, slang, and dialect. The tool raises the quality of published content, removes publishing delays, and provides reliable, consistent curation around the clock, freeing human moderators to focus on moderation policy management and only the most difficult cases. Utopia AI Moderator is ready for production use in just two weeks, moderates 100% of your incoming content, and stays up to date by learning as it works.
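The idea of learning from moderators' past publishing decisions can be illustrated with a toy text classifier. The sketch below is not Utopia's actual model; it is a minimal naive Bayes classifier written from scratch, assuming past decisions are available as (text, label) pairs with hypothetical "approve"/"reject" labels:

```python
from collections import Counter
import math

def train(examples):
    """Learn per-label word log-probabilities from past moderation
    decisions. examples: list of (text, label) pairs, where label is
    a hypothetical "approve" or "reject" tag from human moderators."""
    counts = {"approve": Counter(), "reject": Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["approve"]) | set(counts["reject"])
    total = sum(priors.values())
    model = {"priors": {}, "logprob": {}, "vocab": vocab}
    for label in counts:
        model["priors"][label] = math.log(priors[label] / total)
        # Laplace smoothing so unseen words don't zero out a label.
        denom = sum(counts[label].values()) + len(vocab)
        model["logprob"][label] = {
            w: math.log((counts[label][w] + 1) / denom) for w in vocab
        }
    return model

def classify(model, text):
    """Return the label with the highest posterior log-probability."""
    scores = {}
    for label, prior in model["priors"].items():
        score = prior
        for w in text.lower().split():
            if w in model["vocab"]:
                score += model["logprob"][label][w]
        scores[label] = score
    return max(scores, key=scores.get)

# Toy training data standing in for historical moderator decisions.
examples = [
    ("great photo thanks for sharing", "approve"),
    ("interesting point i agree", "approve"),
    ("buy cheap pills now click here", "reject"),
    ("click here for free money now", "reject"),
]
model = train(examples)
```

A real system would use far richer features and models, but the core loop is the same: human decisions become labeled training data, and the classifier generalizes those decisions to new content.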
Learn more
WebPurify
World-class image moderation and more: discover a faster, more efficient way to keep user-generated content clean. Given the complexities of nuance and context, WebPurify's human moderators are trained to flag violations that fall into gray areas and to make final image decisions that align with your brand standards. The Automated Intelligent Moderation (AIM) API service offers 24/7 protection from the risks of hosting user-generated content on your brand channels, detecting and removing unwanted images in real time. This solution delivers the best of automated and live moderation through a single, easy-to-use API: AI detects images with a high probability of containing undesirable content, limiting the volume that requires human review, and the remaining images are queued for expert moderators trained to flag any additional violations.
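The hybrid pattern described above, where AI auto-decides high-confidence cases and queues the gray area for human review, can be sketched as a simple threshold router. This is a hypothetical illustration, not the WebPurify API; the function name, thresholds, and return shape are all assumptions:

```python
def route(image_id, p_unsafe, reject_above=0.9, approve_below=0.1):
    """Route an image based on a model's estimated probability that
    it is unsafe. High-confidence cases are decided automatically;
    the gray area between the thresholds goes to human moderators."""
    if p_unsafe >= reject_above:
        return ("reject", image_id)
    if p_unsafe <= approve_below:
        return ("approve", image_id)
    return ("human_review", image_id)

# Scores here would come from an image-classification model.
decisions = [route(i, p) for i, p in
             [("a.jpg", 0.97), ("b.jpg", 0.03), ("c.jpg", 0.55)]]
# a.jpg is auto-rejected, b.jpg auto-approved, c.jpg queued for review.
```

Tuning the two thresholds trades automation rate against the volume of content sent to the human queue.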
Learn more
Safer
Safer helps stop the viral spread of child sexual abuse material (CSAM) across your platform, keeping your team, your company, and your users safer. It increases team efficiency and wellness, breaks down silos, and leverages community knowledge. Safer identifies known and unknown CSAM with perceptual hashing and machine learning algorithms, queues flagged content for review with moderation tools built with employee wellness in mind, and lets you review and report verified CSAM and securely store content in accordance with regulatory obligations. Detection runs at the point of upload, broadening your protection efforts to known as well as potentially new and unreported content. The Safer community works together to find more abuse content: the APIs broaden the shared knowledge of child abuse content by contributing hashes, scanning against other industry hash sets, and sending feedback on false positives.
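The perceptual-hashing technique Safer mentions can be illustrated with a difference hash (dHash): unlike a cryptographic hash, near-duplicate images produce hashes within a small Hamming distance of each other, so a slightly altered copy of known content still matches. This is a generic sketch, not Safer's implementation; the grid size, distance threshold, and helper names are assumptions:

```python
def dhash(pixels):
    """Difference hash: for each row of a grayscale grid, record
    whether each pixel is brighter than its right neighbor.
    An 8x9 grid yields 8 rows x 8 comparisons = a 64-bit hash."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches_known(h, known_hashes, max_distance=10):
    """Flag content whose hash is within max_distance bits of any
    hash in a list of known-abuse hashes."""
    return any(hamming(h, k) <= max_distance for k in known_hashes)

# A toy 8x9 grayscale gradient standing in for a decoded image,
# a lightly perturbed copy, and an unrelated image.
img = [[c * 10 for c in range(9)] for _ in range(8)]
near = [row[:] for row in img]
near[0][0] = 15  # small edit: hash differs by only a few bits
far = [[(8 - c) * 10 for c in range(9)] for _ in range(8)]
known = [dhash(img)]
```

Production systems decode and resize real image files before hashing and match against industry hash sets, but the match logic, small Hamming distance to a known hash, is the same idea.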
Learn more