Audience
Businesses looking for machine-assisted content-moderation APIs and human review tools for images, text, and videos
About Azure AI Content Safety
Azure AI Content Safety is a content moderation platform that uses AI to keep your content safe. Create better online experiences for everyone with powerful AI models that quickly and efficiently detect offensive or inappropriate content in text and images.
Language models analyze multilingual text, in both short and long form, with an understanding of context and semantics.
Vision models perform image recognition and detect objects in images using state-of-the-art Florence technology.
AI content classifiers identify sexual, violent, hate, and self-harm content with high levels of granularity.
Content moderation severity scores indicate the level of content risk on a scale from low to high.
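The classifiers and severity scores above can be combined into an application-side moderation policy. Below is a minimal sketch in Python, assuming each harm category comes back with an integer severity on a 0–7 scale; the category names mirror the four classifiers listed above, but the thresholds and the block/review/allow policy are illustrative assumptions, not service defaults:

```python
from typing import Dict

# Illustrative thresholds (assumptions, not Azure defaults): the minimum
# severity (0-7 scale) per category at which content is blocked outright.
BLOCK_THRESHOLDS: Dict[str, int] = {
    "Hate": 4,
    "SelfHarm": 4,
    "Sexual": 4,
    "Violence": 4,
}


def moderate(category_severities: Dict[str, int]) -> str:
    """Return 'block', 'review', or 'allow' for a set of category severities.

    A severity at or above a category's threshold blocks the content;
    a severity within 2 points below the threshold is routed to human review.
    """
    decision = "allow"
    for category, severity in category_severities.items():
        threshold = BLOCK_THRESHOLDS.get(category, 4)
        if severity >= threshold:
            return "block"
        if severity >= threshold - 2:
            decision = "review"
    return decision


# Example: text flagged with moderate violence severity is sent to review.
print(moderate({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 2}))  # review
```

Routing borderline scores to human review rather than blocking them outright is one way to pair the automated classifiers with the human review tools this audience is looking for.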