티스토리 수익 글 보기
Azure AI Content Safety
Enhance the safety of generative AI applications with advanced guardrails for responsible AI
Overview
- Detect and block violence, hate, sexual, and self-harm content. Configure severity thresholds for your specific use case, and adhere to your responsible AI policies.
- Create unique content filters tailored to your requirements using custom categories. Quickly train a new custom category by providing examples of content you need to block.
- Safeguard your AI applications against prompt injection attacks and jailbreak attempts. Identify and mitigate both direct and indirect threats with prompt shields.
- Identify and correct generative AI hallucinations and ensure outputs are reliable, accurate, and grounded in data with groundedness detection.
- Pinpoint copyrighted content and provide sources for preexisting text and code with protected material detection.
Use cases
Security
34,000
Full-time equivalent engineers dedicated to security initiatives at Microsoft.
15,000
Partners with specialized security expertise.
>100
Compliance certifications, including over 50 specific to global regions and countries.
Pricing
Flexible pricing to meet your needs
Pay for only what you use—no upfront costs. Azure AI Content Safety pay-as-you-go pricing is based on:
Related products
Use Azure AI Content Safety with other Azure AI products to create advanced guardrails for generative AI or to develop comprehensive solutions with built-in responsible AI tooling.
Customer stories
FAQ
Frequently asked questions
-
Content Safety models have been specifically trained and tested in the following languages: English, German, Spanish, Japanese, French, Italian, Portuguese, and Chinese. The service can work in other languages as well, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
Custom categories currently work well in English only. You can use other languages with your own dataset, but the quality might vary.
- Some Azure AI Content Safety features are only available in certain regions. See the features available in each region.
- The system monitors across four harm categories: hate, sexual, violence, and self-harm.
- Yes, you can adjust severity thresholds for each harm category filter.
- Yes, you can use the Azure AI Content Safety custom categories API to create your own content filters. By providing examples, you can train the filter to detect and block undesired content specific to your defined custom categories.
-
Prompt shields enhance the security of generative AI systems by defending against prompt injection attacks:
- Direct prompt attacks (jailbreaks): Users try to manipulate the AI system and bypass safety protocols by creating prompts that attempt to alter system rules or trick the model into executing restricted actions.
- Indirect attacks: Third-party content, like documents or emails, contains hidden instructions to exploit the AI system, such as embedded commands an AI might unknowingly execute.
- Groundedness detection identifies and corrects the ungrounded outputs of generative AI models, ensuring they’re based on provided source materials. This helps to prevent the generation of fabricated or false information. Using a custom language model, groundedness detection evaluates claims against source data and mitigates AI hallucinations.
-
Protected material detection for text identifies and blocks known text content, such as lyrics, articles, recipes, and selected web content, from appearing in AI-generated outputs.
Protected material detection for code detects and prevents the output of known code. It checks for matches against public source code in GitHub repositories. Additionally, the code referencing capability powered by GitHub Copilot enables developers to locate repositories for exploring and discovering relevant code.
- The content filtering system inside Azure OpenAI is powered by Azure AI Content Safety. It’s designed to detect and prevent the output of harmful content in both input prompts and output completions. It works alongside core models, including GPT and DALL-E.
Next steps
Choose the Azure account that’s right for you
Pay as you go or try Azure free for up to 30 days.
Azure Solutions
Azure cloud solutions
Solve your business problems with proven combinations of Azure cloud services, as well as sample architectures and documentation.
Business Solution Hub
Find the right Microsoft Cloud solution
Browse the Microsoft Business Solutions Hub to find the products and solutions that can help your organization reach its goals.