ZeroLeaks
ZeroLeaks is an AI prompt security platform that helps organizations identify and fix exposed system prompts, internal tools, and logic vulnerabilities that could allow prompt injection, prompt extraction, or other forms of leakage that expose internal instructions or intellectual property to unauthorized actors. It provides an interactive dashboard where users can scan system prompts manually or automate scanning via CI/CD integration to catch leaks and injection vectors before code is deployed, and it uses an AI-powered red-team-style analysis engine to assess prompt surfaces for logic flaws, extraction risks, and potential misuse with evidence, scoring, and remediation recommendations. ZeroLeaks targets enterprise-grade security for large-language-model-based products by offering vulnerability assessments that highlight prompt exposure depth, prioritized risks, proof, and access paths for issues found, and suggested fixes such as prompt restructuring, tool gating, etc.
Learn more
LLM Guard
By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM Guard ensures that your interactions with LLMs remain safe and secure. LLM Guard is designed for easy integration and deployment in production environments. While it's ready to use out-of-the-box, please be informed that we're constantly improving and updating the repository. Base functionality requires a limited number of libraries, as you explore more advanced features, necessary libraries will be automatically installed. We are committed to a transparent development process and highly appreciate any contributions. Whether you are helping us fix bugs, propose new features, improve our documentation, or spread the word, we would love to have you as part of our community.
Learn more
ZenGuard AI
ZenGuard AI is a security platform designed to protect AI-driven customer experience agents from potential threats, ensuring they operate safely and effectively. Developed by experts from leading tech companies like Google, Meta, and Amazon, ZenGuard provides low-latency security guardrails that mitigate risks associated with large language model-based AI agents. Safeguards AI agents against prompt injection attacks by detecting and neutralizing manipulation attempts, ensuring secure LLM operation. Identifies and manages sensitive information to prevent data leaks and ensure compliance with privacy regulations. Enforces content policies by restricting AI agents from discussing prohibited subjects, maintaining brand integrity and user safety. The platform also provides a user-friendly interface for policy configuration, enabling real-time updates to security settings.
Learn more
Lakera
Lakera Guard empowers organizations to build GenAI applications without worrying about prompt injections, data loss, harmful content, and other LLM risks. Powered by the world's most advanced AI threat intelligence. Lakera’s threat intelligence database contains tens of millions of attack data points and is growing by 100k+ entries every day. With Lakera guard, your defense continuously strengthens. Lakera guard embeds industry-leading security intelligence at the heart of your LLM applications so that you can build and deploy secure AI systems at scale. We observe tens of millions of attacks to detect and protect you from undesired behavior and data loss caused by prompt injection. Continuously assess, track, report, and responsibly manage your AI systems across the organization to ensure they are secure at all times.
Learn more