LLM Guard
By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM Guard ensures that your interactions with LLMs remain safe and secure. LLM Guard is designed for easy integration and deployment in production environments. While it's ready to use out-of-the-box, please be informed that we're constantly improving and updating the repository. Base functionality requires a limited number of libraries, as you explore more advanced features, necessary libraries will be automatically installed. We are committed to a transparent development process and highly appreciate any contributions. Whether you are helping us fix bugs, propose new features, improve our documentation, or spread the word, we would love to have you as part of our community.
Learn more
ZeroLeaks
ZeroLeaks is an AI prompt security platform that helps organizations identify and fix exposed system prompts, internal tools, and logic vulnerabilities that could allow prompt injection, prompt extraction, or other forms of leakage that expose internal instructions or intellectual property to unauthorized actors. It provides an interactive dashboard where users can scan system prompts manually or automate scanning via CI/CD integration to catch leaks and injection vectors before code is deployed, and it uses an AI-powered red-team-style analysis engine to assess prompt surfaces for logic flaws, extraction risks, and potential misuse with evidence, scoring, and remediation recommendations. ZeroLeaks targets enterprise-grade security for large-language-model-based products by offering vulnerability assessments that highlight prompt exposure depth, prioritized risks, proof, and access paths for issues found, and suggested fixes such as prompt restructuring, tool gating, etc.
Learn more
MCP Defender
MCP Defender is an open source desktop application that functions as an AI firewall, designed to monitor and protect Model Context Protocol (MCP) communications. It acts as a secure proxy between AI applications and MCP servers, analyzing all communications for potential threats in real-time. It automatically scans and protects all MCP tool calls, providing advanced LLM-powered detection of malicious activity. Users can manage the signatures used during scanning, allowing for customizable security measures. MCP Defender identifies and blocks common AI security threats, including prompt injection, credential theft, arbitrary code execution, and remote command injection. It supports integration with various AI applications such as Cursor, Claude, Visual Studio Code, and Windsurf, with more applications to be supported in the future. It offers intelligent threat detection, alerting users as soon as it identifies any malicious activity being performed by AI apps.
Learn more
WebOrion Protector Plus
WebOrion Protector Plus is a GPU-powered GenAI firewall engineered to provide mission-critical protection for generative AI applications. It offers real-time defenses against evolving threats such as prompt injection attacks, sensitive data leakage, and content hallucinations. Key features include prompt injection attack protection, safeguarding intellectual property and personally identifiable information (PII) from exposure, content moderation and validation to ensure accurate and on-topic LLM responses, and user input rate limiting to mitigate risks of security vulnerability exploitation and unbounded consumption. At the core of its capabilities is ShieldPrompt, a multi-layered defense system that utilizes context evaluation through LLM analysis of user prompts, canary checks by embedding fake prompts to detect potential data leaks, pand revention of jailbreaks using Byte Pair Encoding (BPE) tokenization with adaptive dropout.
Learn more