Superagent
Superagent is an open-source AI safety and agent development platform that helps developers and organizations build, deploy, and protect AI-driven applications and assistants by embedding safety guardrails, runtime security, and compliance controls into agent workflows. It provides purpose-trained models and APIs (such as Guard, Verify, and Redact) that block prompt injections, malicious tool calls, data leakage, and unsafe outputs in real time, while red-teaming tests probe production systems for vulnerabilities and deliver findings with remediation guidance. Superagent integrates with existing AI systems at the inference and tool-call layers to filter inputs and outputs, remove sensitive data such as PII and PHI, enforce policy constraints, and stop unauthorized actions before they occur. For security and engineering teams, it offers unified observability, live trace logs, policy controls, and audit trails.
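Conceptually, a guardrail layer like this sits in front of the model: the input is screened first, and the request proceeds only if it passes. Below is a minimal sketch of that pattern in Python; the endpoint URL, payload shape, and response fields are illustrative assumptions for this sketch, not Superagent's documented API.

```python
import requests

# Hypothetical guard endpoint and key -- placeholders, not Superagent's real API.
GUARD_URL = "https://guard.example.com/v1/check"
API_KEY = "your-api-key"

def guarded_call(user_input: str) -> str:
    # Step 1: screen the raw input before it ever reaches the model.
    check = requests.post(
        GUARD_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": user_input},  # assumed payload shape
        timeout=10,
    ).json()

    if not check.get("allowed", False):  # assumed response field
        # Blocked: e.g. a prompt injection or policy violation.
        raise ValueError(f"Input rejected by guardrail: {check.get('reason')}")

    # Step 2: only a screened input is forwarded to the model or tool call.
    return run_model(user_input)

def run_model(prompt: str) -> str:
    # Stand-in for your actual inference call.
    return f"(model response to: {prompt})"
```

The same screen-then-forward pattern applies on the output side as well, for example redacting PII before a response leaves the system.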
Learn more
ChainForge
ChainForge is an open-source visual programming environment for prompt engineering and large language model evaluation, enabling users to assess the robustness of prompts and text-generation models beyond anecdotal evidence. Users can test prompt ideas and variations across multiple LLMs simultaneously to identify the most effective combinations; evaluate response quality across different prompts, models, and settings to select the optimal configuration for a given use case; define evaluation metrics and visualize results across prompts, parameters, models, and settings to support data-driven decisions; and manage multiple conversations at once, template follow-up messages, and inspect outputs at each turn to refine interactions. ChainForge supports a range of model providers, including OpenAI, HuggingFace, Anthropic, Google PaLM2, Azure OpenAI endpoints, and locally hosted models such as Alpaca and Llama, and lets users adjust model settings and use visualization nodes.
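What the visual flow automates is essentially a cross product of prompts × models × settings, scored by a metric. Here is a rough sketch of that loop in plain Python, with a stubbed model call standing in for the providers ChainForge dispatches to; none of these function names come from ChainForge itself.

```python
from itertools import product

prompt_templates = [
    "Summarize: {text}",
    "In one sentence, summarize: {text}",
]
models = ["gpt-4o", "claude-sonnet", "local-llama"]

def call_model(model: str, prompt: str) -> str:
    # Stub: ChainForge would dispatch this to OpenAI, Anthropic, HuggingFace,
    # an Azure endpoint, or a locally hosted model.
    return f"[{model}] response to '{prompt}'"

def score(response: str) -> float:
    # Toy metric favoring shorter outputs; ChainForge lets you plug in
    # custom evaluators instead.
    return 1.0 / (1 + len(response))

text = "ChainForge is a visual environment for prompt engineering."
results = [
    (model, tmpl, score(call_model(model, tmpl.format(text=text))))
    for tmpl, model in product(prompt_templates, models)
]

# ChainForge's visualization nodes plot exactly this kind of result table.
for model, tmpl, s in sorted(results, key=lambda r: -r[2]):
    print(f"{s:.4f}  {model:14s}  {tmpl}")
```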
Learn more
Langfuse
Langfuse is an open-source LLM engineering platform that helps teams collaboratively debug, analyze, and iterate on their LLM applications.
Observability: Instrument your app and start ingesting traces to Langfuse (see the tracing sketch after this list)
Langfuse UI: Inspect and debug complex logs and user sessions
Prompts: Manage, version and deploy prompts from within Langfuse
Analytics: Track metrics (LLM cost, latency, quality) and gain insights from dashboards & data exports
Evals: Collect and calculate scores for your LLM completions
Experiments: Track and test app behavior before deploying a new version
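To make the observability item above concrete, here is a minimal tracing sketch using the decorator API of Langfuse's Python SDK (v2-style; the SDK has evolved, so check the current docs). It assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and optionally LANGFUSE_HOST are set in the environment.

```python
# pip install langfuse  (v2-style decorator API; newer versions may differ)
from langfuse.decorators import observe

@observe()  # the top-level call becomes a trace; nested calls become spans
def retrieve(query: str) -> str:
    return "retrieved context"

@observe()
def answer(query: str) -> str:
    context = retrieve(query)  # shows up as a child span in the Langfuse UI
    return f"Answer grounded in: {context}"

# Credentials are read from LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY /
# LANGFUSE_HOST environment variables; traces are sent in the background.
print(answer("What does Langfuse do?"))
```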
Why Langfuse?
- Open source
- Model and framework agnostic
- Built for production
- Incrementally adoptable - start with a single LLM call or integration, then expand to full tracing of complex chains/agents
- Use the GET API to build downstream use cases and export data (sketched below)
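As a sketch of the last point: at the time of writing, Langfuse exposes a public REST API with basic auth (public key as username, secret key as password) and paginated endpoints such as GET /api/public/traces; verify the exact paths and response fields against the current API reference.

```python
import os
import requests

host = os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com")
auth = (os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"])

# Assumed endpoint and response shape: paginated trace export.
resp = requests.get(f"{host}/api/public/traces", auth=auth, params={"limit": 50})
resp.raise_for_status()

for trace in resp.json().get("data", []):
    print(trace.get("id"), trace.get("name"), trace.get("timestamp"))
```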
Learn more
LMArena
LMArena is a web-based platform for comparing large language models through pairwise, anonymous match-ups: a user enters a prompt, two unnamed models respond, and the crowd votes for the better answer; model identities are revealed only after voting, enabling transparent, large-scale evaluation of model quality. These votes are aggregated into leaderboards and rankings, letting model contributors benchmark performance against peers and gather feedback from real-world usage. Its open framework supports many models from academic labs and industry, fosters community engagement through direct model testing and peer comparison, and helps identify the strengths and weaknesses of models in live interaction settings. In doing so, it moves beyond static benchmark datasets to capture dynamic user preferences and real-time comparisons, giving users and developers alike a way to observe which models deliver superior responses.
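The aggregation step can be illustrated with a simple Elo update over pairwise votes; the Chatbot Arena methodology behind LMArena originally used Elo-style ratings and later moved to a Bradley-Terry model, so treat this as a conceptual sketch. The votes below are fabricated for illustration, not real Arena data.

```python
from collections import defaultdict

K = 32  # Elo step size; the real leaderboard's statistical model differs

def expected(r_a: float, r_b: float) -> float:
    # Probability that A beats B under the Elo/logistic model.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    e_w = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e_w)
    ratings[loser] -= K * (1 - e_w)

ratings = defaultdict(lambda: 1000.0)

# Made-up (winner, loser) outcomes from anonymous match-ups.
votes = [("model-a", "model-b"), ("model-a", "model-c"),
         ("model-b", "model-c"), ("model-a", "model-b")]
for winner, loser in votes:
    update(ratings, winner, loser)

for model, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {rating:.0f}")
```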
Learn more