NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs using multiple automated metrics such as BLEU, METEOR, and ROUGE.
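To illustrate the kind of computation such a toolkit performs, the snippet below computes sentence-level BLEU using NLTK directly. This is only an analogue of what the toolkit reports, not NLG-Eval's own API, and the example strings are made up.

```python
# Sentence-level BLEU via NLTK -- an analogue of one metric NLG-Eval
# reports, shown with NLTK's public API rather than NLG-Eval's.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]  # one tokenized reference
hypothesis = ["the", "cat", "is", "on", "the", "mat"]    # tokenized system output

# Smoothing prevents a zero score when a higher-order n-gram has no match.
smooth = SmoothingFunction().method1
print(f"BLEU: {sentence_bleu(reference, hypothesis, smoothing_function=smooth):.4f}")
```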
Features
- Implements multiple NLG evaluation metrics
- Supports sentence-level and corpus-level evaluations
- Works with machine translation, summarization, and chatbot output
- Provides command-line and Python API access (see the sketch after this list)
- Allows custom metric integration
- Optimized for large-scale NLG benchmarking
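As a sketch of the Python API access mentioned above: the snippet below follows the interface of the widely distributed nlg-eval package (an NLGEval class with a compute_individual_metrics method). If this project's API differs, treat the names here as assumptions rather than documented calls.

```python
# Hypothetical usage sketch, modeled on the common nlg-eval package;
# the class and method names are assumptions, not confirmed from this listing.
from nlgeval import NLGEval

# Initialization loads metric resources up front; the embedding-based
# metrics can typically be disabled to speed this up.
nlgeval = NLGEval(no_skipthoughts=True, no_glove=True)

references = ["the cat sat on the mat"]   # one or more reference strings
hypothesis = "the cat is on the mat"      # system output to score

scores = nlgeval.compute_individual_metrics(references, hypothesis)
print(scores)  # dict of metric names to values, e.g. Bleu_1..4, METEOR, ROUGE_L
```

The package this sketch is modeled on also ships a command-line entry point, matching the CLI access listed above; consult the project's own documentation for exact flags and corpus-level scoring.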
Categories
Natural Language Processing (NLP)
License
MIT License