NLG-Eval is a toolkit for evaluating the quality of natural language generation (NLG) outputs against reference texts, using multiple automated metrics such as BLEU, METEOR, and ROUGE.
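To illustrate what reference-based metrics of this family compute, here is a minimal, self-contained sketch of clipped unigram precision — the core ingredient of BLEU-1. This is an illustrative simplification, not NLG-Eval's actual implementation (which also applies a brevity penalty and higher-order n-grams).

```python
from collections import Counter

def bleu1_precision(hypothesis: str, reference: str) -> float:
    """Clipped unigram precision, the BLEU-1 core (no brevity penalty).

    Each hypothesis token counts as correct at most as many times
    as it appears in the reference ("clipping").
    """
    hyp_tokens = hypothesis.split()
    ref_counts = Counter(reference.split())
    if not hyp_tokens:
        return 0.0
    hyp_counts = Counter(hyp_tokens)
    # Clip each token's count by its count in the reference.
    clipped = sum(min(count, ref_counts[tok]) for tok, count in hyp_counts.items())
    return clipped / len(hyp_tokens)
```

For example, scoring the hypothesis "the cat sat on the mat" against the reference "the cat is on the mat" matches 5 of 6 tokens ("sat" is unmatched), giving a precision of 5/6.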
Features
- Implements multiple NLG evaluation metrics
- Supports sentence-level and corpus-level evaluations
- Works with machine translation, summarization, and chatbot output
- Provides command-line and Python API access
- Allows custom metric integration
- Optimized for large-scale NLG benchmarking
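The features above — corpus-level aggregation over sentence-level scores plus pluggable custom metrics — can be sketched as follows. The function names (`register_metric`, `evaluate_corpus`, `exact_match`) are hypothetical and chosen for illustration; they are not NLG-Eval's actual API.

```python
from typing import Callable, Dict, List

# Hypothetical registry of sentence-level metrics: name -> scoring function.
METRICS: Dict[str, Callable[[str, str], float]] = {}

def register_metric(name: str, fn: Callable[[str, str], float]) -> None:
    """Register a custom sentence-level metric under a name."""
    METRICS[name] = fn

def exact_match(hyp: str, ref: str) -> float:
    """Toy built-in metric: 1.0 if hypothesis equals reference, else 0.0."""
    return 1.0 if hyp.strip() == ref.strip() else 0.0

register_metric("exact_match", exact_match)

def evaluate_corpus(hyps: List[str], refs: List[str],
                    metric_names: List[str]) -> Dict[str, float]:
    """Corpus-level score = mean of sentence-level scores per metric."""
    if len(hyps) != len(refs):
        raise ValueError("hypotheses and references must align one-to-one")
    results = {}
    for name in metric_names:
        fn = METRICS[name]
        scores = [fn(h, r) for h, r in zip(hyps, refs)]
        results[name] = sum(scores) / len(scores)
    return results
```

A custom metric drops in by calling `register_metric` with any `(hypothesis, reference) -> float` function, after which it participates in corpus evaluation alongside the built-ins.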
Categories
Natural Language Processing (NLP)
License
MIT License