Audience
Companies working with visually rich documents that need to improve retrieval accuracy and efficiency in OCR-free RAG systems
About MonoQwen-Vision
MonoQwen2-VL-v0.1 is the first visual document reranker designed to improve the quality of retrieved visual documents in Retrieval-Augmented Generation (RAG) pipelines. Traditional RAG approaches rely on converting documents into text using Optical Character Recognition (OCR), which is time-consuming and can lose information, especially for non-textual elements such as graphs and tables. MonoQwen2-VL-v0.1 addresses these limitations by leveraging Visual Language Models (VLMs) that process images directly, eliminating the need for OCR and preserving the integrity of visual content. The reranker operates in a two-stage pipeline: first, separate encoding of queries and documents produces a pool of candidate documents; then a cross-encoding model reranks these candidates by their relevance to the query. By training a Low-Rank Adaptation (LoRA) on top of the Qwen2-VL-2B-Instruct model, MonoQwen2-VL-v0.1 achieves high performance without significant memory overhead.
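The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the model's actual implementation: `score_separate` and `score_cross` are hypothetical stand-ins for a first-stage retriever's similarity score and for a MonoQwen2-VL-style cross-encoder's relevance score, respectively.

```python
from typing import Callable, List, Tuple


def two_stage_rerank(
    query: str,
    documents: List[str],
    score_separate: Callable[[str, str], float],  # stage 1: cheap separate-encoding score (hypothetical)
    score_cross: Callable[[str, str], float],     # stage 2: expensive cross-encoding score (hypothetical)
    k_candidates: int = 10,
    k_final: int = 3,
) -> List[Tuple[str, float]]:
    """Retrieve a candidate pool with a fast scorer, then rerank it with a cross-encoder."""
    # Stage 1: score every document cheaply and keep the top candidates.
    candidates = sorted(
        documents, key=lambda d: score_separate(query, d), reverse=True
    )[:k_candidates]
    # Stage 2: rescore only the small candidate pool with the cross-encoder,
    # which sees query and document jointly and is more accurate but slower.
    reranked = sorted(
        ((d, score_cross(query, d)) for d in candidates),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return reranked[:k_final]


# Toy usage with a word-overlap scorer standing in for both models.
docs = ["apple pie recipe", "car engine manual", "apple orchard map"]
overlap = lambda q, d: float(len(set(q.split()) & set(d.split())))
top = two_stage_rerank("apple recipe", docs, overlap, overlap, k_candidates=2, k_final=1)
```

In the real pipeline, stage 1 would embed document images and queries independently (so document embeddings can be precomputed and indexed), while stage 2 runs the VLM-based reranker only on the retrieved candidates, keeping the expensive joint scoring off the full corpus.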