Instruction-tuned 1.5B Qwen2.5 model for chat and RL fine-tuning
Powerful 14B LLM with strong instruction following and long-text handling
Qwen2.5-VL-3B-Instruct: Multimodal model for chat, vision & video
Multimodal 7B model for image, video, and text understanding tasks
Multilingual 3B LLM optimized for reasoning, math, and long contexts
YOLOv8/YOLOv9-based detector for face, hand, person, and fashion data
Fast, lightweight model for sentence embeddings and similarity tasks
Compact, efficient model for sentence embeddings and semantic search
Semantic sentence embeddings for clustering and search tasks
Compact multi-vector retriever with state-of-the-art ranking accuracy
Summarization model fine-tuned on CNN/DailyMail articles
Zero-shot classification with BART fine-tuned on MultiNLI data
Cased English BERT model for sentence-level NLP tasks
BERT-based Chinese language model for fill-mask and NLP tasks
Multilingual BERT model trained on 104 Wikipedia languages
BERTimbau: BERT model pretrained for Brazilian Portuguese NLP
BERT-base-uncased: Foundational uncased English model for NLP tasks
Efficient English embedding model for semantic search and retrieval
BGE-Large v1.5: High-accuracy English embedding model for retrieval
BGE-M3: Multilingual embedding model supporting dense, sparse, and multi-vector retrieval (usage sketch after this list)
Compact English sentence embedding model for semantic search tasks
Image captioning model trained on COCO using BLIP base architecture
Large BLIP model for high-quality, flexible image captioning tasks
Multilingual 176B language model for text and code generation tasks
Bilingual 6.2B parameter chatbot optimized for Chinese and English
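Several of the entries above are sentence-embedding models for semantic search (e.g. BGE-M3 and the compact MiniLM-style encoders). The snippet below is a minimal sketch of how such a model is typically loaded and queried through the sentence-transformers library; the model ID "BAAI/bge-m3" and the example sentences are illustrative assumptions rather than a reference to a specific checkpoint in this list.

```python
# Minimal semantic-search sketch with a sentence-embedding model.
# Assumes the checkpoint is published in the sentence-transformers format;
# the model ID below is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-m3")

corpus = [
    "The cat sits on the mat.",
    "A quick brown fox jumps over the lazy dog.",
    "Transformers are widely used for semantic search.",
]
query = "Which sentence is about information retrieval?"

# Encode the corpus and the query into normalized dense vectors.
corpus_emb = model.encode(corpus, convert_to_tensor=True, normalize_embeddings=True)
query_emb = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

# Rank corpus sentences by cosine similarity to the query.
scores = util.cos_sim(query_emb, corpus_emb)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {sentence}")
```

The same pattern applies to the other embedding entries in this list by swapping in the corresponding model ID.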