Fast, accurate zero-shot time series forecasting with T5 encoder
Fast, efficient T5-based model for zero-shot time series forecasting
Time series forecasting model using T5 architecture with 46M params
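The three forecasting entries above read like Chronos-family checkpoints (the 46M-parameter T5 model matches amazon/chronos-t5-small); that mapping is a guess, not stated in the entries. Under that assumption, a minimal zero-shot forecast with the chronos-forecasting package looks like this:

```python
# Minimal zero-shot forecast sketch. Assumes the 46M-parameter T5 entry above
# is amazon/chronos-t5-small and that `pip install chronos-forecasting` was run.
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",  # assumption: the 46M-param checkpoint
    device_map="cpu",
    torch_dtype=torch.float32,
)

# Any 1-D history works; no fine-tuning is needed for zero-shot forecasting.
context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])
forecast = pipeline.predict(context, prediction_length=4)  # [series, samples, horizon]

# Collapse the sample dimension into a point (median) forecast.
print(forecast[0].quantile(0.5, dim=0))
```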
Vision-language model for zero-shot image classification with CLIP
Zero-shot image-text matching with ViT-B/32 Transformer encoder
Zero-shot image-text model for classification and similarity tasks
CLIP model for zero-shot image-text tasks at 336x336 input resolution
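The CLIP entries above all follow the same zero-shot classification pattern; the ViT-B/32 line suggests the openai/clip-vit-base-patch32 checkpoint (an assumption), used here as the example:

```python
# Zero-shot image classification with CLIP. Checkpoint id inferred from the
# ViT-B/32 entry above; any CLIP variant is used the same way.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog"]

# Score each candidate caption against the image.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # per-label probabilities
print(dict(zip(labels, probs[0].tolist())))
```

The softmax over logits_per_image turns the raw image-text similarity scores into probabilities over the candidate labels, which is all zero-shot classification amounts to here.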
CLIP-based model for text-driven zero/one-shot image segmentation
Scalable BERT-based retrieval with late interaction for fast search
CSM-1B is a speech generation model that creates realistic voice audio
Improved DeBERTa model with ELECTRA-style pretraining
Distilled version of BERT, optimized for speed and efficiency
DistilBERT-based sentiment analysis model fine-tuned on SST-2
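This description matches distilbert-base-uncased-finetuned-sst-2-english, the default checkpoint behind the transformers sentiment pipeline; assuming that identification, usage is a one-liner:

```python
from transformers import pipeline

# Checkpoint id inferred from the description above; this is an assumption.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("I loved this movie!"))  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```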
DistilGPT2: Lightweight, distilled GPT-2 for faster text generation
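DistilGPT2 drops in wherever GPT-2 would be used; a minimal generation sketch via the transformers pipeline:

```python
from transformers import pipeline

# "distilgpt2" is the Hugging Face Hub id for DistilGPT2.
generator = pipeline("text-generation", model="distilgpt2")
print(generator("Distillation makes language models", max_new_tokens=20))
```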
Uncensored 34B model fine-tuned for conversation, code, and agents
Transformer model efficiently pretrained to distinguish real from fake input tokens
Protein language model trained for sequence understanding and downstream tasks
3B parameter ESM-2 model for protein sequence understanding
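The 3B entry sounds like facebook/esm2_t36_3B_UR50D (an assumption); all ESM-2 sizes share the same embedding-extraction code, shown here with the small 8M variant for easy local runs:

```python
import torch
from transformers import AutoTokenizer, EsmModel

# facebook/esm2_t6_8M_UR50D stands in for the 3B checkpoint
# (facebook/esm2_t36_3B_UR50D, itself inferred from the description above).
model_id = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EsmModel.from_pretrained(model_id)

# Per-residue embeddings for a protein sequence (single-letter amino acid codes).
inputs = tokenizer("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```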
ViT-based model that estimates a person's age group from an image
Falcon-40B is a powerful open-source 40B-parameter language model
CLIP model fine-tuned for zero-shot fashion product classification
Compact, state-of-the-art LLM by Google for text generation tasks
Tiny pre-trained IBM model for multivariate time series forecasting
Grok-1 is a 314B-parameter open-weight language model by xAI
Task-adaptive multilingual embeddings covering 94 languages and a range of NLP tasks
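The exact checkpoint behind this last entry isn't stated, so the sketch below uses sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 purely as a stand-in multilingual embedding model; swap in the real model id:

```python
from sentence_transformers import SentenceTransformer, util

# Stand-in multilingual checkpoint; replace with the actual model id.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Cross-lingual semantic similarity: the same sentence in English and French.
sentences = ["The weather is lovely today.", "Il fait très beau aujourd'hui."]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```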