Advanced multilingual LLM with enhanced reasoning and code generation
Agentic 24B LLM optimized for coding tasks with 128k context support
Dia-1.6B generates lifelike English dialogue and vocal expressions
Compact 360M text model with high efficiency and fine-tuning support
Lightweight 361M dense model for text generation and pretraining tasks
Compact post-trained LLM for text generation using Transformers
Small post-trained text model with PaddlePaddle optimization
Text-only ERNIE 4.5 MoE model post-trained for language tasks
21B-parameter text MoE model for powerful multilingual generation
21B-parameter text-only MoE model by Baidu, fine-tuned for reasoning
Baidu’s 21B MoE language model optimized for PaddlePaddle inference
ERNIE 4.5 MoE model with ultra-efficient 2-bit quantization for inference
Post-trained ERNIE 4.5 model for efficient, high-quality text tasks
Large-scale MoE text model optimized for reasoning and generation
ERNIE 4.5 MoE model in FP8 for efficient high-performance inference
Post-trained ERNIE 4.5 MoE text model with 300B parameters
Powerful text-only ERNIE 4.5 MoE model with 300B parameters
ERNIE 4.5 MoE model with 4/8-bit quantization for fast, efficient inference
Pretrained multimodal MoE model for complex text and vision tasks
Multimodal model with 28B parameters for text and vision tasks
Multimodal ERNIE 4.5 MoE model for advanced vision-language tasks
Multimodal ERNIE 4.5 MoE model for image-text reasoning and chat
Multimodal MoE model fine-tuned for text and visual comprehension
Latent diffusion model generating high-quality text-to-image outputs
Advanced multimodal ERNIE model for vision-language reasoning
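Several of the text models listed above are distributed as Hugging Face Transformers checkpoints. The sketch below shows one plausible way to load and prompt such a model; the repo ID is an illustrative assumption, not taken from this list, so substitute the ID and any trust_remote_code requirement from the actual model card.

```python
# Minimal sketch: loading one of the post-trained text models above with
# Hugging Face Transformers. The repo ID is a hypothetical placeholder for
# illustration; replace it with the ID from the model card you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "baidu/ERNIE-4.5-0.3B-PT"  # illustrative repo ID (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Generate a short completion from a plain-text prompt.
inputs = tokenizer("Explain mixture-of-experts in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The quantized (2-bit, 4/8-bit, FP8) and PaddlePaddle-optimized variants above typically require their own runtimes or loading flags, so consult each model card rather than assuming this Transformers path applies.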