LTM-2-mini vs. Yi-Large

LTM-2-mini (Magic AI)
Yi-Large (01.AI)

About (LTM-2-mini)

LTM-2-mini is a model with a 100M token context window. 100M tokens equals roughly 10 million lines of code or about 750 novels. For each decoded token, LTM-2-mini's sequence-dimension algorithm is roughly 1,000x cheaper than the attention mechanism in Llama 3.1 405B at a 100M token context window. The contrast in memory requirements is even larger: serving Llama 3.1 405B with a 100M token context requires 638 H100 GPUs per user just to store a single 100M token KV cache, while LTM-2-mini needs only a small fraction of a single H100's HBM per user for the same context.
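The hundreds-of-H100s figure can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes Llama 3.1 405B's publicly reported hyperparameters (126 layers, 8 grouped-query KV heads, head dimension 128) and fp16 cache entries; none of these numbers come from this page, and different accounting choices shift the result slightly.

```python
# Back-of-envelope KV-cache sizing for a 100M-token context.
# Assumed (not from this page): Llama 3.1 405B uses 126 layers,
# 8 grouped-query KV heads, head dimension 128, fp16 (2 bytes).

def kv_cache_bytes(tokens, layers, kv_heads, head_dim, bytes_per_value=2):
    """Bytes needed to hold K and V for every token at every layer."""
    return tokens * layers * 2 * kv_heads * head_dim * bytes_per_value

ctx = 100_000_000                      # 100M-token context window
cache = kv_cache_bytes(ctx, layers=126, kv_heads=8, head_dim=128)
h100_hbm = 80 * 10**9                  # ~80 GB of HBM per H100

print(f"KV cache: {cache / 1e12:.1f} TB")          # ~51.6 TB
print(f"H100s just for the cache: {cache / h100_hbm:.0f}")
```

Under these assumptions the cache alone is about 51.6 TB, i.e. on the order of 645 H100s' worth of HBM, in the same ballpark as the 638 quoted above (which presumably uses slightly different accounting).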

About (Yi-Large)

Yi-Large is a proprietary large language model developed by 01.AI, offering a 32k context length with both input and output priced at $2 per million tokens. It stands out for its capabilities in natural language processing, common-sense reasoning, and multilingual support, performing on par with leading models such as GPT-4 and Claude 3 on various benchmarks. Yi-Large is designed for tasks requiring complex inference, prediction, and language understanding, making it suitable for applications such as knowledge search, data classification, and human-like chatbots. Its architecture is a decoder-only transformer with enhancements such as pre-normalization and grouped-query attention, trained on a large, high-quality multilingual dataset. This versatility and cost-efficiency make it a strong contender in the AI market, particularly for enterprises deploying AI solutions globally.
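At the quoted $2 per million tokens (input and output priced alike), per-request costs are simple arithmetic. The sketch below uses illustrative token counts; actual rates should be confirmed against current pricing.

```python
# Per-request cost at Yi-Large's quoted rate of $2 per 1M tokens,
# applied to input and output alike. Token counts are illustrative.

PRICE_PER_M = 2.00  # USD per 1M tokens (quoted above; may change)

def request_cost(input_tokens, output_tokens, price_per_m=PRICE_PER_M):
    """Dollar cost of one request at a flat per-token rate."""
    return (input_tokens + output_tokens) * price_per_m / 1_000_000

# Filling most of the 32k context and generating a 1k-token reply:
print(f"${request_cost(32_000, 1_000):.4f}")   # -> $0.0660
```

So even a request that nearly fills the 32k window costs only a few cents, which is the cost-efficiency argument made above.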

Platforms Supported (LTM-2-mini)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (Yi-Large)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (LTM-2-mini)

AI developers interested in a 100M token context model LLM

Audience (Yi-Large)

Developers, researchers, and enterprises seeking a high-performance AI model for advanced natural language processing, coding, and data-driven decision-making

Support (LTM-2-mini)

Phone Support
24/7 Live Support
Online

Support (Yi-Large)

Phone Support
24/7 Live Support
Online

API (LTM-2-mini)

Offers API

API (Yi-Large)

Offers API


Pricing (LTM-2-mini)

No information available.
Free Version
Free Trial

Pricing (Yi-Large)

$0.19 per 1M input tokens
Free Version
Free Trial

Reviews/Ratings (LTM-2-mini)

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings (Yi-Large)

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training (LTM-2-mini)

Documentation
Webinars
Live Online
In Person

Training (Yi-Large)

Documentation
Webinars
Live Online
In Person

Company Information (LTM-2-mini)

Magic AI
Founded: 2022
United States
magic.dev/

Company Information (Yi-Large)

01.AI
Founded: 2023
China
www.01.ai/

Alternatives

GPT-5 mini (OpenAI)
CodeQwen (Alibaba)
GPT-4o mini (OpenAI)
Mistral Large 2 (Mistral AI)
MiniMax M1 (MiniMax)
PaLM 2 (Google)
Qwen2 (Alibaba)

Integrations (LTM-2-mini)

DeepSeek R1
GitHub
Hugging Face
LLaMA-Factory
ModelScope

Integrations (Yi-Large)

DeepSeek R1
GitHub
Hugging Face
LLaMA-Factory
ModelScope