
Related Products

  • Vertex AI (783 Ratings)
  • RunPod (180 Ratings)
  • Ango Hub (15 Ratings)
  • Google Compute Engine (1,147 Ratings)
  • OORT DataHub (13 Ratings)
  • LM-Kit.NET (23 Ratings)
  • Google AI Studio (11 Ratings)
  • Fraud.net (56 Ratings)
  • Gr4vy (5 Ratings)
  • StackAI (42 Ratings)

About

Amazon EC2 Trn2 instances, powered by AWS Trainium2 chips, are purpose-built for high-performance deep learning training of generative AI models, including large language models and diffusion models. They offer up to 50% cost-to-train savings over comparable Amazon EC2 instances. Trn2 instances support up to 16 Trainium2 accelerators, providing up to 3 petaflops of FP16/BF16 compute power and 512 GB of high-bandwidth memory. To facilitate efficient data and model parallelism, Trn2 instances feature NeuronLink, a high-speed, nonblocking interconnect, and support up to 1600 Gbps of second-generation Elastic Fabric Adapter (EFAv2) network bandwidth. They are deployed in EC2 UltraClusters, enabling scaling up to 30,000 Trainium2 chips interconnected with a nonblocking petabit-scale network, delivering 6 exaflops of compute performance. The AWS Neuron SDK integrates natively with popular machine learning frameworks like PyTorch and TensorFlow.
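
As an illustration of the framework integration mentioned above, the sketch below shows a minimal PyTorch training step targeting a Trainium device through torch-xla, which the Neuron SDK's PyTorch support builds on. It assumes a Trn2 instance with the Neuron SDK and the torch-neuronx/torch-xla packages installed; the model, shapes, and hyperparameters are placeholders, not an official AWS example.

  import torch
  import torch_xla.core.xla_model as xm

  # Place the model and data on the XLA device exposed by the Neuron runtime.
  device = xm.xla_device()
  model = torch.nn.Linear(1024, 1024).to(device)
  optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
  loss_fn = torch.nn.MSELoss()

  for step in range(10):
      # Random tensors stand in for a real training batch.
      x = torch.randn(32, 1024).to(device)
      y = torch.randn(32, 1024).to(device)
      optimizer.zero_grad()
      loss = loss_fn(model(x), y)
      loss.backward()
      optimizer.step()
      xm.mark_step()  # flush the lazily built XLA graph for execution on the accelerator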

About

Up to 8 NVIDIA® H100 80GB GPUs per server, each containing 16,896 CUDA cores and 528 Tensor Cores. This is the current flagship silicon from NVIDIA®, unbeaten in raw performance for AI workloads. We deploy the SXM5 NVLink module, which offers a memory bandwidth of 2.6 TB/s and up to 900 GB/s of P2P bandwidth, paired with fourth-generation AMD EPYC Genoa CPUs offering up to 384 threads and a boost clock of 3.7 GHz. For A100 servers we only use the SXM4 'for NVLINK' module, which offers a memory bandwidth of over 2 TB/s and up to 600 GB/s of P2P bandwidth, paired with second-generation AMD EPYC Rome CPUs offering up to 192 threads and a boost clock of 3.3 GHz. The name 8A100.176V is composed as follows: 8x A100 GPUs, 176 CPU core threads, and virtualized. Despite having fewer Tensor Cores than the V100, the A100 processes tensor operations faster thanks to its newer architecture. V100 servers are paired with second-generation AMD EPYC Rome CPUs offering up to 96 threads and a boost clock of 3.35 GHz.
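
To show what the multi-GPU topology described above looks like from software, here is a minimal sketch that enumerates the GPUs on such a server and checks whether direct peer-to-peer access (the path NVLink provides) is available between each pair. It assumes only that PyTorch with CUDA support is installed on the instance; it reports P2P availability, not the actual NVLink bandwidth.

  import torch

  n = torch.cuda.device_count()

  # List each visible GPU with its name and memory capacity.
  for i in range(n):
      props = torch.cuda.get_device_properties(i)
      print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")

  # Check pairwise peer-to-peer access, which NVLink-connected GPUs should report.
  for a in range(n):
      for b in range(n):
          if a != b and torch.cuda.can_device_access_peer(a, b):
              print(f"GPU {a} can access GPU {b} directly (P2P)")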

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience

Companies in search of a solution to train their large-scale deep learning and generative AI models

Audience

IT teams searching for a premium dedicated GPU server solution

Support

Phone Support
24/7 Live Support
Online

Support

Phone Support
24/7 Live Support
Online

API

Offers API

API

Offers API

Pricing

No information available.
Free Version
Free Trial

Pricing

$3.01 per hour
Free Version
Free Trial

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training

Documentation
Webinars
Live Online
In Person

Training

Documentation
Webinars
Live Online
In Person

Company Information

Amazon
Founded: 1994
United States
aws.amazon.com/ec2/instance-types/trn2/

Company Information

DataCrunch
Finland
datacrunch.io

Alternatives

AWS Neuron
Amazon Web Services

AWS Trainium
Amazon Web Services

Integrations

AWS Deep Learning AMIs
AWS Nitro System
AWS Trainium
Amazon EC2
Amazon EC2 Capacity Blocks for ML
Amazon EC2 G5 Instances
Amazon EC2 Inf1 Instances
Amazon EC2 P4 Instances
Amazon EC2 P5 Instances
Amazon EC2 Trn1 Instances
Amazon EC2 UltraClusters
Amazon EKS
Amazon Elastic Container Service (Amazon ECS)
Amazon SageMaker
Amazon Web Services (AWS)
Datadog
PyTorch
Ray
TensorFlow
WaveSpeedAI
