AWS Inferentia (Amazon) vs. Amazon EC2 P5 Instances (Amazon)

About AWS Inferentia
AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost for your deep learning (DL) inference applications. The first-generation AWS Inferentia accelerator powers Amazon Elastic Compute Cloud (Amazon EC2) Inf1 instances, which deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable GPU-based Amazon EC2 instances. Many customers, including Airbnb, Snap, Sprinklr, Money Forward, and Amazon Alexa, have adopted Inf1 instances and realized their performance and cost benefits. The first-generation Inferentia has 8 GB of DDR4 memory per accelerator, along with a large amount of on-chip memory. Inferentia2 offers 32 GB of HBM2e per accelerator, increasing the total memory by 4x and the memory bandwidth by 10x over Inferentia.
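As a quick sanity check, the per-accelerator scaling follows directly from the memory figures quoted above (a rough illustration only; the 10x bandwidth figure is quoted directly and is not derivable from these numbers):

```python
# Per-accelerator memory figures quoted in the description above.
inferentia1_memory_gb = 8    # first-generation Inferentia (DDR4)
inferentia2_memory_gb = 32   # Inferentia2 (HBM2e)

# 32 / 8 = 4, matching the quoted 4x total-memory increase
# (equal accelerator counts per instance are assumed here).
memory_scaling = inferentia2_memory_gb / inferentia1_memory_gb
print(f"Inferentia2 per-accelerator memory scaling: {memory_scaling:.0f}x")
```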

About Amazon EC2 P5 Instances
Amazon Elastic Compute Cloud (Amazon EC2) P5 instances, powered by NVIDIA H100 Tensor Core GPUs, and P5e and P5en instances, powered by NVIDIA H200 Tensor Core GPUs, deliver the highest performance in Amazon EC2 for deep learning and high-performance computing (HPC) applications. They help you accelerate your time to solution by up to 4x compared to previous-generation GPU-based EC2 instances and reduce the cost to train ML models by up to 40%, so you can iterate on your solutions faster and get to market more quickly. You can use P5, P5e, and P5en instances to train and deploy increasingly complex large language models and diffusion models powering the most demanding generative AI applications, including question answering, code generation, video and image generation, and speech recognition. You can also use these instances to deploy demanding HPC applications at scale, such as pharmaceutical discovery.

Platforms Supported (both)
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (AWS Inferentia)
Companies searching for an advanced deep learning inference solution

Audience (Amazon EC2 P5 Instances)
Organizations looking to accelerate their deep learning and high-performance computing applications

Support (both)
Phone Support
24/7 Live Support
Online

API (both)
Offers API

Pricing (both)
No information available.
Free Version
Free Trial

Training (both)
Documentation
Webinars
Live Online
In Person

Company Information (AWS Inferentia)
Amazon
Founded: 2006
United States
aws.amazon.com/machine-learning/inferentia/

Company Information (Amazon EC2 P5 Instances)
Amazon
Founded: 1994
United States
aws.amazon.com/ec2/instance-types/p5/

Integrations (both)
Amazon EC2 Inf1 Instances
Amazon EC2 Trn1 Instances
AWS Deep Learning Containers
AWS EC2 Trn3 Instances
AWS Neuron
AWS Nitro System
AWS Trainium
Amazon EC2 Capacity Blocks for ML
Amazon EC2 P4 Instances
Amazon EC2 Trn2 Instances