AWS Inferentia (Amazon) vs. Exafunction

About: AWS Inferentia
AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost for your deep learning (DL) inference applications. The first-generation AWS Inferentia accelerator powers Amazon Elastic Compute Cloud (Amazon EC2) Inf1 instances, which deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable GPU-based Amazon EC2 instances. Many customers, including Airbnb, Snap, Sprinklr, Money Forward, and Amazon Alexa, have adopted Inf1 instances and realized their performance and cost benefits. The first-generation Inferentia has 8 GB of DDR4 memory per accelerator and also features a large amount of on-chip memory. Inferentia2 offers 32 GB of HBM2e per accelerator, increasing the total memory by 4x and memory bandwidth by 10x over Inferentia.
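A minimal sketch of how a PyTorch model is typically prepared for an Inf1 instance, assuming the AWS Neuron SDK's torch-neuron tracing interface; the model and input shape here are illustrative, not taken from the comparison above.

```python
# Minimal sketch, assuming the AWS Neuron SDK's torch-neuron tracing
# interface for Inf1; the model and input shape are illustrative.
import torch
import torch_neuron  # noqa: F401  (registers the torch.neuron namespace)
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
example_input = torch.zeros(1, 3, 224, 224)

# Compile the model for the Inferentia NeuronCores by tracing it with a sample input.
model_neuron = torch.neuron.trace(model, example_inputs=[example_input])

# Save the compiled TorchScript artifact for deployment on the Inf1 instance.
model_neuron.save("resnet50_neuron.pt")
```

On the Inf1 instance, the saved artifact can then be loaded with torch.jit.load("resnet50_neuron.pt") and invoked like an ordinary PyTorch module.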

About: Exafunction
Exafunction optimizes your deep learning inference workload, delivering up to a 10x improvement in resource utilization and cost. Focus on building your deep learning application, not on managing clusters and fine-tuning performance. In most deep learning applications, CPU, I/O, and network bottlenecks lead to poor utilization of GPU hardware. Exafunction moves any GPU code to highly utilized remote resources, even spot instances, while your core logic remains on an inexpensive CPU instance. Exafunction is battle-tested on applications like large-scale autonomous vehicle simulation, where workloads involve complex custom models, require numerical reproducibility, and use thousands of GPUs concurrently. Exafunction supports models from major deep learning frameworks and inference runtimes. Models and dependencies such as custom operators are versioned, so you can always be confident you're getting the right results.
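The pattern described above (cheap CPU client, versioned models served from pooled remote GPU workers) can be sketched as follows. This is an illustrative sketch only: all names, the endpoint, and the URL scheme are hypothetical, and this is not Exafunction's actual API.

```python
# Illustrative sketch only: the CPU-client / remote-GPU-worker split described
# above. All names (RemoteModelClient, the endpoint, the URL scheme) are
# hypothetical -- this is not Exafunction's actual API.
import io

import numpy as np
import requests  # assumed transport; the real system may use gRPC instead


class RemoteModelClient:
    """Runs on an inexpensive CPU instance and forwards inference calls
    to a pool of remote (possibly spot) GPU workers."""

    def __init__(self, endpoint: str, model_name: str, model_version: str):
        self.endpoint = endpoint
        self.model_name = model_name
        # Pinning a model version keeps results reproducible even as the
        # remote pool is updated with new models or custom operators.
        self.model_version = model_version

    def infer(self, batch: np.ndarray) -> np.ndarray:
        buf = io.BytesIO()
        np.save(buf, batch)  # serialize the input batch on the CPU side
        resp = requests.post(
            f"{self.endpoint}/models/{self.model_name}/{self.model_version}:infer",
            data=buf.getvalue(),
            timeout=30,
        )
        resp.raise_for_status()
        return np.load(io.BytesIO(resp.content))  # deserialize the outputs


# Hypothetical usage: core application logic stays on the CPU host while the
# GPU-heavy model runs remotely.
# client = RemoteModelClient("http://gpu-pool.internal:8080", "detector", "v42")
# outputs = client.infer(np.zeros((1, 3, 512, 512), dtype=np.float32))
```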

Platforms Supported: AWS Inferentia
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported: Exafunction
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience: AWS Inferentia
Companies searching for an advanced Deep Learning solution

Audience: Exafunction
Enterprises searching for a solution to optimize their deep learning inference workload

Support: AWS Inferentia
Phone Support
24/7 Live Support
Online

Support: Exafunction
Phone Support
24/7 Live Support
Online

API: AWS Inferentia
Offers API

API: Exafunction
Offers API

Pricing: AWS Inferentia
No information available.
Free Version
Free Trial

Pricing: Exafunction
No information available.
Free Version
Free Trial

Training: AWS Inferentia
Documentation
Webinars
Live Online
In Person

Training: Exafunction
Documentation
Webinars
Live Online
In Person

Company Information: Amazon
Founded: 2006
United States
aws.amazon.com/machine-learning/inferentia/

Company Information: Exafunction
exafunction.com

Integrations: AWS Inferentia
AWS EC2 Trn3 Instances
AWS Parallel Computing Service
Amazon EC2 Inf1 Instances
Amazon EC2 Trn1 Instances
Anyscale
PyTorch
TensorFlow
WithoutBG

Integrations: Exafunction
AWS EC2 Trn3 Instances
AWS Parallel Computing Service
Amazon EC2 Inf1 Instances
Amazon EC2 Trn1 Instances
Anyscale
PyTorch
TensorFlow
WithoutBG