Compare DataCrunch vs. Modal

About (DataCrunch)

We offer dedicated GPU servers built around three NVIDIA® instance families.

H100 instances provide up to 8 NVIDIA® H100 80GB GPUs, each containing 16,896 CUDA cores and 528 Tensor Cores. This is the current flagship silicon from NVIDIA®, unbeaten in raw performance for AI workloads. We deploy the SXM5 NVLink module, which offers a memory bandwidth of 3.35 TB/s and up to 900 GB/s P2P bandwidth, paired with fourth-generation AMD EPYC (Genoa) CPUs: up to 384 threads with a boost clock of 3.7 GHz.

A100 instances use only the SXM4 NVLink module, which offers a memory bandwidth of over 2 TB/s and up to 600 GB/s P2P bandwidth, paired with second-generation AMD EPYC (Rome) CPUs: up to 192 threads with a boost clock of 3.3 GHz. The instance name 8A100.176V is composed as follows: 8x NVIDIA A100, 176 CPU threads, virtualized. Despite having fewer Tensor Cores than the V100, the A100 processes tensor operations faster due to its newer architecture.

V100 instances are paired with second-generation AMD EPYC (Rome) CPUs: up to 96 threads with a boost clock of 3.35 GHz.

About (Modal)

We built our container system from scratch in Rust for the fastest cold-start times. Scale to hundreds of GPUs and back down to zero in seconds, and pay only for what you use. Deploy functions to the cloud in seconds, with custom container images and hardware requirements, without ever writing a single line of YAML. Startups and academic researchers can get up to $25k in free compute credits on Modal, usable for GPU compute and for accessing in-demand GPU types. Modal measures CPU utilization continuously as a number of fractional physical cores, where each physical core is equivalent to 2 vCPUs; memory consumption is likewise measured continuously. For both CPU and memory, you pay only for what you actually use, and nothing more.
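To make that workflow concrete, here is a minimal sketch of a Modal GPU function, assuming Modal's public Python SDK (modal.App, modal.Image.debian_slim, @app.function, @app.local_entrypoint); the benchmark function itself is hypothetical, and exact parameter names may vary between SDK versions.

import modal

app = modal.App("gpu-sketch")

# Build a custom container image declaratively, with no YAML involved.
image = modal.Image.debian_slim().pip_install("torch")

@app.function(gpu="H100", image=image, timeout=600)
def matmul_seconds(n: int = 4096) -> float:
    """Runs in Modal's cloud on an H100; the container cold-starts on
    demand and scales back to zero when idle."""
    import time
    import torch
    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")
    torch.cuda.synchronize()
    start = time.time()
    (a @ b).sum().item()
    torch.cuda.synchronize()
    return time.time() - start

@app.local_entrypoint()
def main():
    # Launch with `modal run sketch.py`; .remote() executes the call in the cloud.
    print(f"4096x4096 matmul took {matmul_seconds.remote():.3f}s")

Under the usage-based model described above, a call like this is billed only for the seconds the container is actually running.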

Platforms Supported (DataCrunch)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (Modal)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (DataCrunch)

IT teams searching for a premium dedicated GPU server solution

Audience (Modal)

Companies looking for a solution to run generative AI models

Support (DataCrunch)

Phone Support
24/7 Live Support
Online

Support (Modal)

Phone Support
24/7 Live Support
Online

API (DataCrunch)

Offers API

API (Modal)

Offers API

Pricing (DataCrunch)

$3.01 per hour
Free Version
Free Trial

Pricing (Modal)

$0.192 per core per hour
Free Version
Free Trial
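
As a rough worked example of Modal's usage-based billing (using the listed $0.192 per core per hour rate and the stated equivalence of one physical core to 2 vCPUs): a function averaging 0.5 physical cores (1 vCPU) over 12 minutes of runtime would cost about 0.5 core × 0.2 h × $0.192/core-hour ≈ $0.02 in CPU charges, with nothing billed while the function is scaled to zero; memory and GPU usage are metered separately.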

Reviews/Ratings (DataCrunch)

No reviews yet.

Reviews/Ratings (Modal)

No reviews yet.

Training (DataCrunch)

Documentation
Webinars
Live Online
In Person

Training (Modal)

Documentation
Webinars
Live Online
In Person

Company Information (DataCrunch)

DataCrunch
Finland
datacrunch.io

Company Information (Modal)

Modal Labs
United States
modal.com

Alternatives (DataCrunch)

Spot Ocean (Spot by NetApp)

Alternatives (Modal)

Spot Ocean (Spot by NetApp)

Integrations (DataCrunch)

Python
WaveSpeedAI

Integrations (Modal)

Python
WaveSpeedAI