MXNet vs. Neural Magic

Related Products

  • Vertex AI (783 Ratings)
  • Cloudflare (1,915 Ratings)
  • RunPod (205 Ratings)
  • Fraud.net (56 Ratings)
  • Qloo (23 Ratings)
  • Google AI Studio (11 Ratings)
  • LM-Kit.NET (23 Ratings)
  • OORT DataHub (13 Ratings)
  • StackAI (47 Ratings)
  • LabWare LIMS (113 Ratings)

About (MXNet)

A hybrid front-end transitions seamlessly between Gluon's eager imperative mode and symbolic mode, providing both flexibility and speed. Scalable distributed training and performance optimization in research and production are enabled by the dual Parameter Server and Horovod support. MXNet integrates deeply with Python and also supports Scala, Julia, Clojure, Java, C++, R, and Perl. A thriving ecosystem of tools and libraries extends MXNet and enables use cases in computer vision, NLP, time series, and more. Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision-making process have stabilized in a manner consistent with other successful ASF projects. Join the MXNet scientific community to contribute, learn, and get answers to your questions.
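
As a rough sketch of the hybrid front-end described above, the Python snippet below builds a small Gluon network, runs it eagerly, then calls hybridize() to switch the same model to the optimized symbolic mode. The layer sizes and input shape are illustrative assumptions, not part of the product description.

    import mxnet as mx
    from mxnet.gluon import nn

    # Define a small network with Gluon's imperative (eager) API.
    net = nn.HybridSequential()
    net.add(nn.Dense(64, activation="relu"),
            nn.Dense(10))
    net.initialize(mx.init.Xavier())

    x = mx.nd.random.uniform(shape=(4, 128))  # dummy batch of 4 samples

    # Eager execution: easy to debug and inspect intermediate values.
    eager_out = net(x)

    # hybridize() compiles the same model into a symbolic graph,
    # trading some flexibility for speed.
    net.hybridize()
    symbolic_out = net(x)

    print(eager_out.shape, symbolic_out.shape)  # (4, 10) (4, 10)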

About (Neural Magic)

GPUs bring data in and out quickly, but have little locality of reference because of their small caches. They are geared toward applying a lot of compute to a little data, not a little compute to a lot of data. The networks designed to run on them therefore execute full layer after full layer in order to saturate the computational pipeline. To handle large models, given their small memory (tens of gigabytes), GPUs are grouped together and models are distributed across them, creating a complex and painful software stack complicated by the need to manage many levels of communication and synchronization among separate machines. CPUs, on the other hand, have much larger and faster caches than GPUs, and an abundance of memory (terabytes); a typical CPU server can have memory equivalent to tens or even hundreds of GPUs. CPUs are well suited to a brain-like ML world in which parts of an extremely large network are executed piecemeal, as needed.
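
The CPU-centric execution described above is what Neural Magic's DeepSparse engine provides. Below is a minimal, non-authoritative sketch assuming the deepsparse Python package and its compile_model quickstart interface; the ONNX path and input shape are placeholders, and exact API details may vary by version.

    import numpy as np
    from deepsparse import compile_model

    # Placeholder: path to a (preferably sparsified) ONNX model.
    onnx_path = "model.onnx"
    batch_size = 1

    # Compile the model for CPU execution. The engine exploits sparsity and
    # the CPU's large caches to execute the network piecemeal instead of
    # streaming full dense layers as a GPU pipeline would.
    engine = compile_model(onnx_path, batch_size=batch_size)

    # Dummy input; shape assumed to match the model (e.g. 224x224 RGB).
    inputs = [np.random.rand(batch_size, 3, 224, 224).astype(np.float32)]
    outputs = engine.run(inputs)
    print([o.shape for o in outputs])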

Platforms Supported (MXNet)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (Neural Magic)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (MXNet)

Developers and researchers requiring an open-source deep learning framework for research prototyping and production

Audience (Neural Magic)

Companies doing AI and ML development

Support (MXNet)

Phone Support
24/7 Live Support
Online

Support (Neural Magic)

Phone Support
24/7 Live Support
Online

API (MXNet)

Offers API

API (Neural Magic)

Offers API


Pricing (MXNet)

No information available.
Free Version
Free Trial

Pricing (Neural Magic)

No information available.
Free Version
Free Trial

Reviews/Ratings (MXNet)

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Reviews/Ratings (Neural Magic)

Overall 0.0 / 5
ease 0.0 / 5
features 0.0 / 5
design 0.0 / 5
support 0.0 / 5

This software hasn't been reviewed yet.

Training (MXNet)

Documentation
Webinars
Live Online
In Person

Training (Neural Magic)

Documentation
Webinars
Live Online
In Person

Company Information (MXNet)

The Apache Software Foundation
Founded: 1999
United States
mxnet.apache.org

Company Information (Neural Magic)

Neural Magic
Founded: 2018
United States
neuralmagic.com

Alternatives

Neural Designer (Artelnics)
Caffe (BAIR)


Integrations (MXNet)

AWS Elastic Fabric Adapter (EFA)
AWS Marketplace
Amazon EC2 Inf1 Instances
Amazon EC2 P4 Instances
Amazon Elastic Inference
Amazon SageMaker Debugger
Amazon SageMaker Model Building
Cameralyze
Flower
GPUonCLOUD
Google Cloud Deep Learning VM Image
Gradient
Guild AI
Horovod
LeaderGPU
MLReef
NVIDIA Triton Inference Server
Ultralytics

Integrations (Neural Magic)

AWS Elastic Fabric Adapter (EFA)
AWS Marketplace
Amazon EC2 Inf1 Instances
Amazon EC2 P4 Instances
Amazon Elastic Inference
Amazon SageMaker Debugger
Amazon SageMaker Model Building
Cameralyze
Flower
GPUonCLOUD
Google Cloud Deep Learning VM Image
Gradient
Guild AI
Horovod
LeaderGPU
MLReef
NVIDIA Triton Inference Server
Ultralytics