Alternatives to Steev
Compare Steev alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Steev in 2026. Compare features, ratings, user reviews, pricing, and more from Steev competitors and alternatives in order to make an informed decision for your business.
-
1
Union Cloud
Union.ai
Union.ai is an award-winning, Flyte-based data and ML orchestrator for scalable, reproducible ML pipelines. With Union.ai, you can write your code locally and easily deploy pipelines to remote Kubernetes clusters. “Flyte’s scalability, data lineage, and caching capabilities enable us to train hundreds of models on petabytes of geospatial data, giving us an edge in our business.” — Arno, CTO at Blackshark.ai. “With Flyte, we want to give the power back to biologists. We want to stand up something that they can play around with different parameters for their models because not every … parameter is fixed. We want to make sure we are giving them the power to run the analyses.” — Krishna Yeramsetty, Principal Data Scientist at Infinome. “Flyte plays a vital role as a key component of Gojek's ML Platform by providing exactly that.” — Pradithya Aria Pura, Principal Engineer at Gojek. Starting Price: Free (Flyte) -
2
VESSL AI
VESSL AI
Build, train, and deploy models faster at scale with fully managed infrastructure, tools, and workflows. Deploy custom AI & LLMs on any infrastructure in seconds and scale inference with ease. Handle your most demanding tasks with batch job scheduling, paying only per second of use. Optimize GPU costs with spot instances and built-in automatic failover. Train with a single YAML command, simplifying complex infrastructure setups. Automatically scale up workers during high traffic and scale down to zero during inactivity. Deploy cutting-edge models with persistent endpoints in a serverless environment, optimizing resource usage. Monitor system and inference metrics in real time, including worker count, GPU utilization, latency, and throughput. Efficiently conduct A/B testing by splitting traffic among multiple models for evaluation. Starting Price: $100 + compute/month -
3
DeepSpeed
Microsoft
DeepSpeed is an open source deep learning optimization library for PyTorch. It's designed to reduce computing power and memory use, and to train large distributed models with better parallelism on existing hardware. DeepSpeed is optimized for low-latency, high-throughput training. It can train deep learning models with over a hundred billion parameters on the current generation of GPU clusters, and models of up to 13 billion parameters on a single GPU. DeepSpeed is developed by Microsoft to offer distributed training for large-scale models; it's built on top of PyTorch and specializes in data parallelism. Starting Price: Free -
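DeepSpeed is typically configured through a JSON file passed to its launcher or to `deepspeed.initialize`. A minimal illustrative config enabling mixed precision and ZeRO stage-2 optimizer-state partitioning might look like this (the batch size and learning rate are example values, not recommendations):

```json
{
  "train_batch_size": 32,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 },
  "optimizer": {
    "type": "AdamW",
    "params": { "lr": 3e-5 }
  }
}
```

ZeRO stage 2 shards optimizer state and gradients across data-parallel workers, which is one of the mechanisms behind the memory savings described above.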
4
Amazon Nova Forge
Amazon
Amazon Nova Forge is a groundbreaking service that enables organizations to build their own frontier models by leveraging early Nova checkpoints and proprietary data. It provides complete flexibility across the full training lifecycle, including pre-training, mid-training, supervised fine-tuning, and reinforcement learning. With access to Nova-curated datasets and responsible AI tooling, customers can create powerful and safer custom models tailored to their domain. Nova Forge allows teams to mix their own datasets at the peak learning stage to maximize accuracy while preventing catastrophic forgetting. Companies across industries—from Reddit to Sony—use Nova Forge to consolidate ML workflows, accelerate innovation, and outperform specialized models. Hosted securely on AWS, it offers the most cost-effective, streamlined path to building next-generation AI systems. -
5
Tune Studio
NimbleBox
Tune Studio is an intuitive and versatile platform designed to streamline the fine-tuning of AI models with minimal effort. It empowers users to customize pre-trained machine learning models to suit their specific needs without requiring extensive technical expertise. With its user-friendly interface, Tune Studio simplifies the process of uploading datasets, configuring parameters, and deploying fine-tuned models efficiently. Whether you're working on NLP, computer vision, or other AI applications, Tune Studio offers robust tools to optimize performance, reduce training time, and accelerate AI development, making it ideal for both beginners and advanced users in the AI space. Starting Price: $10/user/month -
6
UpTrain
UpTrain
Get scores for factual accuracy, context retrieval quality, guideline adherence, tonality, and more. You can’t improve what you can’t measure. UpTrain continuously monitors your application's performance on multiple evaluation criteria and alerts you to regressions with automatic root cause analysis. UpTrain enables fast and robust experimentation across multiple prompts, model providers, and custom configurations by calculating quantitative scores for direct comparison and optimal prompt selection. Hallucinations have plagued LLMs since their inception. By quantifying the degree of hallucination and the quality of retrieved context, UpTrain helps detect responses with low factual accuracy and prevent them from reaching end users. -
7
Simplismart
Simplismart
Fine-tune and deploy AI models with Simplismart's fastest inference engine. Integrate with AWS, Azure, GCP, and many more cloud providers for simple, scalable, cost-effective deployment. Import open source models from popular online repositories or deploy your own custom model. Leverage your own cloud resources or let Simplismart host your model. With Simplismart, you can go far beyond AI model deployment: train, deploy, and observe any ML model, and realize increased inference speeds at lower costs. Import any dataset and fine-tune open source or custom models rapidly. Run multiple training experiments in parallel to speed up your workflow. Deploy any model on Simplismart's endpoints or in your own VPC or on-premise environment and see greater performance at lower costs. Streamlined and intuitive deployment is now a reality. Monitor GPU utilization and all your node clusters in one dashboard, and detect resource constraints and model inefficiencies on the go. -
8
Tencent Cloud TI Platform
Tencent
Tencent Cloud TI Platform is a one-stop machine learning service platform designed for AI engineers. It empowers AI development across the entire process, from data preprocessing to model building, model training, model evaluation, and model serving. Preconfigured with diverse algorithm components, it supports multiple algorithm frameworks to adapt to different AI use cases. With Tencent Cloud TI Platform, even AI beginners can have their models constructed automatically, making it much easier to complete the entire training process, and the platform's auto-tuning tool further enhances the efficiency of parameter tuning. CPU/GPU resources elastically respond to different computing power needs, with flexible billing modes. -
9
Cerebras
Cerebras
We’ve built the fastest AI accelerator, based on the largest processor in the industry, and made it easy to use. With Cerebras, blazing fast training, ultra low latency inference, and record-breaking time-to-solution enable you to achieve your most ambitious AI goals. How ambitious? We make it not just possible, but easy to continuously train language models with billions or even trillions of parameters – with near-perfect scaling from a single CS-2 system to massive Cerebras Wafer-Scale Clusters such as Andromeda, one of the largest AI supercomputers ever built. -
10
Exspanse
Exspanse
Exspanse streamlines the path from development to business value. Build, train, and rapidly deploy powerful machine learning models from a single user interface that can scale with your business. Train, tune, and prototype models from the Exspanse Notebook with the help of high-powered GPUs, CPUs, and an AI code assistant. Go beyond training and modeling with the rapid-deploy feature, which deploys models as an API right from an Exspanse Notebook. Clone and publish unique AI projects to the DeepSpace AI marketplace to advance the AI community. Power, efficiency, and collaboration in one comprehensive platform. Unleash your full potential as a solo data scientist while maximizing your impact. Manage and accelerate your AI development process through an integrated platform, turn innovative ideas into working models quickly, and seamlessly transition from building to deploying AI solutions without the need for extensive DevOps knowledge. Starting Price: $50 per month -
11
Oumi
Oumi
Oumi is a fully open source platform that streamlines the entire lifecycle of foundation models, from data preparation and training to evaluation and deployment. It supports training and fine-tuning models ranging from 10 million to 405 billion parameters using state-of-the-art techniques such as SFT, LoRA, QLoRA, and DPO. The platform accommodates both text and multimodal models, including architectures like Llama, DeepSeek, Qwen, and Phi. Oumi offers tools for data synthesis and curation, enabling users to generate and manage training datasets effectively. For deployment, it integrates with popular inference engines like vLLM and SGLang, ensuring efficient model serving. The platform also provides comprehensive evaluation capabilities across standard benchmarks to assess model performance. Designed for flexibility, Oumi can run on various environments, from local laptops to cloud infrastructures such as AWS, Azure, GCP, and Lambda. Starting Price: Free -
12
DeepEyes
DeepEyes
The effective management of GMP-regulated manufacturing areas requires a holistic approach based on identifying and monitoring the components that play the most critical roles: facility, personnel, and microbial control. By instantly recognizing compliance-related anomalies and contamination threats, DeepEyes' video-based AI error-recognition solutions close the gap that even the best training and supervision leave open. DeepEyes automates surveillance by alerting on deviations from good manufacturing practice (GMP) in real time, providing constant quality control that runs in parallel with the manufacturing process. Operator training alone cannot eliminate the risk of leakage; constant monitoring is required to prevent product loss, waste disposal issues, downtime, and safety threats. -
13
Predibase
Predibase
Declarative machine learning systems provide the best of flexibility and simplicity, enabling the fastest way to operationalize state-of-the-art models. Users focus on specifying the “what”, and the system figures out the “how”. Start with smart defaults, then iterate on parameters as much as you’d like, down to the level of code. Our team pioneered declarative machine learning systems in industry, with Ludwig at Uber and Overton at Apple. Choose from a menu of prebuilt data connectors that support your databases, data warehouses, lakehouses, and object storage. Train state-of-the-art deep learning models without the pain of managing infrastructure. Automated machine learning that strikes the balance of flexibility and control, all in a declarative fashion. With a declarative approach, finally train and deploy models as quickly as you want. -
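Ludwig, the open source project the Predibase team cites, expresses this "what, not how" approach as a short config. An illustrative example for a text classifier might look like the following (the feature names are hypothetical, and the epoch count is an example, not a default):

```yaml
input_features:
  - name: review_text
    type: text
output_features:
  - name: sentiment
    type: category
trainer:
  epochs: 5
```

Everything not specified (encoders, combiner, preprocessing) falls back to smart defaults, which users can then override field by field.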
14
Kognitos
Kognitos
Build automations and manage exceptions, all in intuitive English. Intuitively automate processes that contain structured and unstructured data, large transaction volumes, and complicated, exception-heavy workflows that are difficult for traditional automation tools. Processes that encounter exceptions, like document-heavy processes, have historically been difficult for RPA because of all the up-front development work needed to build in exception handling. Kognitos takes a fundamentally different approach, letting your users teach the automation how to handle exceptions using natural language. Kognitos emulates the way we teach one another to resolve errors and edge cases, with intuitive prompting that puts humans in control. Automation can now be trained just as you would train another human, through experience and examples. -
15
dstack
dstack
dstack is an orchestration layer designed for modern ML teams, providing a unified control plane for development, training, and inference on GPUs across cloud, Kubernetes, or on-prem environments. By simplifying cluster management and workload scheduling, it eliminates the complexity of Helm charts and Kubernetes operators. The platform supports both cloud-native and on-prem clusters, with quick connections via Kubernetes or SSH fleets. Developers can spin up containerized environments that link directly to their IDEs, streamlining the machine learning workflow from prototyping to deployment. dstack also enables seamless scaling from single-node experiments to distributed training while optimizing GPU usage and costs. With secure, auto-scaling endpoints compatible with OpenAI standards, it empowers teams to deploy models quickly and reliably. -
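A dstack workload is declared in a short YAML file rather than Helm charts or Kubernetes manifests. An illustrative task configuration might look like this (the name, commands, and resource values are examples, not dstack defaults):

```yaml
type: task
name: train-job
python: "3.11"
commands:
  - pip install -r requirements.txt
  - python train.py
resources:
  gpu: 24GB
```

The same file format runs unchanged whether the backing fleet is a cloud account, a Kubernetes cluster, or SSH-connected on-prem machines.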
16
Options for every business to train deep learning and machine learning models cost-effectively. AI accelerators for every use case, from low-cost inference to high-performance training. Simple to get started with, with a range of services for development and deployment. Tensor Processing Units (TPUs) are custom-built ASICs used to train and execute deep neural networks. Train and run more powerful, more accurate models cost-effectively, with faster speed and scale. A range of NVIDIA GPUs helps with cost-effective inference and scale-up or scale-out training. Leverage RAPIDS and Spark with GPUs to execute deep learning. Run GPU workloads on Google Cloud, where you have access to industry-leading storage, networking, and data analytics technologies. Access CPU platforms when you start a VM instance on Compute Engine. Compute Engine offers a range of both Intel and AMD processors for your VMs.
-
17
Taylor AI
Taylor AI
Training open source language models requires time and specialized knowledge. Taylor AI empowers your engineering team to focus on generating real business value, rather than deciphering complex libraries and setting up training infrastructure. Working with third-party LLM providers requires exposing your company's sensitive data. Most providers reserve the right to re-train models with your data. With Taylor AI, you own and control your models. Break away from the pay-per-token pricing structure. With Taylor AI, you only pay to train the model. You have the freedom to deploy and interact with your AI models as much as you like. New open source models emerge every month. Taylor AI stays current on the best open source language models, so you don't have to. Stay ahead, and train with the latest open source models. You own your model, so you can deploy it on your terms according to your unique compliance and security standards. -
18
Determined AI
Determined AI
Distributed training without changing your model code; Determined takes care of provisioning machines, networking, data loading, and fault tolerance. Our open source deep learning platform enables you to train models in hours or minutes, not days or weeks, instead of leaving you with arduous tasks like manual hyperparameter tuning, re-running faulty jobs, and worrying about hardware resources. Our distributed training implementation outperforms the industry standard, requires no code changes, and is fully integrated with our state-of-the-art training platform. With built-in experiment tracking and visualization, Determined records metrics automatically, makes your ML projects reproducible, and allows your team to collaborate more easily. Your researchers will be able to build on the progress of their team and innovate in their domain, instead of fretting over errors and infrastructure. -
19
WhyLabs
WhyLabs
Enable observability to detect data and ML issues faster, deliver continuous improvements, and avoid costly incidents. Start with reliable data: continuously monitor any data-in-motion for data quality issues. Pinpoint data and model drift. Identify training-serving skew and proactively retrain. Detect model accuracy degradation by continuously monitoring key performance metrics. Identify risky behavior in generative AI applications and prevent data leakage. Protect your generative AI applications from malicious actions. Improve AI applications through user feedback, monitoring, and cross-team collaboration. Integrate in minutes with purpose-built agents that analyze raw data without moving or duplicating it, ensuring privacy and security. Onboard the WhyLabs SaaS Platform for any use case using the proprietary privacy-preserving integration. Security-approved for healthcare and banking. -
20
Lightning AI
Lightning AI
Use our platform to build AI products and train, fine-tune, and deploy models on the cloud without worrying about infrastructure, cost management, scaling, and other technical headaches. Train, fine-tune, and deploy models with prebuilt, fully customizable, modular components. Focus on the science, not the engineering. A Lightning component organizes code to run on the cloud and manages its own infrastructure, cloud costs, and more. 50+ optimizations lower cloud costs and deliver AI in weeks, not months. Get enterprise-grade control with consumer-level simplicity to optimize performance, reduce cost, and lower risk. Go beyond a demo: launch the next GPT startup, diffusion startup, or cloud SaaS ML service in days, not months. Starting Price: $10 per credit -
21
MosaicML
MosaicML
Train and serve large AI models at scale with a single command. Point to your S3 bucket and go; we handle the rest: orchestration, efficiency, node failures, and infrastructure. Simple and scalable. MosaicML enables you to easily train and deploy large AI models on your data, in your secure environment. Stay on the cutting edge with our latest recipes, techniques, and foundation models, developed and rigorously tested by our research team. With a few simple steps, deploy inside your private cloud; your data and models never leave your firewall. Start in one cloud and continue on another without skipping a beat. Own the model that's trained on your own data, introspect and better explain its decisions, and filter content and data based on your business needs. Seamlessly integrate with your existing data pipelines, experiment trackers, and other tools. We are fully interoperable, cloud-agnostic, and enterprise-proven. -
22
Evidently AI
Evidently AI
The open-source ML observability platform. Evaluate, test, and monitor ML models from validation to production, from tabular data to NLP and LLMs. Built for data scientists and ML engineers. All you need to reliably run ML systems in production: start with simple ad hoc checks and scale to the complete monitoring platform, all within one tool, with a consistent API and metrics. Useful, beautiful, and shareable. Get a comprehensive view of data and ML model quality to explore and debug; it takes a minute to start. Test before you ship, validate in production, and run checks at every model update. Skip the manual setup by generating test conditions from a reference dataset. Monitor every aspect of your data, models, and test results. Proactively catch and resolve production model issues, ensure optimal performance, and continuously improve it. Starting Price: $500 per month -
23
Arcee AI
Arcee AI
Optimizing continual pre-training for model enrichment with proprietary data. Ensuring that domain-specific models offer a smooth experience. Creating a production-friendly RAG pipeline that offers ongoing support. With Arcee's SLM Adaptation system, you do not have to worry about fine-tuning, infrastructure setup, or the other complexities involved in stitching together solutions from a plethora of not-built-for-purpose tools. Thanks to the domain adaptability of our product, you can efficiently train and deploy your own SLMs across a wide range of use cases, whether for internal tooling or for your customers. By training and deploying your SLMs with Arcee’s end-to-end VPC service, you can rest assured that what is yours stays yours. -
24
Anyscale
Anyscale
Anyscale is a unified AI platform built around Ray, the world’s leading AI compute engine, designed to help teams build, deploy, and scale AI and Python applications efficiently. The platform offers RayTurbo, an optimized version of Ray that delivers up to 4.5x faster data workloads, 6.1x cost savings on large language model inference, and up to 90% lower costs through elastic training and spot instances. Anyscale provides a seamless developer experience with integrated tools like VSCode and Jupyter, automated dependency management, and expert-built app templates. Deployment options are flexible, supporting public clouds, on-premises clusters, and Kubernetes environments. Anyscale Jobs and Services enable reliable production-grade batch processing and scalable web services with features like job queuing, retries, observability, and zero-downtime upgrades. Security and compliance are ensured with private data environments, auditing, access controls, and SOC 2 Type II attestation. Starting Price: $0.00006 per minute -
25
ADE Enterprise
ADESOFTware
Further education centers face multiple requirements: quality, traceability, cost optimization, and a high number of people to train in a reduced amount of time. Adaptability, organization, and constant reporting are essential strengths. Adesoft is a publisher of constraint-based training logistics solutions acting at the heart of the HRIS of large companies to plan, simulate, and schedule training, in addition to centralizing certifications and expertise. Using ADE Enterprise, control your everyday activity simply and maximize your chances of improving your training courses and client satisfaction. Optimum reactivity for information and training course offers. Short training course lead times and schedules that can be adapted and updated in real time. Client and instructor schedule management. Human resource management (absences, vacations, missions, etc.). A constant search for specific courses within tight budgets. Instructor skill assessment and management. Starting Price: $250.00/month/user -
26
Encord
Encord
Achieve peak model performance with the best data. Create & manage training data for any visual modality, debug models and boost performance, and make foundation models your own. Expert review, QA and QC workflows help you deliver higher quality datasets to your artificial intelligence teams, helping improve model performance. Connect your data and models with Encord's Python SDK and API access to create automated pipelines for continuously training ML models. Improve model accuracy by identifying errors and biases in your data, labels and models. -
27
Chipp
Chipp
Write a prompt and train it on your own knowledge, content, docs, and data. Bring together multiple apps with a cohesive interface that reflects your brand's style, all accessible via one link. Collect emails, charge users, and upsell other services and products. Transform interactions with Chipp's custom chat interfaces, trained on your unique datasets, documents, and files. Whether it's customer service or interactive storytelling, our chatbots provide relevant, context-aware dialogues for an engaging user experience that reflects your brand's voice. Starting Price: $199 per year -
28
NeoPulse
AI Dynamics
The NeoPulse Product Suite includes everything needed for a company to start building custom AI solutions based on their own curated data. Server application with a powerful AI called “the oracle” that is capable of automating the process of creating sophisticated AI models. Manages your AI infrastructure and orchestrates workflows to automate AI generation activities. A program that is licensed by the organization to allow any application in the enterprise to access the AI model using a web-based (REST) API. NeoPulse is an end-to-end automated AI platform that enables organizations to train, deploy and manage AI solutions in heterogeneous environments, at scale. In other words, every part of the AI engineering workflow can be handled by NeoPulse: designing, training, deploying, managing and retiring. -
29
Helix AI
Helix AI
Build and optimize text and image AI for your needs, train, fine-tune, and generate from your data. We use best-in-class open source models for image and language generation and can train them in minutes thanks to LoRA fine-tuning. Click the share button to create a link to your session, or create a bot. Optionally deploy to your own fully private infrastructure. You can start chatting with open source language models and generating images with Stable Diffusion XL by creating a free account right now. Fine-tuning your model on your own text or image data is as simple as drag’n’drop, and takes 3-10 minutes. You can then chat with and generate images from those fine-tuned models straight away, all using a familiar chat interface. Starting Price: $20 per month -
30
alwaysAI
alwaysAI
alwaysAI provides developers with a simple and flexible way to build, train, and deploy computer vision applications to a wide variety of IoT devices. Select from a catalog of deep learning models or upload your own. Use our flexible and customizable APIs to quickly enable core computer vision services. Quickly prototype, test, and iterate with a variety of camera-enabled ARM-32, ARM-64, and x86 devices. Identify objects in an image by name or classification. Identify and count objects appearing in a real-time video feed. Follow the same object across a series of frames. Find faces or full bodies in a scene to count or track. Locate and define borders around separate objects. Separate key objects in an image from background visuals. Estimate human body poses, detect falls, and recognize emotions. Use our model training toolkit to train an object detection model to identify virtually any object, and create a model tailored to your specific use case. -
31
SAVVI AI
SAVVI AI
See how Savvi can quickly and easily solve your business challenges, increase operational efficiency, and empower your team to succeed. Start with the decision, recommendation, or prediction that you want to automate with AI. Easily integrate existing data, or run a data cold start with a simple line of code in your app. Savvi handles your AI app end-to-end: define your prediction or decision options, identify business goals, and publish. Savvi collects the data, trains the ML model, builds your objective function, and deploys the AI app into your product, then continuously learns to improve toward your goals. Savvi can securely collect data from your product and train an ML model in a few weeks; just drop in a snippet of Savvi’s code and go. No need for a data architecture project to get started with AI. -
32
Hugging Face
Hugging Face
Hugging Face is a leading platform for AI and machine learning, offering a vast hub for models, datasets, and tools for natural language processing (NLP) and beyond. The platform supports a wide range of applications, from text, image, and audio to 3D data analysis. Hugging Face fosters collaboration among researchers, developers, and companies by providing open-source tools like Transformers, Diffusers, and Tokenizers. It enables users to build, share, and access pre-trained models, accelerating AI development for a variety of industries. Starting Price: $9 per month -
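As a sketch of how the Transformers library is typically used: the `pipeline` helper wraps model download, tokenization, and inference behind one call. This assumes `transformers` is installed and that weights can be downloaded on first use; the model id below is the sentiment checkpoint commonly used with this pipeline.

```python
# Hedged sketch of the Hugging Face Transformers pipeline API.
# Assumes `pip install transformers` and network access for the first run.
MODEL_ID = "distilbert/distilbert-base-uncased-finetuned-sst-2-english"

def classify(texts):
    # Imported lazily so merely loading this module stays lightweight.
    from transformers import pipeline
    clf = pipeline("sentiment-analysis", model=MODEL_ID)
    # Returns a list of dicts with "label" and "score" keys.
    return clf(texts)

if __name__ == "__main__":
    print(classify(["Sharing pre-trained models accelerates AI development."]))
```

The same `pipeline` entry point covers many other tasks (summarization, translation, image classification) by changing the task string and model id.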
33
Automi
Automi
You will find all the tools you need to easily adapt cutting-edge AI models to your specific needs, using your own data. Design super-intelligent AI agents by combining the individual expertise of several cutting-edge AI models. All the AI models published on the platform are open source; the datasets they were trained on are accessible, and their limitations and biases are shared. -
34
Granica
Granica
The Granica AI efficiency platform reduces the cost to store and access data while preserving its privacy, unlocking it for training. Granica is developer-first, petabyte-scale, and AWS/GCP-native. Granica makes AI pipelines more efficient, more privacy-preserving, and more performant; efficiency is a new layer in the AI stack. Byte-granular data reduction uses novel compression algorithms, cutting the cost to store and transfer objects in Amazon S3 and Google Cloud Storage by up to 80% and API costs by up to 90%. Estimate savings in 30 minutes in your own cloud environment, on a read-only sample of your S3/GCS data, with no need for budget allocation or a total-cost-of-ownership analysis. Granica deploys into your environment and VPC, respecting all of your security policies. Granica supports a wide range of data types for AI/ML/analytics, with lossy and fully lossless compression variants. Detect and protect sensitive data even before it is persisted into your cloud object store. -
35
Seekr
Seekr
Boost your productivity and create more inspired content with generative AI that is bounded and grounded by the highest industry standards and intelligence. Rate content for reliability, reveal political lean, and align with your brand’s safety themes. Our AI models are rigorously tested and reviewed by leading experts and data scientists to train our dataset exclusively with the web’s most trustworthy content. Leverage the industry’s most trustworthy large language model (LLM) to create new content fast, accurately, and at low cost. Speed up processes and drive better business outcomes with a suite of AI tools built to reduce costs and skyrocket results. -
36
Novita AI
novita.ai
Explore the full spectrum of AI APIs tailored for image, video, audio, and LLM applications. Novita AI is designed to elevate your AI-driven business at the pace of technology, offering model hosting and training solutions. Access 100+ APIs, including AI image generation and editing with 10,000+ models, plus training APIs for custom models. Enjoy the cheapest pay-as-you-go pricing, freeing you from GPU maintenance hassles while building your own products. Generate images in 2 seconds from 10,000+ models with a single click, with models kept up to date from Civitai and Hugging Face. Build a wide variety of products on the Novita API; you can empower your own products with a quick Novita API integration. Starting Price: $0.0015 per image -
37
Snorkel AI
Snorkel AI
AI today is blocked by lack of labeled data, not models. Unblock AI with the first data-centric AI development platform powered by a programmatic approach. Snorkel AI is leading the shift from model-centric to data-centric AI development with its unique programmatic approach. Save time and costs by replacing manual labeling with rapid, programmatic labeling. Adapt to changing data or business goals by quickly changing code, not manually re-labeling entire datasets. Develop and deploy high-quality AI models via rapid, guided iteration on the part that matters–the training data. Version and audit data like code, leading to more responsive and ethical deployments. Incorporate subject matter experts' knowledge by collaborating around a common interface, the data needed to train models. Reduce risk and meet compliance by labeling programmatically and keeping data in-house, not shipping to external annotators. -
38
Field1st
Field1st
Field1st is an AI-powered safety operations and field intelligence platform that replaces paper forms and disconnected reporting with mobile-first, real-time safety data capture, hazard detection, risk assessment, compliance tracking, and predictive analytics. It unifies field data (near-miss reports, hazard photos, voice-enabled forms, and observations) into a single cloud system that works offline and syncs when connected, giving supervisors and safety leaders immediate visibility into risks, incidents, and trends across sites. It uses AI safety agents trained on OSHA and company policies to detect patterns in hazards and near misses, suggest corrective actions, flag predictive risk indicators, and proactively guide teams before incidents escalate, while also automating compliance documentation, audit-ready reporting, and corrective action workflows. Field1st’s tools include dynamic, customizable forms and checklists, real-time incident escalation, GPS tagging, and more. -
39
MXNet
The Apache Software Foundation
A hybrid front-end seamlessly transitions between Gluon eager imperative mode and symbolic mode to provide both flexibility and speed. Dual parameter server and Horovod support enable scalable distributed training and performance optimization in research and production. Deep integration into Python and support for Scala, Julia, Clojure, Java, C++, R, and Perl. A thriving ecosystem of tools and libraries extends MXNet and enables use cases in computer vision, NLP, time series, and more. Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision-making process have stabilized in a manner consistent with other successful ASF projects. Join the MXNet scientific community to contribute, learn, and get answers to your questions. -
40
Headversity
Headversity
Headversity is the industry’s first preventative assistance platform (PRE.A.P.), offering proactive, digital mental health training experiences for the entire workforce. EAPs and therapy are no longer enough to support mental health. Our preventative assistance platform supports employees before and beyond crisis, helping employers avoid costly behavioral outcomes. Meaningful insights measure the training’s impact on your workforce. Skill scores, psychometric data, pulse data, and engagement trends are pulled together to ensure the right training is being delivered to the right people at the right time. When we partner with organizations, we pull out all the stops. We work with you to establish goals, roll out communications suited to your channels, integrate via API with your existing supports, and more. -
41
Metacog
Metacog
Rooted in EDM and cognitive science, Metacog® is a complete analytics engine for hard-to-measure human behavior in complex environments. Our vertically integrated system covers a full range of analytics layers, from basic data collection and storage to advanced behavior modeling and visualization. Use Metacog to evaluate and improve the performance of individuals, teams, and training programs. Accelerate your training progress at scale without sacrificing quality. Close your readiness gap faster. Using goals set by your experts, Metacog observes and evaluates your trainees' behaviors, processes and results during training events. Real-time feedback can be provided during the simulation, and training content can be accelerated or slowed based on the individual's cognitive load, level of engagement and other factors. Metacog captures and synthesizes an accurate record of the team's activities—individually and collectively—as they train together during a simulation. -
42
q.MINDshare Microlearning
count5
q.MINDshare™ (“q”), from count5, is an adaptive microlearning platform that eliminates the forgetting curve associated with employee training. Adding q to your programs means your employees remember more of their training than before, helping you deliver faster, more predictable performance outcomes to the business. q adaptive microlearning reinforces employee training post-event, measures baseline training retention, then adapts to close each learner’s knowledge gaps. It also keeps important changes in product, strategy, and process top-of-mind, improving learner confidence and driving the success of key initiatives. Information overload hinders employee performance. q’s noise-free delivery system cuts through the clutter to get 100% of your employees’ attention, 100% of the time, delivering your priorities one spoonful at a time. What gets measured gets managed, and q replaces hope with real-time learning metrics. -
43
Caffe
BAIR
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license. Check out our web image classification demo! Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine, then deploy to commodity clusters or mobile devices. Extensible code fosters active development. In Caffe’s first year, it was forked by over 1,000 developers who contributed many significant changes back. Thanks to these contributors, the framework tracks the state of the art in both code and models. Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU. -
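The "models defined by configuration" idea above can be sketched as a prototxt fragment. This is a minimal, hypothetical net definition (the layer names and dimensions are invented for illustration); Caffe's `Input` and `InnerProduct` layer types are real:

```
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 28 dim: 28 } }
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "data"
  top: "fc1"
  inner_product_param { num_output: 10 }
}
```

The single CPU/GPU flag lives outside the net definition, in the solver configuration (`solver_mode: GPU`) or on the command line (`caffe train -solver solver.prototxt -gpu 0`), so the same model file can train on a GPU machine and run on CPU-only hardware.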
44
Levity
Levity
Create your own AI that takes daily, repetitive tasks off your shoulders so your team can reach the next level of productivity. Levity is a no-code platform that allows you to train AI models on images, documents, and text data. You can rebuild manual workflows and connect everything to your existing systems without writing a single line of code. Levity enables you to upload your own labeled data to train custom models that fit your business like a glove. If you want to get started even quicker, it also provides countless templates for frequent use-cases, such as sentiment analysis, customer support or document classification. Got a repetitive task that requires more than rule-based automation that standard RPA tools offer? Try Levity out for free and see within minutes what cognitive automation is capable of.Starting Price: $99 -
45
NVIDIA FLARE
NVIDIA
NVIDIA FLARE (Federated Learning Application Runtime Environment) is an open source, extensible SDK designed to facilitate federated learning across diverse industries, including healthcare, finance, and automotive. It enables secure, privacy-preserving AI model training by allowing multiple parties to collaboratively train models without sharing raw data. FLARE supports machine learning frameworks such as PyTorch, TensorFlow, RAPIDS, and XGBoost, making it adaptable to existing workflows. FLARE’s componentized architecture allows for customization and scalability, supporting both horizontal and vertical federated learning. It is suitable for applications requiring data privacy and regulatory compliance, such as medical imaging and financial analytics. It is available for download via the NVIDIA NVFlare GitHub repository and PyPI.Starting Price: Free -
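The core idea FLARE implements, collaborative training without sharing raw data, can be sketched in plain Python. This is not FLARE's API; it is a minimal, dependency-free illustration of federated averaging (FedAvg), where each party fits a tiny linear model locally and only the model weights travel to the server:

```python
# Minimal federated averaging (FedAvg) sketch -- illustrative only,
# NOT the NVIDIA FLARE API. Each client fits a 1-D linear model
# y = w * x on its own data; the server averages the weights.

def local_train(data, w=0.0, lr=0.01, epochs=50):
    """One client's local training: gradient descent on squared error."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(client_datasets, global_w=0.0):
    """Clients train locally; the server averages the resulting weights."""
    local_weights = [local_train(d, w=global_w) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

if __name__ == "__main__":
    # Two parties whose raw data never leaves their site.
    clients = [
        [(1.0, 2.0), (2.0, 4.0)],   # data consistent with w = 2.0
        [(1.0, 2.2), (3.0, 6.6)],   # data consistent with w = 2.2
    ]
    w = 0.0
    for _ in range(10):             # several federation rounds
        w = federated_round(clients, w)
    print(round(w, 1))              # prints 2.1
```

Only `local_train`'s output (a weight, not the data) crosses the trust boundary; real systems like FLARE add secure aggregation, framework adapters, and orchestration on top of this pattern.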
46
Lilac
Lilac
Lilac is an open source tool that enables data and AI practitioners to improve their products by improving their data. Understand your data with powerful search and filtering. Collaborate with your team on a single, centralized dataset. Apply best practices for data curation, like removing duplicates and PII to reduce dataset size and lower training cost and time. See how your pipeline impacts your data using our diff viewer. Clustering is a technique that automatically assigns categories to each document by analyzing the text content and putting similar documents in the same category. This reveals the overarching structure of your dataset. Lilac uses state-of-the-art algorithms and LLMs to cluster the dataset and assign informative, descriptive titles. Before we do advanced searching, like concept or semantic search, we can immediately use keyword search by typing a keyword in the search box.Starting Price: Free -
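The clustering idea described above, automatically assigning similar documents to the same category, can be illustrated with a toy stdlib-only sketch. This is NOT Lilac's implementation (Lilac uses embeddings and LLMs); it groups documents greedily by word-overlap (Jaccard) similarity:

```python
# Toy document clustering by word overlap -- an illustration of the
# concept only, not Lilac's embedding/LLM-based algorithm.

def jaccard(a, b):
    """Similarity of two documents' word sets: |A & B| / |A | B|."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.3):
    """Greedy clustering: join a doc to the first similar-enough group."""
    clusters = []  # each cluster is a list of documents
    for doc in docs:
        for group in clusters:
            if jaccard(doc, group[0]) >= threshold:
                group.append(doc)
                break
        else:
            clusters.append([doc])
    return clusters

if __name__ == "__main__":
    docs = [
        "train the model on gpu",
        "train the model on cpu",
        "bake bread with flour",
    ]
    print(len(cluster(docs)))  # prints 2: two training docs group together
```

Real systems replace the word-set similarity with embedding distance so paraphrases cluster together, and use an LLM to produce the descriptive cluster titles.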
47
Evoke
Evoke
Focus on building; we’ll take care of hosting. Just plug and play with our REST API. No limits, no headaches. We have all the inferencing capacity you need. Stop paying for nothing; we’ll only charge based on use. Our support team is our tech team too, so you’ll get support directly rather than jumping through hoops. Our flexible infrastructure allows us to scale with you as you grow and handle any spikes in activity. Generate images and art from text-to-image or image-to-image with our clearly documented Stable Diffusion API. Change the output’s art style with additional models: MJ v4, Anything v3, Analog, Redshift, and more. Other Stable Diffusion versions, like 2.0+, will also be included. Train your own Stable Diffusion model (fine-tuning) and deploy it on Evoke as an API. We plan to add other models like Whisper, YOLO, GPT-J, GPT-NeoX, and many more in the future, for not only inference but also training and deployment.Starting Price: $0.0017 per compute second -
48
Dragonfile
Dragonfile
Dragonfile – The Smarter Way to Manage Claims. Dragonfile is a powerful, intuitive claims management solution designed specifically for adjusters and adjustment companies. Built by industry experts, Dragonfile streamlines workflows, automates updates, and organizes files—helping adjusters save time, reduce stress, and focus on resolving claims efficiently.
✅ Centralized File Management – Keep all claim documents in one secure place.
✅ Automated Notifications & Reminders – Never miss a deadline again.
✅ Seamless Accessibility – Work from desktop, tablet, or mobile—anytime, anywhere.
✅ Zero to Minimal Training Required – Simple, user-friendly interface built for adjusters.
✅ Customizable Workflows – Adapt to your process and work smarter, not harder.
Whether you’re managing P&C or Flood claims, Dragonfile simplifies the process, eliminates manual work, and enhances productivity. -
49
Stochastic
Stochastic
Enterprise-ready AI system that trains locally on your data, deploys on your cloud, and scales to millions of users without an engineering team. Build, customize, and deploy your own chat-based AI. Finance chatbot: xFinance, a 13-billion-parameter model fine-tuned from an open-source model using LoRA. Our goal was to show that it is possible to achieve impressive results in financial NLP tasks without breaking the bank. Personal AI assistant: your own AI to chat with your documents. Single or multiple documents, easy or complex questions, and much more. Effortless deep learning platform for enterprises, with hardware-efficient algorithms to speed up inference at a lower cost. Real-time logging and monitoring of resource utilization and cloud costs of deployed models. xTuring is open-source AI personalization software. xTuring makes it easy to build and control LLMs by providing a simple interface to personalize LLMs to your own data and application. -
50
Forefront
Forefront.ai
Powerful language models a click away. Join over 8,000 developers building the next wave of world-changing applications. Fine-tune and deploy GPT-J, GPT-NeoX, Codegen, and FLAN-T5. Multiple models, each with different capabilities and price points. GPT-J is the fastest model, while GPT-NeoX is the most powerful—and more are on the way. Use these models for classification, entity extraction, code generation, chatbots, content generation, summarization, paraphrasing, sentiment analysis, and much more. These models have been pre-trained on a vast amount of text from the open internet. Fine-tuning improves upon this for specific tasks by training on many more examples than can fit in a prompt, letting you achieve better results on a wide range of tasks.
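The "more examples than can fit in a prompt" idea typically takes the form of a dataset of prompt/completion pairs. The exact schema varies by provider, and the field names below are illustrative rather than Forefront's documented format, but a common JSONL shape looks like:

```
{"prompt": "Classify the sentiment: I love this product.", "completion": "positive"}
{"prompt": "Classify the sentiment: The delivery was late again.", "completion": "negative"}
```

Thousands of such lines are used to update the model's weights, so at inference time the task no longer needs to be described (or exemplified) inside the prompt itself.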