Orq.ai
Orq.ai is the #1 platform for software teams to operate agentic AI systems at scale. Optimize prompts, deploy use cases, and monitor performance with no blind spots and no vibe checks. Experiment with prompts and LLM configurations before moving to production, and evaluate agentic AI systems in offline environments. Roll out GenAI features to specific user groups with guardrails, data privacy safeguards, and advanced RAG pipelines. Visualize every event triggered by your agents for fast debugging, and get granular control over cost, latency, and performance. Connect to your favorite AI models, or bring your own. Speed up your workflow with out-of-the-box components built for agentic AI systems, and manage the core stages of the LLM app lifecycle in one central platform. Self-hosted or hybrid deployment is available, with SOC 2 and GDPR compliance for enterprise security.
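To make the prompt-experimentation and cost/latency-tracking workflow concrete, here is a minimal, vendor-neutral Python sketch. It does not use Orq.ai's SDK; the `call_llm` helper and the pricing table are hypothetical stand-ins. It simply compares two prompt variants and records latency and token cost for each before one is promoted to production.

```python
import time

# Hypothetical helper standing in for whichever LLM client you use;
# Orq.ai's own SDK and API are not shown here.
def call_llm(model: str, prompt: str, temperature: float) -> dict:
    # Replace with a real client call (OpenAI, Anthropic, a self-hosted model, ...).
    return {"text": "stub response", "prompt_tokens": 42, "completion_tokens": 18}

PRICE_PER_1K_TOKENS = {"gpt-4o-mini": 0.0006}  # illustrative pricing only

def run_experiment(variants: dict, prompt_input: str) -> list:
    """Run each prompt/config variant once and record latency and token cost."""
    results = []
    for name, cfg in variants.items():
        start = time.perf_counter()
        reply = call_llm(cfg["model"], cfg["template"].format(input=prompt_input),
                         cfg["temperature"])
        latency = time.perf_counter() - start
        tokens = reply["prompt_tokens"] + reply["completion_tokens"]
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS.get(cfg["model"], 0)
        results.append({"variant": name, "latency_s": round(latency, 3),
                        "tokens": tokens, "cost_usd": round(cost, 6)})
    return results

variants = {
    "v1-terse": {"model": "gpt-4o-mini", "temperature": 0.2,
                 "template": "Summarize in one sentence: {input}"},
    "v2-detailed": {"model": "gpt-4o-mini", "temperature": 0.7,
                    "template": "Summarize with key facts and caveats: {input}"},
}
for row in run_experiment(variants, "Quarterly revenue grew 12% year over year."):
    print(row)
```

A platform like Orq.ai layers evaluation, guardrails, and monitoring on top of this loop so variants can be compared and rolled out without ad hoc scripts.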
Learn more
Zerve AI
Zerve merges the best of a notebook and an IDE into one integrated coding environment, letting experts explore their data and write stable code at the same time on fully automated cloud infrastructure. Zerve's data science development environment gives data science and ML teams a unified space to explore, collaborate, build, and deploy data science and AI projects like never before. Zerve offers true language interoperability: as well as using Python, R, SQL, or Markdown on the same canvas, users can connect these code blocks to each other. There are no more long-running code blocks or containers, since Zerve provides unlimited parallelization at any stage of the development journey. Analysis artifacts are automatically serialized, versioned, stored, and preserved for later use, so you can change a step in the data flow without rerunning any preceding steps. Compute resources and extra memory can be selected at a fine-grained level for complex data transformations.
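The artifact behavior described above can be illustrated with a small, generic caching sketch in Python. This is not Zerve's implementation; the `cached_step` helper and on-disk layout are assumptions for illustration. Each step's output is serialized and keyed by a hash of its inputs, so editing a downstream step does not force upstream steps to re-run.

```python
import hashlib
import pickle
from pathlib import Path

CACHE = Path("artifacts")
CACHE.mkdir(exist_ok=True)

def cached_step(name: str, fn, *inputs):
    """Run a pipeline step, or load its serialized artifact if inputs are unchanged."""
    key = hashlib.sha256(pickle.dumps((name, inputs))).hexdigest()[:16]
    path = CACHE / f"{name}-{key}.pkl"          # versioned by an input hash
    if path.exists():
        return pickle.loads(path.read_bytes())  # reuse the preserved artifact
    result = fn(*inputs)
    path.write_bytes(pickle.dumps(result))      # serialize for later use
    return result

raw = cached_step("load", lambda: list(range(10)))
clean = cached_step("clean", lambda xs: [x for x in xs if x % 2 == 0], raw)
# Changing only the final step reuses the cached "load" and "clean" artifacts:
report = cached_step("report", lambda xs: sum(xs) / len(xs), clean)
print(report)
```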
Learn more
NVIDIA FLARE
NVIDIA FLARE (Federated Learning Application Runtime Environment) is an open-source, extensible SDK designed to facilitate federated learning across diverse industries, including healthcare, finance, and automotive. It enables secure, privacy-preserving AI model training by allowing multiple parties to collaboratively train models without sharing raw data. FLARE supports various machine learning frameworks such as PyTorch, TensorFlow, RAPIDS, and XGBoost, making it adaptable to existing workflows. FLARE's componentized architecture allows for customization and scalability, supporting both horizontal and vertical federated learning. It is suitable for applications requiring data privacy and regulatory compliance, such as medical imaging and financial analytics. It is available for download via the NVIDIA NVFlare GitHub repository and on PyPI.
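As a quick illustration of the horizontal federated learning pattern FLARE orchestrates, where each site trains on its private data and a server aggregates only the resulting model weights, here is a minimal federated averaging sketch in plain Python with NumPy. It is a conceptual example, not FLARE code; FLARE's own components handle this coordination securely across real sites and frameworks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "site" keeps its raw data locally; only model weights leave the site.
def make_site_data(n: int, true_w: np.ndarray):
    X = rng.normal(size=(n, true_w.size))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_train(w: np.ndarray, X: np.ndarray, y: np.ndarray,
                lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """A few local gradient steps on the site's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

true_w = np.array([1.0, -2.0, 0.5])
sites = [make_site_data(200, true_w) for _ in range(3)]
global_w = np.zeros(3)

# Federated averaging: the server aggregates locally trained weights each round.
for round_idx in range(10):
    local_weights = [local_train(global_w.copy(), X, y) for X, y in sites]
    global_w = np.mean(local_weights, axis=0)

print("learned:", np.round(global_w, 3), "target:", true_w)
```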
Learn more
dstack
dstack streamlines development and deployment, reduces cloud costs, and frees users from vendor lock-in. Configure the hardware resources you need (GPU, memory, etc.) and indicate whether you want to use spot or on-demand instances; dstack automatically provisions cloud resources, fetches your code, and forwards ports for secure and convenient access. Access the cloud dev environment from your local desktop IDE. Pre-train and fine-tune your own state-of-the-art models easily and cost-effectively in any cloud, with cloud resources automatically provisioned based on your configuration. Access your data and store output artifacts using declarative configuration or the Python SDK.
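A rough sketch of submitting a GPU task through dstack's Python SDK (`dstack.api`) is shown below. The class and parameter names are written from memory of the SDK's documentation and may differ between versions, so treat them as assumptions and verify against the current reference; the run name and training command are hypothetical.

```python
# Assumed imports from the dstack.api package; check the current SDK docs.
from dstack.api import Client, GPU, Resources, Task

client = Client.from_config()  # picks up the configured dstack server/project

task = Task(
    commands=["pip install -r requirements.txt",
              "python train.py --epochs 3"],  # hypothetical training entrypoint
)

run = client.runs.submit(
    run_name="finetune-demo",                      # assumed run name
    configuration=task,
    resources=Resources(gpu=GPU(memory="24GB")),   # hardware preference
)
run.attach()  # stream logs; dstack forwards ports for secure access
```

The same resource and instance preferences can also be expressed in dstack's declarative configuration files instead of the SDK.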
Learn more