StackAI
StackAI is an enterprise AI automation platform for building end-to-end internal tools and processes with AI agents in a fully compliant and secure way. Designed for large organizations, it enables teams to automate complex workflows across operations, compliance, finance, IT, and support without heavy engineering.
With StackAI you can:
• Connect knowledge bases (SharePoint, Confluence, Notion, Google Drive, databases) with versioning, citations, and access controls.
• Deploy AI agents as chat assistants, advanced forms, or APIs integrated into Slack, Teams, Salesforce, HubSpot, or ServiceNow.
• Govern usage with enterprise security: SSO (Okta, Azure AD, Google), RBAC, audit logs, PII masking, data residency, and cost controls.
• Route across OpenAI, Anthropic, Google, or local LLMs with guardrails, evaluations, and testing.
• Start fast with templates for Contract Analyzer, Support Desk, RFP Response, Investment Memo Generator, and more.
Learn more
Ango Hub
Ango Hub is a quality-focused, enterprise-ready data annotation platform for AI teams, available on cloud and on-premise. It supports computer vision, medical imaging, NLP, audio, video, and 3D point cloud annotation, powering use cases from autonomous driving and robotics to healthcare AI.
Built for AI fine-tuning, RLHF, LLM evaluation, and human-in-the-loop workflows, Ango Hub boosts throughput with automation, model-assisted pre-labeling, and customizable QA while maintaining accuracy. Features include centralized instructions, review pipelines, issue tracking, and consensus across up to 30 annotators. With nearly twenty labeling tools—such as rotated bounding boxes, label relations, nested conditional questions, and table-based labeling—it supports both simple and complex projects. It also enables annotation pipelines for chain-of-thought reasoning and next-generation LLM training, and provides enterprise-grade security with HIPAA compliance, SOC 2 certification, and role-based access controls.
Learn more
Shap-E
This is the official code and model release for Shap-E, which generates 3D objects conditioned on text or images. You can sample a 3D model conditioned on a text prompt or on a synthetic view image; for image conditioning, remove the background from the input image to get the best results. The repository can also load 3D models or trimeshes, create a batch of multiview renders and a point cloud, encode them into a latent, and render that latent back. This encoding workflow requires Blender version 3.3.1 or higher.
Learn more