Audience
Developers and game makers in need of a solution to generate 3D objects conditioned on text or images
About Shap-E
This is the official code and model release for Shap-E, which generates 3D objects conditioned on text or images. You can sample a 3D model conditioned on a text prompt or on a synthetic view image; for the best results, remove the background from the input image first. You can also load 3D models or trimeshes, create a batch of multiview renders and a point cloud, encode them into a latent, and render the latent back. For this to work, install Blender version 3.3.1 or higher.
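Text-conditional sampling can be sketched as below, following the pattern of the repo's example notebooks; the prompt, sampler settings (guidance scale, Karras steps), and output filename are illustrative choices, not required values. Model weights are downloaded on first use, and a GPU is strongly recommended.

```python
# Sketch of text-to-3D sampling with Shap-E, based on the repo's
# sample_text_to_3d example. Prompt and hyperparameters are illustrative.
import torch

from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

xm = load_model('transmitter', device=device)   # decodes latents to renderable 3D
model = load_model('text300M', device=device)   # text-conditional diffusion model
diffusion = diffusion_from_config(load_config('diffusion'))

latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,                    # classifier-free guidance strength
    model_kwargs=dict(texts=['a red chair']),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode the first latent into a triangle mesh and save it as a PLY file.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open('chair.ply', 'wb') as f:
    mesh.write_ply(f)
```

For image conditioning, the notebooks follow the same structure with the `image300M` model and an image passed via `model_kwargs` instead of a text prompt.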
Other Popular Alternatives & Related Software
Seed3D
Seed3D 1.0 is a foundation-model pipeline that takes a single input image and generates a simulation-ready 3D asset, including closed manifold geometry, UV-mapped textures, and physically-based rendering material maps, designed for immediate integration into physics engines and embodied-AI simulators. It uses a hybrid architecture that combines a 3D variational autoencoder for latent geometry encoding with a diffusion-transformer stack for generating detailed 3D shapes, followed by multi-view texture synthesis, PBR material estimation, and UV texture completion. The geometry branch produces watertight meshes with fine structural details (e.g., thin protrusions, holes, text), while the texture/material branch yields multi-view-consistent albedo, metallic, and roughness maps at high resolution, enabling realistic appearance under varied lighting. Assets generated by Seed3D 1.0 require minimal cleanup or manual tuning.
Poly
Poly is an AI-enabled texture creation tool that lets you quickly generate customized, 8K HD, seamlessly tileable textures with up to 32-bit PBR maps using a simple prompt (text and/or image) in seconds. It's perfect for use in 3D applications such as 3D modeling, character design, architecture visualization, game development, AR/VR world-building, and much more. We're thrilled to share the result of our team's research work with the community and hope you will find it useful and fun. Type in a prompt, select a texture material type, and watch as Poly creates a fully formed 32-bit EXR texture for you. You can use this to play around with Poly's AI, seeing what it is capable of and experimenting with prompting strategies. The dock at the bottom of the screen lets you switch views. You can view your past prompts, view a model in 3D, or view any of the six available physically-based rendering maps.
Magic3D
Magic3D can create high-quality 3D textured mesh models from input text prompts. It utilizes a coarse-to-fine strategy leveraging both low- and high-resolution diffusion priors for learning the 3D representation of the target content. Magic3D synthesizes 3D content with 8× higher-resolution supervision than DreamFusion while also being 2× faster. Given a coarse model generated with a base text prompt, users can modify parts of the prompt and then fine-tune the NeRF and 3D mesh models to obtain an edited high-resolution 3D mesh. Together with image-conditioning techniques and this prompt-based editing approach, Magic3D gives users new ways to control 3D synthesis, opening up new avenues for creative applications.
Point-E
While recent work on text-conditional 3D object generation has shown promising results, the state-of-the-art methods typically require multiple GPU-hours to produce a single sample. This is in stark contrast to state-of-the-art generative image models, which produce samples in a number of seconds or minutes. In this paper, we explore an alternative method for 3D object generation which produces 3D models in only 1-2 minutes on a single GPU. Our method first generates a single synthetic view using a text-to-image diffusion model and then produces a 3D point cloud using a second diffusion model which conditions on the generated image. While our method still falls short of the state-of-the-art in terms of sample quality, it is one to two orders of magnitude faster to sample from, offering a practical trade-off for some use cases. We release our pre-trained point cloud diffusion models, as well as evaluation code and models, at this https URL.
Pricing
Starting Price:
Free
Free Version:
Free Version available.
Company Information
OpenAI
United States
github.com/openai/shap-e
Product Details
Platforms Supported
Cloud
Training
Documentation
Support
Online