Generate calibrated multi-view training data from any GLB model. 100–400 views with RGB, depth, normals, masks, point clouds, and camera poses. No GPU required.
10 model uploads with shareable turntable videos. No credit card required.
This interactive 3DGS was trained from a DX.GL dataset in 10 minutes. Drag to orbit.
10 CC0 models from Polyhaven · Trained with nerfstudio splatfacto · View collection · Use ← → to browse
Download on HuggingFace →
Each dataset is a ZIP containing calibrated multi-view renders with full modality coverage.
100–400 PNG images with transparent backgrounds. Fibonacci hemisphere or full sphere sampling for uniform coverage.
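Fibonacci sampling places viewpoints along a golden-angle spiral, which keeps coverage close to uniform at any view count. A minimal Python sketch of the idea (the exact camera placement DX.GL uses is not specified here; the unit `radius` and the z-up hemisphere orientation are assumptions for illustration):

```python
import math

def fibonacci_hemisphere(n, radius=1.0):
    """Return n (x, y, z) camera positions roughly uniform on the upper hemisphere."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    points = []
    for i in range(n):
        z = (i + 0.5) / n                      # (0, 1] -> upper hemisphere only
        r = math.sqrt(max(0.0, 1.0 - z * z))   # ring radius at this height
        theta = golden * i                     # spiral around the vertical axis
        points.append((radius * r * math.cos(theta),
                       radius * r * math.sin(theta),
                       radius * z))
    return points

cams = fibonacci_hemisphere(100)
```

Letting `z` run over the full [-1, 1] range instead gives the full-sphere variant.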
Both 8-bit and 16-bit grayscale depth maps. Tight near/far planes computed from actual model geometry — not screen-space approximations.
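If the 16-bit maps encode depth linearly between the per-frame near/far planes — a plausible assumption to verify against an actual export, not a documented guarantee — recovering metric depth from a raw sample is a one-liner:

```python
def decode_depth(value_u16, depth_near, depth_far):
    """Map a raw 16-bit depth sample back to scene units,
    assuming a linear encoding between the near and far planes."""
    t = value_u16 / 65535.0
    return depth_near + t * (depth_far - depth_near)
```

Tight near/far planes matter here: the narrower the range, the more of the 65,536 levels are spent on the object itself.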
Per-pixel surface normal maps in camera space. Useful for surface reconstruction, relighting, and material estimation.
Clean object/background segmentation masks. Transparent backgrounds ensure zero contamination in training data.
PLY point cloud generated alongside views. Ready for initialization or evaluation in your reconstruction pipeline.
nerfstudio-compatible transforms.json with per-frame intrinsics, extrinsics, and depth_near/depth_far. Works with nerfacto, splatfacto, and custom loaders.
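For custom loaders, a hedged sketch of reading that file. The field names (`fl_x`, `fl_y`, `transform_matrix`, `depth_near`, `depth_far`) follow the nerfstudio convention the text references; confirm them against a real DX.GL export before relying on this:

```python
import json

def load_frames(path):
    """Collect per-frame pose, intrinsics, and depth range from transforms.json."""
    with open(path) as f:
        meta = json.load(f)
    frames = []
    for fr in meta["frames"]:
        frames.append({
            "image": fr["file_path"],
            "pose": fr["transform_matrix"],          # 4x4 camera-to-world
            "fx": fr.get("fl_x", meta.get("fl_x")),  # per-frame, else global
            "fy": fr.get("fl_y", meta.get("fl_y")),
            "near": fr.get("depth_near"),
            "far": fr.get("depth_far"),
        })
    return frames
```

The `fr.get(..., meta.get(...))` fallback covers both layouts nerfstudio accepts: intrinsics stored once at the top level or repeated per frame.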
Choose the tier that fits your research. All tiers include every modality.
| Tier | Views | Resolution | Credits |
|---|---|---|---|
| 100×800 | 100 | 800 × 800 | 1 |
| 196×1024 | 196 | 1024 × 1024 | 4 |
| 400×2048 | 400 | 2048 × 2048 | 16 |
Credit packs: 1 for $39 · 10 for $299 · 100 for $1,999. All prices USD, excl. VAT.
Every tier includes RGB + depth (8+16-bit) + normals + masks + PLY + transforms.json.
Skip the Blender scripts and GPU provisioning. Focus on your research.
Upload a GLB, get a dataset. No Blender, no CUDA setup, no local rendering. We handle the compute.
PBR rendering on controlled GPU hardware. Same lighting, same camera model, same output quality — every time.
8-bit and 16-bit depth maps with tight near/far planes from actual model geometry. Not screen-space approximations.
Fibonacci hemisphere or sphere sampling. nerfstudio transforms.json with intrinsics + extrinsics per frame.
Render datasets for hundreds of models via REST API. No manual Blender scripting or GPU provisioning.
RGBA output with no background contamination. Clean masks, clean depth — your training data stays clean.
Three steps from 3D model to training data.
Upload a GLB via drag-and-drop, URL, or API. Files up to 100 MB. OBJ scans in ZIP are converted automatically.
Select views (100, 196, or 400), resolution (800, 1024, or 2048), and coverage (hemisphere or full sphere). Click render.
Get a ZIP with images/, depth/, depth_16bit/, normals/, masks/, points3D.ply, transforms.json, and overview.webp.
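A small sanity check before feeding a downloaded ZIP to a training pipeline — the directory names below come straight from the layout above:

```python
import zipfile

EXPECTED = {"images/", "depth/", "depth_16bit/", "normals/", "masks/"}

def check_dataset(zip_path):
    """Return (missing top-level dirs, whether transforms.json is present)."""
    with zipfile.ZipFile(zip_path) as z:
        names = z.namelist()
    dirs = {n.split("/")[0] + "/" for n in names if "/" in n}
    has_meta = any(n.endswith("transforms.json") for n in names)
    return EXPECTED - dirs, has_meta
```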
Render datasets programmatically for batch pipelines.
# Upload a model
curl -X POST https://dx.gl/v1/models \
  -H "Authorization: Bearer dxgl_sk_..." \
  -F "[email protected]"

# Render a dataset
curl -X POST https://dx.gl/v1/renders \
  -H "Authorization: Bearer dxgl_sk_..." \
  -H "Content-Type: application/json" \
  -d '{"modelId":"abc123","output":"dataset","datasetQuality":"196x1024","coverage":"hemisphere"}'
Create an account, upload your first model, and get a dataset in minutes. Every account includes 10 model uploads with shareable turntable videos. Plus 3 preview renders to try custom settings.