DX.GL

Multi-View Datasets
for NeRF, 3DGS & 3D Reconstruction

Generate calibrated multi-view training data from any GLB model. 100–400 views with RGB, depth, normals, masks, point clouds, and camera poses. No GPU required.

10 model uploads with shareable turntable videos. No credit card required.

See the output quality

Browse all datasets →

Dataset → 3D Gaussian Splat

This interactive 3DGS was trained from a DX.GL dataset in 10 minutes. Drag to orbit.

10 CC0 models from Polyhaven · Trained with nerfstudio splatfacto · View collection · Use ← → to browse

Download on HuggingFace →

What's in Every Dataset

Each dataset is a ZIP containing calibrated multi-view renders with full modality coverage.

RGB Views

100–400 PNG images with transparent backgrounds. Fibonacci hemisphere or full sphere sampling for uniform coverage.
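Fibonacci (golden-angle) sampling is what gives the views their near-uniform spacing. A minimal sketch of how such camera positions can be generated on a hemisphere — an illustration of the technique, not DX.GL's exact implementation:

```python
import math

def fibonacci_hemisphere(n, radius=1.0):
    """Place n points nearly uniformly on the upper hemisphere
    using the golden-angle (Fibonacci) spiral."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad
    points = []
    for i in range(n):
        z = (i + 0.5) / n           # height in (0, 1) -> upper hemisphere
        r = math.sqrt(1.0 - z * z)  # radius of the horizontal circle at z
        theta = golden_angle * i
        points.append((radius * r * math.cos(theta),
                       radius * r * math.sin(theta),
                       radius * z))
    return points

cams = fibonacci_hemisphere(100)
```

Letting `z` run over (0, 1) covers a hemisphere; letting it run over (−1, 1) covers the full sphere.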

Depth Maps

Both 8-bit and 16-bit grayscale depth maps. Tight near/far planes computed from actual model geometry — not screen-space approximations.
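Because the near/far planes ship in transforms.json (see Camera Poses below), 16-bit depth values can be mapped back to metric depth. This sketch assumes a linear encoding over [depth_near, depth_far] — verify the actual encoding against the dataset documentation before relying on it:

```python
def decode_depth16(value, depth_near, depth_far, max_value=65535):
    """Map a raw 16-bit depth sample back to metric depth.
    Assumes a linear encoding over [depth_near, depth_far]
    (an assumption -- check your dataset's conventions)."""
    t = value / max_value
    return depth_near + t * (depth_far - depth_near)

d = decode_depth16(32768, depth_near=0.5, depth_far=2.5)
```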

Normal Maps

Per-pixel surface normal maps in camera space. Useful for surface reconstruction, relighting, and material estimation.
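Normal maps are conventionally stored as RGB with each channel mapping [0, 255] to [−1, 1]. A sketch of decoding one pixel back to a unit vector, assuming that common encoding (check the dataset docs for the exact convention):

```python
def decode_normal(r, g, b):
    """Decode an 8-bit RGB normal-map pixel to a unit vector.
    Assumes the common per-channel mapping [0, 255] -> [-1, 1];
    re-normalizes to undo quantization error."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = sum(v * v for v in n) ** 0.5
    return [v / length for v in n]

nx, ny, nz = decode_normal(128, 128, 255)  # a pixel facing the camera
```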

Binary Masks

Clean object/background segmentation masks. Transparent backgrounds ensure zero contamination in training data.
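With transparent backgrounds, a binary mask is essentially a thresholded alpha channel. A trivial sketch of that relationship (threshold value illustrative):

```python
def mask_from_alpha(alpha_row, threshold=128):
    """Binarize one row of an 8-bit alpha channel:
    255 where the object is present, 0 for background."""
    return [255 if a >= threshold else 0 for a in alpha_row]

row = mask_from_alpha([0, 10, 200, 255])
```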

Point Cloud

PLY point cloud generated alongside views. Ready for initialization or evaluation in your reconstruction pipeline.

Camera Poses

nerfstudio-compatible transforms.json with per-frame intrinsics, extrinsics, and depth_near/depth_far. Works with nerfacto, splatfacto, and custom loaders.
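In the nerfstudio format, transforms.json carries intrinsics plus a 4×4 camera-to-world matrix per frame; depth_near/depth_far are the per-frame extras mentioned above. A minimal parsing sketch over a dict as produced by json.load (field values below are illustrative):

```python
def parse_frames(meta):
    """Extract (file_path, 4x4 c2w matrix, near, far) per frame
    from a nerfstudio-style transforms dict."""
    out = []
    for fr in meta["frames"]:
        out.append((fr["file_path"],
                    fr["transform_matrix"],   # 4x4 row-major nested list
                    fr.get("depth_near"),
                    fr.get("depth_far")))
    return out

# A minimal example in the shape the text describes (values illustrative):
example = {
    "fl_x": 1111.0, "fl_y": 1111.0, "cx": 512.0, "cy": 512.0,
    "frames": [{
        "file_path": "images/frame_00000.png",
        "transform_matrix": [[1, 0, 0, 0], [0, 1, 0, 0],
                             [0, 0, 1, 2], [0, 0, 0, 1]],
        "depth_near": 0.5, "depth_far": 3.5,
    }],
}
frames = parse_frames(example)
```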

Quality Tiers

Choose the tier that fits your research. All tiers include every modality.

Tier        Views   Resolution     Credits
100×800     100     800 × 800      1
196×1024    196     1024 × 1024    4
400×2048    400     2048 × 2048    16

Credits: $39 for 1 · $299 for 10 · $1,999 for 100. All prices USD, excl. VAT.
Every tier includes RGB + depth (8+16-bit) + normals + masks + PLY + transforms.json.

Why DX.GL

Skip the Blender scripts and GPU provisioning. Focus on your research.

No GPU Required

Upload a GLB, get a dataset. No Blender, no CUDA setup, no local rendering. We handle the compute.

Consistent Quality

PBR rendering on controlled GPU hardware. Same lighting, same camera model, same output quality — every time.

Production-Grade Depth

8-bit and 16-bit depth maps with tight near/far planes from actual model geometry. Not screen-space approximations.

Calibrated Cameras

Fibonacci hemisphere or sphere sampling. nerfstudio transforms.json with intrinsics + extrinsics per frame.

Batch at Scale

Render datasets for hundreds of models via REST API. No manual Blender scripting or GPU provisioning.

Transparent Backgrounds

RGBA output with no background contamination. Clean masks, clean depth — your training data stays clean.

How it Works

Three steps from 3D model to training data.

1. Upload

Upload a GLB via drag-and-drop, URL, or API. Files up to 100 MB. OBJ scans packaged as ZIP archives are converted automatically.

2. Choose Tier

Select views (100, 196, or 400), resolution (800–2048), and coverage (hemisphere or full sphere). Click render.

3. Download ZIP

Get a ZIP with images/, depth/, depth_16bit/, normals/, masks/, points3D.ply, transforms.json, and overview.webp.
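A quick sanity check that a downloaded archive contains everything listed above — entry names taken directly from the text; the in-memory ZIP here just exercises the check:

```python
import io
import zipfile

EXPECTED = ["images/", "depth/", "depth_16bit/", "normals/", "masks/",
            "points3D.ply", "transforms.json", "overview.webp"]

def missing_entries(zip_bytes):
    """Return the expected dataset entries absent from the archive."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = zf.namelist()
    def present(entry):
        return any(n == entry or n.startswith(entry) for n in names)
    return [e for e in EXPECTED if not present(e)]

# Build a tiny in-memory archive to exercise the check:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for entry in EXPECTED:
        name = entry + "x.png" if entry.endswith("/") else entry
        zf.writestr(name, b"")
result = missing_entries(buf.getvalue())
```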

Automate with the API

Render datasets programmatically for batch pipelines.

# Upload a model
curl -X POST https://dx.gl/v1/models \
  -H "Authorization: Bearer dxgl_sk_..." \
  -F "[email protected]"

# Render a dataset
curl -X POST https://dx.gl/v1/renders \
  -H "Authorization: Bearer dxgl_sk_..." \
  -H "Content-Type: application/json" \
  -d '{"modelId":"abc123","output":"dataset","datasetQuality":"196x1024","coverage":"hemisphere"}'
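For batch pipelines, the render request body can be built per model and posted with any HTTP client. A sketch that mirrors the field names from the curl example above (only the payload construction is shown; the HTTP call is left to your client):

```python
def render_payload(model_id, quality="196x1024", coverage="hemisphere"):
    """Build the JSON body for POST /v1/renders,
    mirroring the fields in the curl example."""
    return {"modelId": model_id,
            "output": "dataset",
            "datasetQuality": quality,
            "coverage": coverage}

# One payload per uploaded model (IDs illustrative):
payloads = [render_payload(mid) for mid in ["abc123", "def456"]]
```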

Full API documentation →

Start Generating Datasets

Create an account, upload your first model, and get a dataset in minutes. Every account includes 10 model uploads with shareable turntable videos. Plus 3 preview renders to try custom settings.