# MeshDiffusion: Score-based Generative 3D Mesh Modeling (ICLR 2023 Spotlight)
![MeshDiffusion Teaser](/assets/mesh_teaser.jpg)

This is the official implementation of MeshDiffusion (https://openreview.net/forum?id=0cpM2ApF9p6).

MeshDiffusion is a diffusion model for generating 3D meshes with a direct parametrization of deep marching tetrahedra (DMTet). Please refer to https://meshdiffusion.github.io for more details.
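Concretely, the DMTet parametrization assigns each vertex of a tetrahedral grid a signed-distance value and a small 3D deformation, and the diffusion model operates on these per-vertex parameters. A rough sketch of that state (shapes and names here are illustrative only, not the repo's actual data layout):

```python
# Illustrative sketch of a DMTet-style parametrization: each tet-grid vertex
# carries a scalar SDF value plus an xyz deformation offset. Not repo code.
import numpy as np

num_vertices = 5
sdf = np.random.randn(num_vertices).astype(np.float32)        # scalar SDF per vertex
deform = np.random.randn(num_vertices, 3).astype(np.float32)  # xyz offset per vertex
params = np.concatenate([sdf[:, None], deform], axis=1)       # (N, 4) diffused quantity
print(params.shape)
```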
![MeshDiffusion Pipeline](/assets/meshdiffusion_pipeline.jpg)
## Getting Started
### Requirements
- Python >= 3.8
- CUDA 11.6
- PyTorch >= 1.6
- PyTorch3D

Follow the instructions at https://github.com/NVlabs/nvdiffrec to install the requirements for nvdiffrec.

### Pretrained Models

Download the model checkpoints from https://drive.google.com/drive/folders/15IjbUM1tQf8gS0YsRqY5ZbMs-leJgoJ0?usp=sharing.

## Inference
### Unconditional Generation
Run the following:
```
python main_diffusion.py --config=$DIFFUSION_CONFIG --mode=uncond_gen \
--config.eval.eval_dir=$OUTPUT_PATH \
--config.eval.ckpt_path=$CKPT_PATH
```
Then run:
```
cd nvdiffrec
python eval.py --config $DMTET_CONFIG --sample-path $SAMPLE_PATH --deform-scale $DEFORM_SCALE [--angle-index $ANGLE_INDEX]
```

where `$SAMPLE_PATH` is the path to a generated sample (a `.npy` file in `$OUTPUT_PATH`), and `$DEFORM_SCALE` is the deformation scale of the tet vertices used for the DMTet dataset (we use 3.0 as the default for resolution 64; change the value for your own datasets). Set `$ANGLE_INDEX` to a number from 0 to 50 if images rendered from different angles are desired.
A mesh file (`.obj`), which can be viewed in tools such as MeshLab, will also be saved to the same folder.
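Before rendering, it can help to confirm that a sample file loads cleanly. This is only a generic sketch: the dummy array below stands in for a real sample, whose actual layout is specific to this codebase.

```python
# Generic sanity check (not repo code): verify a sample .npy file loads.
# The array written here is a stand-in; real samples from main_diffusion.py
# will have a codebase-specific shape.
import numpy as np

np.save("sample.npy", np.zeros((4, 64, 64, 64), dtype=np.float32))  # stand-in
sample = np.load("sample.npy")
print(sample.shape, sample.dtype)
```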
### Single-view Conditional Generation
First, fit a DMTet to a single view of a mesh:
```
cd nvdiffrec
python fit_singleview.py --config $DMTET_CONFIG --mesh-path $MESH_PATH --angle-ind $ANGLE_IND --out-dir $OUT_DIR --validate $VALIDATE
```

where `$ANGLE_IND` is an integer (0 to 50) controlling the z-axis rotation of the object. Set `$VALIDATE` to 1 if a visualization of the fitted DMTets is needed.

Then use the trained diffusion model to complete the occluded regions:
```
cd ..
python main_diffusion.py --mode=cond_gen --config=$DIFFUSION_CONFIG \
--config.eval.eval_dir=$EVAL_DIR \
--config.eval.ckpt_path=$CKPT_PATH \
--config.eval.partial_dmtet_path=$OUT_DIR/tets/dmtet.pt \
--config.eval.tet_path=$TET_PATH \
--config.eval.batch_size=$EVAL_BATCH_SIZE
```

Now store the completed meshes as `.obj` files in `$SAMPLE_PATH`:

```
cd nvdiffrec
python eval.py --config $DMTET_CONFIG --sample-path $SAMPLE_PATH --deform-scale $DEFORM_SCALE
```

Caution: the deformation scale must be identical for single-view fitting and for the diffusion model. Check both values before running conditional generation.
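One simple way to keep the two stages consistent is to define the scale once in the shell and pass the same variable to both commands (3.0 matches the resolution-64 default; adjust for your own dataset):

```shell
# Define the deformation scale once; pass this same variable to both
# fit_singleview.py and eval.py via --deform-scale "$DEFORM_SCALE".
export DEFORM_SCALE=3.0
echo "using deform scale: $DEFORM_SCALE"
```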
## Training
For ShapeNet, first create a list of paths to all ground-truth meshes and store it as a JSON file under `./nvdiffrec/data/shapenet_json`.
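A minimal sketch of that step (the function name, directory layout, and output filename below are assumptions, not repo conventions):

```python
# Hypothetical helper (not part of the repo): gather ground-truth mesh paths
# and store them as the JSON list expected under ./nvdiffrec/data/shapenet_json.
import glob
import json
import os

def save_mesh_list(shapenet_root, out_json):
    # ShapeNet meshes are typically .obj files nested a few directories deep.
    paths = sorted(glob.glob(os.path.join(shapenet_root, "**", "*.obj"),
                             recursive=True))
    os.makedirs(os.path.dirname(out_json) or ".", exist_ok=True)
    with open(out_json, "w") as f:
        json.dump(paths, f, indent=2)
    return paths
```

For example, `save_mesh_list("/data/ShapeNetCore.v2/03001627", "nvdiffrec/data/shapenet_json/chairs.json")` (illustrative paths).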
Then run the following:
```
cd nvdiffrec
python fit_dmtets.py --config $DMTET_CONFIG --out-dir $DMTET_DATA_PATH
```

Create a meta file listing the locations of all fitted DMTet grid files, for diffusion-model training:

```
cd ../metadata/
python save_meta.py --data_path $DMTET_DATA_PATH/tets --json_path $META_FILE
```
Train a diffusion model:
```
cd ..
python main_diffusion.py --mode=train --config=$DIFFUSION_CONFIG \
--config.data.meta_path=$META_FILE \
--config.data.filter_meta_path=$TRAIN_SPLIT_FILE
```

where `$TRAIN_SPLIT_FILE` is a JSON list of indices to include in the training set. Examples are provided in `metadata/train_split/`.
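Such a split file can be produced with a few lines of Python; this is only a sketch of the format described above (a JSON list of integer indices), and the filename is illustrative:

```python
# Hypothetical sketch: write a train-split file in the format described above,
# i.e. a JSON list of integer indices into the dmtet meta file.
import json

def save_train_split(indices, out_path):
    with open(out_path, "w") as f:
        json.dump(sorted(indices), f)

save_train_split(range(8), "train_split_example.json")  # indices 0..7
```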
## Texture Completion
Follow the instructions at https://github.com/TEXTurePaper/TEXTurePaper to create text-conditioned textures for the generated meshes.
## Citation
If you find our work useful for your research, please consider citing:
```
@InProceedings{Liu2023MeshDiffusion,
  title={MeshDiffusion: Score-based Generative 3D Mesh Modeling},
  author={Zhen Liu and Yao Feng and Michael J. Black and Derek Nowrouzezahrai and Liam Paull and Weiyang Liu},
  booktitle={International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=0cpM2ApF9p6}
}
```

## Acknowledgement

This repo is adapted from https://github.com/NVlabs/nvdiffrec and https://github.com/yang-song/score_sde_pytorch.