We follow the procedure in votenet.
Download ScanNet v2 data HERE. Link or move the `scans` folder into this directory. If you are performing segmentation tasks and want to upload the results to the official benchmark, also link or move the `scans_test` folder into this directory.
In this directory, extract point clouds and annotations by running `python batch_load_scannet_data.py`. Add the `--max_num_point 50000` flag if you only use the ScanNet data for the detection task; it downsamples each scene to fewer points.
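The effect of the `--max_num_point` flag can be sketched as a random subsampling of the point cloud. This is a minimal illustration, not the exact implementation in `batch_load_scannet_data.py`; the 6-column layout (xyz + rgb) is an assumption for the demo:

```python
import numpy as np

def downsample_points(points: np.ndarray, max_num_point: int) -> np.ndarray:
    """Randomly subsample a point cloud to at most max_num_point points."""
    if points.shape[0] <= max_num_point:
        return points
    choices = np.random.choice(points.shape[0], max_num_point, replace=False)
    return points[choices]

# Example: a synthetic scene with 80000 points and 6 features (xyz + rgb).
scene = np.random.rand(80000, 6)
print(downsample_points(scene, 50000).shape)  # (50000, 6)
```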
In this directory, extract RGB images with poses by running `python extract_posed_images.py`. This step is optional; skip it if you don't plan to use multi-view RGB images. Add `--max-images-per-scene -1` to disable the per-scene image limit. ScanNet scenes contain up to 5000+ frames each; after extraction, all the .jpg images require about 2 TB of disk space, while the recommended 300 images per scene require less than 100 GB. For example, the multi-view 3D detector ImVoxelNet samples 50 and 100 images per training and test scene, respectively.
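The extraction writes one pose `.txt` per frame next to each `.jpg` (plus a shared `intrinsic.txt`). A minimal parsing sketch, assuming the common ScanNet layout of a 4x4 matrix as whitespace-separated rows:

```python
import numpy as np
from io import StringIO

def load_matrix(text: str) -> np.ndarray:
    """Parse a whitespace-separated matrix, e.g. a 4x4 camera pose."""
    return np.loadtxt(StringIO(text))

# Synthetic stand-in for a posed_images/scenexxxx_xx/xxxxxx.txt file.
pose = load_matrix("""1 0 0 0.5
0 1 0 1.5
0 0 1 2.0
0 0 0 1""")
print(pose.shape)  # (4, 4)
```

In practice you would pass the file contents (`open(path).read()`) or use `np.loadtxt(path)` directly.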
Enter the project root directory and generate training data by running:

```shell
python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
```
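This step produces the `scannet_infos_*.pkl` files, which are ordinary pickled Python objects (in mmdetection3d, a collection of per-scene info dicts). A loading sketch using a synthetic stand-in file; the key names here are illustrative, not guaranteed to match the real schema:

```python
import os
import pickle
import tempfile

# Synthetic stand-in for scannet_infos_train.pkl.
infos = [{"point_cloud": {"lidar_idx": "scene0000_00"}}]
path = os.path.join(tempfile.gettempdir(), "scannet_infos_demo.pkl")
with open(path, "wb") as f:
    pickle.dump(infos, f)

# Loading works the same way for the real files.
with open(path, "rb") as f:
    loaded = pickle.load(f)
print(loaded[0]["point_cloud"]["lidar_idx"])  # scene0000_00
os.remove(path)
```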
The overall process can be achieved through the following script:

```shell
python batch_load_scannet_data.py
python extract_posed_images.py
cd ../..
python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
```
The directory structure after pre-processing should be as follows:
```
scannet
├── meta_data
├── batch_load_scannet_data.py
├── load_scannet_data.py
├── scannet_utils.py
├── README.md
├── scans
├── scans_test
├── scannet_instance_data
├── points
│   ├── xxxxx.bin
├── instance_mask
│   ├── xxxxx.bin
├── semantic_mask
│   ├── xxxxx.bin
├── seg_info
│   ├── train_label_weight.npy
│   ├── train_resampled_scene_idxs.npy
│   ├── val_label_weight.npy
│   ├── val_resampled_scene_idxs.npy
├── posed_images
│   ├── scenexxxx_xx
│   │   ├── xxxxxx.txt
│   │   ├── xxxxxx.jpg
│   │   ├── intrinsic.txt
├── scannet_infos_train.pkl
├── scannet_infos_val.pkl
├── scannet_infos_test.pkl
```
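The `points/xxxxx.bin` files are flat binary arrays. A round-trip sketch with synthetic data, assuming float32 values and 6 columns per point (xyz + rgb, the usual mmdetection3d ScanNet layout; verify against your export before relying on it):

```python
import os
import tempfile

import numpy as np

def load_points(path: str, dims: int = 6) -> np.ndarray:
    """Read a flat float32 .bin file and reshape to (N, dims) points."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, dims)

# Round-trip demo: write a synthetic point cloud, then load it back.
pts = np.random.rand(100, 6).astype(np.float32)
path = os.path.join(tempfile.gettempdir(), "demo_points.bin")
pts.tofile(path)
loaded = load_points(path)
print(loaded.shape)  # (100, 6)
os.remove(path)
```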