
update

master
yehua, 1 month ago
commit 2ef432cd87
5 changed files with 122 additions and 23 deletions
  1. README - old.md (+99 -0)
  2. README.md (+10 -19)
  3. pytorch/compress.py (+2 -2)
  4. pytorch/decompress.py (+2 -2)
  5. requirements-pytorch.txt (+9 -0)

README - old.md (+99 -0)

@@ -0,0 +1,99 @@
# pcc_geo_cnn_v1_yh

Keywords: lossy, point cloud, geometry compression, basic entropy coding model

pcc_geo_cnn_v1 is a lossy point cloud geometry compression method and one of the earliest deep-learning-based 3D entropy coding approaches. The network is simple and shallow, consisting of an analysis transform, an entropy bottleneck, and a synthesis transform. Its performance is only slightly better than the G-PCC octree codec.
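
As a rough sketch of that three-stage structure in PyTorch (a hypothetical illustration: the layer counts, kernel sizes, and the `num_filters=32` default taken from `pytorch/compress.py` are assumptions, not the repository's exact model definition):

```python
import torch
import torch.nn as nn

class AnalysisTransform(nn.Module):
    """Strided 3D convolutions: voxel occupancy grid -> compact latent volume."""
    def __init__(self, num_filters=32):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(1, num_filters, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv3d(num_filters, num_filters, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv3d(num_filters, num_filters, 5, stride=2, padding=2),
        )

    def forward(self, x):
        return self.layers(x)

class SynthesisTransform(nn.Module):
    """Transposed 3D convolutions: latent volume -> occupancy probabilities."""
    def __init__(self, num_filters=32):
        super().__init__()
        self.layers = nn.Sequential(
            nn.ConvTranspose3d(num_filters, num_filters, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose3d(num_filters, num_filters, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose3d(num_filters, 1, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, y):
        return self.layers(y)

# Round trip on a 64^3 occupancy grid (batch, channel, depth, height, width).
x = torch.zeros(1, 1, 64, 64, 64)
y = AnalysisTransform()(x)       # latent volume, 1 x 32 x 8 x 8 x 8
x_hat = SynthesisTransform()(y)  # reconstructed probabilities, 1 x 1 x 64 x 64 x 64
```

Between the two transforms, the latent is quantized and entropy-coded; in the PyTorch port the arithmetic coding step relies on torchac (listed in requirements-pytorch.txt).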

## our contributions
1. Transplant the code from TensorFlow to PyTorch.
2. Benchmark both the TensorFlow and PyTorch implementations and compare their performance.

## file structure
root
├── pytorch: PyTorch code, models included
├── [TensorFlow source code](https://github.com/mauriceqch/pcc_geo_cnn)
├── LEARNING CONVOLUTIONAL TRANSFORMS FOR LOSSY POINT CLOUD GEOMETRY.pdf: original paper
└── datasets: see ModelNet40_pc_64.zip in the dataset part

## environment
1. tensorflow
   - Python 3.6.9
   - tensorflow-gpu 1.13.1
2. pytorch
   - torch 1.8.1
   - torchac 0.9.3

## command
* tensorflow:

training:
>python train.py "/userhome/pcc_geo_cnn_v1/ModelNet40_pc_64/**/*.ply" /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/models/ModelNet40_pc_64_0001_test --resolution 64 --lmbda 0.0001 --batch_size 8

encode:
>python compress.py /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/data/sarah9/ "**/*.ply" /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/data/sarah9/msft_bin_00001 /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/models/ModelNet40_pc_64_0001_eval --resolution 512

decode:
>python decompress.py /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/data/sarah9/ "**/*.ply.bin" /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/data/sarah9/msft_dec_00001/ /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/models/ModelNet40_pc_64_0001_eval

* pytorch:

training:
>python train.py

encode:
>python compress.py

decode:
>python decompress.py
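
The `--resolution` argument in the commands above sets the side length of the voxel occupancy grid that each input cloud is quantized to before compression (64 for the ModelNet40 training data, 512 for the dense full-body test clouds). A minimal, hypothetical sketch of that voxelization step using `numpy` and `pyntcloud` from requirements-pytorch.txt (the helper name and normalization details are assumptions, not the repository's code):

```python
import numpy as np
from pyntcloud import PyntCloud

def voxelize_ply(path, resolution=64):
    """Quantize a .ply point cloud into a binary occupancy grid (assumed helper)."""
    points = PyntCloud.from_file(path).points[["x", "y", "z"]].values
    # Normalize coordinates into [0, resolution) and round to voxel indices.
    points = points - points.min(axis=0)
    scale = points.max()
    if scale > 0:
        points = points / scale * (resolution - 1)
    idx = np.unique(np.round(points).astype(np.int64), axis=0)
    grid = np.zeros((resolution, resolution, resolution), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid  # 1.0 marks an occupied voxel
```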

## performance
* Benchmark results for the TensorFlow and PyTorch implementations are listed below. For dense point clouds the method achieves a good compression ratio, while for sparse point clouds the performance is noticeably worse.
* Comparing the two implementations, the PyTorch port gives a lower bpp, but its D1 (p2point) and D2 (p2plane) PSNR values are also lower than TensorFlow's, so the two rate-distortion trade-offs are close to each other (a note on how bpp is computed follows Table 2).

Table 1. Test on TensorFlow

|file|bpp|mseF,PSNR (p2point)|mseF,PSNR (p2plane)|
|:--|:--|:--|:--|
|sarah_vox9_0023_n.ply|1.538|64.552|67.941|
|Phil_vox9_0139_n.ply|1.418|64.168|67.501|
|ricardo_vox9_0215_n.ply|1.775|64.852|68.22|
|david_vox9_0215_n.ply|1.496|64.857|68.25|
|andrew_vox9_0317_n.ply|1.43|64.262|67.521|
|28_airplane_0270_n.ply|5.137|58.029|61.357|
|3_lamp_0073_n.ply|4.464|60.014|64.062|
|average|2.465|62.962|66.407|

Table 2. Test on PyTorch

|file|bpp|mseF,PSNR (p2point)|mseF,PSNR (p2plane)|
|:--|:--|:--|:--|
|sarah_vox9_0023_n.ply|1.375|61.865|64.474|
|Phil_vox9_0139_n.ply|1.192|61.495|64.005|
|ricardo_vox9_0215_n.ply|1.851|62.234|64.797|
|david_vox9_0215_n.ply|1.402|62.233|64.895|
|andrew_vox9_0317_n.ply|1.416|61.718|64.174|
|28_airplane_0270_n.ply|4.433|54.361|56.293|
|3_lamp_0073_n.ply|4.099|55.646|58.536|
|average|2.253|59.936|62.453|
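
In these tables, bpp is bits per input point: the size of the compressed bitstream divided by the number of points in the original cloud. A small, hypothetical helper showing that bookkeeping (the file paths are placeholders):

```python
import os
from pyntcloud import PyntCloud

def bits_per_point(bin_path, ply_path):
    """Compressed size in bits divided by the number of points in the input cloud."""
    num_bits = os.path.getsize(bin_path) * 8
    num_points = len(PyntCloud.from_file(ply_path).points)
    return num_bits / num_points

# Example (placeholder paths):
# bits_per_point("output/28_airplane_0270.bin", "28_airplane_0270.ply")
```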

## citation
@inproceedings{DBLP:conf/icip/QuachVD19,
author = {Maurice Quach and
Giuseppe Valenzise and
Fr{\'{e}}d{\'{e}}ric Dufaux},
title = {Learning Convolutional Transforms for Lossy Point Cloud Geometry Compression},
booktitle = {2019 {IEEE} International Conference on Image Processing, {ICIP} 2019,
Taipei, Taiwan, September 22-25, 2019},
pages = {4320--4324},
publisher = {{IEEE}},
year = {2019},
url = {https://doi.org/10.1109/ICIP.2019.8803413},
doi = {10.1109/ICIP.2019.8803413},
timestamp = {Wed, 11 Dec 2019 16:30:23 +0100},
biburl = {https://dblp.org/rec/conf/icip/QuachVD19.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}

## contributors
name: Ye Hua
email: yeh@pcl.ac.cn

README.md (+10 -19)

@@ -16,35 +16,26 @@ root
└── datasets: see the ModelNet40_pc_64.zip in dataset part

## environment
1. tensorflow
Python 3.6.9
Tensorflow-gpu 1.13.1
2. pytorch
torch 1.8.1
torchac 0.9.3
1. pytorch
- ubuntu 18.04
- cuda V10.2.89
- python 3.6.9
- refer to requirements-pytorch.txt

## command
* tensorflow:

training:
>python train.py "/userhome/pcc_geo_cnn_v1/ModelNet40_pc_64/**/*.ply" /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/models/ModelNet40_pc_64_0001_test --resolution 64 --lmbda 0.0001 --batch_size 8

encode:
>python compress.py /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/data/sarah9/ "**/*.ply" /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/data/sarah9/msft_bin_00001 /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/models/ModelNet40_pc_64_0001_eval --resolution 512

decode:
>python decompress.py /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/data/sarah9/ "**/*.ply.bin" /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/data/sarah9/msft_dec_00001/ /userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/models/ModelNet40_pc_64_0001_eval

* pytorch:
> cd pytorch

training:
>python train.py

encode:
>python compress.py
>python compress.py --input_file "/userhome/PCGCv1/pytorch_eval/28_airplane_0270.ply"

You can get some test files [here](https://git.openi.org.cn/OpenPointCloud/PCC_benchmark_testsets).

decode:
>python decompress.py
>python decompress.py --input_file "output/28_airplane_0270.bin"

## performance
* Benchmark results for the TensorFlow and PyTorch implementations are listed below. For dense point clouds the method achieves a good compression ratio, while for sparse point clouds the performance is noticeably worse.


pytorch/compress.py (+2 -2)

@@ -37,10 +37,10 @@ if __name__ == '__main__':
'--input_file',type=str, default='28_airplane_0270.ply', # /userhome/dataset/paper_test/longdress_vox10_1300.ply
help='Input directory.')
parser.add_argument(
'--output_dir',type=str, default='/userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/pytorch/output/',
'--output_dir',type=str, default='output/',
help='Output directory.')
parser.add_argument(
"--init_ckpt", type=str, default='/userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/pytorch/models0/epoch_99_11860.pth', dest="init_ckpt",
"--init_ckpt", type=str, default='models/epoch_1_273.pth', dest="init_ckpt",
help='initial checkpoint directory.')
parser.add_argument(
'--num_filters', type=int, default=32,


pytorch/decompress.py (+2 -2)

@@ -50,10 +50,10 @@ if __name__ == '__main__':
'--input_file',type=str, default='/userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/pytorch/output/28_airplane_0270.bin',
help='Input directory.')
parser.add_argument(
'--output_dir',type=str, default='/userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/pytorch/output/',
'--output_dir',type=str, default='output/',
help='Output directory.')
parser.add_argument(
"--init_ckpt", type=str, default='/userhome/pcc_geo_cnn_v1/pcc_geo_cnn_master/pytorch/models0/epoch_99_11860.pth', dest="init_ckpt",
"--init_ckpt", type=str, default='models/epoch_1_273.pth', dest="init_ckpt",
help='initial checkpoint directory.')
parser.add_argument(
'--num_filters', type=int, default=32,


requirements-pytorch.txt (+9 -0)

@@ -0,0 +1,9 @@
h5py==2.10.0
matplotlib==3.1.1
numpy==1.17.2
open3d==0.11.2
pandas==0.24.2
pyntcloud==0.1.4
torch==1.8.1
torchac==0.9.3
tqdm==4.38.0
