update

branch: master · yehua, 1 month ago · commit a59f8af733

6 changed files with 150 additions and 62 deletions
1. README - old-2.md (+121, -0)
2. README.md (+10, -54)
3. pytorch/compress_octree.py (+10, -8)
4. pytorch/models/5.0e-5/epoch_96.pth (+0, -0)
5. pytorch/models/5.0e-5/log.txt (+0, -0)
6. requirements-pytorch.txt (+9, -0)

README - old-2.md (+121, -0)

@@ -0,0 +1,121 @@
# pcc_geo_cnn_v2_yehua

keywords: point clouds, compression, neural networks, geometry, octree

pcc_geo_cnn_v2 builds on pcc_geo_cnn_v1 for lossy point cloud geometry compression. For performance optimization, the author proposes five ideas and verifies them experimentally.

## our contributions
1. benchmark tests on different PC files and different metrics (bpp, D1, D2, runtime), comparing TensorFlow with PyTorch.
2. a flow chart of the encoding process.
3. BD-BR and BD-PSNR calculation over G-PCC (octree), also compared with pcc_geo_cnn_v1.
4. a port from TensorFlow to PyTorch.

## file structure
root
└── [TensorFlow code](https://github.com/mauriceqch/pcc_geo_cnn_v2)
└── compression-1.3.zip: TensorFlow-compression module
└── pytorch: pytorch version code, models included
└── Improved Deep Point Cloud Geometry Compression.pdf: original paper
└── flowchart.vsdx: flow chart about encoding process
└── trainsets: ModelNet40_200_pc512_oct3_4k.zip
└── pc_error_d: gpcc metrics software
└── tmc3: gpcc compression software

## environment
1. tensorflow
refer to pcc_geo_cnn_v2-master/readme.md
2. pytorch
Python 3.6.9
torch 1.8.1
torchac 0.9.3

## command
1. tensorflow

Training specific model:

> python tr_train.py \
'/userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2-master/ModelNet40_200_pc512_oct3_4k/**/*.ply' \
/userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/models/c4-ws/1.00e-04 \
--resolution 64 --lmbda 1.00e-04 --alpha 0.75 --gamma 2.0 --batch_size 32 --model_config c3p

Experiment (compress, decompress, remap colors, compute metrics):

> python ev_experiment.py \
--output_dir /userhome/pcc_geo_cnn_v2/output_yh/ev_experiment \
--model_dir /userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/models/c4-ws/1.00e-04 \
--model_config c3p --opt_metrics d1_mse d2_mse --max_deltas inf \
--pc_name redandblack_vox10_1550 \
--pcerror_path /userhome/pcc_geo_cnn_v2/pc_error_d \
--pcerror_cfg_path /userhome/PCGCv2/PCGCv2-8.3/mpeg-pcc-tmc13-master/cfg/trisoup-predlift/lossy-geom-lossy-attrs/redandblack_vox10_1550/r04/pcerror.cfg \
--input_pc /userhome/pcc_geo_cnn_v2/MPEG_PCC_dataset/redandblack_vox10_1550.ply \
--input_norm /userhome/pcc_geo_cnn_v2/MPEG_PCC_dataset/redandblack_vox10_1550_n.ply

Compress/decompress (for D1 and D2 optimized point clouds):

> python compress_octree.py \
--input_files /userhome/pcc_geo_cnn_v2/MPEG_PCC_dataset/redandblack_vox10_1550.ply \
--input_normals /userhome/pcc_geo_cnn_v2/MPEG_PCC_dataset/redandblack_vox10_1550_n.ply \
--output_files /userhome/pcc_geo_cnn_v2/output_yh/compress_d1d2/redandblack_vox10_1550_d1.ply.bin \
/userhome/pcc_geo_cnn_v2/output_yh/compress_d1d2/redandblack_vox10_1550_d2.ply.bin \
--checkpoint_dir /userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/models/c4-ws/1.00e-04 \
--opt_metrics d1_mse d2_mse --resolution 1024 --model_config c3p --octree_level 4 \
--dec_files /userhome/pcc_geo_cnn_v2/output_yh/compress_d1d2/redandblack_vox10_1550_d1.ply.bin.ply \
/userhome/pcc_geo_cnn_v2/output_yh/compress_d1d2/redandblack_vox10_1550_d2.ply.bin.ply

Compress/decompress (for D1 optimized point cloud only):

> python compress_octree.py \
--input_files /userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/redandblack_vox10_1550/redandblack_vox10_1550.ply \
--output_files /userhome/pcc_geo_cnn_v2/output_yh/redandblack_vox10_1550_d1.ply.bin \
--checkpoint_dir /userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/models/c4-ws/1.00e-04 \
--opt_metrics d1_mse --resolution 1024 --model_config c3p --octree_level 4 \
--dec_files /userhome/pcc_geo_cnn_v2/output_yh/redandblack_vox10_1550_d1.ply.bin.ply
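A note on these flags (our own sanity check, not from the original readme): with --resolution 1024 and --octree_level 4, each octree leaf block has side 1024 / 2^4 = 64, matching the --resolution 64 used at training time.

```python
# Flag relationship as we understand the pipeline (assumption): the octree
# splits the bounding cube `octree_level` times per axis, and the learned
# codec then runs on each leaf block.
resolution, octree_level = 1024, 4
block_side = resolution // 2 ** octree_level
assert block_side == 64  # matches the training-time --resolution 64
```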

For more instructions, please refer to pcc_geo_cnn_v2-master/readme.md

2. pytorch

Training:
> python train_new.py

Compress/decompress:
> python compress_octree.py

## performance
We first calculate BD-PSNR and BD-BR for pcc_geo_cnn_v2 with the c4-ws model, which performs best among the available configurations, against G-PCC (octree); the results are shown below. For dense PCs with a bit depth of 10 or 11, pcc_geo_cnn_v2 outperforms octree coding, while for sparse or vox12 PCs it performs worse. The main reason is that the training set contains no PC data with similar distributions or geometry features. Compared with pcc_geo_cnn_v1, pcc_geo_cnn_v2 performs better across the board, dense or sparse, at every bit depth.
![BD_result](BDresult.png)
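For reference, a minimal sketch of how such a Bjøntegaard delta can be computed (the standard cubic fit in the log-rate domain; the function name and interface here are ours, not part of this repository):

```python
import numpy as np

def bd_psnr(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average PSNR gap (dB) over the overlapping log-rate interval,
    using cubic fits of PSNR as a function of log10(rate)."""
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    fit_a = np.polyfit(lr_a, psnr_anchor, 3)
    fit_t = np.polyfit(lr_t, psnr_test, 3)
    lo, hi = max(lr_a.min(), lr_t.min()), min(lr_a.max(), lr_t.max())
    int_a, int_t = np.polyint(fit_a), np.polyint(fit_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return avg_t - avg_a  # positive means the tested codec is better
```

BD-BR swaps the axes (log10(rate) fitted as a function of PSNR); the average log-rate difference delta then converts to a percentage as 100 * (10**delta - 1).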

We then ran a benchmark on PC files in both TensorFlow and PyTorch, including files not tested by the author; the results are shown below. The bpp of the PyTorch version is much smaller than that of TensorFlow, while D1 and D2 are only slightly lower, likely because we tuned the lmbda parameter for the best PyTorch performance (bpp here means bits of compressed bitstream per input point; see the sketch after the table). As for running time, PyTorch is faster in most cases thanks to faster 3D convolution, the two vox11 files being the exception.

|PC file|TF bpp|TF D1 (dB)|TF D2 (dB)|TF enc+dec time|PT bpp|PT D1 (dB)|PT D2 (dB)|PT enc+dec time|
|:--|:--|:--|:--|:--|:--|:--|:--|:--|
|queen_vox10_0200.ply|0.692|75.759|79.388|1469.03|0.385|74.798|77.93|1364.85|
|longdress_vox10_1300.ply|0.885|74.75|78.481|1896.43|0.483|73.518|76.717|1369.51|
|basketball_player_vox11_00000200.ply|0.872|82.321|86.291|3133.21|0.464|80.7|83.963|4989.24|
|loot_vox10_1200.ply|0.887|75.119|78.854|1683.22|0.482|73.859|77.037|1383.46|
|dancer_vox11_00000001.ply|0.848|82.376|86.314|2686.44|0.455|80.733|83.86|4377.09|
|soldier_vox10_0690.ply|0.915|74.908|78.689|2480.98|0.498|73.563|76.781|1740.23|
|sarah_vox9_0023.ply|0.891|66.564|70.048|1835.59|0.474|65.759|68.818|424.65|
|sarah_vox10_0023.ply|0.79|72.492|75.916|2548.6|0.434|71.838|74.84|1642.06|
|phil_vox9_0139.ply|0.892|66.164|69.615|1928.87|0.473|65.957|69.14|394.04|
|phil_vox10_0139.ply|0.807|72.197|75.622|2549.86|0.44|71.476|74.564|1727.18|
|redandblack_vox10_1550.ply|0.91|74.082|77.674|1390.07|0.496|72.904|76.05|951.54|
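For context, bpp conventionally means bits of the compressed bitstream per input point; a minimal sketch of that bookkeeping (the helper name is ours; pyntcloud is pinned in requirements-pytorch.txt):

```python
import os
from pyntcloud import PyntCloud

def bits_per_point(bitstream_path, ply_path):
    """bpp = total compressed bits / number of points in the input cloud."""
    n_points = len(PyntCloud.from_file(ply_path).points)
    return os.path.getsize(bitstream_path) * 8 / n_points
```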

We also plot D1-bpp and D2-bpp figures to show the performance differences between PC files, as below. The vox11 PCs clearly perform best (upper-left region of the figure), the vox10 PCs come second in the middle, and the sparse PCs perform worst toward the bottom (a small sketch for regenerating such a figure follows the images).
![d1](pcc_geo_cnn_v2_D1.png)
![d2](pcc_geo_cnn_v2_D2.png)
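A minimal sketch of how such a D1-bpp scatter could be regenerated from the table above (matplotlib is pinned in requirements-pytorch.txt; the three files shown are illustrative):

```python
import matplotlib.pyplot as plt

# (bpp, D1 PSNR) pairs taken from the benchmark table above.
files = ['queen_vox10_0200', 'dancer_vox11_00000001', 'sarah_vox9_0023']
tf_pts = [(0.692, 75.759), (0.848, 82.376), (0.891, 66.564)]
pt_pts = [(0.385, 74.798), (0.455, 80.733), (0.474, 65.759)]

for name, (x, y) in zip(files, tf_pts):
    plt.scatter(x, y, marker='o', label=f'TF {name}')
for name, (x, y) in zip(files, pt_pts):
    plt.scatter(x, y, marker='^', label=f'PT {name}')
plt.xlabel('bpp')
plt.ylabel('D1 PSNR (dB)')
plt.legend(fontsize=7)
plt.savefig('pcc_geo_cnn_v2_D1.png', dpi=150)
```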

## citation
@misc{quach2020improved,
  title={Improved Deep Point Cloud Geometry Compression},
  author={Maurice Quach and Giuseppe Valenzise and Frederic Dufaux},
  year={2020},
  eprint={2006.09043},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}


## contributors
name: Ye Hua
email: yeh@pcl.ac.cn

README.md (+10, -54)

@@ -21,65 +21,21 @@ root
 └── tmc3: gpcc compression software
 
 ## environment
-1. tensorflow
-refer to pcc_geo_cnn_v2-master/readme.md
-2. pytorch
-Python 3.6.9
-torch 1.8.1
-torchac 0.9.3
+1. pytorch
+- ubuntu 18.04
+- cuda V10.2.89
+- python 3.6.9
+- refer to requirements-pytorch.txt
 
 ## command
-1. tensorflow
-
-Training specific model:
-
-> python tr_train.py
-'/userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2-master/ModelNet40_200_pc512_oct3_4k/**/*.ply'
-/userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/models/c4-ws/1.00e-04
---resolution 64 --lmbda 1.00e-04 --alpha 0.75 --gamma 2.0 --batch_size 32 --model_config c3p
-
-Experiment (compress, decompress, remap colors, compute metrics):
-
-> python ev_experiment.py \
---output_dir /userhome/pcc_geo_cnn_v2/output_yh/ev_experiment \
---model_dir /userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/models/c4-ws/1.00e-04 \
---model_config c3p --opt_metrics d1_mse d2_mse --max_deltas inf \
---pc_name redandblack_vox10_1550 \
---pcerror_path /userhome/pcc_geo_cnn_v2/pc_error_d \
---pcerror_cfg_path /userhome/PCGCv2/PCGCv2-8.3/mpeg-pcc-tmc13-master/cfg/trisoup-predlift/lossy-geom-lossy-attrs/redandblack_vox10_1550/r04/pcerror.cfg \
---input_pc /userhome/pcc_geo_cnn_v2/MPEG_PCC_dataset/redandblack_vox10_1550.ply \
---input_norm /userhome/pcc_geo_cnn_v2/MPEG_PCC_dataset/redandblack_vox10_1550_n.ply
-
-Compress/decompress (for D1 and D2 optimized point clouds):
-
-> python compress_octree.py \
---input_files /userhome/pcc_geo_cnn_v2/MPEG_PCC_dataset/redandblack_vox10_1550.ply \
---input_normals /userhome/pcc_geo_cnn_v2/MPEG_PCC_dataset/redandblack_vox10_1550_n.ply \
---output_files /userhome/pcc_geo_cnn_v2/output_yh/compress_d1d2/redandblack_vox10_1550_d1.ply.bin \
-/userhome/pcc_geo_cnn_v2/output_yh/compress_d1d2/redandblack_vox10_1550_d2.ply.bin \
---checkpoint_dir /userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/models/c4-ws/1.00e-04 \
---opt_metrics d1_mse d2_mse --resolution 1024 --model_config c3p --octree_level 4 \
---dec_files /userhome/pcc_geo_cnn_v2/output_yh/compress_d1d2/redandblack_vox10_1550_d1.ply.bin.ply \
-/userhome/pcc_geo_cnn_v2/output_yh/compress_d1d2/redandblack_vox10_1550_d2.ply.bin.ply
-
-Compress/decompress (for D1 optimized point cloud only):
-
-> python compress_octree.py \
---input_files /userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/redandblack_vox10_1550/redandblack_vox10_1550.ply \
---output_files /userhome/pcc_geo_cnn_v2/output_yh/redandblack_vox10_1550_d1.ply.bin \
---checkpoint_dir /userhome/pcc_geo_cnn_v2/pcc_geo_cnn_v2/models/c4-ws/1.00e-04 \
---opt_metrics d1_mse --resolution 1024 --model_config c3p --octree_level 4 \
---dec_files /userhome/pcc_geo_cnn_v2/output_yh/redandblack_vox10_1550_d1.ply.bin.ply
-
-For more instructions, please refer to pcc_geo_cnn_v2-master/readme.md
-
-2. pytorch
+1. pytorch
+> cd pytorch
 
 Training:
->python train_new.py
+> python train_new.py
 
-Compress/decompress:
->python compress_octree.py
+Compress/Decompress:
+> python compress_octree.py --input_files "/userhome/PCGCv1/pytorch_eval/28_airplane_0270.ply" --output_files '28_airplane_0270.ply.bin' --input_normals "/userhome/PCGCv1/pytorch_eval/28_airplane_0270.ply" --dec_files '28_airplane_0270.ply.bin.ply' --checkpoint_dir 'models/5.0e-5/epoch_96.pth'
 
 ## performance
 We first calculate BD-PSNR and BD-BR for pcc_geo_cnn_v2 with the c4-ws model, which performs best among the available configurations, against G-PCC (octree); the results are shown below. For dense PCs with a bit depth of 10 or 11, pcc_geo_cnn_v2 outperforms octree coding, while for sparse or vox12 PCs it performs worse. The main reason is that the training set contains no PC data with similar distributions or geometry features. Compared with pcc_geo_cnn_v1, pcc_geo_cnn_v2 performs better across the board, dense or sparse, at every bit depth.

pytorch/compress_octree.py (+10, -8)

@@ -144,6 +144,8 @@ def compress_blocks(model,x_shape, blocks, binstr, points, resolution, level, wi
     return data_list, metadata, debug_t_list
 
 def compress():
+    args_resolution = 1024
+    args.octree_level = 4
     if 'vox10' in args.input_files[0]:
         args_resolution = 1024
         args.octree_level = 4
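Our reading of this hunk (an assumption, since only the hunk is visible): the resolution and octree level are now initialized unconditionally, so inputs whose filenames carry no voxN tag still get valid settings before the per-file overrides. A hypothetical sketch of the intent:

```python
def pick_settings(input_file, resolution=1024, octree_level=4):
    """Hypothetical helper (ours): defaults apply unless the
    filename carries a recognized voxN tag."""
    if 'vox10' in input_file:
        resolution, octree_level = 1024, 4  # 2^10 grid; level 4 gives 64^3 blocks
    return resolution, octree_level

print(pick_settings('redandblack_vox10_1550.ply'))  # (1024, 4)
```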
@@ -268,16 +270,16 @@ if __name__ == '__main__':
         formatter_class=argparse.ArgumentDefaultsHelpFormatter)
 
     parser.add_argument(
-        '--input_files', default=['Sarah_vox9_0023.ply'],
+        '--input_files', default='Sarah_vox9_0023.ply',
         help='Input files.')
     parser.add_argument(
-        '--output_files', default=['Sarah_vox9_0023_d1.ply.bin'],  # ,'Sarah_vox9_0023_d2.ply.bin'
+        '--output_files', default='Sarah_vox9_0023_d1.ply.bin',  # ,'Sarah_vox9_0023_d2.ply.bin'
         help='Output files. If input normals are provided, specify two output files per input file.')
     parser.add_argument(
-        '--input_normals', default=['Sarah_vox9_0023_n.ply'],
+        '--input_normals', default='Sarah_vox9_0023_n.ply',
         help='Input normals. If provided, two output paths are needed for each input file for D1 and D2 optimization.')
     parser.add_argument(
-        '--dec_files', default=['Sarah_vox9_0023_d1.ply.bin.ply'],  # ,'Sarah_vox9_0023_d2.ply.bin.ply'
+        '--dec_files', default='Sarah_vox9_0023_d1.ply.bin.ply',  # ,'Sarah_vox9_0023_d2.ply.bin.ply'
         help='Decoded files. Allows compression/decompression in a single execution. If input normals are provided, '
              + 'specify two decoded files per input file.')  # this has
     parser.add_argument(
@@ -318,8 +320,8 @@ if __name__ == '__main__':
         help="lower bound of scale. 1e-5 or 1e-9")
 
     args = parser.parse_args()
-    # args.input_files = [args.input_files]
-    # args.output_files = [args.output_files]
-    # args.input_normals = [args.input_normals]
-    # args.dec_files = [args.dec_files]
+    args.input_files = [args.input_files]
+    args.output_files = [args.output_files]
+    args.input_normals = [args.input_normals]
+    args.dec_files = [args.dec_files]
     compress()
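Our reading of these two hunks (an assumption, not stated in the commit message): the CLI defaults changed from one-element lists to plain strings, so the parsed values are re-wrapped into lists before compress() iterates over them; left unwrapped, iterating a bare string would loop over its characters. A minimal demonstration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--input_files', default='Sarah_vox9_0023.ply')
args = parser.parse_args([])

args.input_files = [args.input_files]  # str -> [str]: restores list semantics
for f in args.input_files:             # iterates file paths, not characters
    print(f)                           # Sarah_vox9_0023.ply
```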

pytorch/models/5.0e-5-最好的模型/epoch_96.pth → pytorch/models/5.0e-5/epoch_96.pth (renamed; "最好的模型" means "best model")

pytorch/models/5.0e-5-最好的模型/log.txt → pytorch/models/5.0e-5/log.txt (renamed)

requirements-pytorch.txt (+9, -0)

@@ -0,0 +1,9 @@
h5py==2.10.0
matplotlib==3.1.1
numpy==1.17.2
open3d==0.11.2
pandas==0.24.2
pyntcloud==0.1.4
torch==1.8.1
torchac==0.9.3
tqdm==4.38.0
