Keywords: end-to-end, video compression, deep learning

This repository implements DVC, the first end-to-end deep video compression model that jointly optimizes all components of the compression pipeline. The paper was published at CVPR 2019; see the citation below.
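At the heart of "jointly optimizing all components" is a single rate-distortion objective, L = lambda * D + R, that trades reconstruction quality against bitrate; the four `train_*.py` scripts differ only in the lambda weight. The sketch below illustrates this objective in plain Python (a simplification for exposition, not the actual training code in `net.py`):

```python
# Rate-distortion objective optimized by the train_{lambda}.py scripts:
#   L = lambda * D + R
# D is the MSE between the original and reconstructed frame; R is the
# estimated bits per pixel (bpp) spent on the motion and residual streams.

def rd_loss(mse_distortion: float, bpp_mv: float, bpp_residual: float,
            lam: float) -> float:
    """Rate-distortion loss: a larger lambda weights quality over bitrate."""
    rate = bpp_mv + bpp_residual  # total estimated bits per pixel
    return lam * mse_distortion + rate

# The four training scripts differ only in this trade-off weight:
for lam in (256, 512, 1024, 2048):
    print(lam, rd_loss(mse_distortion=1e-3, bpp_mv=0.01,
                       bpp_residual=0.05, lam=lam))
```

Higher lambda models (1024, 2048) therefore spend more bits for higher PSNR, which matches the results table below.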
source
├── augmentation.py
├── dataset.py
├── drawuvg.py
├── flow_pretrain_np
├── get_trainfiles.py
├── main.py
├── net.py
├── run_1024.py #run training for lambda 1024
├── run_2048.py
├── run_256.py
├── run_512.py
├── run_test.py #run tests for all lambdas on UVG dataset
├── snapshot
│ ├── best_1024.ckpt #pretrained model for lambda 1024
│ ├── best_2048.ckpt
│ ├── best_256.ckpt
│ ├── best_512.ckpt
│   ├── train_1024.log # train log for lambda 1024
│   ├── train_2048.log
│   ├── train_256.log
│   └── train_512.log
├── subnet
│ ├── GDN.py
│ ├── __init__.py
│ ├── analysis.py
│ ├── analysis_mv.py
│ ├── analysis_prior.py
│ ├── basics.py
│ ├── bitEstimator.py
│ ├── endecoder.py
│ ├── flowlib.py
│ ├── ms_ssim_mindspore.py
│ ├── synthesis.py
│ ├── synthesis_mv.py
│ └── synthesis_prior.py
├── test-yh.py
├── test.py
├── train_1024.py #training for lambda 1024
├── train_2048.py
├── train_256.py
└── train_512.py
python train_256.py -d {vimeo90k_dir}/vimeo_septuplet -l train.log --epochs 100  # training for lambda 256
or
python run_256.py  # continuous training that restarts the process to avoid memory growth
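The restart-based idea behind `run_256.py` can be sketched as follows: launch `train_256.py` as a fresh process per chunk of epochs, so any memory leaked within one run is reclaimed when that process exits. The flags mirror the command above, but the chunk size and the assumption that training resumes from the latest checkpoint in `snapshot/` are illustrative, not taken from the script itself:

```python
import subprocess
import sys

def build_train_cmd(script, data_dir, log_file, epochs):
    """Command line for one bounded training run (flags mirror the README)."""
    return [sys.executable, script,
            "-d", data_dir, "-l", log_file, "--epochs", str(epochs)]

def run_in_chunks(script="train_256.py",
                  data_dir="{vimeo90k_dir}/vimeo_septuplet",  # placeholder path
                  total_epochs=100, epochs_per_run=10):
    """Relaunch training in a fresh process every few epochs so leaked
    memory is released on process exit. Assumes the training script
    resumes from the newest checkpoint in snapshot/ (an assumption,
    not verified against the code)."""
    for _ in range(0, total_epochs, epochs_per_run):
        cmd = build_train_cmd(script, data_dir, "train.log", epochs_per_run)
        subprocess.run(cmd, check=True)  # raises if a training run fails
```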
python test-yh.py -C DVC_mindspore_1024_YachtRide.csv -L 1024 -F YachtRide -d /userhome/DVC/PyTorch/data/UVG/images/ --ckpt_path /userhome/DVC/MindSpore/snapshot/best_1024.ckpt  # test one sequence for lambda 1024
or
python run_test.py  # run tests for all lambdas on the full UVG dataset
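The test scripts report bitrate as bpp and quality as PSNR, the two metrics in the results table below. For reference, these are conventionally computed as in this generic sketch (not the code from `test.py`):

```python
import math

def psnr_from_mse(mse: float, max_val: float = 255.0) -> float:
    """PSNR in dB for 8-bit frames: 10 * log10(MAX^2 / MSE).
    Higher is better; mse must be > 0."""
    return 10.0 * math.log10(max_val ** 2 / mse)

def bits_per_pixel(total_bits: int, width: int, height: int) -> float:
    """bpp: bitstream size in bits divided by the pixel count of the frame."""
    return total_bits / (width * height)
```

For example, a 1920x1080 UVG frame coded with 122,342 bits would come out near 0.059 bpp, the table's lambda-256 operating point.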
bpp | PSNR (dB) | MS-SSIM | run time | GPU memory (MiB) | lambda
---|---|---|---|---|---
0.059 | 33.617 | 0.920 | 3277.053 | 8098 | 256
0.092 | 34.753 | 0.936 | 3464.094 | 8098 | 512
0.158 | 36.350 | 0.946 | 5020.429 | 8098 | 1024
0.266 | 37.272 | 0.956 | 5607.829 | 8098 | 2048
@inproceedings{lu2019dvc,
  title={{DVC}: An end-to-end deep video compression framework},
author={Lu, Guo and Ouyang, Wanli and Xu, Dong and Zhang, Xiaoyun and Cai, Chunlei and Gao, Zhiyong},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={11006--11015},
year={2019}
}
Contributors: Ye Hua, Zhang Yongchi
Email: yeh@pcl.ac.cn