PP-TinyPose is a real-time keypoint-detection model optimized by PaddleDetection for mobile devices, capable of smoothly running multi-person pose estimation on mobile hardware. Together with PicoDet, PaddleDetection's self-developed lightweight detection model, we also provide a dedicated lightweight pedestrian-detection model. TinyPose has dependency requirements for its runtime environment, with additional dependencies required for mobile-side deployment.
Model | Input Size | AP (COCO Val) | Single-Person Latency (FP32) | Single-Person Latency (FP16) | Config | Model Weights | Inference Model | Paddle-Lite Model (FP32) | Paddle-Lite Model (FP16) |
---|---|---|---|---|---|---|---|---|---|
PP-TinyPose | 128*96 | 58.1 | 4.57 ms | 3.27 ms | Config | Model | Inference Model | Lite Model | Lite Model (FP16) |
PP-TinyPose | 256*192 | 68.8 | 14.07 ms | 8.33 ms | Config | Model | Inference Model | Lite Model | Lite Model (FP16) |
Model | Input Size | mAP (COCO Val) | Average Latency (FP32) | Average Latency (FP16) | Config | Model Weights | Inference Model | Paddle-Lite Model (FP32) | Paddle-Lite Model (FP16) |
---|---|---|---|---|---|---|---|---|---|
PicoDet-S-Pedestrian | 192*192 | 29.0 | 4.30 ms | 2.37 ms | Config | Model | Inference Model | Lite Model | Lite Model (FP16) |
PicoDet-S-Pedestrian | 320*320 | 38.5 | 10.26 ms | 6.30 ms | Config | Model | Inference Model | Lite Model | Lite Model (FP16) |
Note
The models are trained on COCO train2017 plus the AI Challenger trainset. The pose model is evaluated on COCO person keypoints val2017, and the pedestrian-detection model on COCO instances val2017.

Pedestrian Detection Model | Pose Model | mAP (COCO Val) | Single-Person Latency (FP32) | Single-Person Latency (FP16) | 6-Person Latency (FP32) | 6-Person Latency (FP16) |
---|---|---|---|---|---|---|
PicoDet-S-Pedestrian-192*192 | PP-TinyPose-128*96 | 36.7 | 11.72 ms | 8.18 ms | 36.22 ms | 26.33 ms |
PicoDet-S-Pedestrian-320*320 | PP-TinyPose-128*96 | 44.2 | 19.45 ms | 14.41 ms | 44.0 ms | 32.57 ms |
Note
Beyond COCO, the training sets of both the pose model and the pedestrian-detection model were extended with the AI Challenger dataset. The keypoint definitions of the two datasets are as follows:
COCO keypoint Description:
0: "Nose",
1: "Left Eye",
2: "Right Eye",
3: "Left Ear",
4: "Right Ear",
5: "Left Shoulder",
6: "Right Shoulder",
7: "Left Elbow",
8: "Right Elbow",
9: "Left Wrist",
10: "Right Wrist",
11: "Left Hip",
12: "Right Hip",
13: "Left Knee",
14: "Right Knee",
15: "Left Ankle",
16: "Right Ankle"
AI Challenger Description:
0: "Right Shoulder",
1: "Right Elbow",
2: "Right Wrist",
3: "Left Shoulder",
4: "Left Elbow",
5: "Left Wrist",
6: "Right Hip",
7: "Right Knee",
8: "Right Ankle",
9: "Left Hip",
10: "Left Knee",
11: "Left Ankle",
12: "Head top",
13: "Neck"
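The relation between the two definitions can be captured by an index map. Below is a minimal Python sketch (the helper name is illustrative; the mapping is derived from the two lists above): AI Challenger's Head top and Neck have no COCO counterpart and are dropped, while COCO's face points are left unannotated.

```python
# Index map from AI Challenger keypoints to COCO keypoints, per the lists above.
# AIC 12 (Head top) and 13 (Neck) have no COCO counterpart and are dropped;
# COCO face points 0-4 stay unannotated (visibility flag 0).
AIC_TO_COCO = {0: 6, 1: 8, 2: 10, 3: 5, 4: 7, 5: 9,
               6: 12, 7: 14, 8: 16, 9: 11, 10: 13, 11: 15}

def aic_to_coco_keypoints(aic_kpts):
    """aic_kpts: list of 14 (x, y, v) triples in AI Challenger order.
    Returns 17 (x, y, v) triples in COCO order; unmapped points get v=0."""
    coco = [(0.0, 0.0, 0)] * 17
    for aic_idx, coco_idx in AIC_TO_COCO.items():
        coco[coco_idx] = aic_kpts[aic_idx]
    return coco
```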
Since the two datasets annotate keypoints differently, we aligned their annotations into a single COCO-style format. The main processing steps are:

  * Reorder the AI Challenger keypoints to match the COCO order, and unify the annotated/visible flags;
  * Discard the keypoints unique to AI Challenger;
  * Mark the COCO-only keypoints as unannotated in the AI Challenger data;
  * Rearrange image_id and annotation id to avoid conflicts.

With the merged COCO-format annotations, run model training:

# Pose model
python3 -m paddle.distributed.launch tools/train.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml
# Pedestrian detection model
python3 -m paddle.distributed.launch tools/train.py -c configs/picodet/application/pedestrian_detection/picodet_s_320_pedestrian.yml
python3 tools/export_model.py -c configs/picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml --output_dir=output_inference -o weights=output/picodet_s_192_pedestrian/model_final
python3 tools/export_model.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml --output_dir=output_inference -o weights=output/tinypose_128x96/model_final
The exported model directory looks like:
picodet_s_192_pedestrian
├── infer_cfg.yml
├── model.pdiparams
├── model.pdiparams.info
└── model.pdmodel
Alternatively, you can directly download the corresponding inference models provided in the model zoo above to obtain the pedestrian-detection and pose inference models; just unzip them.
# Predict a single image
python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_file={your image file} --device=GPU
# Predict multiple images
python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --image_dir={dir of image file} --device=GPU
# Predict a video
python3 deploy/python/det_keypoint_unite_infer.py --det_model_dir=output_inference/picodet_s_320_pedestrian --keypoint_model_dir=output_inference/tinypose_128x96 --video_file={your video file} --device=GPU
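The commands above run a top-down pipeline: the detector proposes person boxes, each box is cropped and resized to the pose model's input size, and keypoints are predicted per crop. A minimal, dependency-light Python sketch of that flow (the two model functions are stand-ins for illustration, not PaddleDetection APIs):

```python
import numpy as np

POSE_INPUT = (128, 96)  # (h, w) input of tinypose_128x96

def detect_people(image):
    # Stand-in detector: pretend two people were found as (x1, y1, x2, y2) boxes.
    return [(10, 20, 60, 120), (80, 30, 140, 150)]

def crop_and_resize(image, box, size):
    # Crop the person box, then nearest-neighbor resize via index sampling.
    x1, y1, x2, y2 = box
    crop = image[y1:y2, x1:x2]
    h, w = size
    ys = np.linspace(0, crop.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, w).astype(int)
    return crop[np.ix_(ys, xs)]

def predict_keypoints(patch):
    # Stand-in pose model: 17 COCO keypoints as (x, y, score).
    return np.zeros((17, 3))

image = np.zeros((200, 200, 3))
results = [predict_keypoints(crop_and_resize(image, box, POSE_INPUT))
           for box in detect_people(image)]
```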
For C++ deployment, you need to prepare the paddle_inference library and related dependencies, and build with WITH_KEYPOINT=ON.

# Predict a single image
./build/main --model_dir=output_inference/picodet_s_320_pedestrian --model_dir_keypoint=output_inference/tinypose_128x96 --image_file={your image file} --device=GPU
# Predict multiple images
./build/main --model_dir=output_inference/picodet_s_320_pedestrian --model_dir_keypoint=output_inference/tinypose_128x96 --image_dir={dir of image file} --device=GPU
# Predict a video
./build/main --model_dir=output_inference/picodet_s_320_pedestrian --model_dir_keypoint=output_inference/tinypose_128x96 --video_file={your video file} --device=GPU
You can directly download the Paddle-Lite deployment models provided in the model zoo above to obtain the .nb files for the pedestrian-detection and pose models. If you want to deploy your own trained models, follow these steps:
python3 tools/export_model.py -c configs/picodet/application/pedestrian_detection/picodet_s_192_pedestrian.yml --output_dir=output_inference -o weights=output/picodet_s_192_pedestrian/model_final TestReader.fuse_normalize=true
python3 tools/export_model.py -c configs/keypoint/tiny_pose/tinypose_128x96.yml --output_dir=output_inference -o weights=output/tinypose_128x96/model_final TestReader.fuse_normalize=true
Install Paddle-Lite:

pip install paddlelite

Then use paddle_lite_opt to convert the exported models into .nb Paddle-Lite models for on-device deployment:

# 1. Convert the pedestrian detection model
# FP32
paddle_lite_opt --model_dir=inference_model/picodet_s_192_pedestrian --valid_targets=arm --optimize_out=picodet_s_192_pedestrian_fp32
# FP16
paddle_lite_opt --model_dir=inference_model/picodet_s_192_pedestrian --valid_targets=arm --optimize_out=picodet_s_192_pedestrian_fp16 --enable_fp16=true
# 2. Convert the pose model
# FP32
paddle_lite_opt --model_dir=inference_model/tinypose_128x96 --valid_targets=arm --optimize_out=tinypose_128x96_fp32
# FP16
paddle_lite_opt --model_dir=inference_model/tinypose_128x96 --valid_targets=arm --optimize_out=tinypose_128x96_fp16 --enable_fp16=true
We provide complete example code covering data preprocessing, model inference, and post-processing, which you can modify to fit your needs.
Note
Adding the TestReader.fuse_normalize=true option when exporting fuses the image Normalize operation into the model itself, which speeds up inference. TinyPose also adopts a number of strategies to balance model speed and accuracy.
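To see why fusing normalization is essentially free at inference time, note that for a linear (or 1x1 convolution) layer, the per-channel (x - mean) / std can be folded into the layer's weights and bias. A NumPy sketch of the idea (an illustration of the general technique, not PaddleDetection's actual implementation):

```python
import numpy as np

# Folding per-channel normalization (x - mean) / std into a 1x1 conv,
# treated here as a matmul over channels: conv(norm(x)) == conv'(x).
rng = np.random.default_rng(0)
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
W = rng.standard_normal((4, 3))  # out_ch x in_ch
b = rng.standard_normal(4)
x = rng.random(3)                # one pixel, 3 channels

y_ref = W @ ((x - mean) / std) + b  # normalize, then conv

W_fused = W / std                   # rescale each input channel
b_fused = b - W_fused @ mean        # absorb the shift into the bias
y_fused = W_fused @ x + b_fused     # single op, no separate Normalize
```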