| Training dataset | Weights file | Test dataset | Input image size | mAP 0.5:0.95 | mAP 0.5 |
|---|---|---|---|---|---|
VOC07+12+COCO | yolov4_tiny_weights_voc.h5 | VOC-Test07 | 416x416 | - | 75.7 |
COCO-Train2017 | yolov4_tiny_weights_coco.h5 | COCO-Val2017 | 416x416 | 19.1 | 38.4 |
tensorflow-gpu==2.2.0
The yolov4_tiny_weights_coco.h5 and yolov4_tiny_weights_voc.h5 files provided with the code were trained on 416x416 images.
In train.py:
1. The mosaic parameter controls whether Mosaic data augmentation is applied.
2. Cosine_scheduler controls whether cosine-annealing learning-rate decay is used.
3. label_smoothing controls whether Label Smoothing is applied.
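The three switches above can be sketched as follows. This is a hypothetical illustration of the settings described in this README; the actual variable layout inside train.py may differ, and `smooth_labels` is an illustrative helper showing the standard label-smoothing formula, not a function from this repo:

```python
mosaic = True            # apply Mosaic data augmentation during training
Cosine_scheduler = True  # use cosine-annealing learning-rate decay
label_smoothing = 0.1    # 0 disables Label Smoothing

def smooth_labels(y_true, smoothing, num_classes):
    """Standard label-smoothing formula: soften a one-hot target value."""
    return y_true * (1.0 - smoothing) + smoothing / num_classes
```

For example, with 20 VOC classes and smoothing 0.1, a positive target of 1.0 becomes 0.905 and a negative target of 0.0 becomes 0.005.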
The yolov4_tiny_weights_coco.h5 and yolov4_tiny_weights_voc.h5 files required for training can be downloaded from Baidu Netdisk.
Link: https://pan.baidu.com/s/127QvzFcEO83ZzV81hsB_fQ Extraction code: 234g
a. After downloading and extracting the repository, download yolov4_tiny_weights_voc.h5 from Baidu Netdisk, place it in model_data, run predict.py, and enter
img/street.jpg
to run a prediction.
b. video.py can be used for webcam detection.
a. Train the model following the training steps.
b. In yolo.py, modify model_path and classes_path in the section below so that they correspond to your trained files: model_path points to the weights file under the logs folder, and classes_path points to the classes file that model_path was trained on.
```python
_defaults = {
    "model_path": 'model_data/yolov4_tiny_weights_coco.h5',
    "anchors_path": 'model_data/yolo_anchors.txt',
    "classes_path": 'model_data/coco_classes.txt',
    "score": 0.5,
    "iou": 0.3,
    # Use 416x416 if GPU memory is limited
    # Use 608x608 if GPU memory is ample
    "model_image_size": (416, 416)
}
```
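Here, `score` is the confidence threshold for keeping detections and `iou` is the IoU threshold used during non-maximum suppression. For reference, a plain IoU computation looks like this (an illustrative helper, not a function from this repo):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

During NMS, a candidate box whose IoU with an already-kept box exceeds the `iou` threshold (0.3 here) is suppressed.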
c. Run predict.py and enter
img/street.jpg
to run a prediction.
d. video.py can be used for webcam detection.
1. This repo uses the VOC format for training.
2. Before training, place the annotation files in the Annotations folder under VOCdevkit/VOC2007.
3. Before training, place the image files in the JPEGImages folder under VOCdevkit/VOC2007.
4. Before training, run voc2yolo4.py to generate the corresponding txt files.
5. Then run voc_annotation.py in the root directory. Before running it, change classes to your own classes. Do not use Chinese labels, and make sure folder names contain no spaces!
classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
6. This generates 2007_train.txt; each line contains an image path followed by its ground-truth box positions.
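Each line of 2007_train.txt follows the annotation convention of the keras-yolo3 codebase this repo derives from: an image path, then space-separated `x_min,y_min,x_max,y_max,class_id` boxes. The parser and sample values below are illustrative, not taken from the repo:

```python
def parse_annotation_line(line):
    """Split one annotation line into (image_path, list of box tuples)."""
    parts = line.strip().split()
    image_path = parts[0]
    boxes = [tuple(int(v) for v in box.split(',')) for box in parts[1:]]
    return image_path, boxes

path, boxes = parse_annotation_line(
    "VOCdevkit/VOC2007/JPEGImages/000001.jpg 48,240,195,371,11 8,12,352,498,14"
)
```

Splitting on whitespace is safe here because, as noted above, folder names must not contain spaces.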
7. Before training, you must create a txt file under model_data listing the classes to be detected, and point classes_path in train.py at that file, for example:
classes_path = 'model_data/new_classes.txt'
The contents of model_data/new_classes.txt are:
cat
dog
...
8. Run train.py to start training.
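The classes file above is plain text with one class name per line. A minimal sketch of how such a file is typically read; the repo's actual helper (likely under utils/) may differ in detail:

```python
def get_classes(classes_path):
    """Read one class name per line, skipping blank lines."""
    with open(classes_path, encoding='utf-8') as f:
        return [line.strip() for line in f if line.strip()]
```

For a file containing `cat` and `dog` on separate lines, this returns `['cat', 'dog']`, and the number of classes is the length of that list.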
Updated the get_gt_txt.py, get_dr_txt.py, and get_map.py files.
The get_map file is cloned from https://github.com/Cartucho/mAP
For details of the mAP calculation process, see: https://www.bilibili.com/video/BV1zE411u7Vw
https://github.com/qqwweee/keras-yolo3/
https://github.com/Cartucho/mAP
https://github.com/Ma-Dan/keras-yolo4