# YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
YOLOv7 surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS, and achieves the highest accuracy, 56.8% AP, among all known real-time object detectors running at 30 FPS or higher on a V100 GPU. The YOLOv7-E6 object detector (56 FPS V100, 55.9% AP) outperforms the transformer-based detector SWIN-L Cascade-Mask R-CNN (9.2 FPS A100, 53.9% AP) by 509% in speed and 2% in accuracy, and the convolution-based detector ConvNeXt-XL Cascade-Mask R-CNN (8.6 FPS A100, 55.2% AP) by 551% in speed and 0.7% AP in accuracy. YOLOv7 also outperforms YOLOR, YOLOX, Scaled-YOLOv4, YOLOv5, DETR, Deformable DETR, DINO-5scale-R50, ViT-Adapter-B and many other object detectors in speed and accuracy. Moreover, YOLOv7 is trained only on the MS COCO dataset from scratch, without using any other datasets or pre-trained weights.
| Name | Scale | Arch | Context | ImageSize | Dataset | Box mAP (%) | Params | FLOPs | Recipe | Download |
|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv7 | Tiny | P5 | D910x8-G | 640 | MS COCO 2017 | 37.5 | 6.2M | 13.8G | yaml | weights |
| YOLOv7 | L | P5 | D910x8-G | 640 | MS COCO 2017 | 50.8 | 36.9M | 104.7G | yaml | weights |
| YOLOv7 | X | P5 | D910x8-G | 640 | MS COCO 2017 | 52.4 | 71.3M | 189.9G | yaml | weights |
Please refer to the GETTING_STARTED guide in MindYOLO for details.
It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run:

```shell
# distributed training on multiple GPU/Ascend devices
mpirun -n 8 python train.py --config ./configs/yolov7/yolov7.yaml --device_target Ascend --is_parallel True
```
If the script is executed by the root user, the `--allow-run-as-root` parameter must be added to `mpirun`.
Similarly, you can train the model on multiple GPU devices with the above mpirun command.
For a detailed description of all hyper-parameters, please refer to config.py.
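As a quick sanity check before launching a run, you can also load the recipe yaml directly and print its top-level keys. The snippet below is a minimal sketch, not part of MindYOLO itself; it assumes PyYAML is installed and that it is run from the repository root, and it does not expand keys inherited through `__BASE__` files.

```python
# Minimal sketch: print the top-level hyper-parameters defined in a recipe file.
# Assumes PyYAML is installed and the script is run from the MindYOLO root;
# keys inherited through __BASE__ files are not expanded here.
import yaml

with open("configs/yolov7/yolov7.yaml", "r") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {value}")
```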
Note: As the global batch size (batch_size x num_devices) is an important hyper-parameter, it is recommended to keep the global batch size unchanged for reproduction or adjust the learning rate linearly to a new global batch size.
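For illustration, the sketch below applies the linear scaling rule with hypothetical numbers; the per-device batch size and reference learning rate are placeholders, not the values of this recipe.

```python
# Linear learning-rate scaling sketch (illustrative numbers, not the recipe's actual values).
# If the reference recipe was tuned for 8 devices and you train on 4, the global batch
# size halves, so the learning rate is scaled by the same factor.
reference_devices = 8
new_devices = 4
per_device_batch_size = 16           # assumed per-device batch size
reference_lr = 0.01                  # assumed learning rate of the reference recipe

reference_global_batch = per_device_batch_size * reference_devices    # 128
new_global_batch = per_device_batch_size * new_devices                # 64
scaled_lr = reference_lr * new_global_batch / reference_global_batch  # 0.005

print(f"global batch size {new_global_batch}: use lr ~ {scaled_lr}")
```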
If you want to train or finetune the model on a smaller dataset without distributed training, please run:
```shell
# standalone training on a CPU/GPU/Ascend device
python train.py --config ./configs/yolov7/yolov7.yaml --device_target Ascend
```
To validate the accuracy of the trained model, you can use `test.py` and pass the checkpoint path with `--weight`:
```shell
python test.py --config ./configs/yolov7/yolov7.yaml --device_target Ascend --weight /PATH/TO/WEIGHT.ckpt
```
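If you want to sanity-check a checkpoint before evaluation, for example to confirm that it loads and to list a few parameter names, the following is a minimal sketch based on MindSpore's `load_checkpoint`; the checkpoint path is the same placeholder as above.

```python
# Minimal sketch: inspect a trained checkpoint before running test.py.
# Assumes MindSpore is installed; replace the placeholder path with your checkpoint.
import mindspore as ms

param_dict = ms.load_checkpoint("/PATH/TO/WEIGHT.ckpt")  # maps parameter names to values
print(f"{len(param_dict)} parameters found")
for name in list(param_dict)[:5]:                        # show the first few entries
    print(name, tuple(param_dict[name].shape))
```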
[1] Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696, 2022.