Powered by OpenAIRE graph
ZENODO · Software · 2021 · Data source: DataCite

ultralytics/yolov5: v5.0 - YOLOv5-P6 1280 models, AWS, Supervise.ly and YouTube integrations

Authors: Jocher, Glenn; Stoken, Alex; Borovec, Jirka; NanoCode012; Ayush Chaurasia; TaoXie; Changyu, Liu; +23 Authors


Abstract

This release implements YOLOv5-P6 models and retrained YOLOv5-P5 models:

- **YOLOv5-P5 models** (same architecture as the v4.0 release): 3 output layers P3, P4, P5 at strides 8, 16, 32, trained at `--img 640`
- **YOLOv5-P6 models**: 4 output layers P3, P4, P5, P6 at strides 8, 16, 32, 64, trained at `--img 1280`

Example usage:

```bash
# Command Line
python detect.py --weights yolov5m.pt --img 640    # P5 model at 640
python detect.py --weights yolov5m6.pt --img 640   # P6 model at 640
python detect.py --weights yolov5m6.pt --img 1280  # P6 model at 1280
```

```python
# PyTorch Hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5m6')  # P6 model
results = model(imgs, size=1280)  # inference at 1280
```

All model sizes YOLOv5s/m/l/x are now available in both P5 and P6 architectures:

```bash
python detect.py --weights yolov5s.pt   # P5 models
                           yolov5m.pt
                           yolov5l.pt
                           yolov5x.pt
                           yolov5s6.pt  # P6 models
                           yolov5m6.pt
                           yolov5l6.pt
                           yolov5x6.pt
```

### Notable Updates

- **YouTube Inference**: Direct inference from YouTube videos, e.g. `python detect.py --source 'https://youtu.be/NUsoVlDFqZg'`. Both live streams and normal videos are supported. (https://github.com/ultralytics/yolov5/pull/2752)
- **AWS Integration**: Amazon AWS integration and a new AWS Quickstart Guide for simple EC2-instance YOLOv5 training and for resuming interrupted Spot instances. (https://github.com/ultralytics/yolov5/pull/2185)
- **Supervise.ly Integration**: New integration with the Supervisely Ecosystem for training and deploying YOLOv5 models with Supervise.ly. (https://github.com/ultralytics/yolov5/issues/2518)
- **Improved W&B Integration**: Allows saving datasets and models directly to Weights & Biases. This enables `--resume` directly from W&B (useful for temporary environments like Colab), as well as enhanced visualization tools. See the blog post by @AyushExel for details. (https://github.com/ultralytics/yolov5/pull/2125)

### Updated Results

P6 models include an extra P6/64 output layer for detection of larger objects, and benefit the most from training at higher resolution.
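The stride arithmetic above fixes the detection grid at each output layer: every layer predicts on an (image size / stride) square grid, so the P6/64 layer adds a coarse grid suited to large objects. A quick sketch in plain Python (`grid_sizes` is a hypothetical helper for illustration, not part of the YOLOv5 codebase):

```python
# Each output layer predicts on an (img_size // stride) x (img_size // stride) grid.
def grid_sizes(img_size, strides):
    """Return per-layer detection grid widths for a square input image."""
    return [img_size // s for s in strides]

p5 = grid_sizes(640, [8, 16, 32])        # P3, P4, P5 at --img 640
p6 = grid_sizes(1280, [8, 16, 32, 64])   # P3, P4, P5, P6 at --img 1280
print(p5)  # [80, 40, 20]
print(p6)  # [160, 80, 40, 20]
```

Note that the P6/64 head's 20x20 grid at 1280 matches the coarsest P5 grid at 640, which is why the extra layer mainly pays off at the higher training resolution.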
For this reason we trained all P5 models at 640 and all P6 models at 1280.

<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313216-f0a5e100-9af5-11eb-8445-c682b60da2e3.png"></p>

<details>
<summary>YOLOv5-P5 640 Figure (click to expand)</summary>
<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313219-f1d70e00-9af5-11eb-9973-52b1f98d321a.png"></p>
</details>

<details>
<summary>Figure Notes (click to expand)</summary>

* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
</details>

### Release History

- April 11, 2021: v5.0 release: YOLOv5-P6 1280 models, AWS, Supervise.ly and YouTube integrations.
- January 5, 2021: v4.0 release: nn.SiLU() activations, Weights & Biases logging, PyTorch Hub integration.
- August 13, 2020: v3.0 release: nn.Hardswish() activations, data autodownload, native AMP.
- July 23, 2020: v2.0 release: improved model definition, training and mAP.

### Pretrained Checkpoints

| Model | size<br><sup>(pixels) | mAP<sup>val<br>0.5:0.95 | mAP<sup>test<br>0.5:0.95 | mAP<sup>val<br>0.5 | Speed<br><sup>V100 (ms) | params<br><sup>(M) | FLOPS<br><sup>640 (B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv5s | 640 | 36.7 | 36.7 | 55.4 | 2.0 | 7.3 | 17.0 |
| YOLOv5m | 640 | 44.5 | 44.5 | 63.3 | 2.7 | 21.4 | 51.3 |
| YOLOv5l | 640 | 48.2 | 48.2 | 66.9 | 3.8 | 47.0 | 115.4 |
| YOLOv5x | 640 | 50.4 | 50.4 | 68.8 | 6.1 | 87.7 | 218.8 |
| YOLOv5s6 | 1280 | 43.3 | 43.3 | 61.9 | 4.3 | 12.7 | 17.4 |
| YOLOv5m6 | 1280 | 50.5 | 50.5 | 68.7 | 8.4 | 35.9 | 52.4 |
| YOLOv5l6 | 1280 | 53.4 | 53.4 | 71.1 | 12.3 | 77.2 | 117.7 |
| YOLOv5x6 | 1280 | 54.4 | 54.4 | 72.0 | 22.4 | 141.8 | 222.9 |
| YOLOv5x6 TTA | 1280 | 55.0 | 55.0 | 72.0 | 70.8 | - | - |

<details>
<summary>Table Notes (click to expand)</summary>

* AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results; all other AP results denote val2017 accuracy.
* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
* All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python test.py --data coco.yaml --img 1536 --iou 0.7 --augment`
</details>

### Changelog

Changes between previous release and this release: https://github.com/ultralytics/yolov5/compare/v4.0...v5.0
Changes since this release: https://github.com/ultralytics/yolov5/compare/v5.0...HEAD

Click a section below to expand details:

<details>
<summary>Implemented Enhancements (26)</summary>

- Return predictions as json [#2703](https://github.com/ultralytics/yolov5/issues/2703)
- Single channel image training? [#2609](https://github.com/ultralytics/yolov5/issues/2609)
- Images in MPO Format are considered corrupted [#2446](https://github.com/ultralytics/yolov5/issues/2446)
- Improve Validation Visualization [#2384](https://github.com/ultralytics/yolov5/issues/2384)
- Add ASFF (three fuse feature layers) in the Head for V5 (s,m,l,x) [#2348](https://github.com/ultralytics/yolov5/issues/2348)
- Dear author, can you provide a visualization scheme for YOLOv5 feature graphs during detect.py? Thank you!
  [#2259](https://github.com/ultralytics/yolov5/issues/2259)
- Dataloader [#2201](https://github.com/ultralytics/yolov5/issues/2201)
- Update Train Custom Data wiki page [#2187](https://github.com/ultralytics/yolov5/issues/2187)
- Multi-class NMS [#2162](https://github.com/ultralytics/yolov5/issues/2162)
- 💡Idea: Mosaic cropping using segmentation labels [#2151](https://github.com/ultralytics/yolov5/issues/2151)
- Improving Confusion Matrix Interpretability: FP and FN vectors should be switched to align with Predicted and True axis [#2071](https://github.com/ultralytics/yolov5/issues/2071)
- Interpreting model YoloV5 by Grad-cam [#2065](https://github.com/ultralytics/yolov5/issues/2065)
- Output optimal confidence threshold based on PR curve [#2048](https://github.com/ultralytics/yolov5/issues/2048)
- is it valuable that add --cache-images option to detect.py? [#2004](https://github.com/ultralytics/yolov5/issues/2004)
- I want to change the anchor box to anchor circles, where do you think the change to be made? [#1987](https://github.com/ultralytics/yolov5/issues/1987)
- Support for imgaug [#1954](https://github.com/ultralytics/yolov5/issues/1954)
- Any plan for Knowledge Distillation? [#1762](https://github.com/ultralytics/yolov5/issues/1762)
- Is there a way to run detections on a video/webcam/rtsp, etc EVERY x SECONDS? [#1742](https://github.com/ultralytics/yolov5/issues/1742)
- Can yolov5 support rotated target detection? [#1728](https://github.com/ultralytics/yolov5/issues/1728)
- Deploying yolov5 to TorchServe (GPU compatible) [#1681](https://github.com/ultralytics/yolov5/issues/1681)
- Why different colors of bboxes? [#1638](https://github.com/ultralytics/yolov5/issues/1638)
- Yet another export yolov5 models to ONNX and inference with TensorRT [#1597](https://github.com/ultralytics/yolov5/issues/1597)
- Rerange the blocks of Focus Layer into `row major` to be compatible with tensorflow `SpaceToDepth` [#413](https://github.com/ultralytics/yolov5/issues/413)
- YouTube Livestream Detection [#2752](https://github.com/ultralytics/yolov5/pull/2752) ([ben-milanko](https://github.com/ben-milanko))
- Add TransformerLayer, TransformerBlock, C3TR modules [#2333](https://github.com/ultralytics/yolov5/pull/2333) ([dingyiwei](https://github.com/dingyiwei))
- Improved W&B integration [#2125](https://github.com/ultralytics/yolov5/pull/2125) ([AyushExel](https://github.com/AyushExel))
</details>

<details>
<summary>Fixed Bugs (73)</summary>

- it seems that check_wandb_resume don't support multiple input files of images. [#2716](https://github.com/ultralytics/yolov5/issues/2716)
- ip camera or web camera. error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize' [#2709](https://github.com/ultralytics/yolov5/issues/2709)
- Model predict with forward will fail if PIL image does not have filename attribute [#2702](https://github.com/ultralytics/yolov5/issues/2702)
- ❔Question Whenever i try to run my model i run into this error AttributeError: 'NoneType' object has no attribute 'startswith' from wandbutils.py line 161 I wonder why? Any workaround or fix [#2697](https://github.com/ultralytics/yolov5/issues/2697)
- coremltools no longer included in docker container [#2686](https://github.com/ultralytics/yolov5/issues/2686)
- 'LoadImages' path handling appears to be broken [#2618](https://github.com/ultralytics/yolov5/issues/2618)
- CUDA memory leak [#2586](https://github.com/ultralytics/yolov5/issues/2586)
- UnboundLocalError: local variable 'wandb_logger' referenced before assignment [#2562](https://github.com/ultralytics/yolov5/issues/2562)
- RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)` [#2417](https://github.com/ultralytics/yolov5/issues/2417)
- CUDNN Mapping Error [#2415](https://github.com/ultralytics/yolov5/issues/2415)
- Can't train in DDP mode after recent update [#2405](https://github.com/ultralytics/yolov5/issues/2405)
- a bug about function bbox_iou() [#2376](https://github.com/ultralytics/yolov5/issues/2376)
- Training got stuck when I used DistributedDataParallel mode but dataParallel mode is useful [#2375](https://github.com/ultralytics/yolov5/issues/2375)
- Something wrong with fixing ema [#2343](https://github.com/ultralytics/yolov5/issues/2343)
- Conversion to CoreML fails when running with --batch 2 [#2322](https://github.com/ultralytics/yolov5/issues/2322)
- The "fitness" function in train.py. [#2303](https://github.com/ultralytics/yolov5/issues/2303)
- Error "Directory already existed" happen when training with multiple GPUs [#2275](https://github.com/ultralytics/yolov5/issues/2275)
- self.balance = {3: [4.0, 1.0, 0.4], 4: [4.0, 1.0, 0.25, 0.06], 5: [4.0, 1.0, 0.25, 0.06, .02]}[det.nl] [#2255](https://github.com/ultralytics/yolov5/issues/2255)
- Cannot run model with URL as argument [#2246](https://github.com/ultralytics/yolov5/issues/2246)
- Yolov5 crashes with RTSP stream analysis [#2226](https://github.com/ultralytics/yolov5/issues/2226)
- interruption during evolve [#2218](https://github.com/ultralytics/yolov5/issues/2218)
- I am a student of Tsinghua University, doing research in Tencent. When I train with yolov5, the following problems appear, Sincerely hope to get help, [#2203](https://github.com/ultralytics/yolov5/issues/2203)
- Frame Loss in video stream [#2196](https://github.com/ultralytics/yolov5/issues/2196)
- wandb.ai not logging epochs vs metrics/losses instead uses step [#2175](https://github.com/ultralytics/yolov5/issues/2175)
- Evolve is leaking files [#2142](https://github.com/ultralytics/yolov5/issues/2142)
- Issue in torchscript model inference [#2129](https://github.com/ultralytics/yolov5/issues/2129)
- RuntimeError: CUDA error: device-side assert triggered [#2124](https://github.com/ultralytics/yolov5/issues/2124)
- In 'evolve' mode, If the original hyp is 0, It will never update [#2122](https://github.com/ultralytics/yolov5/issues/2122)
- Caching image path [#2121](https://github.com/ultralytics/yolov5/issues/2121)
- can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first [#2106](https://github.com/ultralytics/yolov5/issues/2106)
- Error in creating model with Ghost modules [#2081](https://github.com/ultralytics/yolov5/issues/2081)
- TypeError: int() can't convert non-string with explicit base [#2066](https://github.com/ultralytics/yolov5/issues/2066)
- [Pytorch Hub] Hub CI is broken with latest master of yolo5 example. [#2050](https://github.com/ultralytics/yolov5/issues/2050)
- Problems when downloading requirements [#2047](https://github.com/ultralytics/yolov5/issues/2047)
- detect.py - images always saved [#2029](https://github.com/ultralytics/yolov5/issues/2029)
- thop and pycocotools shouldn't be hard requirements to train a model [#2014](https://github.com/ultralytics/yolov5/issues/2014)
- CoreML export failure [#2007](https://github.com/ultralytics/yolov5/issues/2007)
- loss function like has a bug [#1988](https://github.com/ultralytics/yolov5/issues/1988)
- CoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13 [#1945](https://github.com/ultralytics/yolov5/issues/1945)
- torch.nn.modules.module.ModuleAttributeError: 'Hardswish' object has no attribute 'inplace' [#1939](https://github.com/ultralytics/yolov5/issues/1939)
- runs not logging separately in wandb.ai [#1937](https://github.com/ultralytics/yolov5/issues/1937)
- wrong batch size after --resume on multiple GPUs [#1936](https://github.com/ultralytics/yolov5/issues/1936)
- TypeError: int() can't convert non-string with explicit base [#1927](https://github.com/ultralytics/yolov5/issues/1927)
- RuntimeError: DataLoader worker [#1908](https://github.com/ultralytics/yolov5/issues/1908)
- Unable to export weights into onnx [#1900](https://github.com/ultralytics/yolov5/issues/1900)
- CUDA Initialization Warning on Docker when not passing in gpu [#1891](https://github.com/ultralytics/yolov5/issues/1891)
- Issue with github api rate limiting [#1890](https://github.com/ultralytics/yolov5/issues/1890)
- wandb: ERROR Error while calling W&B API: Error 1062: Duplicate entry '189160-gbp6y2en' for key 'PRIMARY' (<Response [409]>) [#1878](https://github.com/ultralytics/yolov5/issues/1878)
- Broken pipe [#1859](https://github.com/ultralytics/yolov5/issues/1859)
- detection.py [#1858](https://github.com/ultralytics/yolov5/issues/1858)
- Getting error on loading custom trained model [#1856](https://github.com/ultralytics/yolov5/issues/1856)
- W&B id is always the same and continue with the old logging. [#1851](https://github.com/ultralytics/yolov5/issues/1851)
- pytorch1.7 is not completely support.'inplace'! 'inplace'! 'inplace'! [#1832](https://github.com/ultralytics/yolov5/issues/1832)
- Validation errors are NaN [#1804](https://github.com/ultralytics/yolov5/issues/1804)
- Error Loading custom model weights with pytorch.hub.load [#1788](https://github.com/ultralytics/yolov5/issues/1788)
- 'cap' object is not self. initialized [#1781](https://github.com/ultralytics/yolov5/issues/1781)
- ValueError: API key must be 40 characters long, yours was 1 [#1777](https://github.com/ultralytics/yolov5/issues/1777)
- scipy [#1766](https://github.com/ultralytics/yolov5/issues/1766)
- error of missing key 'anchors' in hyp.scratch.yaml [#1744](https://github.com/ultralytics/yolov5/issues/1744)
- mss grab color conversion problem using TorchHub [#1735](https://github.com/ultralytics/yolov5/issues/1735)
- Video rotation when running detection. [#1725](https://github.com/ultralytics/yolov5/issues/1725)
- RuntimeError: CUDA out of memory. Tried to allocate 294.00 MiB (GPU 0; 6.00 GiB total capacity; 118.62 MiB already allocated; 4.20 GiB free; 362.00 MiB reserved in total by PyTorch) [#1698](https://github.com/ultralytics/yolov5/issues/1698)
- Errors on MAC [#1690](https://github.com/ultralytics/yolov5/issues/1690)
- RuntimeError: DataLoader worker (pid(s) 296430) exited unexpectedly [#1675](https://github.com/ultralytics/yolov5/issues/1675)
- Non-positive Stride [#1671](https://github.com/ultralytics/yolov5/issues/1671)
- gbk error. How can I solve it? [#1669](https://github.com/ultralytics/yolov5/issues/1669)
- CoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13 [#1667](https://github.com/ultralytics/yolov5/issues/1667)
- RuntimeError: Given groups=1, weight of size [32, 128, 1, 1], expected input [1, 64, 32, 32] to have 128 channels, but got 64 channels instead [#1627](https://github.com/ultralytics/yolov5/issues/1627)
- segmentation fault [#1620](https://github.com/ultralytics/yolov5/issues/1620)
- Getting different output sizes when using exported torchscript [#1562](https://github.com/ultralytics/yolov5/issues/1562)
- some bugs when training [#1547](https://github.com/ultralytics/yolov5/issues/1547)
- Evolve getting error [#1319](https://github.com/

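The `--conf` and `--iou` flags that recur in the commands above control the two stages of detection postprocessing: confidence filtering followed by non-maximum suppression. A minimal single-class sketch in plain Python, for illustration only (YOLOv5's own `non_max_suppression` utility is batched and multi-class; this is not that code, and `iou`/`nms` here are hypothetical helpers):

```python
# Single-class greedy NMS: keep the highest-scoring box, drop boxes that
# overlap a kept box by more than iou_thres, repeat.
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thres=0.25, iou_thres=0.45):
    """Filter by confidence, then greedily suppress overlapping boxes."""
    order = sorted((i for i, s in enumerate(scores) if s >= conf_thres),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thres for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: the two overlapping boxes collapse to one
```

This also shows why the mAP and speed reproduction commands differ: `--conf 0.001` keeps nearly every candidate box (maximizing recall for mAP), while `--conf 0.25` is a practical deployment threshold that makes postprocessing much cheaper.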
BIP! impact indicators: 78 selected citations (derived from selected sources; an alternative to the "Influence" indicator) · popularity ("current" impact/attention in the research community, based on the underlying citation network): Top 1% · influence (overall/total impact, diachronically): Top 10% · impulse (initial momentum directly after publication): Top 1%
OpenAIRE UsageCounts: 2K views · 17 downloads