Personal Protective Equipment Dataset (PPED)

This dataset serves as a benchmark for personal protective equipment (PPE) detection in chemical plants. We provide both the dataset and our experimental results.

1. The Dataset

We produced the dataset based on actual needs and relevant regulations in chemical plants. The standard GB 39800.1-2020, issued by the Ministry of Emergency Management of the People's Republic of China, defines the protective requirements for plants and chemical laboratories. The complete dataset is contained in the folder PPED/data.

1.1. Image Collection

We took more than 3300 pictures, varying the environment, the shooting distance, the lighting conditions, the camera angle, and the number of people photographed:

Backgrounds: four backgrounds are covered, namely office, near machines, factory, and regular outdoor scenes.
Scale: by taking pictures from different distances, the captured PPEs fall into small, medium, and large scales.
Light: both good and poor lighting conditions are included.
Diversity: some images contain a single person, others contain multiple people.
Angle: the pictures can be divided into front and side views.

In total, the raw data comprises more than 3300 photos taken under all of these conditions. All images are located in the folder PPED/data/JPEGImages.

1.2. Labels

We used LabelImg as the labeling tool and annotated the images in the PASCAL VOC format. Since YOLO uses a txt format, the script trans_voc2yolo.py converts the XML files in PASCAL VOC format to txt files (a minimal sketch of this conversion is given after Section 3). Annotations are stored in the folder PPED/data/Annotations.

1.3. Dataset Features

The pictures were taken by us under the different conditions listed above. The file PPED/data/feature.csv lists all of the images and records the features of each picture: lighting conditions, angle, background, number of people, and scale.

1.4. Dataset Division

The dataset is divided into a training set and a test set at a 9:1 ratio.

2. Baseline Experiments

We provide baseline results for five models: Faster R-CNN (R), Faster R-CNN (M), SSD, YOLOv3-spp, and YOLOv5. All code and results are given in the folder PPED/experiment.

2.1. Environment and Configuration

Intel Core i7-8700 CPU
NVIDIA GTX 1060 GPU
16 GB of RAM
Python 3.8.10
PyTorch 1.9.0
pycocotools (pycocotools-win)
Windows 10

2.2. Applied Models

The source code and results of the applied models are given in the folder PPED/experiment, with sub-folders corresponding to the model names.

2.2.1. Faster R-CNN

Backbone: ResNet-50 + FPN. We downloaded the pre-trained weights from https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth, then modified the dataset path, the training classes, and the training parameters, including the batch size. We ran train_res50_fpn.py to start training. The weights were fitted on the training set, and we validated the results on the test set.

Backbone: MobileNetV2. We used the same training method as for ResNet-50 + FPN, but the results were not as good, so this backbone was set aside.

The Faster R-CNN source code used in our experiment is given in the folder PPED/experiment/Faster R-CNN. The weights of the fully trained Faster R-CNN (R) and Faster R-CNN (M) models are stored in the files PPED/experiment/trained_models/resNetFpn-model-19.pth and mobile-model.pth, respectively. The performance measurements of Faster R-CNN (R) and Faster R-CNN (M) are stored in the folders PPED/experiment/results/Faster RCNN(R) and Faster RCNN(M).
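As a hedged illustration of what "modifying the training classes" means here, the sketch below resizes the box-predictor head of a torchvision Faster R-CNN (ResNet-50 + FPN) for a custom class count. The NUM_CLASSES value is an assumption for illustration only; the authoritative class list and training loop are in train_res50_fpn.py.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Assumption for illustration: PPE categories plus the background class.
# The real class count is defined in the repository's training code.
NUM_CLASSES = 5

# pretrained=True pulls the COCO checkpoint referenced in Section 2.2.1
# (fasterrcnn_resnet50_fpn_coco-258fb6c6.pth).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace the 91-class COCO box predictor with one sized for PPED.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
```

Training then proceeds as usual on the 9:1 split from Section 1.4; only the head is swapped, so the COCO-pretrained backbone features are reused.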
2.2.2. SSD

Backbone: ResNet-50. We downloaded the pre-trained weights from https://download.pytorch.org/models/resnet50-19c8e357.pth and applied the same training method as for Faster R-CNN. The SSD source code used in our experiment is given in the folder PPED/experiment/ssd. The weights of the fully trained SSD model are stored in the file PPED/experiment/trained_models/SSD_19.pth. The performance measurements of SSD are stored in the folder PPED/experiment/results/SSD.

2.2.3. YOLOv3-spp

Backbone: DarkNet-53. We modified the class information in the XML files to match our application and ran trans_voc2yolo.py to convert the XML files from VOC format to txt files. The pre-trained weights used are yolov3-spp-ultralytics-608.pt. The YOLOv3-spp source code used in our experiment is given in the folder PPED/experiment/YOLOv3-spp. The weights of the fully trained YOLOv3-spp model are stored in the file PPED/experiment/trained_models/YOLOvspp-19.pt. The performance measurements of YOLOv3-spp are stored in the folder PPED/experiment/results/YOLOv3-spp.

2.2.4. YOLOv5

Backbone: CSPDarkNet. We modified the class information in the XML files to match our application and ran trans_voc2yolo.py to convert the XML files from VOC format to txt files. The pre-trained weights used are yolov5s. The YOLOv5 source code used in our experiment is given in the folder PPED/experiment/yolov5. The weights of the fully trained YOLOv5 model are stored in the file PPED/experiment/trained_models/YOLOv5.pt. The performance measurements of YOLOv5 are stored in the folder PPED/experiment/results/YOLOv5.

2.3. Evaluation

The computed evaluation metrics, as well as the code needed to compute them from our dataset, are provided in the folder PPED/experiment/eval (a sketch of COCO-style evaluation with pycocotools appears after Section 3).

3. Code Sources

Faster R-CNN (R and M): https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/tree/master/pytorch_object_detection/faster_rcnn (official code: https://github.com/pytorch/vision/blob/main/torchvision/models/detection/faster_rcnn.py)
SSD: https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/tree/master/pytorch_object_detection/ssd (official code: https://github.com/pytorch/vision/blob/main/torchvision/models/detection/ssd.py)
YOLOv3-spp: https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/tree/master/pytorch_object_detection/yolov3-spp
YOLOv5: https://github.com/ultralytics/yolov5
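As promised in Section 1.2, here is a minimal, hedged sketch of the kind of PASCAL VOC to YOLO conversion that trans_voc2yolo.py performs. The class list and file names are placeholder assumptions; the authoritative mapping is the one shipped with the repository's script.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Placeholder class list (assumption); the real one ships with trans_voc2yolo.py.
CLASSES = ["helmet", "goggles", "gloves", "boots"]

def voc_to_yolo(xml_path: str, out_dir: str) -> None:
    """Convert one PASCAL VOC XML annotation to a YOLO txt file."""
    root = ET.parse(xml_path).getroot()
    size = root.find("size")
    img_w = float(size.find("width").text)
    img_h = float(size.find("height").text)

    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:
            continue
        box = obj.find("bndbox")
        xmin = float(box.find("xmin").text)
        ymin = float(box.find("ymin").text)
        xmax = float(box.find("xmax").text)
        ymax = float(box.find("ymax").text)
        # YOLO txt format: class_id cx cy w h, with the box center and
        # size normalized to [0, 1] by the image dimensions.
        cx = (xmin + xmax) / 2.0 / img_w
        cy = (ymin + ymax) / 2.0 / img_h
        bw = (xmax - xmin) / img_w
        bh = (ymax - ymin) / img_h
        lines.append(f"{CLASSES.index(name)} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")

    out_file = Path(out_dir) / (Path(xml_path).stem + ".txt")
    out_file.write_text("\n".join(lines))
```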
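The evaluation code in PPED/experiment/eval is authoritative; purely as a hedged sketch of how COCO-style detection metrics (mAP@[.5:.95], mAP@.5) are typically computed with pycocotools, assuming hypothetical file names not taken from this repository:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Hypothetical inputs (assumptions): ground truth exported to COCO JSON
# and detections from one of the trained models in COCO results format.
gt = COCO("pped_test_gt.json")
dt = gt.loadRes("detections.json")

evaluator = COCOeval(gt, dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the AP/AR table, including mAP@.5
```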
Keywords: Faster R-CNN, Object Detection, PPE, YOLO, Benchmark, Personal Protective Equipment, SSD