
Most image datasets widely used for training deep learning models are developed for specific applications. This study introduces a new dataset intended to augment the data available for detecting figs in their natural, wild environments. Although researchers have produced numerous image datasets for object detection in agriculture, a specialized dataset for fig detection remains very difficult to obtain. To address this gap, a total of 462 photographs of fig fruits were collected, and data augmentation was applied to substantially enlarge the dataset. Finally, we examine the dataset through a baseline bounding-box detection study using established object detectors, namely You Only Look Once version 3 (YOLOv3) and YOLOv4. The performance obtained on the test images of our dataset is satisfactory. For farmers, the ability to detect and monitor fig fruits in natural or cultivated environments can be highly beneficial: the detection system provides real-time information on the number of ripe figs, supporting decision-making.
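The abstract does not specify which augmentation operations were used, so the following is only a minimal sketch of two augmentations commonly applied to detection datasets: a horizontal flip that also remaps the `[x_min, y_min, x_max, y_max]` bounding boxes, and a brightness jitter. The function names and box format are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an (H, W, C) image and remap its
    [x_min, y_min, x_max, y_max] boxes to the flipped frame."""
    _, w, _ = image.shape
    flipped = image[:, ::-1, :].copy()
    # A box's new x-extent is mirrored about the image width.
    new_boxes = [[w - x2, y1, w - x1, y2] for x1, y1, x2, y2 in boxes]
    return flipped, new_boxes

def jitter_brightness(image, factor):
    """Scale pixel intensities by `factor`, clipping to the 8-bit range."""
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

# Illustrative usage on a dummy 8x8 image with one annotated box.
img = np.zeros((8, 8, 3), dtype=np.uint8)
flipped, boxes = hflip_with_boxes(img, [[1, 2, 3, 4]])
bright = jitter_brightness(img, 1.3)
```

Applying such transforms, each with per-image random parameters, is a standard way to grow a small collection like the 462 photographs here into a training set large enough for YOLO-family detectors.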
