
pmid: 38109234
Vision-Language Pre-Training (VLP) has demonstrated remarkable potential in aligning image and text pairs, paving the way for a wide range of cross-modal learning tasks. Nevertheless, we observe that VLP models often lack the visual grounding and localization capabilities that are crucial for many downstream tasks, such as visual reasoning. In response, we introduce a novel Position-guided Text Prompt (PTP) paradigm to bolster the visual grounding abilities of cross-modal models trained with VLP. In the VLP phase, PTP divides an image into N x N blocks and employs a widely used object detector to identify objects within each block. PTP then reframes visual grounding as a fill-in-the-blank problem, encouraging the model to predict the objects in a given block or regress the blocks containing a given object, e.g., by filling "[P]" or "[O]" in a PTP sentence such as "The block [P] has a [O]." This strategy enhances the visual grounding capabilities of VLP models, enabling them to better tackle various downstream tasks. Additionally, we integrate second-order relationships between objects to further strengthen the proposed PTP paradigm. Incorporating PTP into several state-of-the-art VLP frameworks yields consistent and significant improvements across representative cross-modal architectures and multiple benchmarks, such as zero-shot Flickr30K retrieval (+5.6 in average recall@1) for the ViLT baseline and COCO captioning (+5.5 in CIDEr) for the state-of-the-art BLIP baseline. Moreover, PTP attains results comparable to object-detector-based methods with faster inference, since, unlike those methods, it discards the object detector at inference time.
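As a concrete illustration of the fill-in-the-blank construction described above, the sketch below generates PTP-style sentences from generic detector output. This is a minimal sketch, not the paper's exact pipeline: the row-major block numbering, the grid size N = 4, and the function names `block_index` and `make_ptp_prompts` are illustrative assumptions.

```python
# Minimal sketch of Position-guided Text Prompt (PTP) generation.
# Assumes a detector that returns (label, center_x, center_y) tuples;
# grid size and block numbering are illustrative, not the paper's spec.

def block_index(cx, cy, width, height, n=4):
    """Map an object's center (cx, cy) to one of the N x N blocks,
    numbered row-major from 0 to n*n - 1."""
    col = min(int(cx / width * n), n - 1)
    row = min(int(cy / height * n), n - 1)
    return row * n + col

def make_ptp_prompts(detections, width, height, n=4):
    """Turn detector outputs into fill-in-the-blank PTP sentences.

    detections: list of (object_name, cx, cy) tuples from any
    off-the-shelf detector (e.g. class labels and box centers).
    """
    prompts = []
    for name, cx, cy in detections:
        p = block_index(cx, cy, width, height, n)
        # During pre-training, either the position slot [P] or the
        # object slot [O] in this sentence would be masked for the
        # model to predict.
        prompts.append(f"The block {p} has a {name}.")
    return prompts

# Example: a 640x480 image with two detected objects.
dets = [("dog", 100.0, 120.0), ("frisbee", 500.0, 400.0)]
print(make_ptp_prompts(dets, 640, 480))
# -> ['The block 4 has a dog.', 'The block 15 has a frisbee.']
```

Because the prompts are plain text, they can be appended to the caption during pre-training and dropped entirely at inference, which is why no object detector is needed at test time.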
visual grounding, Artificial Intelligence and Robotics, Programming Languages and Compilers, vision-language pre-training, Numerical Analysis and Scientific Computing, Fill-in-the-blank, position-guided text prompt
| Indicator | Description | Value |
| --- | --- | --- |
| selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 4 |
| popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
