
The rapid advancement of Large Vision-Language Models (LVLMs) has shown immense potential: these models are increasingly capable of tackling abstract visual tasks. Geometric structures, particularly graphs, with their inherent flexibility and complexity, serve as an excellent benchmark for evaluating these models' predictive capabilities. While human observers can readily identify subtle visual details and perform accurate analyses, our investigation reveals that state-of-the-art LVLMs exhibit consistent limitations in specific visual graph scenarios, especially when confronted with stylistic variations. In response to these challenges, we introduce VisGraphVar (Visual Graph Variability), a customizable benchmark generator capable of producing graph images for seven distinct task categories (detection, classification, segmentation, pattern recognition, link prediction, reasoning, and matching), designed to systematically evaluate the strengths and limitations of individual LVLMs. We use VisGraphVar to produce 990 graph images and evaluate six LVLMs under two distinct prompting strategies, namely zero-shot and chain-of-thought. The findings demonstrate that variations in the visual attributes of images (e.g., node labeling and layout) and the deliberate inclusion of visual imperfections, such as overlapping nodes, significantly affect model performance. This research emphasizes the importance of comprehensive evaluation across graph-related tasks, extending beyond reasoning alone. VisGraphVar offers valuable insights to guide the development of more reliable and robust systems capable of advanced visual graph analysis.
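The core idea of varying visual attributes while holding the underlying graph fixed can be illustrated with a minimal sketch. This is not the actual VisGraphVar code; it is a hypothetical example, assuming `networkx` and `matplotlib` are available, showing how the same random graph can be rendered under different layouts and with or without node labels, the kind of stylistic variation the abstract describes.

```python
# Minimal sketch (NOT the VisGraphVar implementation): render one underlying
# graph under different stylistic variations -- layout and node labeling.
import networkx as nx
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

def render_graph(n_nodes=8, edge_prob=0.3, layout="spring",
                 labeled=True, seed=0, out_path="graph.png"):
    """Generate a random graph and save one stylistic rendering of it."""
    g = nx.gnp_random_graph(n_nodes, edge_prob, seed=seed)
    layouts = {
        "spring": nx.spring_layout,
        "circular": nx.circular_layout,
        "random": nx.random_layout,
    }
    pos = layouts[layout](g)  # node positions depend on the chosen layout
    plt.figure(figsize=(4, 4))
    nx.draw(g, pos, with_labels=labeled, node_color="lightblue")
    plt.savefig(out_path)
    plt.close()
    return g

# Each (layout, labeled) combination yields a different image of the same
# underlying graph, probing whether a model's answers are stable across styles.
for layout in ("spring", "circular", "random"):
    for labeled in (True, False):
        render_graph(layout=layout, labeled=labeled,
                     out_path=f"graph_{layout}_{labeled}.png")
```

Because the graph topology is fixed by the seed, any difference in a model's answer across these renderings can be attributed to the stylistic variation rather than the graph itself.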
FOS: Computer and information sciences, Computer Science - Machine Learning, Large vision-language models, Layout, Computer Science - Artificial Intelligence, graph theory, Computer Vision and Pattern Recognition (cs.CV), 68T50, Computer Science - Computer Vision and Pattern Recognition, Benchmark, computer vision, Image color analysis, Machine Learning (cs.LG), Cognition, Visualization, Benchmark testing, Image segmentation, Computer Science - Computation and Language, Complexity theory, Computational modeling, TK1-9971, Image edge detection, Generators, Graph theory, Artificial Intelligence (cs.AI), Computer vision, Electrical engineering. Electronics. Nuclear engineering, large vision-language models, Computation and Language (cs.CL)
