
We report the first documented instance of multiple frontier AI systems from different vendors (Claude, GPT-4, Grok, Gemini, DeepSeek, Kimi) collaborating in real time to solve mathematical olympiad problems. The combined system achieved 50% accuracy (3/6 problems correct) on IMO 2025, an 18.4 percentage point improvement over the Gemini baseline. Notably, Gemini alone solved 0/6 problems in our trials; all three correct answers emerged from cross-AI collaboration and consensus voting. This work demonstrates that multi-vendor AI collaboration can exceed individual model performance on the hardest mathematical reasoning benchmarks, and it introduces a novel "Family Game Night" protocol for fallback reasoning when primary models fail.
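The abstract does not specify the consensus mechanism; as a minimal sketch, assuming each model returns a single final answer and that agreement is counted at the answer level, the voting-with-fallback idea could look like this (the function names, quorum threshold, and example answers are illustrative, not from the paper):

```python
from collections import Counter

def consensus_vote(answers, min_agreement=2):
    """Return the most common non-null answer if it meets the
    agreement quorum; otherwise return None to trigger a fallback
    round (e.g. the paper's "Family Game Night" protocol)."""
    counts = Counter(a for a in answers if a is not None)
    if not counts:
        return None
    answer, votes = counts.most_common(1)[0]
    return answer if votes >= min_agreement else None

# Hypothetical round: three of five models converge on "42",
# one dissents, one fails to answer.
primary_round = ["42", "42", "17", "42", None]
print(consensus_vote(primary_round))          # quorum reached
print(consensus_vote(["a", "b", None]))       # no quorum: fall back
```

In this sketch a `None` result signals that no quorum was reached, at which point a fallback reasoning round would be invoked; how the actual protocol canonicalizes mathematically equivalent answers before counting votes is not described in the abstract.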
