
Recently, Coding Agents have gained considerable influence on software engineering processes, including reviews, tests, documentation, code generation, and pull requests. These Agentic Pull Requests flood repositories and thus cause considerable effort for integrators, who must review them. However, it is so far unclear which factors influence the acceptance of Agentic Pull Requests. Insights into these factors would help improve Coding Agents, support integrators, and provide guidelines for users creating Agentic Pull Requests. In this paper, we propose an exploratory study using targeted association rule mining to uncover the underlying patterns and factors that lead to the acceptance of an Agentic Pull Request. Our results indicate that Agentic Pull Requests are more likely to be accepted when integrators' response times are short and the pull request itself is rather small (e.g., in terms of the number of commits and changed files). Moreover, patterns previously discovered in Human Pull Requests apply only in part, and in a considerably weaker form.
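To make the method concrete, the following is a minimal sketch of targeted association rule mining as described above: mining only rules whose consequent is a fixed target item (here, whether a pull request was merged). The transaction data, feature names (`few_commits`, `fast_response`, etc.), and thresholds are hypothetical illustrations, not data or parameters from the study.

```python
from itertools import combinations

# Hypothetical toy data: each transaction is the set of discretized
# features observed for one Agentic Pull Request (names are illustrative).
transactions = [
    {"few_commits", "few_files", "fast_response", "merged"},
    {"few_commits", "fast_response", "merged"},
    {"many_commits", "slow_response"},
    {"few_commits", "few_files", "merged"},
    {"many_commits", "fast_response", "merged"},
    {"few_files", "slow_response"},
]

TARGET = "merged"  # targeted mining: consider only rules with this consequent


def support(itemset):
    """Fraction of transactions containing all items of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)


def targeted_rules(min_support=0.3, min_confidence=0.6):
    """Mine rules X -> {TARGET} meeting the support/confidence thresholds."""
    items = sorted(set().union(*transactions) - {TARGET})
    rules = []
    for size in (1, 2):  # antecedents of size 1 and 2 for brevity
        for antecedent in combinations(items, size):
            a = frozenset(antecedent)
            supp_a = support(a)
            if supp_a == 0:
                continue
            supp_rule = support(a | {TARGET})
            conf = supp_rule / supp_a
            if supp_rule >= min_support and conf >= min_confidence:
                rules.append((set(a), conf, supp_rule))
    return rules


for antecedent, conf, supp in targeted_rules():
    print(f"{sorted(antecedent)} -> ['{TARGET}']  conf={conf:.2f} supp={supp:.2f}")
```

On this toy data, the sketch surfaces rules such as {few_commits} → {merged} and {fast_response} → {merged}, mirroring the kind of acceptance patterns the study targets; restricting the consequent to the target item is what distinguishes targeted mining from general-purpose rule mining.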
