
AI agents based on large language models (LLMs) are becoming a key tool for automating complex tasks. Unlike general-purpose LLMs that simply generate text, modern agents can independently plan actions, call external tools and APIs, work with knowledge bases, and make decisions based on multi-stage analysis of a situation. As such systems grow more complex, however, ensuring their robustness becomes a critical problem. This work presents a systematic approach to identifying and classifying robustness problems in AI agents. The proposed taxonomy describes nine common problems that can arise during task execution in an AI agent. For practical use, a comprehensive evaluation methodology is proposed, including metamorphic testing to assess resistance to changes in input data, verification of the correct use of information sources, analysis of the task flow inside an AI agent, monitoring of tool usage, and assessment of the quality of the final results. The methodology specifies concrete metrics with success criteria and approaches to their implementation. It is shown that the proposed system covers all identified error categories and makes it possible to evaluate the robustness of AI agents not only at the component level, but also at the level of component interaction and of the system as a whole.
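The metamorphic-testing step mentioned above can be illustrated with a minimal sketch: a semantics-preserving rewording of the input should not change the agent's answer. The `agent` stub, the paraphrase rule, and the exact-match comparison below are all illustrative assumptions, not the paper's implementation.

```python
# Minimal metamorphic robustness check for an AI agent (sketch).
# A real setup would call an LLM agent and use a softer similarity
# measure; here a deterministic stub keeps the example runnable.

def agent(query: str) -> str:
    # Stand-in for a real LLM agent: answers one fixed arithmetic question.
    if "2" in query and "3" in query:
        return "5"
    return "unknown"

def paraphrase(query: str) -> str:
    # Metamorphic relation: a semantics-preserving rewording of the input.
    return query.replace("What is", "Please compute")

def metamorphic_consistent(query: str) -> bool:
    # The agent passes this check if both phrasings yield the same answer.
    return agent(query) == agent(paraphrase(query))

print(metamorphic_consistent("What is 2 + 3?"))  # True when answers agree
```

Running many such relations (paraphrases, reordered constraints, added irrelevant details) and reporting the pass rate gives a per-agent robustness metric of the kind the methodology calls for.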
AI Robustness, AI Agents, Taxonomy of Errors, AI Evaluation
