
Demonstrating the reliability of cognitive multi-agent systems is of key importance. There has been extensive work on logics for verifying cognitive agents, but it has remained largely theoretical. Cognitive agent-oriented programming languages provide compact representations of complex decision-making mechanisms, which offers an opportunity to apply theorem proving. We base our work on the belief that theorem proving can complement the currently available approaches to providing assurance for cognitive multi-agent systems; however, a practical theorem-proving approach is still missing. We explore the use of proof assistants to make verifying cognitive multi-agent systems more practical.
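To illustrate the kind of verification meant here, the following is a minimal sketch, not taken from the paper: a toy cognitive agent whose state is a list of beliefs and a single decision rule, together with a proof in Lean 4 that the rule satisfies a simple reactivity property. All names (`Belief`, `AgentState`, `step`) are hypothetical, and the actual work may target a different agent language and prover.

```lean
-- Hypothetical toy model: an agent with a belief base and one decision rule.
inductive Belief where
  | batteryLow
  | charging
  deriving DecidableEq

structure AgentState where
  beliefs : List Belief

-- Decision rule: if the agent believes the battery is low, adopt `charging`.
def step (s : AgentState) : AgentState :=
  if Belief.batteryLow ∈ s.beliefs then
    { beliefs := Belief.charging :: s.beliefs }
  else
    s

-- Reactivity property: whenever `batteryLow` is believed, after one step
-- the agent also believes `charging`.
theorem step_reacts (s : AgentState)
    (h : Belief.batteryLow ∈ s.beliefs) :
    Belief.charging ∈ (step s).beliefs := by
  simp [step, h]
```

Even in this small sketch, the proof assistant forces the decision rule and the property to be stated precisely, which is the kind of practicality gain the abstract argues theorem proving can bring to cognitive multi-agent systems.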
