
doi: 10.2139/ssrn.3733743
Algorithms are becoming ubiquitous in our society. They are powerful and, in some cases, indispensable tools in today’s economy. As for the technology itself, we do not yet have AI sophisticated enough to reach autonomous tacit collusion, with any reasonable degree of certainty, in most real markets. This does not mean we should ignore the potential risks. In fact, in their effort to design AIs that can learn to cooperate with one another and with humans for social good, AI researchers have shown that autonomous algorithmic coordination is possible. But there are also several positive takeaways from this research. For example, given the technical challenges, I argue that just as emails leave a trail of evidence when a cartel uses them to coordinate, a similar trail of evidence is likely present when collusive algorithms are being designed. The literature also offers a good deal of insight into the types of design features and capabilities that could lead to algorithmic collusion. I highlighted the role of algorithmic communication as a leading example and argued that these known collusive features should raise red flags even if collusion is ultimately reached autonomously by the algorithms.
