The STeLLAR artifact consists of a client, written in Go and structured in modules, together with a collection of auxiliary Python scripts for plotting the obtained data. The requirements are Ubuntu 18 Linux, an x86 server-grade CPU, and a 10 Gb/s NIC. The user needs an AWS account to deploy and benchmark AWS functions (and likewise for the other providers). The code is released under the MIT license. All data presented in the paper were obtained with this tool, namely the studies of warm and cold function invocations, deployment methods and language runtimes, data-transfer delays, bursty invocations, and scheduling policies. The source code of the toolchain can be downloaded from Zenodo and compiled into a binary before execution. Further instructions on how to run the tool are available on GitHub and Zenodo.
Keywords: tail latency, serverless, benchmarking
