
handle: 2108/432524
The popularity of the Function-as-a-Service (FaaS) computing paradigm has exceeded the borders of Cloud data centers, aiming to bring the benefits of serverless computing to the edge of the network as well. Enjoying both the reduced latency of the Edge and the resource richness of the Cloud calls for suitable architectures and strategies, especially regarding policies for function offloading that make efficient use of the available resources. In this paper, we devise Quality-of-Service-aware offloading policies for serverless functions through deep reinforcement learning (DRL). The proposed approach learns how to optimize the utility generated over time and the incurred operational costs, taking into account the service requirements of heterogeneous functions and users. Experiments on a FaaS prototype demonstrate the effectiveness of the approach compared to a model-based competitor, at the price of increased computational demand due to DRL training.
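The core idea of learning an offloading policy that trades utility against operational cost can be illustrated with a deliberately simplified sketch. The paper uses deep RL; the snippet below substitutes a tabular Q-learning agent on a toy problem, and all states, actions, rewards, and parameters are hypothetical simplifications chosen for illustration only.

```python
import random

# Illustrative sketch only: tabular Q-learning for a toy offloading
# problem (the paper employs DRL; everything here is a hypothetical
# simplification, not the authors' actual model).
#
# States: discretized edge-node load (0 = low, 1 = medium, 2 = high).
# Actions: 0 = execute locally at the edge, 1 = offload to the cloud.

N_STATES, N_ACTIONS = 3, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def reward(state, action):
    """Utility minus operational cost (hypothetical values).
    Local execution yields high utility at low load but degrades
    (QoS violations) as load grows; offloading pays a fixed cost."""
    if action == 0:                      # local execution
        return [1.0, 0.4, -0.5][state]
    return 0.6 - 0.2                     # cloud utility minus offload cost

def step(state, action):
    """Toy load dynamics: local runs raise load, offloads lower it."""
    if action == 0:
        return min(state + 1, N_STATES - 1)
    return max(state - 1, 0)

def train(steps=2000, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    state = 0
    for _ in range(steps):
        if rng.random() < EPSILON:       # epsilon-greedy exploration
            action = rng.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        r, nxt = reward(state, action), step(state, action)
        # Standard Q-learning temporal-difference update
        q[state][action] += ALPHA * (r + GAMMA * max(q[nxt]) - q[state][action])
        state = nxt
    return q

q = train()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
```

Under these toy rewards, the learned policy keeps functions local while the edge node is lightly loaded and offloads to the cloud once QoS would degrade, which is the qualitative behavior the QoS-aware policies in the paper aim for at scale.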
Reinforcement learning, Edge computing, Functions-As-a-Service, Serverless computing
