
doi: 10.25560/110050
handle: 10044/1/110050
In serverless computing, users upload their code to a provider, who manages all other aspects of running a distributed parallel application: provisioning and scaling resources, ensuring timely execution, and enforcing security guarantees, all while billing the user only for what they have used. This makes serverless computing an excellent fit for a wide range of applications, all of which can take advantage of simple, on-demand parallelism, avoiding the otherwise complex job of scaling cloud-based infrastructure.

Serverless providers are responsible for each application's distribution and parallelism, and perform fine-grained scaling of many tenants' applications over their shared infrastructure. To simplify their job, today's providers require that applications be decomposed into stateless, ephemeral functions. Each function is isolated in its own container or virtual machine (VM), can share data only through external storage, has no guaranteed level of parallelism, and cannot communicate directly with others.

While this architecture and programming model simplify the provider's job, they introduce three problems for users: (i) using containers and VMs for isolation introduces excessive overheads and prevents sharing system resources between co-located functions; (ii) sharing data via external storage introduces duplication, inefficiency, and performance overheads; and (iii) functions can neither guarantee a level of parallelism nor communicate directly. These problems make it too expensive or complex to execute many stateful parallel applications on serverless platforms today.

This thesis offers the following contributions to address these problems:

1. Lightweight serverless isolation. To overcome the resource and performance overheads of using containers and VMs for serverless isolation, we propose a new form of lightweight software-based isolation. This mechanism offers memory safety guarantees via software fault isolation using WebAssembly, and resource isolation using standard operating system (OS) features. We demonstrate the use of snapshot and restore to reduce initialisation times, and replicate thread and process semantics across hosts.

2. Serverless shared memory. To enable shared memory programming and reduce the overheads associated with external storage, we propose a new serverless runtime that supports parallel processing on local in-memory state, and synchronises this state across hosts. We present an object-oriented API that gives transparent access to this distributed state, as well as support for high-level declarative APIs, e.g. OpenMP.

3. Serverless message passing. We demonstrate serverless message passing between long-lived distributed processes, while retaining a provider's ability to control the distribution of those processes. We describe our approach to asynchronous message passing among groups of processes that can be migrated across hosts, while maintaining message-passing semantics.
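The two ideas in contribution 1 can be illustrated with a toy sketch: software fault isolation confines every load and store to a sandbox's own linear memory, as WebAssembly does, and a snapshot of initialised memory lets new instances start by copying state rather than re-running initialisation. This is a hypothetical illustration of the concepts, not the thesis implementation; the `Sandbox` class and its methods are invented for this sketch.

```python
class Sandbox:
    """A toy sandbox with WebAssembly-style linear memory (illustrative only)."""

    def __init__(self, size: int):
        self.memory = bytearray(size)

    def _check(self, addr: int, length: int) -> None:
        # Software fault isolation: every access is bounds-checked, so a
        # faulty function cannot read or write outside its own memory.
        if addr < 0 or addr + length > len(self.memory):
            raise MemoryError(f"out-of-bounds access at {addr}")

    def store(self, addr: int, data: bytes) -> None:
        self._check(addr, len(data))
        self.memory[addr:addr + len(data)] = data

    def load(self, addr: int, length: int) -> bytes:
        self._check(addr, length)
        return bytes(self.memory[addr:addr + length])

    def snapshot(self) -> bytes:
        # Capture initialised state once...
        return bytes(self.memory)

    @classmethod
    def restore(cls, snap: bytes) -> "Sandbox":
        # ...and create new instances by copying it, skipping initialisation.
        sb = cls(len(snap))
        sb.memory[:] = snap
        return sb
```

In a real system the bounds checks are compiled into the WebAssembly module's machine code rather than interposed at runtime, which is what makes this form of isolation cheap compared to a container or VM boundary.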
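The shared-memory model of contribution 2 can be sketched as an object that reads and writes a local in-memory copy of state, with the runtime synchronising that copy against a global store shared across hosts. The names here (`SharedArray`, `KV_STORE`, `push`) are invented for illustration and are not the thesis API.

```python
# Stand-in for the provider's global key-value store; in a real deployment
# this would be a service reachable from every host, not a local dict.
KV_STORE: dict = {}

class SharedArray:
    """Illustrative object API over distributed state (hypothetical names)."""

    def __init__(self, key: str, size: int):
        self.key = key
        # Pull: fetch the latest global copy into local memory, if one exists.
        self.local = list(KV_STORE.get(key, [0] * size))

    def __getitem__(self, i: int) -> int:
        return self.local[i]       # reads hit local memory, no network round trip

    def __setitem__(self, i: int, value: int) -> None:
        self.local[i] = value      # writes stay local until pushed

    def push(self) -> None:
        # Synchronise local state back to the global store so that
        # functions on other hosts observe the update.
        KV_STORE[self.key] = list(self.local)
```

The point of the design is that hot-path accesses are plain memory operations, and only explicit (or runtime-scheduled) synchronisation touches the network, in contrast to routing every access through external storage.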
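Contribution 3's separation of addressing from placement can be sketched as follows: processes send to a logical rank, and the runtime routes via per-rank inboxes, so the provider can migrate a process to another host without breaking message-passing semantics. `MessageRouter` and its methods are invented for this sketch; in-process queues stand in for cross-host transport.

```python
import queue

class MessageRouter:
    """Routes messages by logical rank, independent of physical host (illustrative)."""

    def __init__(self, world_size: int):
        self.inboxes = {rank: queue.Queue() for rank in range(world_size)}
        # Rank-to-host placement is the provider's decision and may change.
        self.placement = {rank: "host-0" for rank in range(world_size)}

    def send(self, to_rank: int, msg) -> None:
        # Asynchronous send: enqueue for the logical rank and return immediately.
        self.inboxes[to_rank].put(msg)

    def recv(self, rank: int):
        return self.inboxes[rank].get()

    def migrate(self, rank: int, new_host: str) -> None:
        # Migration updates placement only; messages already queued for the
        # rank remain deliverable in order, preserving message-passing semantics.
        self.placement[rank] = new_host
```

Because senders name a rank rather than a host, in-flight and queued messages survive migration, which is what lets the provider keep control of distribution under long-lived communicating processes.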
