
In edge computing, task offloading involves transferring computational tasks from the “far-edge”, which comprises end-user devices and less powerful edge devices, to the “near-edge” of more capable edge servers, or to the “core” cloud infrastructure. This practice optimizes performance, reduces latency, and enhances overall efficiency; energy efficiency in particular has recently become a high-priority criterion for offloading decisions. A prominent technique for making offloading decisions in edge computing environments is Deep Reinforcement Learning (DRL), known for its ability to adapt to complex environments and to deliver fast, high-quality decisions in multi-objective optimization tasks. This paper explores DRL approaches in detail, providing an overview of recent research developments in the field. To structure the literature analysis, we classify DRL approaches for energy-efficient task offloading according to two “computing continua”: the far/near-edge continuum and the (far-)edge-cloud continuum.
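The offloading decision described above can be illustrated with a minimal sketch, not taken from the paper: a tabular Q-learning-style agent that, for each task, chooses between local execution on the far-edge device, offloading to a near-edge server, or offloading to the cloud. The state encoding, action set, and the weighted energy/latency cost model are all hypothetical assumptions made for illustration.

```python
import random

# Illustrative sketch (hypothetical model, not from the surveyed papers):
# a tabular agent learns per-state cost estimates for three offloading
# actions and then acts greedily (minimum estimated cost).

ACTIONS = ["local", "near_edge", "cloud"]

def cost(state, action):
    """Hypothetical combined energy + latency cost (lower is better)."""
    task_size, link_quality = state
    if action == "local":
        return 2.0 * task_size                          # high device energy, no transfer
    if action == "near_edge":
        return 0.5 * task_size + (1.0 - link_quality)   # cheap compute, one short hop
    return 0.3 * task_size + 2.5 * (1.0 - link_quality)  # cloud: long link dominates

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated cost
    states = [(s, l) for s in (1, 2, 3) for l in (0.2, 0.8)]
    for _ in range(episodes):
        state = rng.choice(states)
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)  # explore
        else:
            # exploit: pick the action with the lowest estimated cost so far
            action = min(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        observed = cost(state, action)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (observed - old)  # running estimate
    return q

q = train()
# Greedy decision for a small task on a good link:
best = min(ACTIONS, key=lambda a: q.get(((1, 0.8), a), float("inf")))
print(best)
```

Under this toy cost model, a small task over a good link ends up offloaded to the near-edge, matching the intuition that the near-edge trades a short transfer for much cheaper computation; real DRL approaches replace the table with a deep network over a far richer state.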
