Views provided by UsageCounts
arXiv: 1712.07084
handle: 11380/1202571, 10044/1/59220
We consider a mobile user accessing contents in a dynamic environment, where new contents are generated over time (by the user's contacts) and remain relevant to the user for random lifetimes. The user, equipped with a finite-capacity cache memory, accesses the system at random times and requests all the contents that are relevant at the time of access. The system incurs an energy cost that depends on the number of contents downloaded and on the channel quality at that time. Assuming causal knowledge of the channel quality, the content profile, and the user-access behavior, we model the proactive caching problem as a Markov decision process with the goal of minimizing the long-term average energy cost. We first prove the optimality of a threshold-based proactive caching scheme, which dynamically caches or removes appropriate contents from the memory, before they are requested by the user, depending on the channel state. The optimal threshold values depend on the system state and are therefore computationally intractable to obtain exactly. We thus propose parametric representations for the threshold values and use reinforcement-learning algorithms to find near-optimal parametrizations. We demonstrate through simulations that the proposed schemes significantly outperform classical reactive downloading and perform very close to a genie-aided lower bound.
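The threshold structure described in the abstract can be sketched as follows. The linear parametrization, the parameter names `theta`, and the direction of the lifetime dependence are illustrative assumptions for this sketch, not the paper's exact form; the paper fits such parametrizations with reinforcement learning (policy-gradient methods):

```python
def caching_threshold(remaining_lifetime, theta):
    """Hypothetical linear parametrization of the channel-cost threshold.

    theta = (theta0, theta1) are the learnable parameters; in the paper,
    near-optimal values are found with reinforcement learning.
    """
    # Assumption for illustration: the threshold varies linearly with the
    # content's remaining lifetime (its state-dependent component).
    return theta[0] + theta[1] * remaining_lifetime


def cache_proactively(channel_cost, remaining_lifetime, theta):
    """Threshold rule: download a content proactively only when the
    current per-content download cost is below its state-dependent
    threshold, i.e. when the channel is good enough."""
    return channel_cost <= caching_threshold(remaining_lifetime, theta)


# Example usage with arbitrary illustrative parameters.
theta = (1.0, 0.5)
print(cache_proactively(channel_cost=2.0, remaining_lifetime=4, theta=theta))
print(cache_proactively(channel_cost=5.0, remaining_lifetime=4, theta=theta))
```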
FOS: Computer and information sciences; Technology; reinforcement learning; Computer Science - Information Theory (cs.IT); 0805 Distributed Computing; Systems and Control (eess.SY); Electrical Engineering and Systems Science - Systems and Control; Computer Science - Networking and Internet Architecture (cs.NI); proactive content caching; Engineering; 1005 Communications Technologies; FOS: Electrical engineering, electronic engineering, information engineering; Science & Technology; policy gradient methods; Electrical & Electronic; 004; 620; 0906 Electrical and Electronic Engineering; Telecommunications; Networking & Telecommunications; Markov decision process
| indicator | description | value |
| --- | --- | --- |
| selected citations | Citations derived from selected sources; an alternative to the "influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 85 |
| popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 1% |
| influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 1% |
| views | | 119 |