
Summary: Stochastic scheduling problems are considered using discounted dynamic programming. Both maximizing pure rewards and minimizing linear holding costs are treated within one common Markov decision problem. A sufficient condition for the optimality of the myopic policy over finite and infinite horizons is given. For the infinite-horizon case we show the optimality of an index policy and give a sufficient condition under which the index policy is myopic. Moreover, the relation between the two sufficient conditions is discussed.
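To illustrate the index-policy idea in the simplest special case (not the paper's general stochastic model): for deterministic jobs that pay reward r_i on completion after p_i periods under discount factor beta, the Gittins-type index reduces to the discounted reward rate r_i * beta^{p_i} / (1 - beta^{p_i}), and scheduling jobs in decreasing index order maximizes total discounted reward. The sketch below checks this against brute-force enumeration on a small hypothetical instance; all data are made up for illustration.

```python
import itertools

def discounted_reward(order, p, r, beta):
    """Total discounted reward when jobs run back-to-back in `order`.
    Job i completes p[i] periods after it starts and pays r[i] then."""
    total, t = 0.0, 0
    for i in order:
        t += p[i]
        total += r[i] * beta ** t
    return total

def index(i, p, r, beta):
    """Index of a deterministic job: its discounted reward rate.
    (Gittins index specialized to deterministic processing times.)"""
    return r[i] * beta ** p[i] / (1 - beta ** p[i])

# Hypothetical 4-job instance: processing times and completion rewards.
p = [3, 1, 2, 4]
r = [10.0, 2.0, 6.0, 15.0]
beta = 0.9

jobs = range(len(p))
index_order = sorted(jobs, key=lambda i: -index(i, p, r, beta))
best_order = max(itertools.permutations(jobs),
                 key=lambda o: discounted_reward(o, p, r, beta))

# The index order attains the brute-force optimum.
assert abs(discounted_reward(index_order, p, r, beta)
           - discounted_reward(best_order, p, r, beta)) < 1e-12
```

The optimality of the index order follows from a standard adjacent-exchange argument: swapping two neighboring jobs changes the objective in favor of the job with the larger discounted reward rate. The myopic policy, by contrast, would rank jobs by immediate one-step gain; the paper's sufficient condition characterizes when the two rankings coincide.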
Markov and semi-Markov decision processes, Deterministic scheduling theory in operations research, stochastic scheduling, myopic policy, Stochastic systems in control theory (general), discounted dynamic programming, Dynamic programming, index policy
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of the article based on the underlying citation network (diachronically). | 3 |
| Popularity | The "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Average |
| Influence | The overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | The initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
