
handle: 11104/0161594
In this note we focus on the characterization of policies maximizing the growth rate of expected utility, along with the average of the associated certainty equivalent, in risk-sensitive Markov decision chains with finite state and action spaces. In contrast to the existing literature, the problem is handled by methods of stochastic dynamic programming under the condition that the transition probabilities are replaced by general nonnegative matrices. Using the block-triangular decomposition of a collection of nonnegative matrices, we establish a necessary and sufficient condition guaranteeing that the optimal values are independent of the starting state, along with a partition of the state space into subsets with constant optimal values. Finally, for models whose growth rate is independent of the starting state, we show how the method works when we instead minimize the growth rate or the average of the certainty equivalent.
average optimal policies, optimal growth rates, risk-sensitive Markov decision chains
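The abstract only outlines the approach, so the following Python sketch illustrates one standard way such risk-sensitive models are computed: a Howard–Matheson-type exponential-utility construction in which the transition matrices are replaced by nonnegative matrices and the optimal growth rate is approximated by value iteration. This is a minimal sketch under assumed data (the matrices P, rewards R, and risk parameter gamma are made up for illustration), not the authors' exact algorithm from the paper.

```python
# Sketch: risk-sensitive value iteration for a tiny finite MDP.
# The transition probabilities P_a are replaced by nonnegative matrices
#     q_ij(a) = p_ij(a) * exp(gamma * r_ij(a)),
# and the iterates x_{n+1}(i) = max_a sum_j q_ij(a) x_n(j) bound the optimal
# growth rate.  Convergence of the bounds assumes the maximizing matrices are
# irreducible and aperiodic; all numbers below are illustrative assumptions.
import numpy as np

gamma = 0.5                       # risk-sensitivity parameter (assumed)
P = {                             # transition probabilities, one matrix per action (assumed)
    0: np.array([[0.9, 0.1], [0.4, 0.6]]),
    1: np.array([[0.2, 0.8], [0.7, 0.3]]),
}
R = {                             # one-step rewards r_ij(a) (assumed)
    0: np.array([[1.0, 0.0], [0.5, 1.5]]),
    1: np.array([[2.0, 0.2], [0.0, 1.0]]),
}

# Nonnegative matrices that replace the transition probabilities.
Q = {a: P[a] * np.exp(gamma * R[a]) for a in P}

n_states = 2
x = np.ones(n_states)
for _ in range(500):
    # stochastic dynamic programming step: componentwise maximization over actions
    y = np.max(np.stack([Q[a] @ x for a in Q]), axis=0)
    lo, hi = np.min(y / x), np.max(y / x)   # lower/upper bounds on the optimal growth rate
    x = y / np.max(y)                       # normalize to avoid numerical overflow
    if hi - lo < 1e-10:
        break

rho = 0.5 * (lo + hi)
print(f"optimal growth rate rho* ~ {rho:.6f}")
print(f"average certainty equivalent ~ {np.log(rho) / gamma:.6f}")
print("greedy actions:",
      [int(np.argmax([Q[a][i] @ x for a in Q])) for i in range(n_states)])
```

Whether the resulting growth rate is the same for every starting state depends on the communication structure of the maximizing nonnegative matrices, which is exactly the issue the block-triangular decomposition in the paper addresses; the sketch above assumes an irreducible maximizing matrix and so reports a single state-independent value.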
