
handle: 11581/331782 , 11581/353990
This paper presents a method for comparing two queueing networks. In this context, one typically thinks of one network as a solvable modification of another, unsolvable network of practical interest. The approach evaluates steady-state performance measures via a cumulative reward structure and relies strongly on the analytical estimation of so-called bias terms. To this end, in contrast with the standard stochastic comparison approach, a Markov reward approach is presented, based upon a discrete-time transformation and one-step Markov reward or dynamic programming steps. In more detail, the essential ingredients of this approach are:
- to analyze steady-state performance measures via expected average rewards;
- to use a discrete-time Markov transition structure and to compare the two systems through the difference in their one-step transition structures;
- to use inductive arguments to estimate or bound the so-called bias terms for one of the two systems.

Leonardo Pasini
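The ingredients above can be illustrated with a toy example. The sketch below (not taken from the paper; all parameter names and the M/M/1 setting are assumptions for illustration) uniformizes a single M/M/1 queue with a finite buffer into a discrete-time Markov chain and applies one-step dynamic programming (value iteration) steps V_{n+1}(i) = r(i) + sum_j P(i,j) V_n(j). The differences V_n(i) - V_n(0) then approximate the bias terms, and W(0) - V(0) approximates the expected average reward.

```python
def bias_terms(lam=0.5, mu=1.0, B=20, n_steps=20000):
    """Toy sketch: value iteration on a uniformized M/M/1 queue with
    buffer size B; reward r(i) = i (queue length), so the average
    reward approximates the mean queue length.
    Returns (average reward g, bias terms d relative to state 0)."""
    h = 1.0 / (lam + mu)           # uniformization constant
    p_up, p_dn = lam * h, mu * h   # one-step transition probabilities
    states = range(B + 1)
    r = [float(i) for i in states]
    V = [0.0] * (B + 1)
    g = 0.0
    for _ in range(n_steps):
        W = [0.0] * (B + 1)
        for i in states:
            up = V[min(i + 1, B)]  # arrival (self-loop when blocked at B)
            dn = V[max(i - 1, 0)]  # departure (self-loop in state 0)
            W[i] = r[i] + p_up * up + p_dn * dn
        g = W[0] - V[0]            # average reward per discrete step
        V = W
    d = [V[i] - V[0] for i in states]  # approximate bias terms
    return g, d
```

For lam=0.5 and mu=1.0 (load 0.5) the average reward converges to roughly the M/M/1 mean queue length rho/(1-rho) = 1, and the bias terms come out monotone in the queue length — the kind of structural property the paper's inductive arguments establish to bound the bias terms of one of the two systems.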
