
doi: 10.1063/1.5043835
An approach is presented for comparing two Markov chains, particularly continuous-time Markov chains (CTMCs) such as those used to model queueing networks (QNs). Here one may typically think of one CTMC or QN as a solvable modification (e.g. a product-form QN) of the other, the original, which is of practical interest but unsolvable. The approach is essentially based upon evaluating performance measures by cumulative reward structures and analytically bounding so-called bias terms, also known as relative gains or fundamental-matrix elements. A general comparison and error bound result will be provided. The approach, referred to as the Markov Reward approach, is related to stochastic dynamic programming and
• may lead to analytic error bounds for the discrepancy, and
• may still apply when stochastic comparison fails.
To motivate and illustrate the approach, the presentation will contain an instructive finite tandem queue example and a practical result for a real-life application of an Operating Theater-Intensive Care Unit system. Some remaining questions for research will be addressed briefly.
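The cumulative-reward machinery mentioned above can be sketched numerically. The following snippet is an illustrative sketch, not taken from the paper: the M/M/1/K queue, its rates, and the queue-length reward are all assumptions. It computes the long-run average reward g (here: mean queue length) and the bias terms d(i) (relative gains) of a small CTMC by uniformizing the generator and running relative value iteration, then checks g against the closed-form M/M/1/K result.

```python
# Illustrative sketch (assumed example, not from the paper):
# average reward and bias terms of an M/M/1/K queue, a small
# birth-death CTMC, via uniformization + relative value iteration.

K = 4                 # buffer size (assumed)
lam, mu = 1.0, 2.0    # arrival and service rates (assumed)
states = range(K + 1)

# CTMC generator Q of the birth-death chain
Q = [[0.0] * (K + 1) for _ in states]
for i in states:
    if i < K:
        Q[i][i + 1] = lam      # arrival
    if i > 0:
        Q[i][i - 1] = mu       # service completion
    Q[i][i] = -sum(Q[i])

# Uniformization: P = I + Q / Lam, with Lam at least the maximal exit rate
Lam = max(-Q[i][i] for i in states)
P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in states]
     for i in states]

r = [float(i) for i in states]  # reward rate: number in system

# Relative value iteration on the uniformized chain,
# normalized so that d(0) = 0 at every step
V = [0.0] * (K + 1)
for _ in range(10000):
    Vn = [r[i] / Lam + sum(P[i][j] * V[j] for j in states) for i in states]
    shift = Vn[0]
    Vn = [v - shift for v in Vn]
    if max(abs(Vn[i] - V[i]) for i in states) < 1e-12:
        V = Vn
        break
    V = Vn

d = V  # bias terms (relative gains), with d(0) = 0
# Average CTMC reward: Lam times the per-step gain of the uniformized chain
g = Lam * (r[0] / Lam + sum(P[0][j] * V[j] for j in states) - V[0])

# Sanity check against the closed-form M/M/1/K mean queue length
rho = lam / mu
Z = sum(rho ** i for i in states)
g_exact = sum(i * rho ** i for i in states) / Z
print(g, g_exact, d)
```

In the paper's comparison setting, one would compute such bias terms d for the solvable modified chain and bound the perturbation of the generator against them to obtain an analytic error bound for the discrepancy in g; the snippet only shows the building blocks.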
