
We investigate a distributed asynchronous algorithm that uses infinitesimal perturbation analysis (IPA) gradient estimators for on-line optimization of tandem networks of queues. In our scheme, each queue has a processor that updates a control parameter associated with the queue according to a stochastic gradient algorithm driven by IPA estimates of the gradient of the performance measure. The update times of the processors are not synchronized. The processors also communicate the results of their computations to each other, and this communication is subject to delay. We give conditions under which the algorithm converges with probability one. In our proof of convergence we analyze a particular subsequence of the sequence of control parameters and show that this subsequence behaves like a sequence generated by a centralized synchronous gradient algorithm that updates before the start of certain busy periods of the network, using gradient estimates that are asymptotically unbiased.
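To make the update scheme concrete, here is a minimal sketch (not the paper's exact algorithm) of a distributed asynchronous stochastic gradient iteration with delayed communication. Each queue's processor holds its own control parameter, updates at its own epochs with a noisy IPA-style gradient estimate, and sees the other processors' parameters only through stale, delayed copies. The performance measure, the estimator `ipa_gradient_estimate`, the delay model, and the step sizes are all illustrative assumptions.

```python
import random

NUM_QUEUES = 3
DELAY = 5          # communication delay, in update epochs (assumed)
STEPS = 2000

def ipa_gradient_estimate(thetas, i):
    """Hypothetical stand-in for an IPA estimate of dJ/dtheta_i.

    For illustration only: the performance measure is taken to be
    J(theta) = sum_j (theta_j - j)^2, whose i-th gradient component is
    2*(theta_i - i); zero-mean noise mimics estimation error.
    """
    return 2.0 * (thetas[i] - i) + random.gauss(0.0, 1.0)

theta = [0.0] * NUM_QUEUES
history = [list(theta)]                 # past parameter vectors, used to model delay

for n in range(1, STEPS + 1):
    i = random.randrange(NUM_QUEUES)    # asynchronous: one processor updates at a time
    stale = history[max(0, len(history) - 1 - DELAY)]  # delayed view of the network
    view = list(stale)
    view[i] = theta[i]                  # each processor knows its own current value
    a_n = 1.0 / n                       # diminishing step sizes (assumed)
    theta[i] -= a_n * ipa_gradient_estimate(view, i)
    history.append(list(theta))

# Parameters drift toward the minimizer [0, 1, 2] of the illustrative J.
print("final parameters:", [round(t, 2) for t in theta])
```

Under the kind of conditions the abstract refers to (diminishing step sizes, bounded delays, asymptotically unbiased gradient estimates), iterations of this form can be shown to converge with probability one despite the lack of synchronization.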
