
In this paper, we consider the problem of assigning a set of n independent tasks to a set of m identical processors so that the overall execution time is minimized, given that the precise task execution times are not known a priori. We first provide a theoretical analysis of several conventional scheduling policies in terms of their worst-case slowdown relative to an optimal scheduling policy. We show that the best previously known algorithm achieves a worst-case competitive ratio of 1 + 1/f(n), where f(n) = O(n^(2/3)) for any fixed m, which approaches one as n tends to infinity. We then propose a new scheme that achieves a better worst-case ratio of 1 + 1/g(n), where g(n) = Ω(n/log n) for any fixed m, and thus approaches one more quickly than the other schemes.
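To make the problem setting concrete, the following is a minimal sketch of the classical greedy list-scheduling baseline, which assigns each task to the currently least-loaded processor. This is only an illustration of a conventional policy for the stated model (the function name and example data are ours); it is not the scheme proposed in the paper.

```python
import heapq

def greedy_list_schedule(task_times, m):
    """Assign tasks to m identical processors greedily:
    each task goes to the currently least-loaded processor.
    Returns the resulting makespan and the per-task assignment."""
    # Min-heap of (current_load, processor_index).
    loads = [(0.0, p) for p in range(m)]
    heapq.heapify(loads)
    assignment = [None] * len(task_times)
    for i, t in enumerate(task_times):
        load, p = heapq.heappop(loads)   # least-loaded processor
        assignment[i] = p
        heapq.heappush(loads, (load + t, p))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

# Example: 7 tasks on m = 3 identical processors.
makespan, assignment = greedy_list_schedule([2, 3, 5, 1, 4, 2, 6], m=3)
print(makespan, assignment)
```

Note that this greedy rule only needs each task's actual load as it completes, which is why such policies are natural candidates when execution times are unknown in advance.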
