
The MUSE data challenge aims to share information, strategies, and best practices for mitigating these issues with the wider community. We will provide a set of raw science observations and the associated calibrations to any interested team or individual, and ask that they apply their ideas to mitigate a given problem. These best efforts will be collated by the SOC/LOC and presented in November in a friendly and collegial manner. As a community, we will discuss the benefits and drawbacks of each approach and, for the first time, compare the various techniques in an 'apples to apples' manner on a standardized set of data. We will encourage sharing this knowledge, including promoting the outcome of the challenge in a Messenger article or possibly in peer-reviewed journal article(s). Most importantly, we hope to understand the impact of various choices on MUSE data products and to identify promising new and existing avenues for further work in this field.
