
Abstract

Good "quality metadata supports the functional requirements of the system it is designed to support" (Guy, Powell, & Day, 2004). Researchers (e.g., Moen, Stewart, & McClure, 1997; Bruce & Hillmann, 2004) have identified evaluation criteria and issues surrounding metadata evaluation. This panel further explores metadata quality and evaluation challenges. Specific topics include: how metadata specialists (library catalogers) use the rich content designation available in MARC bibliographic records, and the identification of evaluation criteria; means for comparing metadata generated by resource authors, automatic metadata generation applications, and professional metadata creators; and two National Science Foundation National Science Digital Library (NSF-NSDL) projects: a tripartite evaluation involving a metadata quality study, an information retrieval study, and a metadata user study, and a quality assessment of metadata records contributed by members to the NSDL.
