NSF Workshop on “Scholarly Evaluation Metrics:

Opportunities and Challenges”

 

Organizers:

  1. Johan Bollen, School of Informatics and Computing, Indiana University

  2. Herbert Van de Sompel, Research Library, Los Alamos National Laboratory

  3. Ying Ding, School of Library and Information Science, Indiana University

Moderator: Clifford Lynch, CNI

Confirmed Speakers/Panelists:

Oren Beit-Arie (Ex Libris), Peter Binfield (PLoS ONE), Johan Bollen (Indiana University), Lorcan Dempsey (OCLC), Tony Hey (Microsoft Research), Jorge E. Hirsch (UCSD), Julia Lane (NSF), Michael Kurtz (Astrophysics Data System), Clifford Lynch (CNI), Alexis-Michel Mugabushaka (ERCEA), Don Waters (Andrew W. Mellon Foundation), Jevin West (UW/eigenfactor.org), Jan Velterop (Concept Web Alliance)

Location: Renaissance Washington DC Hotel

Time and date: Wednesday, December 16th 2009, 09:00 AM - 05:30 PM [1]

Supported by:



1. Summary

The quantitative evaluation of scholarly impact and value has historically been conducted on the basis of metrics derived from citation data. For example, the well-known journal Impact Factor is defined as a mean two-year citation rate for the articles published in a particular journal. Although well-established and productive, this approach is not always well suited to the fast-paced, open, and interdisciplinary nature of today's digital scholarship. Also, a consensus seems to be emerging that it would be constructive to have multiple metrics, not just one.
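As a concrete illustration of the two-year citation rate mentioned above, the Impact Factor for a journal in year Y divides the citations received in Y to items published in the two preceding years by the number of citable items published in those years. A minimal sketch, using hypothetical counts:

```python
# Hypothetical example: Impact Factor for a journal in 2009.
# Citations received in 2009 to articles the journal published in 2007-2008:
citations_to_prior_two_years = 1200  # assumed figure, for illustration only
# Citable items the journal published in 2007-2008:
citable_items_prior_two_years = 400  # assumed figure, for illustration only

impact_factor = citations_to_prior_two_years / citable_items_prior_two_years
print(impact_factor)  # 3.0
```

The result is a mean citation rate per article, which is why a handful of highly cited articles can dominate a journal's score.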

In recent years, significant advances have been made in this realm. First, there has been a rapid expansion of proposed metrics for evaluating scientific impact, driven by interdisciplinary work in web, network, and social network science, e.g. citation PageRank, the h-index, and various other social network metrics. Second, new data sets such as usage and query data, which represent aspects of scholarly dynamics other than citation, have been investigated as the basis for novel metrics; the COUNTER and MESUR projects are examples in this realm. And, third, an interest in applying Web reputation concepts to scholarly evaluation has emerged, generally referred to as Webometrics.
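One of the newer metrics named above, the h-index (Hirsch, 2005), is simple enough to sketch: an author has index h if h of their papers have at least h citations each. A minimal implementation, with an illustrative citation list:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 4, and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the h-index depends only on the citation counts of an author's own papers, whereas a measure like citation PageRank also weighs where those citations come from.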

A plethora of proposals, both concrete and speculative, has thus emerged to expand the toolkit available for evaluating scholarly impact, to the degree that it has become difficult to see the forest for the trees. Which of these new metrics and underlying data sets best approximate a common-sense understanding of scholarly impact? Which can best be applied to assess a particular facet of scholarly impact? Which are fit for use in a future, fully electronic and open science environment? Which make the most sense from the perspective of those involved in the practice of evaluating scientific impact? Which are regarded as fair by scholars? Under which conditions can novel metrics become an accepted and well-understood part of the evaluation toolkit that is used, for example, in promotion and tenure decisions?

This workshop features speakers and panelists who have done concrete research and exploratory thinking in this problem domain, who have perspectives on existing practices in this domain, and/or who carry a vision of assessment approaches for the rapidly emerging digital, network-based scholarly environment. The workshop will be an ideal opportunity for a public discussion of the characteristics and requirements of novel assessment approaches that are acceptable to all stakeholders.

2. Schedule overview:

The workshop will consist of three sessions:

  1. Morning Session: “Impact Metrics: Existing Practice and Approaches, and Recent Developments.”

  2. Afternoon Session: “Scholarly Assessment: Existing Practice and Approaches, and Visions for the Future.”

  3. Afternoon Panel and Discussion with Participants: The focus is on identifying requirements for novel assessment approaches to become acceptable to community stakeholders, including scholars, academic and research institutions, and funding agencies.


3. Outcomes:

On the basis of the presentations and moderated discussion, the organizers will produce and publish a white paper that summarizes the workshop, and issue a list of major challenges that can serve as recommendations to funding agencies and other parties interested in supporting the development of this domain.



[1] Note: Following the CNI Task Force Meeting held on December 14th and 15th at the same location.