
Tradmetrics; citation analysis

Traditional metrics (tradmetrics) are generally constructed by analysing citation indexes.
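To make concrete what such an analysis involves, the sketch below (using an entirely hypothetical toy index) shows the basic operation: inverting a citation index, which maps each paper to the papers it cites, into per-paper citation counts, the raw material from which tradmetrics are built.

```python
# Minimal sketch with hypothetical data: a citation index maps each paper to
# the papers it cites. Inverting that index gives per-paper citation counts,
# the starting point for traditional citation-derived metrics.
from collections import Counter

citation_index = {                    # paper -> papers it cites (toy example)
    "paper_A": ["paper_C", "paper_D"],
    "paper_B": ["paper_C"],
    "paper_C": ["paper_D"],
}

citation_counts = Counter(
    cited for cites in citation_index.values() for cited in cites
)
print(citation_counts)                # Counter({'paper_C': 2, 'paper_D': 2})
```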

In Can Citation Indexes be Automated? (1965), Eugene Garfield, the inventor of the citation index, provides a list of possible motivations for citing prior work. These include paying homage, providing links to background reading, making corrections, critiquing other work, identifying methodology and disputing the assertions of others. Additionally, citations are often used (as in this paper) to build the “general rhetorical argumentation structure” common to most scholarly writing (Swales 1990, pp. 110-176, cited in Teufel et al. 2006). Although these are the reasons citations are usually made, they have further utility. Realising this motivated early proposals for indexing citations:

“It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such papers, if they could be located quickly. The citation index makes this check practicable.” (Garfield 1955)

Garfield founded the Institute for Scientific Information (ISI) in 1960, which was later acquired and has become Web of Knowledge (Anon 2013). Although they didn’t know it at the time, by indexing citations the ISI not only provided the means to do thorough literature searches quickly, but also sowed the seeds for a new way of understanding research impact through citation analysis. This didn’t happen overnight, though. Vaughan & Shaw (2003) present a detailed and balanced history of how citation analysis came to be used as a tool for evaluation, pointing to numerous studies exploring the pros and cons (e.g. Anderson 1991; Cronin & Overfelt 1994; Cronin 1984; MacRoberts & MacRoberts 1989; Seglen 1992; Garfield 1983a; Garfield 1983b). A full discussion of such papers is beyond the scope of this one, but a synthesis of the arguments suggests that citation analyses are useful and do have value, but that, considered in isolation, they certainly aren’t a good instrument for evaluating impact.

Collectively the papers listed above lead to the conclusion that any analysis of citations, with a view to evaluating impact, must be considered alongside a long list of caveats, and that list changes depending on the field, the journal, and the metric being considered. Citation-derived metrics should be interpreted carefully and with full consideration of their limitations. Vaughan & Shaw (2003) note “It took approximately a generation (20 years) for bibliographic citation analysis to achieve acceptability as a measure of academic impact”, and despite prolonged study of their limitations, citation-derived values are ubiquitous throughout academia (Braun 2003; Monastersky 2005). Given the long incubation time for tradmetrics to be accepted, it follows that altering perceptions and uses of tradmetrics may also need to be considered over similarly long timeframes.

Exemplifying one issue with tradmetrics, Garfield (2012) notes “we asked thousands of authors to write commentaries on their highly cited papers… often [the] authors would say that these were not necessarily their most important papers”. In a short statement based upon consultation with its members, the European Association of Science Editors makes its perspective on the use of journal impact factors for assessing individuals clear:

“the European Association of Science Editors recommends that journal impact factors are used only – and cautiously – for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes either directly or as a surrogate.” (EASE 2007).

This is in line with Garfield’s (1983a, 1983b) well-considered (but critical) conclusions with respect to impact factors; yet citation metrics are applied to individuals and specific projects (Hargens & Schuman 1990; Lowy 1997; Monastersky 2005) despite the clear problems with doing so. A further issue with tradmetrics is demonstrated by studies showing how relatively easily they can be manipulated (López-Cózar 2012; Davis 2011) and are thus open to gaming. The fundamental problem with using citations as a tool for research evaluation is that citations aren’t units of analysis; rather, they are multipurpose tools for authors to explain the provenance of their research, build arguments, and assist other researchers in literature searching. Using tradmetrics as a universal measure of impact is akin to judging the Complete Works of Shakespeare purely on the type of press used to print it. Priem & Hemminger (2010) deride the journal impact factor (references removed):

“Evaluators often rely on numerically–based shortcuts drawn from the closely related fields of bibliometrics and scientometrics — in particular, Thomson Scientific’s Journal Impact Factor (JIF). However, despite the popularity of this measure, it is slow; narrow; secretive and irreproducible; open to gaming; and based on journals, not the articles they contain.”

Among other criticisms, Neylon & Wu (2009) make the case that “It is mathematically problematic, with around 80% of a journal impact factor attributable to around 20% of the papers”.
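The arithmetic behind this criticism is easy to illustrate. A JIF is, in essence, a mean: citations received in one year to the items a journal published in the previous two years, divided by the number of those items. The sketch below uses entirely hypothetical citation counts to show how a skewed distribution lets a small fraction of papers dominate that mean.

```python
# Hypothetical illustration of the skew behind a journal impact factor (JIF).
# The JIF is essentially a mean of per-paper citation counts; with a skewed
# distribution the mean says little about a typical paper in the journal.
citations = [120, 45, 6, 3, 2, 1, 1, 0, 0, 0]   # toy counts for 10 papers

jif_like_mean = sum(citations) / len(citations)                     # 17.8
top_two_share = sum(sorted(citations, reverse=True)[:2]) / sum(citations)

print(f"JIF-like mean: {jif_like_mean:.1f}")
print(f"Share of citations from top 20% of papers: {top_two_share:.0%}")  # ~93%
```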

Perhaps the convenience of being able to compare research and researchers numerically has eclipsed interest in measuring impact multidimensionally and across its full spectrum.

A quirk of citation-based metrics is that citations represent, in some way, a social element of scholarly publishing and academic study. Understanding this social element would contribute to an improved comprehension of impact. Authors citing each other’s work, regardless of citation function, fits well with how CoP was explained previously. The field of study making or receiving a citation is akin to the domain. The community is constructed by the intangible connection between scholars that citing one another creates. And practice is equivalent to the function or meaning of the citation. However, because of the diversity of citation functions, and because current indexing methods can neither determine nor express those functions, this is of little practical use. As they are represented in literature and indexes, citations can be considered a ‘cold’ medium; they aren’t loaded with meaning and they have no context. Arguably this deficiency stifles the ability of academic CoP to flourish through the interconnectedness of literature (or, more accurately, it stifles our ability to measure any CoP that do flourish).

When discussing “Why focus on communities?”, Wenger (2000) says “Communities of practice grow out of a convergent interplay of competence and experience that involves mutual engagement”, but because neither citations themselves nor metrics derived from them inherently contain any measure of competence, experience or mutuality, they are not a good tool to measure or encourage community. One possible solution to this shortcoming would be the classification of citation function: by classifying their function, citations could be ‘warmed up’. There are a few examples of different approaches to classifying citation function (García-Castro et al. 2012; Teufel et al. 2006b; Teufel et al. 2006a), and citations could be assessed in terms of competence, experience and mutuality using similar techniques. Thus citations could be understood in terms of CoP.
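To illustrate what classifying citation function could look like in practice, the sketch below uses a crude keyword-matching approach on the sentence surrounding a citation. It is not the method of Teufel et al. or García-Castro et al. (which rely on richer features and trained models); the cue lists and labels are invented purely for illustration.

```python
# Crude, hypothetical sketch of classifying citation function from the
# sentence surrounding a citation. It only illustrates the idea of attaching
# a function label, i.e. 'warming up' an otherwise 'cold' citation.
FUNCTION_CUES = {
    "uses_method": ["we use", "following", "as described in", "adapting"],
    "critiques":   ["however", "fails to", "in contrast to", "disputes"],
    "background":  ["has been shown", "previous work", "for a review"],
}

def classify_citation(context_sentence: str) -> str:
    text = context_sentence.lower()
    for function, cues in FUNCTION_CUES.items():
        if any(cue in text for cue in cues):
            return function
    return "neutral"          # no cue matched; function left unlabelled

print(classify_citation("We use the sampling procedure described by Smith (2001)."))
# -> uses_method
```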

One further factor that deserves mention is the information infrastructure that supports tradmetrics. There are various competing indexes, including Web of Knowledge, Scopus, CiteSeer and Google Scholar. Each of these systems collects its raw data (publications), extracts the relevant meta-information, and exposes it through a public interface; however, each has a bespoke method for doing this, so there is inconsistency between the indexes. The root of this is that there is no widely used standard for semantically publishing documents so that they offer machine-readable meta-information. The value of resolving this problem has been discussed (Cameron 1997; Krichel & Warner 2001; Lawrence et al. 1999; Giles et al. 1998; Lindley 2013), but progress in this area currently appears to be stagnant. Improving the information infrastructure that supports indexing of scholarly literature would make it easier to describe publications and their impact in terms of CoP.
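As a rough illustration of what ‘machine-readable meta-information’ might mean in practice, the sketch below constructs the kind of metadata record a shared semantic-publishing standard could require each publication to expose. The field names are hypothetical and do not correspond to any existing standard; the point is that indexes could then consume identical records rather than each extracting metadata by its own bespoke method.

```python
# Illustrative sketch only: a hypothetical machine-readable metadata record
# for a publication, including its references and (optionally) their
# classified functions. Field names are invented, not an existing standard.
import json

record = {
    "title": "An example publication",
    "authors": ["A. Author"],
    "published": "2013-06-01",
    "identifiers": {"doi": "10.0000/example"},
    "references": [
        {"doi": "10.0000/cited-work-1", "function": "background"},
        {"doi": "10.0000/cited-work-2", "function": "critiques"},
    ],
}

print(json.dumps(record, indent=2))   # what any index could ingest directly
```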

