
The Best Metric of Success

Funding for clinical research has fallen over the past decade – to the point where the National Institutes of Health now provides more money for basic and translational research than it does for clinical research (1). There's no getting away from this new fiscal reality, which is why it is now best to encourage research that is inexpensive and high-impact, maximizing the returns from finite resources. But, to achieve this, it is first crucial to establish a thorough and effective way to measure research output. All too often, the productivity of a researcher is assessed by the amount of grant money they receive; however, this metric alone is restrictive – and often inappropriate.

Evaluating research based solely on grant money received conflates institutional funding – an important factor in its own right – with researcher funding, a metric that is only loosely tied to academic impact. The number of publications combined with research productivity per grant dollar might be a more useful metric of research output than grant dollars alone. Just think about the process of evaluating the efficacy of a particular medical treatment. A million-dollar treatment is not necessarily better than an alternative that costs US$100 – or one that costs nothing. Treatment efficacy is not directly proportional to its cost. Nor is the cost of scientific research necessarily proportional to its impact. In terms of return on investment for the funders of clinical research, high-impact, low-cost investments are obviously more attractive than those with a low impact, but a high cost.


Clinical and basic research in pathology is no different. Academic contributions in some institutions are assessed by grant dollars alone, simply for ease and because other measures are not readily available. Other common metrics include the number of publications and the h-index – the largest number h such that an author has h publications each cited at least h times – which, in practice, tends to scale roughly with the square root of the number of publications. One of the main issues with these approaches is that they inevitably treat all authors of a given publication as equals, despite differences in the value added by contributors to a multi-author publication.
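For illustration, here is a minimal sketch of how an h-index is computed from a list of citation counts (the example values are invented, not drawn from our data):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h publications cited at least h times each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times gives h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```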

A new alternative

What if there were a better way to measure academic productivity in pathology research? The United States and Canadian Academy of Pathology (USCAP) Annual Meeting is the largest global gathering of pathologists, attracting over 4,700 attendees from across the world during its 107th iteration in 2018 (2). The conference is also home to the largest number of on-site scientific abstract presentations in anatomic, clinical, and molecular diagnostic pathology. After the meeting, all presented abstracts are published in USCAP’s official journal, Modern Pathology. We believe that USCAP abstracts could provide an alternative metric by which to gauge both an individual’s impact on the pathology community and the strength of individual institutions.

What makes using USCAP abstracts as a productivity metric attractive?

  • abstracts are presented at – and published in – one place, so there is no need to develop a metric to compare different academic venues (journals, conferences, websites, and so on)
  • abstracts are reviewed in a blinded fashion
  • leading institutions are well-represented and there is a good deal of data for comparison to the world’s best
  • conference abstracts are an entry point for researchers
  • USCAP abstracts are often the basis of manuscripts published in pathology journals (3)

To test and demonstrate the efficacy of using USCAP abstracts as a metric of research productivity, we undertook an in-depth systematic review to uncover the most prolific researchers and to paint a picture of current research trends in the field. Using data from Modern Pathology supplemental issues (4), we retrieved all abstracts from USCAP Annual Meetings between 2015 and 2018.

How did we do it?

Because the data is available in PDF form on the Modern Pathology website (tp.txp.to/ModernPathology), we serially read all files with a custom pre-processor program – written in Python – and converted them to text (5). The PDFs of scientific papers are typically formatted in two columns, so we “de-columned” the text using custom code to avoid formatting irregularities caused by figures and tables, and then used the logic-based text parser (LBTP) program to obtain the information we needed.
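As a rough illustration, the sketch below shows a PDF-to-text step using Poppler's pdftotext command-line tool (5); the folder name is hypothetical and the authors' actual pre-processor and de-columning code are not reproduced here:

```python
import subprocess
from pathlib import Path

# Convert every Modern Pathology supplement PDF in a folder to plain text.
# Poppler's "-layout" option preserves the two-column page layout so that a
# later "de-columning" step can split the columns apart.
pdf_dir = Path("modern_pathology_supplements")  # hypothetical folder
for pdf in sorted(pdf_dir.glob("*.pdf")):
    txt = pdf.with_suffix(".txt")
    subprocess.run(["pdftotext", "-layout", str(pdf), str(txt)], check=True)
```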

LBTP extracted each abstract “head” and further processed it using an algorithm to obtain the abstract, ID number, title, category, author(s), number of authors, affiliation(s), and number of affiliations. These components were written into a tab-separated (.csv) file.
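To give a sense of what this extraction step involves, here is a highly simplified sketch; the abstract "head" format, regular expression, and file names are hypothetical, and the real LBTP logic is considerably more involved:

```python
import csv
import re

# Hypothetical abstract head: an ID and title line, an author line, and an
# affiliation line, e.g. "1234 A Study of X" / "J Doe, A Smith" / "University Hospital, City".
HEAD_RE = re.compile(r"^(?P<id>\d+)\s+(?P<title>.+)")

def parse_head(head_lines):
    """Split a (hypothetical) abstract head into ID, title, authors, and affiliations."""
    m = HEAD_RE.match(head_lines[0])
    authors = [a.strip() for a in head_lines[1].split(",") if a.strip()]
    affiliations = [a.strip() for a in head_lines[2].split(";") if a.strip()]
    return {
        "id": m.group("id"),
        "title": m.group("title"),
        "authors": "; ".join(authors),
        "n_authors": len(authors),
        "affiliations": "; ".join(affiliations),
        "n_affiliations": len(affiliations),
    }

# Write one row per abstract to a tab-separated file.
rows = [parse_head(["1234 A Study of X", "J Doe, A Smith", "University Hospital, City"])]
with open("abstracts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys(), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```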

We then used LibreOffice Calc to examine the .csv file and iteratively refine the extraction algorithm, followed by custom programs to generate an author list and an institution list. The author list tabulated author position both generally and in relation to the abstract category, producing two tables: an unweighted authorship table and one that weighted first authorships ×3, last authorships ×1, and non-first, non-last authorships (NFNLA) ×0.5. The institution list was categorized into i. country of origin, ii. state or province, iii. institution, and iv. other (uncategorized).
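A minimal sketch of this position-based weighting follows (the author names are hypothetical and the authors' actual tabulation code is not shown; the weights are those described later as the weighted composite score):

```python
from collections import Counter

# Position weights: first author x3, last author x1, non-first, non-last (NFNLA) x0.5.
W_FIRST, W_LAST, W_NFNLA = 3.0, 1.0, 0.5

def add_abstract(scores, counts, authors):
    """Update unweighted counts and weighted scores for one abstract's author list."""
    for position, author in enumerate(authors):
        counts[author] += 1
        if position == 0:
            scores[author] += W_FIRST
        elif position == len(authors) - 1 and len(authors) > 1:
            scores[author] += W_LAST
        else:
            scores[author] += W_NFNLA

scores, counts = Counter(), Counter()
add_abstract(scores, counts, ["A Author", "B Author", "C Senior"])  # hypothetical abstracts
add_abstract(scores, counts, ["A Author", "C Senior"])
print(counts.most_common(3))  # unweighted authorship table
print(scores.most_common(3))  # weighted authorship table
```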

Both lists faced the same issue: similar names. Author surnames were grouped together ("lumped") if the name was deemed infrequent. Institution names were purged of nonspecific words such as "university," "medical," or even "the" to create names consisting of unique words (e.g., "Yale" or "Toronto") so that similar names could be lumped. In this way, "Yale University" and "Yale School of Medicine" could be considered the same institution. Each affiliation was counted only once per abstract, even if multiple authors claimed the same affiliation.
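A simplified sketch of the institution keyword idea is shown below; the stop-word list is illustrative only and much shorter than real data would require:

```python
# Strip nonspecific words so that institution names reduce to their distinctive
# keywords, allowing "Yale University" and "Yale School of Medicine" to be lumped.
STOP_WORDS = {"university", "medical", "medicine", "school", "of", "the",
              "hospital", "center", "centre"}  # illustrative, not the real list

def institution_key(name):
    """Reduce an institution name to its distinctive keywords (word order kept)."""
    words = [w for w in name.lower().replace(",", " ").split() if w not in STOP_WORDS]
    return " ".join(words)

print(institution_key("Yale University"))          # "yale"
print(institution_key("Yale School of Medicine"))  # "yale" -> same key, so lumped
```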

Our data-mining approach enabled us to extract each abstract's ID number, title, subspecialty, author(s), number of authors, author affiliation(s), and number of affiliations. We wrote these abstract data – along with analysis codes – into a single tab-separated file to complete our extensive dataset. The final product contained all of the details from 8,621 abstracts – out of a total of 8,683 published between 2015 and 2018 – that could be parsed and extracted. We suspect the parse failures are related to the PDF formatting; the abstracts that could not be extracted at all had been classed by Modern Pathology as either "previously published" or "withdrawn." A random audit of 700 extracted abstracts showed that 99 percent were processed correctly by the LBTP – an estimated error rate of only 1 percent.

Prolific publishing

By subspecialty, the largest number of abstracts presented at USCAP Annual Meetings over the four years in question came from genitourinary pathology, with a total of 1,001. This was closely followed by gastrointestinal pathology with 919, whereas the subspecialty with the fewest abstracts was infectious disease (see Figure 1). The median number of authors per abstract was five (see Figure 2). We also looked at the number of authors with a given number of abstracts and, as expected, this figure declined steadily as the number of abstracts increased.

Figure 1. Number of abstracts published in each subspecialty by year of publication.

Figure 2. Number of abstracts published by the number of people listed as authors.

Digging a little deeper into the data, we then identified individual authors and arranged them by the total number of abstracts on which they appear as an author (see Table 1). This process delivers an overview of the most productive authors across the whole field based solely on raw abstract count. However, it is widely accepted that not all abstract authorships hold the same academic value – even when presented at the same conference. For example, the 10th author on an abstract with 20 authors is unlikely to have put in the same amount of work as the senior author – let alone the first author. On the other hand, on an abstract with just two authors, it is entirely plausible that both contributed equal effort. That isn't to say that a large, multi-author publication can't be contributed to equally; rather, the issue is that, under the current system, there is no indication of the amount of work contributed by each individual on the author list. And that makes further analysis of author contributions difficult; the author list only allows us to guess contributions based on what might be typical.

Table 1. A list of the top authors in the study period by total abstract count.

In an attempt to adjust the rankings of the most active researchers (according to raw abstract count) for their relative contributions to different abstracts, we created a subset of the data in which different values were applied depending on a given person's position in the author list. Our weighted composite score (WCS) assigned first author abstracts a value of 3, last author abstracts 1, and non-first, non-last authorships (NFNLA) 0.5. Interestingly, the top 50 authors according to the WCS differ from those based on total abstract count (see Table 2) – and only 28 authors appear on both lists.

Table 2. A list of the top authors in the study period by weighted composite score (WCS).

A class of their own

One of the main issues with relying on a simple abstract count to quantify the productivity of authors is that it doesn't reveal the budding researchers who deserve further opportunities. Neither does it identify previously productive researchers who have been struggling and might benefit from assistance. But how can we change such a deeply ingrained routine in academic publishing? Some have proposed that authors should be placed in contribution categories, such as "primary author," "contributing author," and "supervisory author," rather than creating one indiscriminate list (6). Perhaps a hybrid model would be a workable compromise: authors could provide an ordered list – as is customary – as well as placing individuals into contribution categories at the time of submission.

This "category approach" to authorship would allow individuals to be in more than one tier; for example, an author could be in both the primary and supervisory categories. It would also allow each category to be occupied by multiple authors to indicate when primary or supervisory authorship is shared. Ideally, contributions should be captured in a more granular fashion to allow further analysis, revealing any "honorary authorships" that don't meet the International Committee of Medical Journal Editors guidelines for authorship or "ghost authorships" that leave certain contributors unacknowledged.
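As an illustration of how such categorical authorship data might be recorded at submission, consider the sketch below; the field names and categories are hypothetical, not an existing USCAP or journal schema:

```python
# A hypothetical categorical authorship record for one abstract submission.
# An author may appear in more than one category, and a category may be shared.
abstract_authors = {
    "author_order": ["A Author", "B Author", "C Senior", "D Senior"],
    "primary":      ["A Author"],
    "contributing": ["B Author"],
    "supervisory":  ["C Senior", "D Senior"],  # shared supervisory authorship
}

# Simple consistency check: every categorized author must appear in the author list.
categorized = {a for cat in ("primary", "contributing", "supervisory")
               for a in abstract_authors[cat]}
assert categorized <= set(abstract_authors["author_order"])
```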

First to last

It is interesting to note that, at USCAP's Annual Meeting, authors can appear on an unlimited number of abstracts in a given year. There is a three-abstract limit on the number of times someone can be a first author in a particular year, but nobody in the top 100 authors hits this cap. Notably, the European Society of Pathology's rules for its annual congress restrict individuals to a maximum of five abstract authorships, regardless of authorship type.


In our top 100 author list, the average total authorships over the four years is 38.4, the average number of last authorships is 11.1, the average number of first authorships is 2.2, and the average number of NFNLAs is 25.2. The predominant authorship among our top 100 publishers was therefore the NFNLA category. Because researchers are naturally keen to do whatever it takes to get ahead, these findings are in keeping with the idea that administrators focus principally on the total number of authorships. In other words, the sheer volume of a researcher’s work is currently the key academic metric.

We believe that a metric with a built-in disincentive for “honorary authorships” might be desirable to prevent anyone appearing as an author who hasn’t pulled their weight. For example, the academic value assigned to the authors in a category, such as NFNLA or supervisory authors, could be divided by the total number of authors in that category. We want to investigate this type of weighting scheme for different authors further.
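As a minimal sketch of one such disincentive – assuming, purely for illustration, that a category's weight is shared equally among its members – consider the following:

```python
# Hypothetical category weights (reusing the WCS values for illustration).
# Per-author credit is the category weight divided by the number of authors in
# that category, so adding "honorary" co-authors dilutes everyone's share
# rather than creating free credit.
CATEGORY_WEIGHTS = {"first": 3.0, "last": 1.0, "nfnla": 0.5}

def shared_credit(category, members):
    """Split a category's weight equally among the authors in that category."""
    weight = CATEGORY_WEIGHTS[category]
    return {m: weight / len(members) for m in members}

print(shared_credit("nfnla", ["B Author"]))                # {'B Author': 0.5}
print(shared_credit("nfnla", ["B Author", "E Honorary"]))  # 0.25 each
```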

Because we found that the median number of authors on a given abstract is five (see Figure 2), hypothetically, if we assume that authorship positions are assigned at random, we would expect authors with multiple abstracts to have a first authorship to total authorship ratio of 0.2 – one first authorship in every five authorships. To test whether this was the case, we generated a first author distribution plot (see Figure 3) and calculated the average first author to total author (FA/TA) ratio.
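A minimal sketch of that calculation is shown below; the figures plugged in are simply the top-100 averages reported in the following paragraphs:

```python
# Expected FA/TA under random authorship with a median of five authors per
# abstract: each author is first on roughly one abstract in five.
expected_fa_ta = 1 / 5  # 0.2

def fa_ta_ratio(first_authorships, total_authorships):
    """First authorship to total authorship ratio."""
    return first_authorships / total_authorships

# Average figures for the top 100 authors reported in the text.
print(round(fa_ta_ratio(2.2, 38.4), 3))  # ~0.057, far below the expected 0.2
```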

Figure 3. First author distribution plot. Blue markers represent one or more authors; all ~20,019 authors are shown, but a single marker may represent many authors. The red markers show the maximum allowable FA/TA (USCAP does not allow more than three first authorships per year). The yellow markers show the expected FA/TA for the thought experiment described above.

As mentioned above, in our list of the top 100 authors, the average number of first authorships is 2.2 and the average number of total authorships is 38.4, giving an average FA/TA ratio of 0.057. Because every abstract has exactly one first author, there must be a subgroup of authors with a higher FA/TA ratio – so, to identify those individuals who appear as first author more often, we extracted a new author list with a FA/TA ratio over 0.30 (see Table 3). The traditional thinking is that those with a high FA/TA ratio are residents, fellows, or junior staff and up-and-coming researchers. Is Table 3 a collection of future top performers? Or are they keen but under-resourced researchers who haven't been given the right opportunities? Perhaps even senior researchers keen on occupying first authorship positions? We suspect it is a mix of the above, although the latter is unlikely because few individuals in Table 3 are names that most pathologists would readily identify.

Table 3. Top author list by weighted composite score (WCS) of those who had a FA/TA ratio over 0.30.

Unique insights

Pathology has many different branches and subspecialty areas that should not be compared directly. Therefore, we mapped out the top authors according to total abstract count, broken down by subspecialty (see Table 4). These top-ranked authors showed little crossover across the 24 subspecialty categories, pointing to a high degree of subspecialization. USCAP is evidently a collection of smaller, partially overlapping communities. As a result, USCAP's blinded review policy is laudable because it enhances objectivity in the assessment of abstracts in the smaller subspecialty groups, where researchers often know each other. In the evaluation process, multiple raters score each abstract in a blinded fashion.

Table 4. Top three authors in each subspecialty according to total abstract count. Names marked * have multiple aliases, all of which can be found in the full dataset that is available for download at the end of this article.

The raw abstract score – and variation among raters – could be an interesting metric of quality itself, if it were ever released. It would help move the discussion around academic contributions beyond merely counting authorships, enhance transparency, and allow pathologists to revisit past discoveries to uncover whether highly rated abstracts from years ago were genuinely high-impact or just highly rated at the time.

As expected for an American and Canadian conference, when we arranged the abstracts by country of origin, we found that the majority were (very unevenly) associated with these two locations (see Figure 4). The US was involved in 80.1 percent of abstracts and Canada in 5.4 percent. In the four-year period assessed, 1,079 abstracts (12.5 percent) were associated with two or more countries. Table 5 shows that the MD Anderson Cancer Center is the most prolific institution, accounting for 365 of the abstracts in our dataset. This is closely followed by the Memorial Sloan Kettering Cancer Center, Mayo Clinic, and Brigham and Women's Hospital, each with over 300 abstracts.

Figure 4. World map showing number of abstracts according to their country of origin. The top three countries by abstract count are marked with stars.

Table 5. Total number of abstracts sorted by institution keywords. Institutions marked * have multiple aliases, all of which can be found in the full dataset that is available for download at the end of this article. # Known to capture different institutions. ## Some ambiguity – suspected to represent several institutions. ### Nonspecific term – may capture several institutions. #### Overlaps with keyword(s) further up in the table.

Necessary limitations

No analysis is free of limitations – and that includes this work. The first point at which issues arise is during parsing, when the algorithm improperly parses approximately 1 percent of abstracts – meaning that, especially when examining individual authors, its results should be used with caution.

The name lumping algorithm may lead to false matches. Individuals with the same first name and an uncommon last name – say, "Joe E. Skule" and "Joe B. Skule" – will both be lumped together under "Joe Skule." On the other hand, individuals with common last names will not be lumped, so the system considers "John D. Wang" and "John D.R. Wang" to be different individuals. This can result in an artificially inflated number of authors, with an artificially diminished level of credit to the individual. Unfortunately, there is no way to definitively separate authors with exactly the same name – and the problem increases when using data prior to 2015, when meeting abstracts listed only last name and first initial. The ideal solution would be for meetings such as USCAP to require a unique author identifier, such as an ORCID iD.

Institution lumping also presents problems – for instance, that it does not capture variant word orders, so “Brigham and Women’s” is not considered the same affiliation as “Women’s & Brigham.” Nor does it capture abbreviations, so an author from “MSKCC” would not be lumped with one from “Memorial Sloan Kettering Cancer Center.” There are also problems with the “other (uncategorized)” group – some institutions and locations have similar or overlapping names (for instance, Mayo Clinics in three different locations, or 33 different Springfields in the United States). At the moment, because of these issues, analysis of affiliations is limited and has a high degree of “background noise.” The ideal solution would be for meetings such as USCAP to create drop-down menus from which authors can select their affiliations, eliminating duplications and uncategorized affiliations.

Additionally, we determined the weightings for each author position arbitrarily for the purpose of this analysis. A formal study to determine accurate weightings would make the analysis more objective. Better still would be a system in which contributions are recorded for each abstract or authors are placed into pre-determined categories with established weightings.

Geography is also a factor: the USCAP Annual Meeting is always held in either the US or Canada, which affects which pathology researchers are represented.

Finally, our analysis is limited to the abstracts as published in Modern Pathology. This does not take into account the two different forms of abstracts presented at the USCAP Annual Meeting – posters and platform presentations (known as "proffered papers"). The latter are generally considered to be more significant contributions; however, our system did not distinguish between the two and thus did not weight those authorships more heavily.

Free for all to see

Although the analysis of free text in this way clearly has limitations, there are useful insights to be gained – both about research trends and about the output of individuals and institutions. Because the barriers to these types of analyses are moderate, they will likely become common in the future.


It seems certain that data will play an increasing role in the allocation of resources and the measurement of academic productivity. As a result, we as a field need to determine how best to record the appropriate author data – and how to create a next-generation system that rewards innovation and progress and minimizes the degree to which the system is inevitably “gamed.” How the data is collected determines how easy it is to analyze. In this regard, categorical data – rather than free text – is key. If the data were available in a format that could be more easily processed by a machine, it would facilitate further work.

We now hope others will be motivated to conduct their own analyses. The use of USCAP abstracts as a metric for research productivity would not only enhance the standing of USCAP's Annual Meeting as a venue for presenting research, but also allow healthcare leaders to better identify both budding star researchers and those who show great promise but lack the conditions required to reach their full potential.

Download the extended dataset including all authors (with the abstract ID numbers) and longer top contributor list: https://thepathologist.com/fileadmin/pdf/USCAPAnalysisData.ods.

Contributions

Michael Bonert conceived the study, wrote the computer code that completed the analysis, and drafted the manuscript.

Gaurav Vasisth audited the output of several hundred abstracts, provided feedback to improve the computer code, and revised the manuscript.

Christopher Naugler critically reviewed the analysis, suggested further analysis work that was included, and revised the manuscript.

Asghar Naqvi molded the study’s concept with comments and observations and critically reviewed the manuscript.


  1. KJ Meador, “Decline of clinical research in academic medical centers”, Neurology, 85, 1171 (2015). PMID: 26156509.
  2. Trade Show News Network, “USCAP celebrates more than a century as largest annual pathology meeting” (2018). Available at: bit.ly/2RhKim7.
  3. J Song et al., “The outcome of abstracts presented at the United States and Canadian Academy of Pathology annual meetings”, Mod Pathol, 23, 682 (2010). PMID: 20173734.
  4. Modern Pathology, “Abstract” (2019). Available at: go.nature.com/382uCt2.
  5. Poppler, “Poppler” (2019). Available at: bit.ly/2LgN2wo.
  6. MO Baerlocher, “The meaning of author order in medical research”, J Investig Med, 55, 174 (2007). PMID: 17651671.
About the Authors
Michael Bonert

Assistant Professor of Pathology and Molecular Medicine at McMaster University, Hamilton, Ontario, Canada.


Gaurav Vasisth

Clinical Fellow in Urology at McMaster University, Hamilton, Ontario, Canada.


Christopher Naugler

Professor in the Departments of Pathology, Family Medicine, and Community Health Sciences at the Cumming School of Medicine, University of Calgary, Alberta, Canada.


Asghar Naqvi

Associate Professor of Pathology and Molecular Medicine at McMaster University, Hamilton, Ontario, Canada.
