
altmetrics: a manifesto

No one can read everything.  We rely on filters to make sense of the scholarly literature, but the narrow, traditional filters are being swamped. However, the growth of new, online scholarly tools allows us to make new filters; these altmetrics reflect the broad, rapid impact of scholarship in this burgeoning ecosystem. We call for more tools and research based on altmetrics.

As the volume of academic literature explodes, scholars rely on filters to select the most relevant and significant sources from the rest. Unfortunately, scholarship’s three main filters for importance are failing:


  • Peer-review has served scholarship well, but is beginning to show its age. It is slow, encourages conventionality, and fails to hold reviewers accountable. Moreover, given that most papers are eventually published somewhere, peer-review fails to limit the volume of research.
  • Citation counting measures are useful, but not sufficient. Metrics like the h-index are even slower than peer-review: a work’s first citation can take years. Citation measures are also narrow: influential work may remain uncited, impact outside the academy goes uncounted, and the context and reasons for citation are ignored.
  • The journal impact factor (JIF), which measures journals’ average citations per article, is often incorrectly used to assess the impact of individual articles. It’s troubling that the exact details of the JIF are a trade secret, and that significant gaming is relatively easy.

Tomorrow’s filters: altmetrics

In growing numbers, scholars are moving their everyday work to the web. Online reference managers Zotero and Mendeley each claim to store over 40 million articles (making them substantially larger than PubMed); as many as a third of scholars are on Twitter, and a growing number tend scholarly blogs.

These new forms reflect and transmit scholarly impact: that dog-eared (but uncited) article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero–where we can see and count it. That hallway conversation about a recent finding has moved to blogs and social networks–now, we can listen in. The local genomics dataset has moved to an online repository–now, we can track it. This diverse group of activities forms a composite trace of impact far richer than any available before. We call the elements of this trace altmetrics.

Altmetrics expand our view of what impact looks like, but also of what’s making the impact. This matters because expressions of scholarship are becoming more diverse. Articles are increasingly joined by:

  • The sharing of “raw science” like datasets, code, and experimental designs
  • Semantic publishing or “nanopublication,” where the citeable unit is an argument or passage rather than an entire article.
  • Widespread self-publishing via blogging, microblogging, and comments or annotations on existing work.

Because altmetrics are themselves diverse, they’re great for measuring impact in this diverse scholarly ecosystem. In fact, altmetrics will be essential to sift these new forms, since they’re outside the scope of traditional filters. This diversity can also help in measuring the aggregate impact of the research enterprise itself.

Altmetrics are fast, using public APIs to gather data in days or weeks. They’re open–not just the data, but the scripts and algorithms that collect and interpret it. Altmetrics look beyond counting and emphasize semantic content like usernames, timestamps, and tags. Altmetrics aren’t citations, nor are they webometrics; although these latter approaches are related to altmetrics, they are relatively slow, unstructured, and closed.
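
To make that gathering step concrete, here is a minimal Python sketch. The endpoint is hypothetical; each real source (Twitter, Mendeley, CiteULike, and so on) exposes its own public API, but the pattern is the same: pull timestamped, source-labeled events for an identifier and tally them.

```python
# Minimal sketch of altmetrics data gathering. The endpoint below is
# hypothetical; real sources each have their own public APIs, but the
# shape of the work -- fetch events, keep their semantics, count them --
# is the same.
import json
import urllib.parse
import urllib.request

ALTMETRICS_API = "https://api.example.org/events"  # hypothetical endpoint

def fetch_events(doi: str) -> list[dict]:
    """Fetch raw altmetric events (tweets, bookmarks, blog links) for a DOI."""
    url = f"{ALTMETRICS_API}?doi={urllib.parse.quote(doi)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["events"]

def count_by_source(events: list[dict]) -> dict[str, int]:
    """Tally events per source while keeping the raw, semantic records around."""
    counts: dict[str, int] = {}
    for event in events:
        counts[event["source"]] = counts.get(event["source"], 0) + 1
    return counts

if __name__ == "__main__":
    events = fetch_events("10.1371/journal.pone.0012345")  # example DOI
    print(count_by_source(events))
```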

How can altmetrics improve existing filters?

With altmetrics, we can crowdsource peer-review. Instead of waiting months for two opinions, an article’s impact might be assessed by thousands of conversations and bookmarks in a week. In the short term, this is likely to supplement traditional peer-review, perhaps augmenting rapid review in journals like PLoS ONE, BMC Research Notes, or BMJ Open. In the future, greater participation and better systems for identifying expert contributors may allow peer review to be performed entirely from altmetrics.

Unlike the JIF, altmetrics reflect the impact of the article itself, not its venue. Unlike citation metrics, altmetrics will track impact outside the academy, impact of influential but uncited work, and impact from sources that aren’t peer-reviewed. Some have suggested altmetrics would be too easy to game; we argue the opposite. The JIF is appallingly open to manipulation; mature altmetrics systems could be more robust, leveraging the diversity of altmetrics and the statistical power of big data to algorithmically detect and correct for fraudulent activity. This approach already works for online advertisers, social news sites, Wikipedia, and search engines.
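
As a rough illustration of such a defense, the sketch below flags a source whose count for an article is wildly out of line with the article’s other traces. The log scaling, z-score cutoff, and source names are all illustrative assumptions, not a published algorithm.

```python
# Sketch of a cross-source consistency check: flag a source whose count
# for an article wildly outruns what the article's other traces would
# predict. Thresholds and source names are illustrative assumptions.
import math
from statistics import mean, stdev

def suspicious_sources(counts: dict[str, int], z_cutoff: float = 3.0) -> list[str]:
    """Flag sources whose log-scaled count is an outlier among the others."""
    logged = {source: math.log1p(c) for source, c in counts.items()}
    flagged = []
    for source, value in logged.items():
        others = [v for s, v in logged.items() if s != source]
        if len(others) < 2:
            continue  # too few other sources to judge against
        mu, sigma = mean(others), stdev(others)
        if sigma > 0 and (value - mu) / sigma > z_cutoff:
            flagged.append(source)
    return flagged

# A burst of tweets unsupported by any other trace gets flagged:
# suspicious_sources({"tweets": 9500, "mendeley": 3, "blogs": 1, "delicious": 2})
# -> ["tweets"]
```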

The speed of altmetrics presents the opportunity to create real-time recommendation and collaborative filtering systems: instead of subscribing to dozens of tables-of-contents, a researcher could get a feed of this week’s most significant work in her field. This becomes especially powerful when combined with quick “alt-publications” like blogs or preprint servers, shrinking the communication cycle from years to weeks or days. Faster, broader impact metrics could also play a role in funding and promotion decisions.
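
A toy version of that feed might rank the week’s items by a weighted blend of altmetric signals, as sketched below. The signal names and weights are assumptions; a real system would tune them per field.

```python
# Toy recommendation feed: rank the last week's items by a weighted
# blend of altmetric signals. Signal names and weights are assumptions.
from datetime import datetime, timedelta

WEIGHTS = {"bookmarks": 3.0, "blog_links": 5.0, "tweets": 1.0}  # assumed weights

def weekly_feed(items: list[dict], top_n: int = 10) -> list[dict]:
    """Return the highest-scoring items published in the last seven days.

    Each item is a dict with a 'published' datetime plus per-signal counts,
    e.g. {"title": ..., "published": datetime(...), "tweets": 12, ...}.
    """
    cutoff = datetime.utcnow() - timedelta(days=7)
    recent = [item for item in items if item["published"] >= cutoff]

    def score(item: dict) -> float:
        # Weighted sum of whatever signals the item carries.
        return sum(w * item.get(signal, 0) for signal, w in WEIGHTS.items())

    return sorted(recent, key=score, reverse=True)[:top_n]
```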

Road map for altmetrics

Speculation regarding altmetrics (Taraborelli, 2008; Neylon and Wu, 2009; Priem and Hemminger, 2010) is beginning to yield to empirical investigation and working tools. Priem and Costello (2010) and Groth and Gurney (2010) find citations on Twitter and blogs, respectively. ReaderMeter computes impact indicators from readership in reference management systems. DataCite promotes metrics for datasets. Future work must continue along these lines.

Researchers must ask if altmetrics really reflect impact, or just empty buzz. Work should correlate altmetrics with existing measures, predict citations from altmetrics, and compare altmetrics with expert evaluation. Application designers should continue to build systems to display altmetrics, develop methods to detect and repair gaming, and create metrics for use and reuse of data. Ultimately, our tools should use the rich semantic data from altmetrics to ask “how and why?” as well as “how many?”
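
For the correlation studies suggested here, a minimal sketch might pair per-article altmetric counts with citation counts and compute Spearman’s rank correlation, a natural choice for skewed count data. The numbers below are placeholders, not real findings.

```python
# Minimal validation sketch: does an altmetric signal (here, reader
# bookmarks) track an established measure (citations)? Spearman's rank
# correlation suits skewed count data. The numbers are placeholders.
from scipy.stats import spearmanr

bookmarks = [12, 0, 5, 44, 3, 9, 27, 1]   # e.g. Mendeley readers per article
citations = [30, 1, 8, 60, 2, 15, 41, 0]  # citations for the same articles

rho, p_value = spearmanr(bookmarks, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```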

Altmetrics are in their early stages; many questions are unanswered. But given the crisis facing existing filters and the rapid evolution of scholarly communication, the speed, richness, and breadth of altmetrics make them worth investing in.

Jason Priem, University of North Carolina-Chapel Hill (@jasonpriem)
Dario Taraborelli, Wikimedia Foundation (@readermeter)
Paul Groth, VU University Amsterdam (@pgroth)
Cameron Neylon, Science and Technology Facilities Council (@cameronneylon)


v 1.0 – October 26, 2010
v 1.01 – September 28, 2011: removed dash in alt-metrics

10 Comments

  1. Christina Pikas
    Posted October 27, 2010 at 2:28 am | Permalink

    Great ideas – but with respect to divorcing a metric from the publication venue, I’m skeptical that it’s possible. After all, the Matthew Effect became the long tail in web talk.
    Also, it might be useful to contrast altmetrics with usage metrics, which are also being proposed as alternatives to traditional citation-based metrics.

  2. Dario
    Posted October 28, 2010 at 9:41 am | Permalink

    Hi Christina, that’s a good point, but author-level metrics (and for that matter any aggregate institution-level measures) are already divorced from individual publication outlets, aren’t they? I discuss what I believe to be the main differences between usage metrics and metrics based on richer usage patterns (such as personal bookmarking/annotation) in my COOP ’08 paper linked above. The bottom line is: usage metrics are the equivalent (in terms of robustness) of 1st generation ranking algorithms based on click-through rates.

  3. Steve Hitchcock
    Posted October 29, 2010 at 12:02 pm | Permalink

    Nice ideas. Do you mean something like scintilla.nature.com? You end by imploring researchers to invest in alt-metrics, but have not yet answered your own questions on the validity of the new metrics. “Researchers must ask if alt-metrics really reflect impact, or just empty buzz.” That should be the other way round: first show the effect on impact then try to convince researchers. “Work should correlate between alt-metrics and existing measures, predict citations from alt-metrics, and compare alt-metrics with expert evaluation.” This seems to be the way to go.

  4. Jason Priem
    Posted October 29, 2010 at 5:57 pm | Permalink

    Hi Steve. As I understand it, Scintilla filters news and blog posts, but it does it with keywords rather than measuring impact from a variety of sources. So while it’s a cool project, I wouldn’t call it an alt-metrics tool. The early data suggest alt-metrics measure “real” impact (scare quotes because even the citation people will tell you this is tricky to define). Moreover, it’s encouraging that alt-metrics, like citations, are built around native scholarly processes (saving, linking, etc.); we’re not asking for popularity votes, we’re observing the ways scholars naturally interact with their work. So alt-metrics show a lot of promise, and we’d like to see more work in this area. We’re not arguing that alt-metrics is ready for prime-time yet. When we suggest it’s time to invest in alt-metrics, we mean just that: let’s start building systems and doing research and see if alt-metrics live up to their promise. We think they will.

  5. Grove Patel
    Posted January 18, 2011 at 1:55 pm | Permalink

    It would have been more honest of you to mention Thomson Reuters’ response to the Rockefeller University Press article…
    community.thomsonreuters.com/t5/Citation-Impact-Center/Thomson-Scientific-Corrects-Inaccuracies-In-Editorial/ba-p/717/message-uid/717

  6. jason
    Posted January 19, 2011 at 9:52 pm | Permalink

    Hi Grove,
    Thanks for the link to Thomson’s rejoinder to the Rossner, Van Epps and Hill (2007) article “Show me the data,” which we link to above. It’s always nice to have a full perspective on an issue, and there are certainly good arguments for the value of the JIF.

    However, I disagree that it’s dishonest for us not to link to it, any more than it was dishonest for you not to link to Rossner et al.’s reply to the reply, “Irreproducible results: a response to Thomson Scientific.” Our goal with the manifesto isn’t to put the Journal Impact Factor on trial; that’s been done enough.

    Our point, rather, is to present a better, fuller alternative to the way we measure impact now. I think the JIF, as used today, has deep flaws…but even if it didn’t, why not look into ways to understand and track impact even better?

  7. Julian Newman
    Posted June 19, 2011 at 9:12 pm | Permalink

    It would be kind of helpful to let new readers know what ALT in “alt-metrics” stands for. Is it an acronym for something, or does it just stand for “alternative”? If it only means “alternative” why should we believe the various assertions that are made about alt-metrics? Are we really sure that we want ANY metrics at all, or are these just something to enable bureaucrats to get under our feet?

  8. jason
    Posted June 19, 2011 at 10:31 pm | Permalink

    Good thought, Julian. The “alt” does indeed stand for “alternative,” and that should be more evident. We’ll change it in the next version of the manifesto (along with losing the hyphen, which has already disappeared from most of our other altmetrics stuff online).

    I don’t think the term, though, has much to do with the value (or lack thereof) of our assertions. Regardless of what “alt” means, I think it’s increasingly accepted that evaluators (some of whom are “bureaucrats,” but some of whom are fellow scholars on hiring, tenure, and grant committees) and working researchers alike are pretty overwhelmed by the quantity of knowledge being produced. We need ways to get a handle on what’s out there, and what’s good.

    Traditional metrics, while useful, have let us down because they only give us a few ways to measure “good.” But eschewing all measurement of science is throwing the baby out with the bathwater. As scientists, we measure things all the time; shouldn’t we examine our own work as closely? Rather than getting rid of metrics (which, let’s face it, are not going away), we should be adding metrics, so that we get a messier but richer picture of what’s going on in science. “Impact,” like lots of things, is hard to define and measure. But certainly using new communication technologies to build our understanding of research impact is a better plan than just throwing up our hands and going home.

  9. Dana Roth
    Posted February 14, 2012 at 7:44 pm | Permalink

    re: “As the volume of academic literature explodes, scholars rely on filters to select the most relevant and significant sources from the rest.” Isn’t another filter the journal in which ‘relevant and significant sources’ are published? Most serious researchers combine perusing journal contents pages with literature searching.

  10. jason
    Posted February 14, 2012 at 8:01 pm | Permalink

    @Dana, absolutely the journal is a filtering mechanism. I’d maintain it’s a broken one, because it requires lots of expensive manual curation, hides valuable research in peer review for a year or more, permits only binary yes/no filtering, and supports only one judgement per article (since you can’t publish in multiple journals). These were all unavoidable bugs in a system built on paper. But there’s no need to suffer them if we build a system on the Web.

