In short, instead of actually reading and understanding scientific papers, many people 'assess' their value by looking at how often they have been cited. Instead of reading and understanding their work, many people, even members of search committees or advisors of funding agencies, 'assess' scientists by looking at how often their papers have been cited. And instead of reading and understanding the articles published in them, many people 'assess' scientific journals by looking at how often their average article is cited within the first two years after publication.
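(For anyone unfamiliar with it, the journal-level number hinted at in that last sentence is the two-year impact factor: roughly, citations received this year to a journal's articles from the previous two years, divided by the number of items it published in those two years. A toy calculation, with made-up numbers purely for illustration:)

```python
# Toy version of the two-year journal impact factor referred to above.
# The numbers are invented for illustration only.

def two_year_impact_factor(citations_this_year, items_published_previous_two_years):
    """Citations received this year to articles from the previous two years,
    divided by the number of items the journal published in those two years."""
    return citations_this_year / items_published_previous_two_years

# A journal that published 200 articles over the last two years and saw
# 700 citations to them this year scores 3.5.
print(two_year_impact_factor(700, 200))  # 3.5
```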
And as mentioned before, this approach systematically favours areas of science that have a quick turn-around and lots of practitioners able to cite each other, while it systematically disadvantages areas of science where many publications are written for long-term use, published in books rather than journals, and, while very useful to many people, may not even be meant to be cited. The floras and monographs produced by taxonomists are a case in point.
But of course, once you think you know what bad looks like, somebody will introduce you to something worse.
Recently a friend made me aware of Altmetric. What is this about?
Hi, we're Altmetric, a London-based start-up focused on making article level metrics easy. Our mission is to track and analyse the online activity around scholarly literature.

Ah, so it is meant to be an alternative to ye olde boringe number of citations. That's intriguing. But if not the number of citations, what other indicators of scholarly and scientific quality do they collect to calculate their Altmetric scores?
We think that...
- Authors should be able to see the attention that their articles are receiving in real-time.
- Publishers, librarians and repository managers should be able to show authors and readers the conversations surrounding their content.
- Editors should be able to quickly identify commentary where a response is required.
- Researchers should be able to see which recent papers their peers think are interesting.
The Altmetric score is our quantitative measure of the attention that a scholarly article has received. It is derived from 3 main factors:

- Volume: The score for an article rises as more people mention it. We only count 1 mention from each person per source, so if you tweet about the same paper more than once, Altmetric will ignore everything but the first.
- Sources: Each category of mention contributes a different base amount to the final score. For example, a newspaper article contributes more than a blog post, which contributes more than a tweet.
- Authors: We look at how often the author of each mention talks about scholarly articles, at whether or not there's any bias towards a particular journal or publisher, and at who the audience is. For example, a doctor sharing a link with other doctors counts for far more than a journal account pushing the same link out automatically.

So the idea is that a scientific research article gets its score based on how often some chaps mention it on Twitter, how many people blogged about it, and whether the traditional media reported on it.
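Just to make the mechanics concrete, here is a deliberately naive sketch of how such a mention-based score could be computed. The one-mention-per-person-per-source rule and the idea of weighting by source and author follow the description above, but the weights and all the names are invented for illustration; this is not Altmetric's actual formula.

```python
# Naive sketch of a mention-based attention score, as described above.
# Source weights and author multipliers are invented for illustration;
# they are not Altmetric's real values.

SOURCE_WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 1.0}

def attention_score(mentions):
    """mentions: list of dicts with 'person', 'source' and 'author_weight'
    (e.g. a doctor sharing with peers > an automated journal feed)."""
    seen = set()          # enforce: only one mention per person per source
    score = 0.0
    for m in mentions:
        key = (m["person"], m["source"])
        if key in seen:
            continue      # repeat tweets/posts by the same person are ignored
        seen.add(key)
        score += SOURCE_WEIGHTS[m["source"]] * m["author_weight"]
    return score

mentions = [
    {"person": "dr_a",         "source": "tweet", "author_weight": 1.5},
    {"person": "dr_a",         "source": "tweet", "author_weight": 1.5},  # ignored
    {"person": "journal_bot",  "source": "tweet", "author_weight": 0.2},
    {"person": "science_desk", "source": "news",  "author_weight": 1.0},
]
print(attention_score(mentions))  # 1.5 + 0.2 + 8.0 = 9.7
```

Note that nothing in there has anything to do with the content of the article itself.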
And nobody pointed out that there might be issues with that? They even won prizes for this concept?
Okay, just off the top of my head. First, how on earth are tweets and blog posts connected, even ever so tentatively, to anything worth measuring about a scientific or scholarly article? What has this got to do with quality, or with its long-term impact on the advancement of our understanding of the natural world? Is this really how career scientists should be assessed in the future: great, your work is often mentioned on Twitter, you deserve a promotion?
(First and a half, it should go without saying that really abysmal articles might also be mentioned a lot on social media simply because people are poking fun at them.)
Second, even in a best case scenario this will clearly favour flashy stuff and penalise all kinds of work that Jane and Joe Average cannot immediately relate to. Do something completely irrelevant with cute-looking animals? SCORE! Construct a funny looking robot? SCORE AGAIN! Figure out a new method for differentiating recent introgression from ancestral polymorphism? SCO..., er what? What do those words mean? What is that, the explanation doesn't fit into a tweet? Bored now.
Third, and perhaps most importantly, don't they realise how easily their metric can be manipulated? Of course, any metric will be manipulated the second anybody starts paying attention to it, but in the case of citation counts and journal impact factors you at least have to get yourself published, with a real scientific article, in one of the relevant journals. So citation cartels exist, but they can only get off the ground if the people involved do at least something legitimate.
But if Altmetric should be lucky enough to be taken seriously, what is to keep people from writing Twitter and blogging bots that push the score of an article? Surely there would quickly be entrepreneurs offering that service for money. At a bare minimum, authors will go around asking all their friends to spuriously tweet and blog about their newest article the moment it comes out.
Again, here is a bizarre idea: How about reading an article to figure out if it is any good?
Oh dear god. What a horrendous idea. I'm not even going to be able to articulate in how many ways that is wrong.
Save to emphasize your point that the existing system isn't working either. Besides the distasteful idea that some research is more 'impactful' than other research, using a journal's impact factor as a proxy for the worth of an individual article, or the number of times any particular article is cited, is highly unreliable.
The problem is that, the way the system currently works, there is value in having a metric that indicates how 'valuable' (horribly subjective) a researcher's work is in some sense. Typically I see this in hiring and funding. *Ideally* a selection panel would be able to sit down, read the collected works of all applicants and reach a considered opinion, but with a hundred-plus applicants this is never going to happen. Also, the field is too big for even subgroups of experts to know everyone else in the field by reputation alone, and not all of these people can be interviewed. So people throw around impact factors, citation rates, h-indices, etc.
But is there a decent alternative?
Maybe with more funding (or fewer qualified PhDs?) there would be less pressure on management to hire *the one* top prospect or to fund the most cutting-edge, sexy research, but I don't see that happening.