Comments on PhyloBotanist: "A metric of Twitter and blog hits for scientific articles" (blog author: Alex SL)

Comment by Bort, 2014-08-19:

Oh dear god. What a horrendous idea. I'm not even going to be able to articulate in how many ways that is wrong.

Save to emphasize your point that the existing system isn't working either. Besides the distasteful idea that some research is more 'impactful' than other research, using a journal's impact factor as a proxy for the worth of an individual article, or counting the number of times a particular article is cited, is highly unreliable.

The problem is that, the way the system currently works, there is value in having a metric that indicates how 'valuable' (horribly subjective) a researcher's work is in some sense. Typically I see this in hiring and funding. *Ideally* a selection panel would sit down, read the collected works of all applicants, and reach a considered opinion, but with a hundred-plus applicants this is never going to happen. The field is also too big for even subgroups of experts to know everyone else by reputation alone, and not all these people can be interviewed. So people throw around impact factors, citation rates, H-indices, etc.

But is there a decent alternative?

Maybe with more funding (or fewer qualified PhDs?) there would be less pressure on management to hire *the one* top prospect or to fund only the most cutting-edge, sexy research, but I don't see that happening.