Monday, October 16, 2017

What works and what does not work in contemporary science?

Today I participated in a workshop on the way forward for taxonomy in Australasia, so this might be a good opportunity to come back to the topic of the bioinformatician who thinks that all of science is broken, and to consider what works and what does not work, at least in my view, in the field of science I have the most direct insight into. I am not limiting myself to taxonomy but will include the broader field of systematics, of which taxonomy is a major component.

Funding: Everybody says that funding in their field is too low, so this applies across all of science. But are scientists just whining? No, I believe that there is indeed too little competitive research funding available.

First, I have seen and heard of many cases where funding agencies have to reject very valuable proposals because there simply is not enough money to fund everything that would be worth funding (a category sometimes called 'approved but not funded'). Second, many funding agencies have success rates on the order of 2-10%. To conclude that funding levels for competitive grants are high enough, we would therefore have to believe that 90-98% of applications are useless, and that the weeks each unsuccessful applicant invested in writing their many applications could not have been used more productively. That seems like a big ask.

Incentive structure: This is the big one, at least to me, and I guess here I find the most overlap with the aforementioned frustrated bioinformatician. What basically happens is that people are rewarded with jobs and promotions for (a) having publications in JCR-listed journals, in particular if those publications are cited a lot by other papers in other such journals, and for (b) getting external research grants, but of course the decision whether somebody gets a grant is also partly and sometimes mostly based on criterion (a). This is simplifying a bit, as there are also, depending on the job, teaching, textbook writing, conference participation, etc., but not by much. Publication lists are usually the key factor.

The problem is not that publications are a key factor though, because if a scientist does not publish their research it is indeed wasted. The problem is that there are lots of useful outputs that scientists can produce that are not, very specifically, research papers in JCR-listed journals.

Perhaps the most impactful thing a taxonomist can do for end users is to produce a publicly accessible online identification key or to contribute to a flora. But no matter how often this output is used to identify organisms, how many people need it for their work, it does not count the tiniest blip towards the taxonomist's number of citations or their h-index. There is no requirement for the end-user to cite a key in a paper, even if they used it during their work; and even if people cited it, it wouldn't count because an online key or flora volume is not captured by the JCR. Consequently, in terms of career advancement the taxonomist would have wasted their time and should instead have produced journal articles cited by other journal articles.
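To make the mechanics concrete: the h-index mentioned above is computed purely from the citation counts of indexed papers, so any output outside the citation database, such as an online key, contributes exactly zero no matter how heavily it is used. A minimal sketch in Python, with hypothetical citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: five indexed journal articles.
papers = [25, 9, 7, 4, 1]
print(h_index(papers))  # 4

# Add a heavily used but unindexed online key: zero recorded citations,
# so the h-index is unchanged.
print(h_index(papers + [0]))  # 4
```

The point is not the metric's arithmetic but its blind spot: work that is used rather than cited in JCR-listed journals is invisible to it.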

It is clear that people largely do what is rewarded, and largely cease doing what is not rewarded. So to the degree that there are useful things for scientists to do that do not result in publications in JCR-listed journals the incentive structure in science leaves something to be desired.

More generally, I feel that there is too much focus on flashy results and innovative methods and too little appreciation of incremental, everyday work. One of the surer ways to be cited a lot appears to be to develop a new lab method or a new piece of analysis software. The visible result is conferences full of rising stars each promoting their own new Bayesian analysis method or bioinformatics pipeline, but very few early-career researchers contributing to specimen identification, describing new species, or conducting taxonomic revisions.

Now to publishing itself. Apart from what I wrote in the previous section, in terms of academic papers I am actually not all that unhappy with the situation. Yes, ideally one would have all journals run as public utilities, cost-free to publish in and cost-free to read, instead of having private quasi-monopolies with massive profit rates and, pick your poison, either research locked away behind paywalls or money that could go towards research spent on publication fees.

But in a system where somebody has to pay, I prefer subscription-based funding over the author-pays open access promoted by many people frustrated with the status quo, because in the latter system the incentives are perverse: journals are financially rewarded for accepting as many papers as possible rather than for maximising the quality of their content.

As for peer review, again the system as currently implemented seems to work reasonably well; that is why it evolved to be like it is in the first place! I have received good feedback in many cases. I have also had one or two cases where I believe the manuscript was unjustly ripped apart by an individual reviewer, but well, there are human egos involved, and one should not make the perfect the enemy of the good. I try to be a charitable and constructive reviewer myself, but I also suggest rejection of papers where the conclusions do not follow from the results or where the methodology cannot address the research question.

If there is anything I see as a current problem, it is the rumours of journals increasingly being unable to find enough reviewers, which suggests either a lot of free-loading, or journals being too unimaginative with their reviewer invitations, or both. (Certainly I do not receive as many invitations from mid-level plant systematics journals as I would expect if they were struggling to find referees.)

Reproducibility: As I wrote in the previous post on this issue, I do not see any evidence whatsoever that taxonomy, phylogenetics, systematics or evolutionary biology have a reproducibility problem.

So that is how I at least perceive that part of science that I can judge best, for what it is worth. More money would be good, but an even more intractable problem is that the incentive structure currently in place does not reward some of the most useful and impactful work that systematists could be doing. Note that neither of these problems would really be solved by scrapping journals and publishing everything on preprint servers, but more on this maybe in another post.
