one of the biggest things that stands out is how I realized even big-name philosophers often produce arguments so awful that it’s hard to even say anything interesting about why they’re bad. [...]Another area that comes immediately to mind in which similar problems appear to exist is economics, also known as "the dismal science". In contrast, physicists, chemists, climate scientists, astronomers and geologists agree virtually unanimously on virtually everything of importance in their disciplines. It becomes a bit fuzzier in ecology, evolutionary biology and biogeography, but the fact of evolution, for example, is in no doubt whatsoever, something that cannot be said, as Hallquist points out, for any of the major issues in philosophy, nor for a question as fundamental as how to deal with a financial crisis in economics.
A big part of the problem is that nobody seems to know how to resolve any of the major disputes in philosophy. This is closely related to the fact that, as philosopher Peter van Inwagen once said, “philosophers do not agree about anything to speak of.” [...]
The total lack of agreement among philosophers on just about anything is problematic for a couple of reasons. For one, many people would like to be able to settle philosophical disputes by looking at what the experts say, an approach that can make perfect sense on issues where the experts genuinely agree. But for any given philosophical dispute, while there may be many philosophers who take a certain position, there will pretty much always be many other philosophers who disagree. It’s safe to assume that anyone who tells you otherwise is trying to pull a fast one.
Another problem, which I detailed in retrospective part 1, is that the lack of agreement on what good philosophy is makes it hard to filter out the good philosophy and reward the philosophers who produce it.
So what is the difference? Is philosophy simply fuzzy nonsense, or is it dealing with problems that are simply harder to solve? What about other areas where it seems no agreement is being achieved? I must admit I am not writing this post with a great deal of preparation, but it seems to me that there are, as so often, several factors to take into account.
One is indeed that some issues are more easily solved than others. Unless you are dealing with very explodey substances it is fairly easy to conduct an experiment in inorganic chemistry but much harder to conduct one in, say, biogeography. This is also a problem in economics, of course. It is not easy or desirable to conduct a well-controlled experiment on an entire country, especially if you might have to implement policies that the vast majority of the population rejects and that may result in great suffering for many of them if your hypothesis turns out to be false. So economics is limited to modelling and natural experiments, and the latter are inherently messier and more open to interpretation than well-designed and controlled experiments.
A second factor is financial interests. Here economics is clearly the most obvious case, simply because economic policy is, if we are honest, not only, and perhaps not even primarily, about producing a bigger cake but also, or perhaps primarily, about deciding who gets to eat how big a slice. It is thus possible that two economists are both asking the question "what is the best economic policy?", but what one of them means is "what is the best policy to get many people employed?" while the other is thinking of "what is the best policy to maximize the profits of large companies?"; and chances are that even if they spell that out, each of them will assume that following their own priorities is also the best way to automatically achieve the goals of the other.
And then there is the small matter of some economic actors being able to provide incentives to economists and policy makers - in the form of large donations to universities, funding of think tanks, offering well paid positions as advisers or generous speaking fees to retired politicians, etc. - that certain other economic actors cannot provide. There is at least the possibility of this introducing a bias, and the point is that this does not happen in many other fields. There are foundations that will finance a professorship in economics if the university promises to use it to teach libertarian ideology, but there are no foundations offering to finance a professorship in systematic botany if the university promises to use it to teach that the bryophytes are monophyletic.
The flip side of this is that there is a great economic interest in getting certain fields right. This applies especially to physics, chemistry, engineering, geology, agriculture, breeding and biotechnology. No end user of the research produced by materials science and engineering will be happy if their new high-speed train unexpectedly falls apart and kills 150 passengers, so the scientists studying the properties of new alloys had better make damn sure that they know what they are talking about. In contrast, there is much less economic incentive to get the biogeographical controversy about long distance dispersal versus vicariance right. There is simply no downside to holding a loopy position except the disapproval of more reasonable colleagues.
Finally, there can be a bias on the side of some researchers, or of the end users of the scholarship in question, even where no financial interests are involved. Some issues are simply very dear to people, and here it seems philosophy is deepest in the mire. How does one define free will, do we have it, and what does that mean for us? How can we know things? Is it reasonable to believe in gods? People find it hard to be rational about these and other questions, and that includes many of the philosophers themselves.
So my feeling is that the problem is not with philosophy as a field or as an approach but instead with the philosophers and with the incentives existing in the field: personal irrationality may all too often outweigh the incentive to get things right, because so little turns on getting them right. That does not mean that there isn't a clear answer to the above questions, or that philosophy didn't find it centuries ago; it merely means that it is easier for people who are fond of the wrong answers to rationalize themselves into believing they are right, because their wrongness does not stare them in the face as directly as the train wreck would stare into the materials scientist's face.