Before a manuscript is accepted by a scientific journal, it has to undergo peer review. The editor of the journal sends it out to competent colleagues who suggest either acceptance as is, minor revision (language, presentation), major revision (re-analyze, major rewrite, etc.), or outright rejection. In turn, the reviewers' own manuscripts are likewise reviewed pro bono by other colleagues. In my area, the authors generally do not know who their reviewers were, the idea being that nobody should have to fear offending somebody, especially very influential colleagues, by reviewing their papers negatively. On the other hand, the reviewers generally know the names of the authors, which is a possible source of positive or negative bias on their part.
The ideal is, of course, that we should all be objective, fair to each other, and provide constructive criticism within a reasonable time. That ideal is not always achieved. Personally, I think I have been pretty lucky so far overall, but between my own less positive experiences and additional tales told by friends and colleagues, I have enough material for a list of how not to do it. I present to you Peer Reviewers From Hell.
The freelance English teacher
Cannot find anything that is scientifically wrong with your paper but does not like the way you express yourself. Will suggest "improvements" for every single sentence so that your revised manuscript would look like something written by them, not something written by you. Usually not a native speaker of English themselves, they will have purist ideas about phrasing and placement of commas where the average North American, British or Australian colleague would not see any issue whatsoever.
The shameless self-promoter
Sees reviewing mostly as a chance to inflate their citation count and h-index. Will strategically suggest certain of their own publications for you to cite in your manuscript despite their complete irrelevance to your study. On the plus side, this strategy only works if your study actually gets published, so they are less likely to recommend rejection.
The procrastinator
Accepts the editor's request to please review the paper within three weeks and then looks at it for the first time two months later. Better hope you did not need your results published urgently.
The armchair study designer
So your group has just finished a three-year field study, examining the biodiversity of 30 carefully chosen rain forest plots in some faraway country. The PhD student has defended and is moving on, the postdoc has a new job in the USA, the field work permits have expired, the money is spent, and you submit the major resulting manuscript to an ecological journal. This is the reviewer who will argue that your results are very intriguing and important, and that they would gladly find the study acceptable if only you went back into the field and added another ten plots to the study design.
The true wonk
Belongs to one of several competing methodological schools (Bayesian vs frequentist statistics, likelihood vs parsimony phylogenetics, etc.). You have used method A to analyze your data. They categorically demand you do it all over with their favored method B. Of course, in reality both approaches are equally defensible, and unless your data are so bad that you would not have obtained any well-supported results with method A either, the outcome will be precisely the same. But hey, it is not the reviewer's time that is sunk into needlessly repeating the analyses and designing new figures and tables, right?
The incompetent
Will demand that you repeat the entire analysis in a way that two seconds of thought show to be either methodologically wrong or plainly impossible. Good luck finding a way of rebutting the reviewer's comments that does not show too clearly what you think of their level of competence. They are, after all, very likely to be recruited again to review the second draft of your paper!
The dogmatist
Suggests rejection of your paper because your statistically well-supported and carefully argued conclusions disagree with one of their preconceived notions. (Yes, Virginia, there is good evidence for the occurrence of long-distance dispersal. Deal with it already.)
The just plain arsehole
Thinks that your study is fantastic and ground-breaking. Delays submitting their review as long as they possibly can, then writes a scathing criticism and suggests outright rejection. While you wait for the journal's decision and then rewrite and resubmit the paper elsewhere, they have already set one of their postdocs up to replicate your results and rush them into publication before you.
(Yes, seriously. I have that story from one of the culprit's PhD students at the time. Luckily, this cannot really happen in my area of research, because you would need to do field work or have the relevant biological specimens to replicate the work, and that would take too long. But in purely laboratory-driven areas where everybody is working on the same model organism, this is a real risk.)
Have you encountered any types that I missed?