In my field of science, peer review generally takes the following form: You submit a manuscript to a journal, it is reviewed by two to three colleagues, and then the editor of the journal decides whether to publish, whether to request changes before re-examining the paper for publication, or whether to reject. The editor and the peer reviewers know who wrote the paper, but you don't usually know who the reviewers were.
There are other ways of doing this, each with its own advantages and disadvantages, but I definitely know how I would change things if I could.
One often-heard suggestion lately is that peer review should be "open" (or public); that is, the authors should be informed who the reviewers were. As far as I can see, there are two main considerations behind this. First, that this will make it harder for reviewers to be unfair and rude, which our current system of course makes much easier. Second, a general, and in this particular case unwarranted, infatuation with the concept of openness, as in open source software or open access publishing.
Because there is one very simple problem: If the author can see who suggested their paper be rejected, will that not have a severely chilling effect? Imagine a postdoc reviewing the manuscript of a very influential professor, for example. Will they dare to say something negative if they know their name will be connected with it, be it ever so justified? I am fairly sure that I, at least, would say no considerably more often if I were asked to review papers under an open system. Who knows when I would offend somebody who has to decide about my grant application a year later?
Another idea I consider poorly conceived is post-publication review. Of course, it takes place all the time anyway. A paper that has undergone standard, pre-publication review and has been published will still be scrutinised by other scientists, its results may be tested, its methodology criticised, and so on. The real test of a paper is not whether it got printed but whether people in the field ten years later will say, "ah yes, that was okay for its time".
The upshot is that those who push for post-publication review today must mean that they want it instead of pre-publication review, which, since post-pub review happens anyway, can only mean abolishing pre-pub review altogether. And because individual journals can publish only so many papers and thus have to do some pre-pub selection, such a system would only work with preprint / arXiv style databases.
But this would be an absolute disaster: In essence it would mean that any bollocks ends up in the same stack as the good science, and then the reader can decide. That is extremely inefficient for the reader, especially for non-scientists. And anybody who thinks that post-pub review in such a database would work because scientists have nothing better to do than volunteer to criticise each other's work, and because the authors will take the suggestions on board and correct mistakes made in the first draft, is surely deluding themselves.
It is one thing for a working scientist to be able to put in their CV that they are a peer reviewer for the American Journal of Botany by invitation; that looks good. It is quite another for them to trawl a massive database on their own initiative. They'd have considerably more important things to do. And why should the author correct anything after they have published a paper? It can be cited, and the next project has started to consume their time. Few of them are going to look back.
If I could change the way it is done in my field of work, I would keep things more or less the same but make peer review double-blind. I consider it a major downside of the current process that editors and reviewers know who the author is, because that introduces an obvious source of potential bias: against women, against foreign-sounding names, against the PhD student of one's arch-enemy, take your pick. What is more, these days it should be trivial to design manuscript submission systems so that the server hides the author information even from the editor until a decision on acceptance or rejection has been reached.
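To illustrate how simple such server-side hiding could be, here is a minimal sketch in Python. All names here (`Submission`, `view_for_editor`, the "[withheld]" placeholder) are invented for illustration; a real submission system would of course involve accounts, permissions, and a database, but the core idea is just a visibility rule keyed on whether a decision has been recorded.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Submission:
    """A hypothetical manuscript record stored on the journal's server."""
    title: str
    author: str                      # stored server-side, never shown pre-decision
    decision: Optional[str] = None   # None until "accept" or "reject" is recorded

    def view_for_editor(self) -> dict:
        """Return the metadata an editor is allowed to see at this stage.

        The author field is replaced with a placeholder until a final
        decision exists, so the editor judges the manuscript blind.
        """
        visible_author = self.author if self.decision else "[withheld]"
        return {"title": self.title, "author": visible_author}

sub = Submission("Species trees from multilocus data", "A. Researcher")
print(sub.view_for_editor())   # author withheld while under review
sub.decision = "accept"
print(sub.view_for_editor())   # author revealed only after the decision
```

The point of the sketch is that double-blinding the editor is a policy choice, not a technical obstacle: the rule fits in one line.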
That being said, after a recent experience - an outright rejection after somewhat odd reviewer reports - I have started to wonder whether a better alternative might be a more collaborative system with true, two-way communication between author and reviewers. What if, instead of merely reading the manuscript and then sending off a report, the reviewers could ask the author for clarification or ask them why they didn't do things in a different way?
In my specific case, reviewer #2 for example did not understand a central method used in the manuscript. They asked why we didn't test whether our loci showed sufficiently congruent topologies to justify concatenation. Answer: Because we did not concatenate our data, because this is not how species tree analyses work. In a similar vein, they asked why they couldn't see the individual samples on the species tree. Answer: because a species tree has species as terminals, not the individual samples. (This is a bit like asking why a phylogeny that shows the relationships of "human", "chimpanzee" and "gorilla" doesn't have an arrow pointing at "Tom Cruise".)
Now one could say that this reviewer perhaps wasn't really qualified to review the paper. Alternatively, one could say that we didn't explain our methods sufficiently well, and that is fair enough. But my point is that considerably less energy would have been wasted on this, and considerable frustration even on the reviewers' part could have been avoided, if there had been a direct communication channel between them and me, so that they could have asked such questions before arriving at their final recommendation.
Still, there will always be strange decisions under any system that attempts to implement quality control, and I guess I would take the double-blind system if I could get it.