Showing posts with label futurism.

Friday, May 5, 2017

A good read on superhuman artificial intelligence

This essay written by Kevin Kelly must be the most sensible text on superhuman artificial intelligence (AI) and the allegedly imminent "singularity" that I have ever read.

Although it gets a bit defensive towards the end, I am in complete agreement with all its main points. In my own words, and in no particular order, I would like to stress the following:

There is no evidence that AI research is even starting to show the kind of exponential progress that would be required for an "intelligence explosion".

There is no evidence that intelligence can be increased infinitely; in fact, there are good reasons to assume that there are limits to such complexity. What is more, there will be trade-offs. To be superb in one area, an AI will have to be worse at something else, just as the fastest animal cannot at the same time be the most heavily armoured. Finally, we don't want a general-purpose AI that could be called "superhuman" anyway, even if it were physically possible. We want the cripplingly over-specialised ones. That is what we are already building today.

Minds are most likely substrate-dependent. I do not necessarily agree with those who argue that consciousness is possible only in an animal wetware brain (not least because I am not sure that the concept of consciousness is well defined), but it seems reasonable to assume that an electronic computer would by necessity think differently than a human.

As for mind-uploading or high-speed brain simulation, Kelly points out something that I had not previously considered, even when participating in relevant discussions. Simulations are caught in a trade-off: they can be fast because they leave lots of details out, or they can be closer to reality but slower, because more factors have to be simulated. The point is, the only way to get a simulation of, say, a brain to be truly 1:1 correct is to simulate every little detail; but then - and this is the irony - the simulation must be slower and less efficient than the real thing.
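Just to illustrate that trade-off with a toy example (the model, grid sizes and step counts are all made up for this sketch, and nothing here is about brains): a one-dimensional heat-diffusion simulation in which higher spatial resolution also demands more time steps, so the cost of added fidelity grows much faster than the resolution itself.

    import time
    import numpy as np

    def diffuse(n_cells, n_steps, alpha=0.1):
        # Explicit finite-difference heat diffusion on a 1D rod.
        u = np.zeros(n_cells)
        u[n_cells // 2] = 1.0  # initial heat spike in the middle
        for _ in range(n_steps):
            u[1:-1] += alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
        return u

    for n_cells in (100, 5000):
        # A finer grid needs more (and smaller) time steps to stay
        # accurate, so runtime grows faster than the resolution.
        n_steps = n_cells * 5
        t0 = time.perf_counter()
        diffuse(n_cells, n_steps)
        print(f"{n_cells:>5} cells: {time.perf_counter() - t0:.3f} s")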

Now one of the first commenters under the piece asked how that can be true when emulators can simulate, 1:1, the operating systems of computers from the 1980s, and obviously run the same programs much faster in that little sandbox. I think the error here is to think of the mind as a piece of software that can be copied, when really the mind is the process of the brain operating. Simulating all the molecules of the brain with 1:1 precision, and faster, on a system that consists of equivalent molecules following the same physical laws seems logically impossible.

Finally, one point that Kelly did not make concerns the idea that a superhuman AI could solve all our problems. He did note that making progress requires more than just fast or clever thinking; it also requires experiments, for example, and those cannot be sped up very much. What I would like to add is that, among our seemingly intractable problems, the really important and global ones are political in nature. We already know the solutions; it is just that most people don't like them, so they don't get implemented. A superhuman AI would merely restate the blatantly obvious solutions that human scientists came up with in the 1980s or so, e.g. "reduce your resource consumption to sustainable levels" or perhaps "get the world population below three billion people and keep it there". And then what?

Saturday, May 23, 2015

An interesting congruence of objectivist and singularitarian beliefs

In the latest instalment of his dissection of Ayn Rand's Atlas Shrugged Adam Lee of Daylight Atheism discusses, and quotes at length somebody else who discusses, the enormous complexity of production and supply chains that are needed to make items as simple as a pencil, let alone an engine, exposing the absurdity of Rand's belief that "the only thing that's essential to build a tractor, a railroad or an airplane is a rational mind".

I couldn't agree more. The Randian tenet promoted in her books, that all that matters is to be a rational capitalist and that all company employees and public servants are merely superfluous parasites, falls apart the moment one tries to fit it against the reality of any economy more complex than early Middle Ages subsistence agriculture. And that is also all that needs to be said about those who seriously believe that they shouldn't have to pay taxes because they built everything they have by themselves - I'd believe that if they had spent all their life on a lonely island and started by fashioning their own crude stone tools, but not if they are running a company in an industrial age society.

But what really only just occurred to me is that this tenet - if you are only rational and talented enough you can achieve anything, regardless of resource limits and laws of physics - is pretty much identical to a central assumption underlying singularitarianism:

Singularitarians believe that within the next few decades humanity will create a self-improving artificial intelligence which will then quickly achieve an unimaginable level of intelligence. Depending on their general outlook, they are then either hopeful that this event will usher in paradise on Earth, with space colonisation, inexhaustible wealth and immortality for all, or worried that the resulting god-like intelligence will squash us like insects.

In either case a necessary assumption is the same as Rand's: This self-improving supercomputer only needs to be intelligent enough, and then it will be able to achieve anything. Survivable space flight - laws of physics don't matter any more because it is just that intelligent. Solution of all the world's economic and ecological problems - resource limits somehow don't matter any more because it is just that intelligent. Immortality for all - biology doesn't matter any more because it is just that intelligent. Extinction of humanity - and we are helpless and cannot just take an axe to its power supply because it is just that intelligent.

Apparently quite a few Californian information technology entrepreneurs, who are of course the primary support base of the singularitarian movement, are also libertarians in their political outlook. So perhaps this shouldn't have surprised me, but I had just never before made the connection between these two belief systems.

Monday, February 9, 2015

Old Wine in New Wineskins

It often seems to me that the milieu of technology-savvy and, according to themselves at least, "rational" people who congregate around institutions such as MIRI and Less Wrong and around futurism gurus such as Nick Bostrom, Ray Kurzweil and Eliezer Yudkowsky believe pretty much the same things as people whom they, as alleged rationalists of the computer age, would most likely consider hopelessly backwards. The difference is that they have cleverly tacked a 21st century terminology onto the same beliefs:

Wednesday, February 4, 2015

Singularitarians once more

Browsing the web, I have just come across the current "Edge Question" and Sam Harris' answer to it. The question is: what do you think about machines that think? And Harris' answer shows that he is, or has gone, full-bore Singularitarian. He argues that one of the greatest risks humanity faces is the construction of a hostile artificial super-intelligence, and that the construction of a benevolent artificial super-intelligence should be a high priority because it may solve all other major problems.

I can kind of understand why Singularitarians believe what they believe, in the same sense that I can understand why people right after the discovery of radioactivity believed that in the near future everything would be nuclear, including the living room heating. When something is all the rage, and in the current age it is computers, critical thinking can be overwhelmed by unwarranted enthusiasm.

After all, every saturation curve starts out like an exponential curve, and thus the rapid early advances are often happily extrapolated into the future. But I would have thought that Harris, who is, after all, a scientist and otherwise a sceptic, would have been more critical of the idea of an intelligence explosion.
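To make the saturation point concrete, here is a minimal sketch with toy numbers (the growth rate and ceiling are arbitrary): an exponential curve and a logistic curve that saturates at a fixed limit are nearly indistinguishable early on, which is exactly why extrapolating from early rapid progress misleads.

    import math

    def exponential(t, r=0.5):
        return math.exp(r * t)

    def logistic(t, r=0.5, k=1000.0):
        # Logistic growth saturating at carrying capacity k, starting at 1.
        return k / (1 + (k - 1) * math.exp(-r * t))

    for t in range(0, 25, 4):
        # Early on the two track each other closely; later the logistic
        # curve flattens out while the exponential one explodes.
        print(f"t={t:>2}  exponential={exponential(t):>10.1f}"
              f"  logistic={logistic(t):>7.1f}")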

Tuesday, December 9, 2014

Intelligence is not actually magic

Reading once more a discussion of the singularitarian movement around Less Wrong, it occurs to me that the principal mistake of singularitarians is not actually their belief in accelerating and unbounded progress. Yes, they are wrong about that too, but their core mistake is this:

They believe that intelligence is a kind of magic pixie dust that enables the being exhibiting that intelligence to achieve, well, pretty much everything it can imagine.

That is really at the core of their fear of hostile artificial super-intelligence, and of their hope for the fruits of building a friendly artificial super-intelligence. They believe that if somebody builds a sufficiently clever supercomputer then this supercomputer can achieve anything. Immortality. Space flight. Free energy. Exterminating all of humanity. Feeding all of humanity. And a pony.

A simple thought experiment should set this straight. Imagine a small island in the middle of the ocean. It is just a bare rock, without any plants, animals, iron ore, coal or whatever resources beyond rock. Now plop down on this island the superest super-intelligence you can just about imagine, and imagine that it wants to leave the island.

Will it succeed? Well no. How could it? There are no resources whatsoever, and rocks don't swim.

The same principle applies if we swap the island for our planet. It may well be that, no matter how super-intelligent a friendly intelligence is, there will still not be enough resources on this planet to work out a way to provide eight billion humans with a lifestyle that is both comfortable and sustainable.

It may well be that it is quite simply physically impossible for a fragile biological organism to fly to a distant star and survive the journey, full stop, and that even a super-intelligence could only concede that fact.

It may well be that immortality, even as "brain-uploading", is an unachievable dream, and that even a super-intelligence could only concede that fact.

It may well be that fusion power cannot be produced economically outside of a star, and that even a super-intelligence could only concede that fact.

And it is actually pretty likely that an evil artificial super-intelligence could be stopped in its tracks by taking an axe to its power supply, just like the most intelligent human could be knocked out or shot by one of the stupidest.

Because intelligence is not magic pixie dust. It is perhaps best defined as the capability to solve problems efficiently, but it cannot solve unsolvable problems, and it does not somehow make the laws of physics go away.

Thursday, July 24, 2014

The Doomsday Argument

There exists something called the Doomsday Argument, and it is considered to be one of the most controversial probabilistic arguments that have been advanced.

Randall Munroe has given a very good summary of the argument:
Humans will go extinct someday. Suppose that, after this happens, aliens somehow revive all humans who have ever lived. They line us up in order of birth and number us from 1 to N. Then they divide us into three groups: the first 5%, the middle 90%, and the last 5%.
Now imagine the aliens ask each human (who doesn't know how many people lived after their time), "Which group do you think you're in?"
Most of them probably wouldn't speak English, and those who did would probably have an awful lot of questions of their own. But if for some reason every human answered "I'm in the middle group", 90% of them will (obviously) be right. This is true no matter how big N is.
Therefore, the argument goes, we should assume we're in the middle 90% of humans. Given that there have been a little over 100 billion humans so far, we should be able to assume with 95% probability that N is less than 2.2 trillion humans. If it's not, it means we're assuming we're in the first 5% of humans - and if all humans made that assumption, most of them would be wrong.
To put it more simply: Out of all people who will ever live, we should probably assume we're somewhere in the middle; after all, most people are.
If our population levels out around 9 billion, this suggests humans will probably go extinct in about 800 years, and not more than 16,000.
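Munroe's figures are easy to reproduce; here is a minimal sketch of the arithmetic (the lifespan and birth-rate assumptions are my own rough guesses, chosen only to land in his order of magnitude):

    born_so_far = 110e9    # a little over 100 billion humans to date
    population = 9e9       # assumed stable future population
    lifespan = 70.0        # rough average lifespan in years
    births_per_year = population / lifespan  # ~130 million per year

    # 95% bound: we are not in the first 5% of all humans ever born.
    n_max = born_so_far / 0.05
    # Median estimate: we are halfway through all humans ever born.
    n_median = born_so_far / 0.50

    years_median = (n_median - born_so_far) / births_per_year
    years_max = (n_max - born_so_far) / births_per_year
    print(f"N < {n_max:.1e} humans")               # ~2.2 trillion
    print(f"median: ~{years_median:,.0f} years")   # ~850, i.e. "about 800"
    print(f"bound: ~{years_max:,.0f} years")       # ~16,000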
He goes on to state that most people immediately conclude that the idea is obviously wrong, but "the problem is, everyone thinks it's wrong for a different reason. And the more they study it, the more they tend to change their minds about what that reason is."

Well, there are two reasons why that could be so. One is that the argument is really quite clever but most people don't realise it. The other is that there is so much wrong with it that people discover new layers of wrongness every time they look at it.

I guess I would have to be counted among those who think that the Doomsday Argument is, indeed, idiotic. Admittedly I cannot come up with a super-deep Bayesian counter-argument such as those referenced in the linked Wikipedia article. But I don't think that is necessary, because this does not look like a job for probabilistic reasoning anyway.

Thursday, July 10, 2014

So, how would we get a murderous super-AI, anyway?

Dwelling a bit on the obsessions of MIRI and other singularitarians of the LessWrong spectrum, I have idly wondered how exactly they imagine that the kind of superhuman artificial intelligence (AI) they are so afraid of would come about. Even hand-waving away the question of whether certain kinds of technology are possible at all outside of fever dreams and science fiction novels, I see a rather limited number of options for the generation of such an AI.

Tuesday, July 8, 2014

Singularitarians are really really really strange

Only a few days ago I shook my head at people who breezily assume that, fast-forward a few decades of technological progress, interstellar flight will be achievable. (And affordable!)

Now, made curious by a post on a blog that I read from time to time, I checked out the website of the Machine Intelligence Research Institute (MIRI), a kind of Singularitarian think tank. Remember, Singularitarians are futurists who are convinced that progress is accelerating and will soon - that is, within the next few decades - produce a "technological singularity" beyond which everything will be unbelievably different.

The particular MIRI brand of singularitarian appears to be the computer nerd who further believes that the way to achieve the singularity is to build a self-improving artificial intelligence (AI): it will then get even more intelligent in a fraction of the time it took to develop it, the next iteration will improve itself even more, and one day later it has turned into Robot Jesus, and there will be immortality, space flight and ponies for all. Because obviously a very intelligent computer need only do some super-quick armchair thinking and it will have solved every social, technological and scientific issue ever. Without having to do painstaking empirical testing of its ideas, because it will just be that intelligent. And resource limits won't matter either because, hey, it will just be that intelligent.

(Yes, one does get the feeling that a certain kind of futurist computer nerd considers intelligence, which is sometimes even equated with computing speed, to be magic pixie dust.)

At the same time, they believe that the single greatest danger humanity faces is not resource depletion, antibiotics becoming useless, soil erosion, water shortages, biodiversity loss, or global-change-induced mass starvation, but instead that Robot Jesus may decide he'd be better off without us useless humans. So the mission of MIRI is to "ensure that the creation of smarter-than-human intelligence has a positive impact", as opposed to Terminator-style genocide against humans. Which is obviously a very important goal, much more important than avoiding those other things I mentioned two sentences ago, and so we must all give them lots of donations to finance their, ahem, "research".

Before I looked at their website, I had unavoidably developed some preconceptions about what they might be like. Now that I have seen it, I find it depressing how accurate my preconceptions turned out to be.

Judging from their website, they seem to spend their time approximately as follows:
  • 5% designing websites that look superficially sleek and professional but weird me out once I look closer. (Seriously, "apply to research this math"? "As featured in The New York Times"? Yes, that totally sounds like a legitimate research institute. Please go on.)
  • 10% taking super pretentious staff portraits that they will later find embarrassing if they ever grow up.
  • 25% standing in front of whiteboards trying to look intelligent. Whether they succeed is in the eye of the beholder, I guess.
  • 30% writing "papers" that use a lot of big words in their titles to distract from the fact that no actual empirical research or AI development appears to be happening at MIRI (at least as far as I can tell). It is basically as if a biologist claimed that we are going to figure out how to repair telomeres and thus achieve eternal youth, and then spent their entire career merely writing letters to the editor and review articles summarising other people's work instead of, well, developing an enzyme that repairs our telomeres.
  • 20% calling for donations.
  • 10% being unable to believe what a sweet gig they managed to land.

As far as their staff goes, this is what the page "our team" shows as of 7 July 2014. There are seven guys who look so young that, unless told otherwise, I would assume they were university students doing an internship; but no, apparently one of them is the director. Then there is one guy who has made himself look a bit more senior by growing a beard, and one young woman - that's a full 11%! Yay for diversity and inclusiveness!

Their "research associates" are broadly similar in profile, only this time there is actually an emeritus professor among them. On the downside, there are zero women among them, which kind of reduces the percentage of women playing any role at MIRI to near 0%.

Looking at the titles of their publications:
  • Problems of Self-Reference in Self-Improving Space-Time Embedded Intelligence
  • Definability of Truth in Probabilistic Logic
  • Robust Cooperation on the Prisoner's Dilemma: Program Equilibrium via Provability Logic
  • A Comparison of Decision Algorithms on Newcomblike Problems
  • Ontological Crises in Artificial Agents' Value Systems
  • Intelligence Explosion and Machine Ethics
  • Intelligence Explosion Microeconomics
  • How We're Predicting AI--or Failing To
  • Intelligence Explosion: Evidence and Import
Although "research" is the title of the relevant web page, I hesitate to use that word, because, again, most of these are review articles, opinion pieces and meta-level pieces looking at how we look at the singularity. The only interesting papers are what we might broadly call philosophical treatises (especially on logic and ethics), but I'd rather ask a MIRI-independent philosopher for their opinion before taking them seriously.

Ye gods, but this must be awesome. Imagine spending your days fantasising about the potential implications of inventions that will never be made, and actually being paid for it. And the sweet thing is, if they play their cards right they can ride that gravy train forever, because despite the fact that everything we have programmed to date is pretty stupid, and that the only self-aware intelligences we have any experience with are biological, nobody has proof positive that a self-aware, highly intelligent and self-improving AI cannot be built. Prove a negative, won't you?

As long as they don't set a date, and perennially keep the moment when Robot Jesus will come and make us immortal vaguely in the future (kind of like Christian Jesus has been said to be coming back any time now for the last two thousand years), MIRI can still be asking for our donations in 80 or 200 years. And this being humans we are talking about, they will find takers.

Wishful thinking is a powerful force.

Friday, July 4, 2014

Techno-optimists are strange

Recently a post by PZ Myers at his blog Pharyngula, originally about how frequent intelligent life can be assumed to be in the galaxy, got derailed into an argument about whether the colonisation of space is possible and, if yes, how. I got drawn into it for some time, because I find the airy-fairy, dewy-eyed assumption that it is possible rather bizarre.

It is, of course, not just in that thread that I have run into this line of thinking, and everywhere we look we can observe other types of techno-optimists. They fall into several categories, although these are partly overlapping and partly nested. Perhaps a small taxonomy, as I understand it:

Transhumanists are those who believe that we will become able to improve ourselves, perhaps through genetic engineering or cyborg implants, to the point where we will transcend the human condition. Immortality of some kind and freedom from disease are obvious items on their wish-list.

Singularitarians believe that humanity will achieve a stage of technological progress after which everything will be so different that we just cannot imagine how seriously different everything will be. Why and how varies; there are those who see technological progress as accelerating exponentially and simply anticipate the singularity as the moment when that acceleration becomes unprecedentedly fast. Many others believe in the coming of the Messiah, that is, the development of a self-improving artificial intelligence which will conveniently solve all our problems for us.

Finally, Cornucopians are quite simply those who believe that the combination of human ingenuity and, usually, the incentives provided by the free market (praise be upon it) can magically overcome any limitation or shortage that we will ever be faced with. They include people who promote that ideology quite explicitly, but in a wider sense also all those who reply, when for example the unsustainable use of resources is brought up, with the naive mantra that "they" will think of something once those resources really run out. (The irony being, of course, that if what is meant by "they" is scientists, then the scientists have long thought of something: we should stop wasting so many resources. Sadly nobody wanted to hear that answer.)

Anyway, no matter how precisely the individual techno-optimist imagines our glorious future to be brought about, the colonisation of space is perhaps the most ludicrous idea of all.

Thursday, May 1, 2014

Printable solar cells

To allay my notorious pessimism about the future, I went to a talk today. It was about printable solar cells, presented by Scott Watkins, the very same CSIRO scientist featured in the linked news item.

And they are truly amazing. Much cheaper to produce than normal silicon solar panels, flexible (he passed some around for the audience to feel), and very simple in their internal structure. Essentially the materials scientists just print two layers onto a plastic sheet: first a polymer mixture, then a silver lattice that draws off the current.

In small experimental formats they have achieved the same efficiency as standard silicon solar cells; in larger formats, efficiency is still considerably lower, but they have various polymers to play around with, and they have not yet tested how the best mixtures they have already developed perform when scaled up, so there is still a lot of potential for improvement. Because so many different polymers can be employed, it is also possible to tailor the printable solar cells to different light wavelengths.

Apparently even in the best case they will most likely not achieve the lifetime of thick and solid silicon solar panels, but again, they will be considerably cheaper. Once the research consortium has settled on the best materials for efficiency and durability, this will have enormous potential.

There is, however, one possible irony: the plastic that the solar cells are printed on, and the polymers themselves, are made from oil. Perversely, that means that the more oil we waste on cars and heating now, the more difficult it will be to produce this type of solar cell in, say, 2050; and the same goes for any other type of plastic, for all their various uses. Okay, the outlook is not entirely bleak, because some polymers can be made from biomass, but that will be more difficult and will use land that could otherwise produce food.

---

In other news, and certainly completely unrelated to the question of our distressing overuse of petrochemicals, today somebody incredulously asked me if I really rode the bike to work every day, even now in the "cold" "winter" here in Canberra.

Well, yes. It is one hour of exercise every day (30 min each way), with the added benefits of saving money, saving the aforementioned resources, and not having a fit while sitting in the traffic jams on Mouat Street and Northbourne.

Also, it is not cold, nor has Canberra got a winter. I'd say it has four months of what would be late autumn in Germany, followed immediately by spring. (Which might be a reason why no native trees have ever evolved to be deciduous.) And of course I also rode my bicycle to university when it was -10 °C back in Germany. There is no wrong weather, only wrong clothes, and it is not the weather's fault if Australians believe that they should be able to wear shorts and flip-flops at all times.

No offence meant. Just saying.

Friday, February 21, 2014

Why we are not getting killed by SkyNet any time soon

Computers are dumb.


For the benefit of non-German readers, this is a screenshot of the German Yahoo News site. The second paragraph starts with the phrase "the process initiated by her predecessor...", with the German word "der" being the direct counterpart of English "the" (for masculine nouns, that is, but let's not make this complicated).

Behind the German "the", some script has automatically inserted a share price from the Chinese stock market and a link to more stock market information. Why? The prominent box on the left provides the answer. It is not related to the article, which is not about stock markets at all but about the Ministry of Defence, nor is it some paid advertisement. Instead, the box has been added by the very same script, and it shows the recent development of the share price of a company called DER.

DER; "der Prozess". Get it?

It would be funny if it weren't so sad. (And annoying; let's not forget how irritating it is to have some stupid script clutter every text that contains the masculine definite article with irrelevant graphs and links.)

And this is one of many reasons* why I am not afraid of a so-called technological singularity turning into Terminator: Rise of the Machines. Computers are idiots. Yes, they can do amazing things, but they are "oh look, she is trying to feed herself with a spoon now, and most of it ends up in her lap, isn't that cute" amazing things, not "she is the greatest physicist of her generation" amazing things.

Of course, the computer is not really at fault; it is only a tool. Really, one has to wonder about whoever wrote this script and thought it would be a great idea.

*) The other reasons are mostly variants on the observation that the singularity itself is a ridiculous idea.

Thursday, September 19, 2013

Consciousness raiser: Things versus processes

Some time ago I wrote about my feeling that many people could use a consciousness-raiser on the topic of trade-offs, partly to put a damper on rampant techno-optimism. One of the points was that biologists are more likely to be aware of the problem of having to make trade-offs because they constantly encounter them in living organisms, ecological adaptations and reproductive strategies. Today I want to talk about another issue that many people appear to have very odd intuitions about and thus could use a consciousness-raiser for, and in this case it might be physicists, chemists and engineers who naturally have the edge.

The context in which I came to think about the issue was again a discussion of futurist hopes (although it had not started out as that), specifically "brain uploading" or "mind uploading". The hope that something like that would be possible sometime in the future is based on the following claims:
  1. We are our minds.
  2. Our mind is best understood as information stored and/or a computer program running on the "wetware" of the brain.
  3. A simulation of a mind is a mind. Just as a simulation of a Windows environment on a Linux machine allows Windows programs to run, a simulation of a brain in a computer would allow our mind program to run on that computer.
  4. Consequently, if we could scan that program and the information (memories) off the brain and simulate it in a computer, we would be in that computer, and thus could achieve immortality (until humanity cannot afford to keep cyberspace running any more, that is, which might be as soon as in a few decades anyway when fossil fuels run out).
Phrased like this, most readers will presumably see immediately that something might be wrong with at least some of these claims. I would go further and argue that they are all complete bollocks, and that some of the problems stem from the human tendency to reify processes, that is, to think of processes as things that can, for example, be moved around.

In the present case the specific mistake is to think of the mind as a thing that can be moved, or perhaps at least copied, from the body into a computer, which is essentially a form of Cartesian mind-body dualism. To get over this mistake, and to raise one's consciousness about the ease with which we make it, one could consider historical instances of the same error.

Tuesday, August 6, 2013

Consciousness raiser: Trade-offs

Some recent discussions I was involved in appeared to suffer from two circumstances: first, that many participants were talking past each other because they were not quite agreed on what the controversy was actually about, and second, a lack of appreciation of certain aspects of reality on one side of the discussion. Aspects of reality, admittedly, that we humans are very likely to get wrong. So this post is once more mostly exploring the issue for my own benefit, and perhaps for later reference if ever needed.

Today, let us consider the concept of trade-offs. When discussing the potential for future technological innovations with the kind of people one might classify as singularitarians, transhumanists, cornucopians or simply technology optimists, one cannot help being puzzled by their often very simplistic view of technological feasibility and progress. Many of them seem to believe that the history of engineering can be summarised as undertakings that had previously been impossible gradually becoming possible once the awesome power of the human mind is properly applied to them, and that this trend will merrily continue into the future.

That is, of course, a very one-sided perception. Yes, there have been many cases where doubters were proved spectacularly wrong, for example when it was claimed that trains could not go faster than a couple of tens of kilometres per hour without killing the passengers. However, Jules Verne's cannon shooting astronauts into orbit really has to remain fiction for just that reason: there is no way of accelerating a projectile (as opposed to a rocket) to escape velocity without killing the astronauts. There simply isn't, and that is how it will remain regardless of how much science and engineering progress.
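A quick sanity check on the cannon, using v^2 = 2 a d (the barrel length is my own rough figure for Verne's Columbiad; the conclusion does not depend on it):

    # Acceleration needed to reach escape velocity along a cannon barrel.
    v_escape = 11_200.0  # Earth escape velocity, m/s
    barrel = 300.0       # assumed barrel length, m (Verne's was ~270 m)
    g = 9.81

    a = v_escape**2 / (2 * barrel)  # v^2 = 2*a*d, solved for a
    print(f"{a:,.0f} m/s^2 = {a / g:,.0f} g")
    # ~209,000 m/s^2, i.e. over 21,000 g; humans barely tolerate ~10 g
    # even briefly, so no passenger could survive regardless of design.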

The same is true of perpetual motion machines, for example. Some things are ruled out by the physical realities of our universe, and the same is very likely true for many other pipe dreams futurists come up with. It may well be that there is no way of traveling to another star system and surviving the journey, of running a stable fusion reaction at a scale smaller than a star, or of uploading a mind into a computer. (Then again, I am only certain about the last of these.)

But beyond the plain "this violates the laws of physics" level of impossibility, one of the really under-appreciated reasons why something might turn out to be impossible is trade-offs. What does that mean? It is simply the observation that to achieve some benefit A you often have to accept some correlated downside B.

Sunday, February 17, 2013

Some interesting links

A blogger called Armondikov at RationalWiki takes apart the wishful thinking of Singularitarians about nanotechnology. Complete agreement - both with the real promises of nanotech properly understood and with their criticism of Singularitarians, who perennially fail to realise that we already have nanobots, and have had them for the last three billion plus years. (They are called enzymes.) The idea that you can build something like a robot at the micrometre scale and still expect it to work as it would on the macro scale is ludicrous. There are simply constraints on, and trade-offs with, "machines" that small that are very counter-intuitive to somebody used to thinking in terms of centimetres and more. The comment stream under the post is a bit depressing though, as they often are.

Just like probably everybody else, I sometimes have these "if only people would" thoughts. You know, the hope that there is some simple solution that would, if adopted, make a big positive difference in many fields at the same time. Others may think "if only people were nicer to each other", but in my case it is generally something along the lines of "if only people would receive a better education in science and formal logic". Another one that I would immediately second is discussed by the blogger Mathbabe: if only it were considered a sign of honesty, rather than of incompetence, when somebody admits that they don't know something. As a third option I would add: if only it were considered courageous and reasonable, rather than flip-flopping and a sign of weakness, when politicians publicly change their mind in the face of new evidence.

Finally, a very funny exchange between creationists and a real scientist.