I don’t actually read Hanson, much less the comments (*shiver*), but while I applaud his intent, people tend to go looking for reliable rules that dictate, for any given person telling you something, whether precisely 100% of what that person says is true, or precisely 100% is false. And so if you try to sort of work around that by identifying and neutralizing the kind of cognitive malfunctions that lead you to think that way, you will IN FUCKING EVITABLY collect a large, vibrant, enthusiastic community of deep thinkers who instinctively understand that you’ve invented a much more effective, paradigm-shattering heuristic for identifying which people are always 100% right and which ones are always 100% wrong. Or can be presumed to be, for the sake of convenience.
Basically, the ones who make noises that sound like the commenters at Hanson’s blog are going to be, obviously, on top of their own cognitive biases and others’, so they can’t be wrong, and inductively, anybody they trust ditto. Which is cool, because when we get all these cognitive biases identified and tell people about them — which can’t be that hard, after all — everything will change, and we’ll have flying cars that steer themselves, and, and mining the asteroids, Mars, geriatrics, Q.E.D. future. DUH. Robots. Google. Singularity.
Is Hanson a techno-futurist? I don’t give a shit, he might as well be.