I don't have much confidence in almost any provocative thesis. There are a lot of reasons for this, but one of the big ones is that I think empirical evidence is important and yet most findings are false. It's hard to get into the mindset of defending something controversial to the death for purely instrumental reasons.
'Biased' doesn't equal 'false'. I wouldn't want to say 'most findings are biased' amounts to 'most findings are false'. (Though I do think more rigorous and systematic documentation of biases in research design and execution is an important issue.)
And 'defending a controversial thesis' does not need to mean 'defending something controversial to the death for purely instrumental reasons', even in the 'academic game'.
If it's not worthwhile to develop, advance, and defend (ultimately, unavoidably) biased arguments, then all learning may be in vain.
But if it's that approach that has brought researchers to the point where they are learning to place real value on methodological self-awareness (i.e., to value those fellow researchers who qualify their claims by acknowledging the real limitations of their scope), then it seems reasonable to say that at least some of the learning outcomes of biased processes should count as progress.
"Biased," as used by statisticians in reference to findings or results, does in fact mean "false." They aren't talking about the motivations of the researchers. They're talking about how "X" is claimed in a published paper when in fact "X" isn't the case, due to some problem with the experiment or the explanation.
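To make the statistical sense of "bias" concrete, here is a small sketch of my own (not from the thread): an estimator is biased when it is systematically off, regardless of anyone's motives. The classic example is the naive variance estimator that divides by n, which systematically underestimates the true variance; dividing by n-1 removes the bias.

```python
import random

# Illustration of statistical bias (my own example): the naive variance
# estimator (divide by n) is systematically too low; Bessel's correction
# (divide by n-1) makes it unbiased. No motives involved.
random.seed(0)

TRUE_MEAN, TRUE_SD = 0.0, 1.0   # population is N(0, 1), so true variance = 1.0
N, TRIALS = 5, 100_000          # small samples make the bias easy to see

naive_sum = corrected_sum = 0.0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    mean = sum(sample) / N
    ss = sum((x - mean) ** 2 for x in sample)
    naive_sum += ss / N           # biased: expected value is (n-1)/n * variance
    corrected_sum += ss / (N - 1) # unbiased

print("naive estimator average:    ", naive_sum / TRIALS)      # roughly 0.8
print("corrected estimator average:", corrected_sum / TRIALS)  # roughly 1.0
```

With n = 5 the naive estimator averages about (n-1)/n = 0.8 of the true variance, no matter how many studies you run: more data of the same kind does not wash a systematic error out.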
My issue with the academic game is that you have to think of a million ways to support your view, and that a uniquely important quality is having way more confidence in your view than is objectively rational, in order to maintain your drive in the face of confounding evidence and arguments. I have trouble doing that: I don't think many people know nearly as much as they'll go on pretending they do, and I can't do a good job of tricking myself otherwise in my own case. (I don't think they're pretending in their own minds.) So I would need to fake it. I think some people really are given to supreme overconfidence, and that academia has a selection process that favors this to some extent.
As for your last two points, it's important to distinguish biased "arguments" and "processes" from biased findings. No doubt people are biased, but biased people can converge on truth. The problem is that there are incentives in academia not to care whether we are converging on truth. Instead, there is simply the drive to publish and become famous, even if you obviously overreach. An illustration of the problem is that many journals will not publish replication experiments that support a null hypothesis contradicting previously published results. In other words, you can't publish disconfirming evidence. This lets people make a name defending a thesis that might well be false, and no one can say boo about it. Surely this is a distressing problem if you care about getting at the truth? And surely this sets up a selection process for people who want a big name, and not so much for people who simply want to get at the truth?
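The mechanism described above can be sketched in a few lines (again my own illustration, not from the thread): if journals publish only "significant" results, the published record overstates effects even when the true effect is exactly zero, and the null replications that would correct it never appear.

```python
import random

# Publication-filter sketch (my own example): the true effect is 0,
# but only estimates that clear a rough two-sigma threshold get
# "published". Every published effect then looks substantial.
random.seed(1)

N, STUDIES = 20, 10_000
CRIT = 2.0 / (N ** 0.5)  # approx. two-sigma cutoff for the mean of N draws

published = []
for _ in range(STUDIES):
    estimate = sum(random.gauss(0.0, 1.0) for _ in range(N)) / N
    if abs(estimate) > CRIT:      # the filter: null results are never published
        published.append(abs(estimate))

print("fraction published:     ", len(published) / STUDIES)  # roughly 0.05
print("mean published |effect|:", sum(published) / len(published))
```

Roughly 5% of the null studies clear the threshold by chance, and those survivors average an apparent effect around 0.5, five studies' worth of "evidence" for something that doesn't exist.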
I'm dumping on academia here, but I have problems with other institutions, too, of course. A lot of academics care about truth but still are strangely overconfident about weird things.
Most of this was insightful, thanks. I was trying to draw attention to a broader notion of bias that I thought relevant to understanding the research you cited. 'Objectively rational' and 'converging on truth' are interesting, abstract notions that what you say hinges on; however I grasp at them, I wonder whether I overlap enough with your intended meaning to be on the same page with you. I agree with your points on academia.