The owner of an elist I take part in posted an article from the New York Times questioning the restrictive role of ethical review panels in controlling academic research. She thought it might spark a lively discussion, but so far, it seems only to have struck a match in my flammable mind.
Ethical review boards were mandated by the federal government in the mid-1970s to protect the subjects of biomedical research, a wholly sensible thing given past abuses (the article mentions the horrific Tuskegee syphilis experiment). But now there are 5,564 extant review boards covering virtually all U.S.-based research involving human subjects, including historical research and other inquiries entailing no risk to life and limb:
Among the incidents cited in a recent report by the American Association of University Professors are a review board asking a linguist studying a preliterate tribe to “have the subjects read and sign a consent form,” and a board forbidding a white student studying ethnicity to interview African-American Ph.D. students “because it might be traumatic for them.”
I am interested in the boundary line where protection slides into prohibition, with social costs that go largely unremarked. Society needs structures of protection from abusive greed: now more than ever, we have a public duty to prevent contaminated food from being sold, industrial waste from being dumped into the water supply, toxic chemicals from being pumped into the air. But when the impulse to protect us from ourselves spills over into the territory of thoughts and feelings, I see more harm being done than good.
For instance, empowering panels of experts to protect Ph.D. students from deciding for themselves whether to be interviewed has at least two negative consequences: it feeds our growing addiction to official expertise, placing yet more aspects of normal social intercourse under the purview of credentialed experts; and it infantilizes the rest of us, implying that our own judgment and capacity for autonomy are so weak, we shouldn’t even be authorized to choose for ourselves whether or not to speak to a researcher.
Something very strange is going on here: as the mega-risks we face grow ever larger (with climate change and rumors of perpetual war in the daily headlines), we become more and more risk-averse on the small scale of daily life. Is it whistling in the dark? Lulling ourselves with a fantasy of ultimate protection? Ironically, the resulting loss of personal responsibility is likely to make us even less safe: as the muscles of judgment weaken from disuse, we erode our capacity to respond to real dangers and opportunities.
This shows up in many other ways, too, beyond our growing tendency to devolve decision-making power to authorized experts. Some of the consequences have already become matters of course: just the other day, a tableful of academics and nonprofit organizers nodded agreement as I said it had become impossible to get an honest confidential reference; people reliably dissemble or lie to avoid being scolded or even sued for giving a negative recommendation. Everyone had a story about a disastrous person who had come into their midst for lack of honest information about his or her past conduct.
Ethical standards mean a great deal to me; indeed, I’ve been told I’m obsessed with them, and that’s probably true. But my obsession manifests not as the wish to erect even more structures of expert review and authority to watch over us. Rather, I feel the keenest desire to explore and debate ethics with all sorts of people who will be called upon to apply them in everyday life. Society is strongest when all of us are acquainted with such challenges, each of us internalizing our own personal ethical review board, each of us forming judgments we can trust. Citizens with spine, rather than prisoners in clumsy suits of social armor.