Cognition-Enhancing Drugs: Can We Say No?
A recent book on health care rationing in the US, Can We Say No?, worries that political pressures for health spending will ultimately bankrupt the US economy. This idea of a spending ratchet is a commonplace of the health care finance literature. Less well-covered has been the creep toward performance-enhancing drugs. Though such drugs pose less of a threat to the public till, they raise fundamental questions about individuals' capacity to respond autonomously to technological trends.
Consider a recent discussion in Edge, a fascinating online salon/magazine that asked 151 luminaries "What Will Change Everything?" Marcel Kinsbourne predicts a growing market for "neurocosmetics" that would translate the benefits of cosmetic surgery to the social world:
[D]eep brain stimulation will be used to modify personality so as to optimize professional and social opportunity, within my lifetime. Ethicists will deplore this, and so they should. But it will happen nonetheless, and it will change how humans experience the world and how they relate to each other in as yet unimagined ways. . . . We read so much into a face — but what if it is not the person's "real" face? Does anyone care, or even remember the previous appearance? So it will be with neurocosmetics.
Consider an arms race in affability, a competition based not on concealing real feelings, but on feelings engineered to be real. Consider a society of homogenized good will, making regular visits to [a] provider who advertises superior electrode placement? Switching a personality on and then off, when it becomes boring? . . .
We take ourselves to be durable minds in stable bodies. But this reassuring self-concept will turn out to be yet another of our so human egocentric delusions. Do we, strictly speaking, own stable identities? When it sinks in that the continuity of our experience of the world and our self is at the whim of an electrical current, then our fantasies of permanence will have yielded to the reality of our fragile and ephemeral identities.
It's one thing to read these imaginings in the fiction of a Houellebecq, Franzen, or Foster Wallace; it's quite another to see them predicted by a Professor of Psychology at the New School for Social Research. I have also predicted an arms race in the use of personality-optimizing drugs, but I believe such an arms race would defeat, rather than reveal, humanity's true nature. My difference with Kinsbourne suggests a technophilic bias at the heart of Edge's inquiry: an implicit belief that certain technologies will inevitably change us, rather than being changed or stopped by us.
We need to understand that it's a conception of the self that is driving the acceptance of new technologies of self-alteration here, rather than vice versa. Consider eHarmony consultant Helen Fisher's acceptance of the arms race metaphor in the same issue of Edge:
As scientists learn more about the chemistry of trust, empathy, forgiveness, generosity, disgust, calm, love, belief, wanting and myriad other complex emotions, motivations and cognitions, even more of us will begin to use this new arsenal of weapons to manipulate ourselves and others. And as more people around the world use these hidden persuaders, one by one we may subtly change everything. [emphasis added]
In a recent editorial in Nature entitled "Towards responsible use of cognitive-enhancing drugs by the healthy," distinguished contributors have endorsed a "presumption that mentally competent adults should be able to engage in cognitive enhancement using drugs." Against various Luddites who worry about the rat races such drug use could spark, the editorialists argue that cognitive enhancement is here to stay: "From assembly line workers to surgeons, many different kinds of employee may benefit from enhancement and want access to it, yet they may also need protection from the pressure to enhance." Instead of the regulation encouraged by Francis Fukuyama, they would have us rely on robust professional standards to guide "appropriate prescribing of cognitive enhancers."
But it's easy to see where this arms race can lead. Perhaps at some point we'll all end up like those apostles of reductionist philosophy, Patricia and Paul Churchland, who, rather than acting out, expressing, or displaying emotions, appear to prefer referring to their supposed chemical determinants:
One afternoon recently, Paul says, he was home making dinner when Pat burst in the door, having come straight from a frustrating faculty meeting. "She said, 'Paul, don't speak to me, my serotonin levels have hit bottom, my brain is awash in glucocorticoids, my blood vessels are full of adrenaline, and if it weren't for my endogenous opiates I'd have driven the car into a tree on the way home. My dopamine levels need lifting. Pour me a Chardonnay, and I'll be down in a minute'."
Nicholas Carr has noted that "institutionally supported programs of brain enhancement [may] impose on us, intentionally or not, a particular ideal of mental function." Fisher, Kinsbourne, and the Churchlands suggest the metaphysical foundations of self-mechanization. It's a vision of the self as "multiple input-multiple output transducer," which, as I said in this article, follows a long line of reducing "soul to self, self to mind, and mind to brain." This last step, which understands what the brain is in terms of what it does, is a functionalism that invites the question Bourne used to put to Dewey: what exactly is the point of this pragmatic deflation of our self-understanding?
In a recent series of posts at PopMatters, Rob Horning has explored the psychology of consumerism, a condition we are endlessly told by elites to consider the linchpin of global development, economic growth, and domestic order.
[Harry Frankfurt] calls attention to “second-order desires”, or the desires we have about our primary desires. These are what we want to want and, according to Frankfurt, make up the substance of our will . . . . [W]e often have multiple sets of preferences simultaneously, which foils the more simplistic models of neoclassical economics with regard to consumer demand. . . .
The persuasion industry is seeking always to confuse the communication between our first- and second-order desires; it’s seeking to short circuit the way we negotiate between the many things we can conceive of wanting to come up with a positive will to want certain particular things at certain moments. It seeks to make us more impulsive at the very least; at worst it wants to supplant our innate will with something prefabricated that will orient us toward consumer goods rather than desires that are able to be fulfilled outside the market.
The neurocosmetics forecast in Edge would occupy the same place in the social world that marketing occupies in the world of goods and services. For example, the complex mixture of ennui, detachment, skepticism, and embers of warmth in office life limned in Joshua Ferris's And Then We Came to the End could be flattened into the glad-handing grin of an unalloyed will-to-succeed. Horning suggests that "consumerism makes the will and ability to concentrate seem a detriment to ourselves":
Dilettantism is a perfectly rational response to the hyperaccessibility of stuff available to us in the market, all of which imposes on us time constraints where there was once material scarcity. These time constraints become more itchy the more we recognize how much we are missing out on (thanks to ever more invasive marketing efforts, often blended in to the substance of the material we are gathering for self-realization).
Similarly, neurocosmetics promises to relieve us of the mental effort of crafting a genuine response to events out of the welter of conflicting emotions they generate, leaving only the feeling induced by drugs.
In a world of neurocosmetics, emotions lose their world-disclosive potential and moral force. Rather than guiding our choices, they are themselves one among many choices. The industrial possibilities are endless, and I'm sure some rigorous cost-benefit analyses will prove the new soma's indispensability to such varied crises as aging, unemployment, and gender imbalances.
I shudder at such a world, but I doubt economic analysis can provide any basis for rejecting it. Neurocosmetics and consumerism are but two facets of the individualist, subjectivist, economic functionalism that's become our default language for judging states of the world.
If I were asked to participate in Edge's salon, I think I'd flip the question and ask "What kind of common moral language do we need to stop random technological developments from changing everything?" Philosophers like Langdon Winner and Albert Borgmann have started answering that question as they consider technology and the character of contemporary life. Borgmann notes that "simulations of reality can lead to disastrous decisions when assumptions or data are faulty." Perhaps we should start thinking of neurocosmetics as a faulty source of emotional data about our responses to the world around us. As Ellen Gibson reported at Businessweek, autonomous decisions to compete through enhancement can adversely affect everyone's identity:
Dr. Anjan Chatterjee, a neurologist at the University of Pennsylvania Hospital, raises [a] red flag. Creative insights often arise when the mind is allowed to wander, he says. If drugs that sharpen concentration become widespread in the workplace, they may nurture "a bunch of automatons that are very good at implementing things but have nothing to implement."
From autonomy to automata: a provocative possibility.
Comments:
First, let me say thanks to all the parties responsible for resuscitating this blog. I think the subject matter is extremely important, particularly in a society such as ours, wherein technological developments proceed at a pace all out of proportion to our capacity to assess their contributions according to psychological and ethical criteria of human growth and fulfillment.
Frank speaks of "economic functionalism," but I suspect we should come clean and name the system for what it is: capitalism, and endeavor to figure out precisely how and why the economic rationality and logic of this system gives shape to the substance and dynamic of modern technology: why it pushes technological developments in this direction rather than that, for example, to meet the "needs" or "wants" (which it in turn also creates) of *this* class or group, rather than *that* class or group (I suspect we are witnessing the 'distortion' of so-called green technologies as we speak), etc., etc.
Ellul's "autonomous technology" thesis may have been overdrawn and, in the end, a bit simplistic, but there's still much to be learned from it, as the work of both Winner and Borgmann, which importantly goes beyond Ellul, attests. I'm delighted to see the latter two mentioned, as the philosophy of technology is a comparatively neglected field of intellectual inquiry, a rather curious if not disturbing fact when one considers the leading role of technology in our contemporary world. (I hope future posts will continue to engage works in the philosophy of technology.)
As is usually the case with Frank's provocative posts, there are more than a few important issues raised, but I will here highlight only a couple of them.
Frank,
When you write that "We need to understand that it's a conception of the self that is driving the acceptance of new technologies of self-alteration here, rather than vice versa," I think this is only half right and insufficiently dialectical. As you well realize, conceptions of "the self" (and the mind) rely on metaphors of technological provenance: hence we refer to the thinking person as "processing information," and the mind, or rather the brain, is understood on the model of a computer (cf. Daniel Dennett as well as the Computational-Representational Understanding [theory] of Mind or CRUM, what I prefer to call 'CRUMMY'). Indeed, it is organized into modules as a computing organ, and hence the characterization of "the system" as a whole in functionalist terms, specifying the discrete parts of the machinery involved. On some accounts, the neural "hardware" becomes the stuff of consciousness itself! Needless to say, this is usually combined with some fashionable bits from evolutionary psychology so as to achieve the requisite scientific mantle and mitre for this particular and popular picture of the mind (often reduced to the brain, as in the quote you cite about 'the chemistry of trust...', the logical end of which is expressed in the 'eliminative materialism' thesis forthrightly formulated by the Churchlands), Searle's objections notwithstanding (Searle himself, in the end, has scientistic proclivities insofar as he views philosophy to be the conceptual handmaiden of science). In short, technological models and metaphors of "the self," or crucial components thereof, are ubiquitous in popular ideological discourse, in the natural sciences (especially cognitive science and evolutionary psychology), and even in the philosophy of mind (it's particularly troubling in this last domain insofar as one would expect philosophers to be far more critical on this score). This suggests, in other words, that these new technologies will (if they do not already) likewise affect our conceptions of the self.
I happen to be interested in works in the philosophy of mind that consciously eschew technological metaphors and conceptions while refusing to reduce the mind to the brain. I won't here cite this literature, but while it is rather small, I think it helps us see the myriad problems that arise from the philosophical presuppositions and assumptions about the self and mind intrinsic to apologies for the brave new world of "neurocosmetics." I thus hope the kind of philosophy of technology you mention here will take cognizance of this literature in support of its efforts and that, in turn, this will become part and parcel of any "law and technology theory."
Fascinating post, Frank! The last comment mentioned the works of Ellul (sometimes lumped in with other substantive theorists of technology). I suspect Ellul, if he were still with us, would not be surprised by the development of neurocosmetics, as he might see the drugs as yet another way that we humans accommodate the demands of rationality and efficiency in our culture as we become increasingly machine-like (and, according to Ellul, less authentically human).
Putting aside Ellul's somewhat dystopian perspective for the moment, should we worry too much about this? Can neurocosmetics be portrayed as part of a larger continuum that began millennia ago, when people started to load up on mead or pot, then later self-medicated on various substances, and more recently turned to the newest products offered by Big Pharma? If someone takes ADHD medication, is that all that different from neurocosmetics? The ADHD medication might permit an individual to work more efficiently (hence providing some advantage over other workers).
More troublesome, at least to me, are technologies that increasingly alter us prior to birth. At least an adult can choose to take neurocosmetics, while the pre-born cannot.
Still, Ellul might tell us that while we adults think we are acting in an autonomous manner when we take neurocosmetics, in fact we're fooling ourselves, as we can't understand the larger process of rationalization that is influencing our actions, making us pursue goal-oriented choices (enhanced efficiency, more pleasant disposition) and not value-oriented choices (living as an authentic individual).
Not only do we already engage in "neurocosmetics" through alcohol and other substances, but we engage in personality control through a far older "technology": parenting. My children have no choice but to be subject to my own views in this regard (complete with naughty corners, education, toy selection, Internet access, bedroom architecture, etc.). In this sense, I control (at least partly) their ultimate personalities.
Personally, I find neurocosmetics scary. But is it any more a manipulation of my children than all the other things I do? The children, of course, consent to neither. Parents make decisions about upbringing and about "neurocosmetics" based on a similar desire: to ensure their children's "successful" navigation of the world. Is the objection to neurocosmetics that it is more technology-mediated than, say, toy selection (many toys have substantial "technical" elements)? Is it that neurocosmetics is somehow more manipulative (which I doubt)? Or is it that while parenting strategies differ (creating a somewhat diverse generation), neurocosmetics is more likely to create similarity?
I know someone who put their child on Ritalin largely as a result of disappointment with his school results. I argued against it and still believe it was wrong. But the difficulty seems to lie in explaining why it is more wrong than all the other parenting decisions made by others that I disagree with.