"Science discovers, genius invents, industry applies, and man adapts himself..."
One of the slogans of the 1933 Chicago World’s Fair was the following: “Science discovers, genius invents, industry applies, and man adapts himself to, or is molded by, new things... Individuals, groups, entire races of men fall into step with science and industry.”
This wasn’t a new idea. There is a long-standing strand in human thinking about technology that emphasizes the important (and sometimes apparently decisive) effect of our technologies on society. In 1620 Sir Francis Bacon wrote in the Novum Organum that:
“…it is well to observe the force and virtue and consequences of discoveries, and these are to be seen nowhere more conspicuously than in those three which were unknown to the ancients, …; namely, printing, gunpowder, and the magnet. For these three have changed the whole face and state of things throughout the world; the first in literature, the second in warfare, the third in navigation; whence have followed innumerable changes, insomuch that no empire, no sect, no star seems to have exerted greater power and influence in human affairs than these mechanical discoveries.”
Numerous subsequent writers have raised the same suggestion that the technologies that we create and use have profound effects on social structures and on human history. At the extreme, technology itself is viewed as a phenomenon that drives history and society. It seems likely that this idea is, in part, true. However, at the same time, technologies are produced and used by a given society and so are themselves determined by that society. In other words, the influence appears to flow in two directions between technology and society, and it is difficult to untangle the primary cause (if there is one).
And yet, the complexity of this interaction sometimes makes it seem as if technology calls the shots. As they said at the World’s Fair, society and individuals “fall into step with,” “adapt to,” or are “molded by” the technology. In this pair of blog postings, I would like to tackle two questions. First, have technology and technological ideology so pervaded the law and judicial thinking that it can be said that the law is determined by technology, rather than that technology is controlled by the law?
The second blog posting will look at the effects of technology on the autonomy of the individual human being, rather than the effects of technology on the collective self-determination of humans in a society. In that second post, I would like to explore the mechanisms by which individuals come to feel obliged to adopt a given technology, and how inequality (of power, or of natural or material resources) between humans drives this process. For that second posting, I am indebted to Frank Pasquale, whose excellent recent posts in this blog and previous writing on equality and technology have spurred my thinking in this direction. My discussions with my good friend and extremely insightful colleague at the University of Ottawa, Ian Kerr, on the complex effects of technology on human equality were also both fun and deeply illuminating!
Onward with the first posting!
A year or so ago, I published an article that asked whether courts control technology or simply legitimize its social acceptance. I raised this possibility because I kept coming across judgments suggesting that either (1) our legal rules are biased in favour of technologies and against competing non-technological values, or (2) judges find ways to reframe disputes in ways that tend to favour technologies. This is a bold accusation, and it is possible that counter-examples could be proposed. However, let me give two examples to illustrate what I mean.
The doctrine of mitigation in tort law states that a plaintiff who sues a defendant for compensation cannot recover for those damages that could reasonably have been avoided. So far, so good. It makes sense to encourage people to take reasonable steps to limit the harm they suffer. In practice, however, courts have applied this rule to require plaintiffs to submit to medical treatments involving various invasive technologies to which they deeply objected, including back surgery and electro-shock therapy. Although plaintiffs have not been physically forced to undergo treatment, a seriously-injured plaintiff may face considerable economic duress. Knowing that the courts will likely withhold compensation if they do not conform to a majoritarian vision of reasonable treatment, they may unwillingly submit to these interventions. I think that this doctrine operates in a way that normalizes the use of medical technologies despite legitimate objections to them by individual patients.
In the trial-level decision in the Canadian case of Hoffman v. Monsanto, a group of organic farmers in Saskatchewan attempted to bring a lawsuit against the manufacturer of genetically-modified canola. The farmers argued that because of the drift of genetically-modified canola pollen onto their crops, their organic canola was ruined and their land contaminated. The defendants responded that their product had been found to be safe by the Canadian government and that it had not caused any harm to the organic farmers. Instead, the organic farmers had brought the harm upon themselves by insisting on adhering to the standards set by organic certifiers and the organic market. The trial judge was very receptive to this idea that the losses flowed from the actions of organic certifiers and markets in rejecting genetically-modified organisms, and not from the actions of the manufacturers. I find this to be a very interesting framing of the dispute. In essence, it identifies the source of harm as the decision to reject the technology, rather than the decision to introduce the technological modification into the environment. Once again, the technology itself becomes invisible in this re-framing of the source of the harm.
These judges do not set out to make sure that humans adapt to the technologies in these cases. Instead, I think these cases can be interpreted as being driven by the ideological commitments of modernity to progress and instrumental rationality. An interpretation of the facts, or a choice of lifestyle, that conflicts with these ideologies sits uneasily within a legal system that itself reflects them.
More recently, I have begun to explore a second question along these lines. If judges and our legal rules are stacked in favour of technologies and against other values, what happens when it is the judges themselves who are in conflict with the technologies? Do the judges adapt? Here I turned to the history of the polygraph machine (lie detector), and the attempts to replace the judicial assessment of veracity with evidence from the machine. The courts have generally resisted the use of polygraph evidence on two bases. First, they say, it is unreliable. Second, the assessment of veracity is viewed as a “quintessentially human” function, and the use of a machine for this function would dehumanize the justice system. While the judges appear to be holding the line against the attempted usurpation by the machine of this human role in justice, it is interesting to speculate about how long they will be able to do so. Will they be able to resist admitting reliable machine evidence, particularly given concerns about how reliable humans actually are at detecting lies? Novel neuro-imaging techniques such as fMRI, which purport to identify deception by patterns of activity in the brain, represent the next step in this debate. If these neuro-imaging techniques are refined to the point that they are demonstrably superior to human beings in assessing veracity, would it be fair to exclude this evidence in a criminal trial? The right to make a full answer and defence to criminal charges may say “no.”
I am currently researching neuro-imaging technologies and their use in the detection of deception in order to predict how our law may be affected by them. In the background is the continuing question: Is it true that “Science discovers, genius invents, industry applies, and man adapts himself to, or is molded by, new things... Individuals, groups, entire races of men fall into step with science and industry”?
4 Comments:
Jennifer,
This is a wonderful post. I am especially intrigued by your description of judges indirectly imposing the adoption of new technologies. I generally believe that the legal regime focuses much more on encouraging the innovation of new technologies than on their adoption. Important technologies often suffer from diffusion failures.
This makes me wonder whether the examples you raise indicate a trend, or whether we can find examples that point in both directions. One example I wrote about was artificial insemination, where some courts in the 1940s-1950s pronounced the women using the technology to be committing adultery and the resulting child to be illegitimate. Other, more recent examples are cases where courts restrict the use of privacy-threatening technologies. I am wondering whether we can draw more refined lines as to when courts would encourage technological adoption and when they wouldn't. Does it depend on whether there are important conflicting social values at stake?
Thanks Jennifer and Gaia.
I agree with Gaia and think it might depend on the technology and the values at stake. Perhaps, even, on the extent to which those values accord with the values of the typical member of the judiciary. So, judges often prevent, directly or indirectly, the refusal of medical treatment (think also of the children of parents with anti-intervention religious beliefs). But they are reluctant to embrace technologies if that would limit their own role. The example of lie detectors is one; another is the Gutnick case, where jurisdiction was maintained despite the alleged threat to the Internet. Artificial insemination could also fit in here, if one assumes judges of the 40s and 50s to have been relatively socially conservative. Admittedly cynical, but how do we judge except through our own values?
Jennifer,
That was an informative and interesting post. Thanks.
You might be interested in a recent paper by Michael Pardo and Dennis Patterson, "Philosophical Foundations of Law and Neuroscience," available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1338763
Perhaps it goes without saying, but I'm quite partial to their argument and hope it gets a wide hearing in the legal academy, especially in light of the current popularity of neuroscience and neuroethics.
By the way, with regard to your forthcoming post, you might want to look up (if you've not already) some of what Amartya Sen has written about relative deprivation and inequality (and 'social exclusion,' an approach which, as Sen mentions, was pioneered in the work of René Lenoir). See, for instance, his contribution, "Conceptualizing and Measuring Poverty," in David B. Grusky and Ravi Kanbur, eds., Poverty and Inequality (2006): 30-46. Cf.:
"[T]oday a person in New York may well suffer from poverty despite having a level of income that would make him or her immune from poverty in Bangladesh or Ethiopia. This is not only because the capabilities that are taken to be basic tend to change as a country becomes richer, but also because even for the same level of capability, the needed minimal income may itself rise, along with the income of others in the community. For example, in order to take part in the life of the community, or for children to be able to communicate with others in the same school, the bundle of commodities needed may include a telephone, a television, a car, and so on [e.g., a computer!], in New York, in a way that would not apply in Addis or in Dhaka (where an adult may be able to participate in social affairs and children can talk to each other without these implements)."
This was, of course, also a preoccupation in the work of the late Ivan Illich.
Jennifer: I'll look forward to your work on brain-imaging. As you may have heard, there was a recent criminal case in India in which the accused was convicted in part on the basis of fMRI evidence (at least, that is my recollection).
The ability of these technologies to determine veracity may be on the increase.
In a somewhat under-the-radar development, fMRI can now, at least in a primitive fashion, actually measure and read thoughts. In 2008, scientists from Berkeley published a paper in Nature showing that brain-imaging technologies allowed them to accurately identify the images that people were looking at. First, a test subject viewed thousands of images over five hours while an fMRI scanner recorded activity in their visual cortex. This step taught a software decoder how that person's brain encodes visual information. Next, the decoder predicted the brain activity it would expect for each image in a new set. Finally, the test subject viewed images from this second set while in the scanner, and the software matched the observed brain activity against the decoder's predictions. When using a set of 120 images, the software got it right nine times out of ten. If the software were simply making random predictions, its success rate would be just 0.8%. According to the researchers, "This general visual decoder would have great scientific and practical use … perhaps even to access the visual content of purely mental phenomena such as dreams and other imagery."
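For the technically curious, here is a minimal toy sketch in Python of the identification step described above. It is not the authors' actual pipeline: it assumes the training phase has already produced a predicted voxel-activity pattern for each candidate image, and all the data below are synthetic. It simply illustrates the matching idea, i.e., picking the candidate image whose predicted pattern best correlates with the observed fMRI response.

import numpy as np

def identify_image(observed, predicted):
    # Correlate the observed voxel pattern with each image's predicted
    # pattern and return the index of the best match.
    scores = [np.corrcoef(observed, p)[0, 1] for p in predicted]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
n_images, n_voxels = 120, 500  # 120 candidates, as in the Nature study

# Hypothetical model output: one predicted voxel pattern per candidate image.
predicted = rng.standard_normal((n_images, n_voxels))

# Simulate an observed response as a noisy version of one image's prediction.
true_index = 42
observed = predicted[true_index] + 0.5 * rng.standard_normal(n_voxels)

print(identify_image(observed, predicted) == true_index)  # True
print(1 / n_images)  # chance level: ~0.008, i.e., about 0.8%

With 120 candidates, random guessing succeeds only 1/120 of the time, which is where the 0.8% chance figure quoted above comes from.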
Awesome--and frightening!
As with Frank's earlier posts re: cognitive-enhancing drugs (and my comments concerning reprogenetic drugs), we seem to be moving into a world where technologies more and more determine or reveal fundamental aspects of human identity (including 'truthiness'!). The instrumental perspective that traditionally called for judicial/policy deference to new technologies, as suggested by Jennifer's post, could potentially be challenged to the extent that the law recognizes that the stakes are higher than ever in terms of the ways that technological change can harm traditional values and interests protected by law. The right of the accused to remain silent, for example, may need to evolve into the right of the accused not to have his or her brain scanned, which presumably could at some point be conducted remotely and surreptitiously by state agents.