Thursday, March 31, 2011

Digital Recording for Virtual Conference

We’ve posted here a digital recording of the virtual conference held last year on ‘Human Autonomy, Law and Technology,’ which followed an earlier blog at this site on the same topic. You’ll also see an agenda to help you view the digital recording. If you right-click on the “Watch Video” link and choose “view in separate tab or separate window,” you can see both. To enlarge the video, click the X at the bottom right of the video screen. Thanks again to all of the participants. The articles from the techtheory blog and the virtual conference were collected in a symposium issue of the journal Bulletin of Science, Technology & Society and can be found here.

Tuesday, June 01, 2010

The Laws of Technology and the Technologies of Law Event 2011


Lawyers and legal institutions regularly face technological change. The public record of the twentieth century and this one is populated by numerous crisis events surrounding emerging technology, where law was called forth to channel, regulate or prohibit certain technologies and technologically mediated activities. This rich history, coupled with the ever-present concern of technological change, would suggest that there is a detailed scholarly reflection on the relationship between law and technology. However, this is not necessarily the case. Most scholarship on law and technology is reactive to concerns surrounding a specific technology or technologically mediated activity. This orthodox scholarship remains within a reasonably narrow frame of reference, concerned with securing a desirable future through law as an instrument of public policy. On this view the lawyer-scholar’s task is primarily descriptive; it involves the identification of the ‘issues’, ‘uncertainties’ and ‘gaps’ to be addressed by policy-makers and legislators. This symposium aims to challenge this orthodoxy at three key points.

The first challenge can be through a taking seriously of the past of law’s engagement with technology. Instead of issue-specific, piecemeal engagements that look narrowly to the future, it is hoped, through archival, historical and cultural sources, to glean a more sophisticated account of the social, political, economic and cultural factors that gave form to concrete law and technology moments.

The second challenge can be through a taking seriously of the present of law’s engagement with technology. Law faces profound technological change. However, instead of falling back on the narrow nomology of the orthodox scholarship, what is hoped for is a diverse array of methods and resources – social scientific, cultural and literary studies for example – to expose, critique and understand the current political-legal engagements with technological change.

The third challenge can be through a taking seriously of the future of law’s engagement with technology. The predominant theory of law in the orthodox scholarship is instrumental and sovereign. At a fundamental level law is conceived as a process, a machine that can be deployed. And significantly, it is a process that can claim sovereignty over the future. Ironically, the law called forth by technology can itself be characterised as technological. Through jurisprudential, philosophical, semiotic, psychoanalytic and other theoretically informed discourses it is hoped to question and think through these deep connections between law and technology.


Kieran Tranter, Deputy Director, Socio-legal Research Centre, and Managing Editor, Griffith Law Review.


The focus is a one-day workshop at Griffith Law School, Griffith University Gold Coast Campus, to be held on 3 May 2011.

While it is hoped that presenters can present in person, presenters will also be able to contribute through Skype and video-linking technologies. The workshop will run afternoon to evening to allow northern hemisphere presenters to be involved.

There will be no cost for presenters to attend the workshop.


A selection of the papers presented at the workshop will be refereed and edited for appearance as a symposium in the Griffith Law Review (2011) 20(2).

An edited volume comprising all the presented papers with a well-regarded law publisher is planned.


Proposals, including a title and 300-word abstract, are due 28 January 2011. Send proposals to

Confirmed Participants

Lyria Bennett Moses, Faculty of Law, University of New South Wales, “Agents of Change.”

Gaia Bernstein, Seton Hall Law, Seton Hall University, “When is Timing Important in the Regulation of New Technologies?”

Arthur Cockfield, Faculty of Law, Queen’s University, “From Cyberlaw to Law and Technology.”

Jennifer Chandler, Faculty of Law, University of Ottawa, “Technological Justice: Identification and Distribution of the Benefits and Harms of Cognitive Enhancements and Therapies.”

Charles Lawson, Griffith Law School, “Deploying Law to Bound Nature.”

Joseph Pugliese, Faculty of Arts, Macquarie University, "Drone Technologies and the Inexecution of Law."

Kieran Tranter, Griffith Law School, “Gaming the Speculative Jurisdiction in Law and Technology.”

Friday, January 22, 2010

Announcing Upcoming Virtual Conference

On March 18, 2010, from 1 pm to 5 pm (Eastern Standard Time), we will hold a 'virtual' conference in Second Life (at the Queen's University Faculty of Education virtual island). The topic of this conference will be the same as the one for our most recent blog: 'Human Autonomy, Law and Technology.'

Dean Jim Chen will open the virtual conference with a keynote speech. Then professors, lawyers and others from different countries will appear as avatars to give papers and commentary.
More information on this virtual conference, including a draft agenda, is located here and will be updated as the conference date approaches.

Individuals can view the conference proceedings in three ways: (a) as avatar audience members attending the conference; (b) via a live video feed; or (c) later viewing of an archived digital copy of the conference proceedings.

Tuesday, March 10, 2009

Summary: Human Autonomy, Technology and Law

I’d like to thank our bloggers for their many thought-provoking posts: Frank Pasquale, Jennifer Chandler, Kieran Tranter, Gaia Bernstein, Lyria Bennett-Moses, Lisa Austin, and Samuel Trosow (also thanks to Jennifer for suggesting the blog topic). Many thanks as well to Jim Chen for administrating the techtheory blog, and for helping out with technical glitches. Final thanks to those individuals who provided helpful comments.

At the outset of this blog on the topic of 'human autonomy, technology and law', we asked whether we control machines or machines control us, and what all of this has to do with law.

Not surprisingly, the diversity of views expressed on this topic resists any straightforward summary.

Our bloggers and commentators explored technologies that included cosmetic surgery, nuclear weapons, cell phones, surveillance technologies, digital copyright protections, databases, Facebook, neurocosmetics, reprogenetics, email, videotext systems, Blackberries, Neanderthal tools, airplanes, handguns, virginity-restoring surgery, human growth hormones, insulin, genetically-modified canola, fMRI, and even parenting styles.

Areas of law discussed included torts, copyright, privacy, cyberlaw, space law, virtual law, civil procedure, contracts, property, constitutional, family, labour, and mental health (nobody mentioned my personal favorite ‘tax law and tech change,’ but don't get me started on that one...).

A number of our bloggers drew on works from non-legal academic disciplines, emphasizing the ways that philosophers, sociologists, political scientists, historians and economists have struggled with theories of technology, as well as perspectives on the relationship between human agency and technology. Views ranged from the near-impossibility of staving off technological determinism ('those darn machines do control us!') to emphasizing the ways that people adopt and successfully resist technology ('we aren't going to let mere machines push us around!').

Those bloggers who fell toward the ‘machines control us' perspective tended to support more interventionist legal policies, while those who identified more closely with the human agency position seemed to take a more 'wait and see' attitude and were reluctant to change the legal status quo aggressively, as this could unduly upset traditional legal interests.

Many comments seemed to fall within the middle-ground that accepts the potential for human willpower within technological determinism. This position appears to track the 'soft determinism' perspective articulated by some technology theorists (which, in turn, is related to the philosophical notion of compatibilism that holds out the prospect of free will in a deterministic universe). This perspective could require a careful examination of the facts and circumstances of each legal issue to see whether technological determinism could harm legal interests.

Accordingly, how one thinks about the blog topic can carry important consequences with respect to both legal analysis and the ultimate legal/policy recommendation meant to address situations where law and technology intersect.

A final note: a collection of works on general law and technology theories and perspectives, including some by repeat bloggers at this site, has recently been published in a book entitled Law and Technology: An Interface from Amicus Books, Icfai University Press (edited by K Prasanna Rani).

Thursday, March 05, 2009

Bringing in some economic analysis (2)

The development of a general theory of law, technology and society should go a long way towards providing a coherent lens through which to understand and intervene in the information/information technology policy process on several levels. In building this theory, attention needs to be paid to economic analysis, and we should remain mindful of the disparate theories and assumptions that inform different economic schools of thought.

A major weakness in the information/communications/technology policy process (I’ll use ‘information policy’ as a catch-all term that encompasses all of these and includes substantive areas such as intellectual property, privacy, censorship, network policy, etc.) stems from the failure to question many ‘taken for granted’ assumptions, particularly those about the nature of our economic system. Since history shows that economic systems change over time, it is a mistake to assume that any particular system is universal and immutable.

Yet much policy making fails to get beyond certain assumptions about the superiority and inevitability of market exchange and the price system as the only possible allocative mechanism. Utilizing an approach rooted in critical political economy helps correct for some serious ‘blind spots’ that, at least in the case of intellectual property and other information technology issues, result in the assumption that the public goods nature of information is a “problem” that needs to be “cured.” This “cure” tends to involve crafting policies that utilize technological measures to induce scarcity, promote rivalry in consumption, and create new exclusion and control mechanisms, all with resulting negative social effects. Under the predominant utilitarian approach much in favor with contemporary policy makers, these social costs are often justified by the need to foster economic incentives. On the surface at least, it would seem that economic analysis plays an important role in the information policy process. But when one cuts below the surface, it appears that the policy process is driven not so much by any real economic analysis as by the power of economic interests. (Elsewhere I have argued more fully that copyright policy developments can be located within a broader framework of commodification and the logic of capital, and that a critical theoretical framework rooted in political economy is needed.)

The copyright policy environment in the United States in the mid-to-late 1990s provides good examples of how an unswerving faith in market exchange, combined with a ‘circle-the-wagons’ response to the challenges of new technologies, resulted in some very skewed policies. Skewed, that is, in the direction of ensuring that market mechanisms could operate without the increasingly pesky interference caused by the public goods nature of information. During that period, there was a convergence of several policy initiatives which, taken together, constituted an unparalleled proprietization (or ‘maximalist drift’, as it was often called) of intellectual goods and services. Several measures were passed into law, such as the strong anti-circumvention and digital rights management rules in the DMCA, the Sonny Bono Copyright Term Extension Act, the No Electronic Theft Act, the passage of UCITA in Maryland and Virginia, and the general ratcheting-up of mandatory levels of IP protection through trade agreements. Other measures failed to secure passage, such as a continuous series of sui generis database protection bills, and UCITA in all but two states. But throughout, the policy process was generally devoid of any serious understanding of the relationship between law, technology, the economy and society. Little effort was made to understand how the technological advances on the horizon would interact with social, cultural, political and economic practices. What little even tried to pass for economic analysis tended to focus on alarmist accounts of the dollar losses to the information and entertainment industries on account of piracy, or dire warnings of the impending demise of the domestic database industry should the problems created by the Feist case not be “cured” with expansive database legislation. (I have written a general critique of the database right and about databases in general elsewhere.)

In retrospect we can view the passage of the anti-circumvention rules of the DMCA as the high-water mark of a backward-looking maximalist agenda, or perhaps better stated, the low-water mark of progressive and future-oriented information policy making. One need only review the Electronic Frontier Foundation’s “Unintended Consequences: Seven Years Under the DMCA” to get a sense of how ill-advised this particular legislation truly was. That it was accompanied by other similarly oriented measures, and then exported for international adoption through an expanding series of trade agreements (which I have addressed elsewhere), only exacerbated the problems. It didn’t take long for even some of the key policy makers to have second thoughts about what they had unleashed in the heady ’90s (view Bruce Lehman’s statements at a 2007 conference at McGill).

For its part, Canada has done well in avoiding the excesses of the DMCA, but the pressure is still on to adopt similar policies. Substantive economic analysis, based on an understanding of how people are using new technologies and how the old business models might not be the best way to foster innovation, induce creativity and enable sustainable levels of growth, will help ensure that the Canadian policy process doesn’t fall victim to the same traps that the US fell into over a decade ago.

Perhaps in retrospect it is all too easy to say that the failure of the policy process was due to a lack of careful economic analysis or a clear understanding of the nature of the technological changes then underway. It may be that the forces pushing for these changes had simply captured the policy process at the time, and that no amount of economic analysis (critical or otherwise) and no amount of technological insight would have changed anything. Economic times were good, and new information and communications technologies promised an optimistic future of seemingly unlimited growth and plenty.

For better or worse, today we are living under very different circumstances. Some of the utopian glitter of the high-tech enthusiasts has worn off (well, at least some of it), and we are giving serious thought to the possibility of ordering our economic system in different ways, at least insofar as immediate government policies are concerned.

So all in all, I think now is a particularly good time to be working on the development of a general theory of law and technology (or law, technology and society) that takes due account of a wide range of cultural, social, political and economic factors. If the forces of technology are to be utilized to expand access to intellectual goods, rather than to devise ever more insidious exclusion, metering and surveillance systems, it will take some conscious effort and some affirmative information policies. I thank the organizers of this discussion for moving this agenda forward, and I am grateful to have had the chance to participate.

Tuesday, March 03, 2009

Bringing in some economic analysis

In my posts I will address issues pertaining to the economics of information and information technologies. In particular, I will argue that a theory of law and technology needs to account for economic phenomena, and that it needs to do so in a conscious, purposeful and critical manner.

In an earlier posting (February 5th), Peter Yu discusses a general theory of law and technology, which he argues should be about a triangular relationship between law, technology and society. I agree with this general formulation and with the need to view these three components as interdependent, avoiding either a technological determinist account or the formalist positivist view of law as an internal system unto itself. But I worry that not enough attention is given to economic issues, which I believe are a crucial component of a full understanding of the relations between law, technology and various social phenomena broadly construed. At the outset, I should disclaim any interest in adopting a law and economics approach to the problem; it is not my intention to substitute economics for technology as a key determinant. I just want to make sure that choices concerning questions of economic policy, indeed some very basic threshold questions about economic policy, are not lost under the broader guise of “law”, “technology” or “society”.

In his February 2nd posting, Art Cockfield reminds us of two differing views on the relationship between human autonomy and technology, the instrumentalist and substantive schools of thought. He contrasts these two competing theories and points to how they may influence legal analysis. I will do something similar with respect to what I see as two competing theories of economic analysis, which for simplicity I will refer to as mainstream positive economics and critical political economy. While Art goes on to propose a synthesis of the two philosophical perspectives he identifies, I will argue that these two ways of thinking about economics are less susceptible to reconciliation and have inherent tensions that are often played out in the information policy-making process that produces intellectual property and related laws.

My goal in these posts is not necessarily to convince anyone that one of these two disparate world-views is better than the other, or some potential mixture, but rather of the need to take economic issues into account when thinking about the parameters of a theory of law and technology. Having said that, I should indicate that in my work I rely heavily on an economic approach grounded in radical political economy, and I use this approach to inform my critique of expansionist intellectual property policies and other policies that tend to reduce intellectual goods (data, information, knowledge) and information and communication technologies to commodities without regard to their public goods qualities.

In the remainder of this first post, I will briefly review the contours of these two competing theories and their underlying assumptions. In the next post, I will argue that any theory of law and technology, or law, technology and society needs to consciously take account of economic issues in a purposeful manner that explicitly recognizes the contention between these schools of thought.

Mainstream positive economics starts with the assumption that the free market system, as it operates through a price mechanism, is the ideal allocative mechanism to govern the production, dissemination and use of intellectual goods and information technologies. It is thought that such goods will be under-produced without a guarantee of sufficient market-based financial incentives to creators, inventors, owners and distributors. A related assumption is that an expansion of property rights is necessary in order to protect these market-based incentives from being undermined by acts of appropriation, especially in an era of easy reproduction. In contrast, one might reject this market-based system of allocation in favor of an approach rooted in the tradition of critical political economy.

In political economy, which has historically stood at the intersection of politics, social theory and economics, a society’s prevailing reward structure and economic institutions and models are not taken as a pre-determined given. Rather, they are constantly subjected to evaluation and re-evaluation, especially under conditions of change. Taking a broad historical approach, it becomes evident that societies can, and often do, change their economic institutions and the manner in which they operate in response to new arguments. With respect to a theory of law and technology, it is also evident that economic analysis has played an important role in informing copyright, patent and related policies, and these utilitarian theories are often rooted in the search for the optimal trade-off, or balancing, of the various interests of creators, rights holders and users. Central to this utility-maximizing is the presumed need to provide direct economic incentives to create intellectual goods, be they works of expression or works of invention.

One recurring criticism of the efficiency-centric, cost-benefit analysis mode of thinking is that certain gains and losses are not as susceptible to precise quantitative measurement as are others. So in the area of intellectual property, it is often argued that losses to the general public interest resulting from over-protection are not as easy to identify and measure as those concrete financial benefits accruing to the rights holders, and that this disparity creates a built-in bias in the policy process in favor of over-protection of intellectual assets. It can also act to marginalize other policy options which are not rooted in proprietary mechanisms.

Proponents of an approach rooted in critical political economy would argue that a deeper analysis is needed than what can be provided by models tied to market efficiency assumptions. Within the critical perspective, the “public goods” quality of information (it tends to be non-rival in consumption and not inherently subject to an exclusion mechanism) is seen as a good thing that presents society with many potential social benefits. But within the logic of the market, the public goods nature of information is viewed as a market failure “problem” that needs to be “cured” so that the price mechanism can properly operate. These cures take on various forms designed to induce scarcity, promote rivalry in consumption, or employ new exclusion mechanisms.

Getting back to the triangular relationship between law, technology and society, I’m not sure where to place the consideration of economic issues. Perhaps it is a subset of the “society” prong, or perhaps it is embedded in the “technology” prong, or perhaps even in the “law.” I am clear, though, that economic issues need to be considered somewhere in this mix, and such difficult economic issues need to be considered in an explicit manner. Failing such explicit treatment, the underlying assumptions of one of the models continue to go unchallenged, and its values and suppositions are absorbed into the policy process even if only implicitly.

In my next post, I’ll provide some examples and attempt to flesh these arguments out in a bit more detail.

Monday, March 02, 2009

Introducing Samuel Trosow

Our last blogger on the topic of 'human autonomy, law and technology' is Sam Trosow from the University of Western Ontario, where he holds a cross-appointment with the Faculty of Law and the Faculty of Information and Media Studies.

Several years ago, Sam and I were on a panel that looked into law and technology theories, and his paper discussed how social theory could help legal thinkers understand the ways that technology change can subvert legal interests: see Samuel Trosow, "The Ownership and Commodification of Legal Knowledge: Using Social Theory of the Information Age as a Tool for Policy Analysis," Manitoba Law Journal 30(3): 417 (2004).

He has more recently published a book (with Laura Murray) on Canadian copyright reform.

Looking forward to what should be a couple of fairly provocative posts!

Saturday, February 28, 2009

Control yourself, or at least your “core” self

One of the dominant definitions of privacy—particularly in the policy world but by no means confined there—is that of control over personal information. Certainly it influences data protection law in Canada, which requires organizations to obtain the consent of individuals for the collection, use and disclosure of personal information. One of the great advantages of such a model is that it does not limit protection to a particular sub-class of personal information, such as information that is sensitive and intimate—“personal information” is simply information about an identifiable individual. This makes such models potentially more responsive to information practices that rely less on intruding into a sensitive sphere and more upon compiling pieces of information that, on their own, are not sensitive and may even be “public.” However, the breadth of a control model is also its Achilles’ heel: to create a workable scheme one needs many exceptions, and without careful thought these may be clumsily introduced. Canada’s experience with these regimes bears this out, and I have documented these problems elsewhere.

For the purposes of this blog, I want to focus here on a particular strategy for limiting the breadth of a control-over-personal-information model of privacy that is popular in Canadian jurisprudence: the idea of a "biographical core." Canadian Supreme Court constitutional privacy jurisprudence (arising out of the search and seizure context) has often endorsed ideas like control over personal information in relation to informational privacy. However, most of the real work is in fact being done by a much narrower idea. Informational privacy is said to protect one’s “biographical core of personal information,” which has been defined as including “information which tends to reveal intimate details of the lifestyle and personal choices of the individual.” (Plant) This narrowing of personal information to one’s biographical core is also present in data protection regimes, although less explicitly, because of the need to provide some personal information with stronger protection than other information (for example, this sometimes plays out in debates regarding the type of consent required or in how a balancing test is implemented).

I have pointed out this trend at a number of practice-oriented forums and usually get one of two responses. The first, from decision makers, is that of course they have to operate with some idea of a “biographical core” because some information is more sensitive than others and this is the only way to properly engage in a privacy risk assessment. The second, from various privacy advocates, is shock and dismay that the privacy community is reverting to what looks like an idea of sensitive and intimate information that seems wholly unsuited to meet current privacy challenges.

I, however, think that privacy-as-protection-of-one’s-biographical-core has far more in common with privacy-as-control-over-personal-information than simply its pragmatic use to narrow an overly-broad definition. They both draw upon a similar idea of the self.

This becomes readily apparent if we consider the work of Alan Westin in his influential book, Privacy and Freedom. Westin is often cited for this classic privacy-as-control statement:

Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others. (p.7)

But Westin also goes on to write:

privacy is the voluntary and temporary withdrawal of a person from the general society through physical or psychological means, either in a state of solitude or small-group intimacy or, when among larger groups, in a condition of anonymity or reserve. … [E]ach individual is continually engaged in a personal adjustment process in which he balances the desire for privacy with the desire for disclosure and communication of himself to others (p. 7)

The most serious threat to the individual’s autonomy is the possibility that someone may penetrate the inner zone and learn his ultimate secrets, either by physical or psychological means. (p.33)

From this we can see that Westin’s claims regarding control over information are in service of an idea of privacy as social withdrawal—an idea that lines up with more traditional privacy ideas such as the protection of secret, sensitive and intimate information. Moreover, this withdrawal is ultimately in service of the protection of an “inner zone” that parallels the Supreme Court of Canada’s biographical core. Social interaction is something that is balanced against this need for withdrawal, something that is in constant tension with it—which echoes the difficulty that many judges have in understanding why someone might have a privacy interest in information that has been voluntarily disclosed to others, or in regards to something that has in some context been made “public.”

There are other alternatives for thinking about the self and privacy. Suppose instead that we took up the challenge posed by some of the first generation philosophers of technology that we need to rethink the modern subject if we are to properly respond to the challenges of technology. Suppose, for example, that instead of the idea of an individual with an inner core transparent to itself upon solitary introspection, we posited a self that is in fact formed through social interaction. The point of privacy would not be to protect the conditions of social withdrawal in order to maintain the integrity of such a self—it would be to protect the conditions of social interaction in order to provide the basis for identity formation in the first place.

I am currently working on outlining an account of privacy such as this. Inspired explicitly by Goffman, but influenced by many others, I want to claim that privacy should be understood in terms of protecting our capacity for self-presentation. This “self” that is presented may or may not be different in relation to different “others,” may or may not be constituted through these relationships, and may or may not vary over time and across contexts in contradictory ways—in other words, it stays far away from positing anything like an “inner zone” or “biographical core.” What becomes important is not the protection of different layers of an already-constituted self but rather an individual’s ability to know the others to whom she presents herself—and even, in some cases, to be able to choose these others. For example, if I take a photo of you in a public place and publish it in a magazine, I have dramatically changed the nature of the others to whom you were presenting yourself—the “audience” shifts from the other people sharing this public space to the other people reading the magazine. This shift, I want to argue, undermines one’s capacity for self-presentation and therefore raises at least a prima facie privacy claim—even though the photo was taken in “public” and even though it reveals nothing embarrassing or sensitive (I have written elsewhere about the Aubry case, which has these facts).

There is, of course, much more to say and this is what my current work is focusing on. My point in these two blog posts has been to try to show that the first generation of philosophers of technology raise an intriguing challenge to legal theorists regarding the need to examine the view of the self that we adopt in thinking about technological questions. I think that privacy law and theory would do well to rise to the challenge.

Thursday, February 26, 2009

Are We All Control Freaks Now?

Earlier this month, Facebook quietly changed its terms of service and waded into what I will call the “control wars” over personal information. Facebook’s changes would enhance its control over users’ posted information, including material that had been deleted. The response was swift and angry. A Facebook Group, “People Against the New Terms of Service,” attracted over 130,000 members to pressure Facebook to revert to its old terms of use. The Electronic Privacy Information Center (EPIC) threatened to file a complaint with the Federal Trade Commission. Facebook backed down.

This incident is interesting for many reasons. For one, it illustrates public anxieties regarding personal information. Tracked by public surveillance cameras, profiled by marketers, tagged by Facebook friends—increasingly we fear that information and communications technology has placed our personal information beyond our control. And, given that one of the most popular definitions of privacy is “control over personal information”, any loss of control is viewed as a problematic loss of privacy.

The Facebook incident also highlights the accepted solutions to this problem. The way to halt the rapid erosion of privacy is to provide individuals with more control over their personal information. This has both a technological and a legal aspect. The technological aspect can be seen by the use of technology itself (a Facebook group) to mobilize individuals into an effective pressure group. The legal aspect can be seen through the threat of legal action. In fact, EPIC claims that this incident is evidence of the need for more comprehensive privacy laws in the United States. Canada has such legislation, including our federal Personal Information Protection and Electronic Documents Act (PIPEDA), which aims to provide individuals with greater control over the collection, use and disclosure of their personal information. Even before this recent controversy, the Canadian Internet and Public Policy Clinic (CIPPIC) filed a complaint with the federal Privacy Commissioner alleging that Facebook was in violation of its obligations under PIPEDA.

I am a supporter of comprehensive privacy legislation and, as a Facebook user, happy that Facebook reversed its decision. Nonetheless, I think we should be concerned about the prevalence of “control” as the paradigm for both the problems posed by information and communications technology and their solutions.

What interests me here is the striking parallels between contemporary privacy angst and technological fears from an earlier era. Like the “information age,” the modern industrial age engendered dystopian visions of out-of-control technology, technology that did not simply herald a new age of freedom but rather brought with it new types of threats to human autonomy, health, communities and the environment. This spawned a great deal of academic commentary across many disciplines; I want to focus here specifically on the philosophy of technology and what it can both contribute to, and learn from, the control wars.

Hans Achterhuis usefully distinguishes first and second generation philosophers of technology. Perhaps the most influential philosopher of the first generation is Martin Heidegger. According to Heidegger, the instrumental conception of technology—that technology is simply a means that we create and use to further our chosen ends—blinds us to the true essence of technology. As he famously—and rather cryptically—argued in The Question Concerning Technology, “the essence of technology is by no means anything technological.” Instead, the essence of technology is more akin to what we might now call a cultural paradigm that conditions us to view the world as resources at our disposal. Moreover, for him the essence of technology is intrinsically tied to the project of modernity itself. In this way, his work fits within a general category of primarily European thinkers who made technology an explicit theme in their reflections and who argued—although each in quite different terms—that the significance of modern technology does not lie in specific features of its machinery but rather in a kind of rationality and cultural milieu intimately linked with the project of modernity and the Enlightenment values that animate it, while simultaneously threatening to undermine human freedom. In addition to Heidegger, Jacques Ellul, Gabriel Marcel, and the Frankfurt School were all influential in this regard.

Second-generation philosophers of technology share a general rejection of instrumental definitions of technology but have largely tried to distance themselves from the strong dystopian flavour of these earlier, more radical critiques. According to these second-generation thinkers, these earlier critiques fail because they are essentialist in talking about “Technology” rather than “technologies,” and determinist in not seeing the myriad ways in which human contexts and values shape and constrain the uses of technology. In a world where modern technology is ubiquitous and most often welcomed, they argue, we need a more nuanced view of technology, one that has a place to laud the victories of technology and a program for technological design that enhances democratic and ethical values. Indeed, as Hans Achterhuis has argued, second-generation philosophers of technology have largely taken an “empirical turn.”

This second-generation empirical turn can enrich legal discussions of technology by opening legal discussion to the insights of theorists from a variety of disciplines who have indicated that technology is in fact not neutral, that it often embodies important social and political values and therefore can have unintended and undesirable effects beyond simply physical consequences. It can also point to the ways in which we have the resources to think about, and build, technologies in a number of different ways and give us a richer basis upon which to think about law’s role in this.

However, in distancing itself from these various elements of earlier critiques, second generation philosophers of technology have largely lost sight of the normative elements of earlier critiques. The danger is that in showing how technologies are shaped by a complex of social forces, as well as how they open up a plurality of options, these theories fall into a kind of descriptive obscurity. Indeed, Langdon Winner accuses some expressions of this “empirical turn” of ignoring—even disdaining—any normative inquiry into technology in favour of highlighting the interpretive flexibility of any particular technology. As Winner argues, the important question is not how technology is constructed but which norms we should invoke to evaluate technologies, their design, implementation, and effects.

This is where legal scholars need to intervene.

What some of the legal debates regarding technology highlight is that it is not clear that the traditional normative strategies we might employ to evaluate technologies are adequate. And many of these normative strategies center on a particular idea of the self. For example, in an earlier posting, Frank Pasquale indicated that the question of the acceptance of self-enhancing technologies is not being driven by the technology itself but rather by a conception of the self that should be questioned. Kieran Tranter wrote of the need for alternative stories of self-creation.

These observations—with which I agree—suggest that we should rethink the empirical turn. What the first generation of philosophers of technology understood was that at the root of their questioning of technology lay the need to question the modern self itself. At the end of the day, this was Heidegger’s message regarding technology: the instrumental definition of technology blinds us to the real essence of technology, but the supreme danger of this is that we are thereby also blinded to the true nature of what it means to be a human being. Discussions of controlling technology—through law or other means—miss this entirely and in fact risk perpetuating a problematic view of the self.

In my next post, I will try to show how this insight can be helpful in understanding the limits of a privacy paradigm centered on control of personal information even if we don’t return to the radical excesses of first generation philosophy of technology.

But in closing let me respond to one possible objection to my claim that law is an important site for normative engagement with technology and, in particular, claims of control. One might ask whether law itself is a technology and therefore not something that can be easily and straightforwardly enlisted to judge other technologies. Ellul, who has already been mentioned in a number of previous posts, himself wrote of “judicial technique,” placing it in the realm of calculative rationality that characterizes other techniques. Nonetheless, because law is a site of justice it is also in a kind of privileged position in relation to technology as that which can never fully become technique. He argues:

Judicial technique is in every way much less self-confident than the other techniques, because it is impossible to transform the notion of justice into technical elements. Despite what philosophers may say, justice is not a thing which can be grasped or fixed. If one pursues genuine justice (and not some automatism or egalitarianism), one never knows where one will end. A law created as a function of justice has something unpredictable in it which embarrasses the jurist. Moreover, justice is not in the service of the state; it even claims the right to judge the state. Law created as a function of justice eludes the state, which can neither create nor modify it. The state of course sanctions this situation only to the degree that it has little power or has not yet become fully self-conscious; or to the degree that its jurists are not exclusively technical rationalists and subordinated to efficient results. Under these conditions, technique assumes the role of a handmaiden modestly resigned to the fact that she does not automatically get what she desires. (The Technological Society, p. 292)

One might say that justice eludes control and we would do well to attend to this and its significance.

Wednesday, February 25, 2009

Introducing Lisa Austin

Thanks to Lyria and all of the previous bloggers for their many thought-provoking posts. We're now in the home stretch with two bloggers left to go.

Our next blogger, Lisa Austin from the University of Toronto, conducts research in areas that include privacy law and the ethical and social justice issues raised by emerging technologies. A recent work focuses on the challenges to privacy rights and interests presented by state information-sharing practices. Lisa is currently working on a research project involving privacy and identity.

Monday, February 23, 2009

Technology bias

In her comment on my previous post, Gaia Bernstein asks an important question:

The question is should the autonomy of scholars be constrained and their efforts be directed to areas of law where their insights would be most effective?
Actually, I agree with Gaia that the answer is "no." I am not attempting to cramp the autonomy of legal scholars to write about what they wish, only to encourage greater self-reflection.

No single article or author writing about virtual worlds is doing any wrong or harm. Having read 126 such articles, I can say that many of them are very interesting - as I have said previously, I love legal hypotheticals involving new technologies. I am not the only one - analysis of legal issues surrounding new technologies (from virtual worlds to genetics) can often be found in the mainstream media. And no-one is harmed by an exploration of how transactions concerning a moon platform or a virtual mace are classified from a legal perspective.

But there are concerns that result from legal scholars' interest in technology. The first, raised by Beebe, is that it allows lawyers to pretend that law is still in control. We "domesticate" technological innovation by analysing it in legal terms.

Interest of this sort is usually short-lived, so that we still have cyberlaw (though much of this is being assimilated) and virtual law, but no longer railroad law. And we now explore property concepts by testing them against virtual objects rather than space platforms. If the point is to understand "property" better, why no longer space platforms?

The other concern is that legal scholars might focus on technological aspects of particular issues, while ignoring broader questions. It is one thing to say that the law can control technological monsters, but another to see only technological monsters.

For example, technology might be portrayed as a “monster” while analogous non-technological threats recede into the background. Consider Frank Pasquale’s discussion, on this blog and in a previous article, of the dangers of technologies that offer competitive advantage. As I said in my comment, I personally find the idea of neurocosmetics pretty horrific. But I have no trouble using parenting techniques to manipulate my children's personalities. In using such techniques, I am taking advantage of my children’s neuroplasticity to alter (to some extent at least) their future "selves." In this way, parenting can operate as an alternative path to the ends achieved by neurocosmetics. But parenting is not “scary,” not even if I know that it gives some children an “advantage” over children whose parents, perhaps due to socio-economic disadvantage, lack the resources to learn and utilise various parenting strategies. Which leads back to the question: if the concern is competitive advantage, is it reasonable to focus on the newest technological means of gaining a competitive advantage? Frank would say "yes" because

Technology is often far more sudden, effective, and commodifiable than social or cultural methods of accomplishing ends.
This suggests that technological means to achieving competitive advantage are of more concern than non-technological means. But it might be argued that a technological focus also deflects attention away from the (currently) greater social problem. I would perhaps justify a technological focus in a different way in this case - absent a rejection of capitalism in its current form, the only regulation likely is restrictions on technological means of gaining competitive advantage. Thus I am not saying that a technological focus cannot be constructive, nor that a particular article cannot choose to focus on technological aspects of a problem. But in focusing on the technological, we should not ignore the non-technological. In other words, it is important to consider the broader question about competitive advantage, in particular any other aspects of it that can realistically be limited. We should still consider, for example, whether students ought to be obliged to disclose the use of tutoring colleges when applying for university or jobs.

Even where the problem is not containing technological "monsters," but merely exploring uncertainties or filling legal gaps, it is important to justify a technological focus. I have tried to do this in Recurring Dilemmas and Why Have a Theory of Law and Technological Change. Others will judge my efforts. One interesting observation I made, though, was the tendency for lawmakers to use technological change as an excuse to change a law where that is not the real or only reason they wish to do so. We are used to the story of law falling behind technology and needing to be updated. While this narrative is sometimes pertinent, it is important to remain vigilant as to the bias it can cause. In some cases, portraying a new technology as the problematic element is used to advance a particular perspective. For example, digital copying and peer-to-peer technologies have been portrayed by organisations like the RIAA as requiring "updating" of copyright law (eg the DMCA). The narrative is one of an existing status quo, upset by technological change, requiring new laws to ensure reversion to the status quo. The DMCA may or may not be a good idea, but portraying technology as the disruptive element in need of a legal "fix" is not the only story to be told.

So, what lessons to draw? I am still unsure which aspects of virtual world scholarship can fairly be distinguished from golden age space law. But I think it is an important question to ask. Given our autonomy, why do we so often choose to explore legal issues surrounding new technologies? What justifications can we offer to counter any dangers of an overly technological focus?

Sunday, February 22, 2009

Turning the lenses inward

With my posts, I am going to do a different blend of the concepts autonomy, law, technology and explore the reasons why legal scholars use their autonomy to focus on issues surrounding new technologies. By “issues surrounding new technologies” I don’t mean why are we here discussing law and technology theory (there are, after all, relatively few of us, and many justifications we could offer for our choice of scholarship, some of which were collected in the MJLST symposium). Rather, I am referring to the vast fields of scholarship exploring particular legal issues surrounding particular technologies.

In my first post, I will set up the question, and in the second go some way towards an answer. One caution – I have much further to go with this project before producing a piece for publication, so my ideas are still tentative. Hopefully, these two blogs will generate critique and suggestions! But on with the show…

Beebe, in an excellent note entitled Law’s Empire and the Final Frontier: Legalizing the Future in the Early Corpus Juris Spatialis (108 Yale L.J. 1737), discusses the fate of “space law.” He describes the “Golden Age” of space law in which lawyers debated such questions as whether title to a space platform would be transferred by bill of sale or deed. Far from lagging behind technology, lawyers were leaping ahead. He argues that lawyers’ focus on outer space was an attempt, as Kieran Tranter might put it, to ensure that the “law” story won over the “technology” story, and hence that lawyers had a place in the future.

Note that Beebe does not deny that new technologies generate new legal issues. In an earlier piece, I categorised legal issues generated by technological change. It might in fact be uncertain, on the basis of pre-existing law, how title to a space platform would be transferred. Beebe’s point is not that this issue was meaningless or easy, but rather that the purpose of discussing it is to assert the dominance of a legal narrative in a technological future rather than to set out an authoritative, coherent statement of legal doctrine. “Space law” still exists, although Beebe distinguishes modern space law from “golden age” space law by describing the former as “a highly technical discourse spoken primarily by specialist practitioners.”

Might today's legal scholars, with the freedom to discuss whatever they wish, fall into a similar trap as "golden age" space lawyers? One area where this might be happening is in the scholarship surrounding legal issues in virtual worlds. I should start by admitting my own musings on this topic in an article on the scope of property law, which employed virtual property as one of its examples. So, why am I worried about the parallels? First, it is not self-evident why legal scholars would be concerned with virtual worlds. Unlike a technology such as cloning, there is no “obvious” role for law to play. Second, people spending time and doing business in constructed virtual worlds arguably pose a "threat" to lawyers similar to that posed by the possibility of space travel in the 1960s.

With the help of a research assistant, I am in the process of compiling a list of all articles dealing with legal issues in virtual worlds published (or appearing on-line) before the end of 2008. We have over 100 articles dealing with legal issues in virtual worlds. I am not currently including books such as that by Duranske on Virtual Law (published by the American Bar Association). As well as getting a sense of numbers, I have “coded” them for explanations offered as to why the issue being discussed is important or urgent. Some articles gave more than one reason, in which case more than one coding was allocated. My “coding” is necessarily subjective (as the justification for exploring issues in virtual worlds was often implied from introductions rather than explicitly identified as a rationale). But what I wanted was a sense of whether there was any expressed need for legal scholarship on virtual worlds that could take it outside the realm of Beebe's concern.

Most articles offered at least some rationale for finding the topic of interest. A few (including my own) were concerned with broader legal development, using virtual worlds as a launching pad to explore more general legal issues. Of the ones that considered the resolution of legal issues in virtual worlds important for themselves, the most popular reason was the rate of growth of virtual worlds, by reference to changes in population or profit. A few raised the need to ensure continuing growth and productivity of virtual worlds as a rationale for their discussion. Government and judicial activity was sometimes mentioned as justifying legal analysis. Quite a few articles referred to the fact that virtual world transactions have corresponding “real money” values, with some more referring to “real world” effects of virtual activity more broadly. Some articles referred to the importance of virtual worlds in the lives of (at least some of) their residents. There is also a cumulative effect, with some articles referring to previous media or academic interest in virtual worlds as a rationale for further discussion of virtual worlds.

So, is there anything in all this that might explain the popular focus on legal issues in virtual worlds? Some, still tentative, thoughts:

Growth: The growth of virtual worlds might be important for two (related) reasons: (1) if there are legal dilemmas, it is possible that more and more people will encounter them, and (2) if laws are going to be made, they need to be made soon before the technological status quo becomes entrenched.

The first of these is true, but statements about the number of citizens in Second Life are no more impressive than lists of man's accomplishments in outer space in the 1970s. Neither tells us whether resolution of the legal issues is timely or premature. Growth itself might signal either - ongoing growth and development might make early legal responses obsolete. Growth might also be illusory - a passing fad.

The second of these does seek to explain the urgency of attention to legal issues. However, “growth” as such may not be the relevant factor. According to Gaia Bernstein, diffusion patterns can signal a need for urgent consideration of legal issues. Diffusion patterns, however, are not a mere matter of rate of uptake but involve features such as centralisation and the existence of a critical mass point. Although virtual worlds are decentralised, their fate (in terms of a critical mass point) is less clear than the fate of the Internet discussed by Gaia in her paper. However, a demonstration that the diffusion pattern of virtual worlds made particular legal problems more urgent would satisfactorily distinguish virtual law from space law.

Technology promotion: Where a technology is independently desirable, but diffusion is stymied for an external reason, then law reform to remove the blockage might be desirable. Gaia Bernstein gives an example of this in her discussion of privacy concerns inhibiting the diffusion of genetic testing technologies. Whether this scenario (or something similar) applies in the case of virtual worlds would require demonstration. I am not so sure that promoting virtual worlds is a high government priority right now anyway.

Government and judicial activity: Certainly, a judicial decision, proposed law or proposed agency action can be a good reason for legal commentary. However, in the case of virtual worlds, few decisions and little action have nonetheless led to plenty of commentary. Bragg v Linden Labs only reached the interlocutory stage before being settled, yet academic commentary is plentiful.

Real world implications (including the possibility of exchange between virtual currency and real currency): The fact that actions in virtual worlds can have real world implications is generally a pre-requisite for their being of interest to lawyers at all. However, given the vast amount of possible activity that has implications, including financial implications, this cannot be a reason in itself. That said, if for example large amounts of money depended on the answer to a legal issue arising in virtual worlds, that could justify further exploration. Some virtual worlds literature falls into this category.

Importance to individuals using the technology: This seems a good reason to resolve legal issues surrounding virtual worlds. If the lives of many individuals would be enhanced by particular legal treatment of virtual worlds, then advocating such treatment seems sensible. Of course, ideally, one would have empirical proof of what legal issues virtual citizens are concerned about, rather than mere supposition.

In summary, there are some glimmers of hope that virtual law scholarship will turn out to be less humorous in retrospect than "golden age" space law scholarship, although the jury is still out. Most likely, as in the case of space law, some aspects of virtual law jurisprudence will become relevant and important, perhaps confined to true specialists. Other areas may seem, in retrospect, a distraction, motivated by legal academics’ desire to explore strange new worlds.

But, if scholars can do what we like, why does this matter? The answer (or at least further musings) will have to wait until my next post.

Saturday, February 21, 2009

Introducing Lyria Bennett Moses

Our next blogger, Lyria Bennett Moses, hails from the University of New South Wales.

An earlier paper by Lyria discussed how the law deals with 'recurring dilemmas' when confronted with new technologies as well as the ways that technology change differs from other social changes that challenge traditional legal interests. In this paper and elsewhere, Lyria has been developing a framework for legal analysis at the intersection of law and technology.

Her earlier posts at this blog can be found here.

Friday, February 20, 2009

Two Technological Tales: Email and Minitel

We tend to think that a technology which failed to diffuse must have been a bad idea. But there are technologies that undergo long social adoption processes and eventually achieve mainstream adoption. These long adoption processes, if acknowledged at all, are usually attributed off-handedly to technical issues. Yet diffusion delays often stem from a complex interaction of factors, many of which are related not to technical difficulties but to individual adoption decisions. In this post I want to use the stories of two eventually successful technologies that underwent long social adoption processes in order to underscore the need to focus legal attention and resources on the user as an adopter.

The first story is about videotext systems. We often marvel at how the Internet transformed our lives: from the abundance of information to the conveniences of online shopping. The Internet reached mainstream adoption in the mid-1990s. But few realize that the majority of the French population enjoyed the conveniences of the Internet from the early 1980s through use of a videotext system called Minitel. Minitel consisted of a small monitor and keyboard, and used the phone connection to transmit information. Minitel was used for online banking, travel reservations, information services, online grocery shopping and messaging services. All in all, it encompassed many of the features we have come to associate with the Internet.

The Minitel was introduced in France in 1982 and reached mainstream adoption by 1985. Similar videotext systems were launched in the United States, most European countries and Japan, yet these systems were not adopted. The residents of most of the world had to wait until the mid-1990s to enjoy the conveniences the French had enjoyed a decade earlier.

The second tale is about email. Most people consider email to be a 1990s technology. But it was in 1971 that the first email was sent between computers. The major technological difficulties were overcome by the early 1980s with the adoption of the uniform TCP/IP standard. Commercial email, in fact, existed during the 1970s. The Queen of England sent her first email over the Atlantic in 1976. Jimmy Carter’s campaign also used email in 1976. Then why did most of us start using email only in the mid-1990s? Technological issues alone fail to account for the time lag.

The stories of the videotext systems and email leave many questions unanswered. What prevented users from adopting these technologies earlier? What could have been done to accelerate diffusion? I hope to further explore these issues. But my main goal in this post was to use these stories to illustrate the importance of shifting the legal regime’s attention and resources toward regulating user adoption behavior, given its important role in technological diffusion delays.

Wednesday, February 18, 2009

The User as a Resister of New Technologies (or Hail the Couch Potato)

Legal scholars have recently discovered the user of new technologies. But we tend to concentrate on a specific type of user – the user as an innovator. We look at the user who designs, who changes a technology to reflect his needs. For example, much has been written about users innovating with open source software. We also pay ample attention to users' new abilities to create using digital technology and the abundant content available on the Internet.

I do not wish to belittle this recent focus on the user as an innovator. But I believe our concern with users should be significantly broader. After all, the user as an innovator is not our typical user. I want to suggest in this post that we begin paying attention to the ordinary user – the couch potato.

You may be wondering – why dedicate our time to the couch potato – isn’t our goal to encourage users to actively participate and innovate to promote progress? I propose that we focus on the ordinary user because, despite the common belief that a technology failed because it was inherently destined for failure, it is this user who routinely makes decisions about whether or not to adopt new technologies. Users resist new technologies in different ways. Sometimes they actively resist them; demonstrations against nuclear weapons are an example of active resistance. But most commonly, users engage in avoidance resistance. Examples of avoidance resistance are plentiful, from a woman not buying genetically modified food in the supermarket to an aging poet refusing to replace his typewriter with a computer.

I suggest that we start focusing on the user as an adopter of new technologies. The importance of concentrating on users' daily adoption decisions lies in our emphasis on progress as an important socio-legal value. We care about the user as an innovator because we believe that innovation promotes progress and human welfare. But if a brilliant new technology is not adopted, the progress goal itself is frustrated, and our investment in innovation is wasted.

In my next post, I will use the stories of two technologies – videotext systems and email – to illustrate the importance of paying attention to user resistance.

Tuesday, February 17, 2009

Introducing Gaia Bernstein

Our next blogger is Gaia Bernstein from Seton Hall Law School.

Gaia, along with Frank Pasquale, organized and hosted our earlier law and technology theory blog.

Gaia also organized the first symposium issue on works that considered the development of a general theory of law and technology. In this issue, her own contribution built on her earlier research to focus on the role of law in the diffusion of new technologies. This work helped me to understand how law interacts with our largely love/hate relationship with new technologies: we mostly embrace these technologies while, in certain cases, fearing their individual or social consequences.

Drumroll please ...

Sunday, February 15, 2009

Stories of Autonomy, Technology and Law II

The Autonomy Story

Freedom has exercised a particular attraction on the modern imagination. The technology story saw the tool-using human as freeing humanity from the constraints of a fickle and oppressive nature. The legal story saw contract and government as freeing humans from too much freedom in the state of nature. Freedom here is defined relationally, as a freedom from. The concept of will that Nietzsche exalted (as a rejection of the orthodoxy that ‘freedom’ had become) turns out, on a simplistic analysis, to be ‘freedom from’ on steroids. Freedom from, the pure exercise of will, has a tinge of irresponsibility about it, as first-year law students demonstrate when they are allowed to play, under close supervision, with negative rights in tutorials (I am free to swing my fist to within 1/1000 of an inch of your nose). Autonomy can suggest something else, and that something else can be seen in the autonomy story of autonomy, technology and law.

The autonomy story emerges from critiques of both the technological and the legal story. One of the first disciplines to question the technological vision of humanity as the freed being of brain and tool was technology studies. I am referring to Lewis Mumford’s canonical two-part Myth of the Machine (1966). In it, drawing upon the breadth of human diversity as catalogued by mid-twentieth-century cultural anthropology, Mumford argued that it was not tool use that defined humans, but language and culture, and that the evolution of our mental hardware was stimulated by increasing sophistication in the use of signs and symbols. Human freedom from nature came not from tools but from a culture that allowed more effective domination – technology was the material manifestation of culture, not the substratum on which the superstructure of culture was erected.

This meant that for Mumford culture – law, morals, myths and technology – is what liberated humans. Notice that, unlike the other stories, there are no second order consequences. Law and technology, as culture, are tied to human freedom. Mumford’s project was clear: modern accounts of technology that posited technology as outside of human control were false and ‘placed our whole civilisation in a state of perilous unbalance: all the more because we have cast away at this critical moment, as an affront to our rationality, man’s earliest forms of moral discipline and self-control’ (Mumford 1966: 52). Mumford regarded law (moral discipline and self-control) and technology as elements of a cultural whole. The need for law – for discipline and control of technology – was self-evident.

The spectre of the noble savage haunts Mumford’s work, along with a sort of negative ethnocentrism – which became obvious in the appropriate technology movement of the 1970s that his writing helped found – in favour of indigenous society against the ‘unbalanced’ West and all its works. However, this extremism is not core to the story that Mumford tells. Indeed, what this cultural re-reading of the technological story posits is a relation between law and technology that reifies technology neither as essentially human and by location ‘good’ (as in the technology story), nor as inessentially secondary and by location ‘bad’ (as in the legal story). What Mumford’s story allowed is a freedom to choose, but in that freedom hid responsibility. Humans, through culture, are the creators of their own destiny, and law and technology are equal partners in this self-creation.

This still talks about freedom, but it is a qualified freedom: not a freedom from but a freedom to. It seems that a vision of the human in the world that involves culture and self-creation also includes a concept of responsibility. It is this freedom to, and the normative demand of responsibility, that is captured by autonomy. This can be glimpsed in the critique of the legal story.

A fundamental challenge to the legal story of autonomy, technology and law comes, like Mumford’s critique of the technology story, from the social sciences. As early as when the lawyer turned sociologist Max Weber began the task of cataloguing legal systems, it became increasingly clear that social contract narratives failed to account for what it meant to live with a fully rationalised legal system, modern executive government and industrial capitalism. In this mass urban context of the machine (it must be remembered that ‘technology’ only became common parlance in the 1950s), concepts like nature, reason, freedom, sovereign, contract and rights had difficulty being recognized in identifiable ‘things.’ The US realists of the 1920s and 1930s tried to grasp this, but were hampered by their common law training and law school context, and remained, in the main, fixated on judicial decision-making. It is the work of Michel Foucault that fundamentally challenges the legal story of autonomy, technology and law. Instead of postulating a natural human and a state of nature, Foucault presents a plastic human constructed by techniques. Human subjectivity (that place where one feels free or otherwise) was not a private zone of autonomy that survived, and was to be guaranteed by, the social contract, but a product of context. Foucault talks about the cultural processes in modernity through which humans are made – the processes that Mumford glosses with his broad brush strokes. These processes are the discourses of the self (medical, sexual, legal) and the mundane training, through routine, reports and discipline, by panoptic institutions (the family, schools, hospitals, army, prisons, churches, and especially universities) that construct the ‘I’ of modern life. There is not the binary sovereign-subject but ever-changing and ever-to-be-negotiated networks of power relations.
Here ‘law’ is more properly experienced as mores, authority, disciplines and punishments, and ‘technology’ is more properly experienced as techniques for self-control and for power over others. Talk of autonomy is a relative and negotiated affair that can be represented spatially as zones where the reflective possibility of choice exists. However, that is not freedom from; the range of choices is always limited and circumscribed.

In Foucault’s story the emphasis is on how the individual as a self negotiates the everyday – using techniques, being subjected to techniques, and in so doing changing. I am suggesting that, notwithstanding their obvious differences, Foucault fits within Mumford’s very grand account. Mumford insists on the primacy of culture, and with that humankind’s responsibility for self-creation, while Foucault explains the processes, at the level of the individual, through which an individual is made to be responsible for the self.

Now this autonomy story might seem quite removed from the mainstream of law and technology scholarship. However, I would submit that the more complex assessments of technology being voiced in this forum owe their formative moment to a realisation that it is human doing with technology – that is, the cultural register – that is the frame from which law and technology needs to be considered. Further, the existence of this forum, with all these signs and symbols (Mumford would be proud, and ‘signs and symbols’ sounds more retro-cool than the po-mo ‘discourse’), is itself an exercise of our autonomy to reflect on our freedom to, and our responsibility for, the world that we make through technology and law.

In short – and this is the punch line of my argument – we tell stories. I have continually and on purpose used the noun ‘story’ and verbs like ‘talk’ and ‘telling’ throughout. What I have endeavoured to show is how law and technology thinking replicates and transmits fundamental narratives about autonomy, technology and law, even in the guise of practicality. What I have also suggested in the conclusion, with the autonomy story, is a realisation that these stories, embedded and persuasive as they are, are cultural, and that we have responsibility for them. This is why my research continues to circle back to science fiction (even when I feel I should grow up, get grants and do practical law and technology research). Putting aside the mountains of chaff within the opus of science fiction, there are some grains – some concepts, characters, plots, narratives – that are resources for writing alternative stories about the relation between humans, technology and law.

Friday, February 13, 2009

Stories of Autonomy, Technology and Law

I’ll address the most important topic raised in Art’s introduction. Re: Galactica. I am planning to do some more writing on Galactica later in the year and that might answer the question whether I am ‘enjoying’ Season 4. The enjoyment has morphed into a compulsion...

On the matters at hand.

I am very glad that Art has suggested this topic for this year’s blog, as it has allowed me to untangle some ideas that have lain undisturbed beneath my past thinking about law and technology.

Like Jennifer, what follows are new ideas (at least for me). I have welcomed this as a forum for expressing new thoughts and I would be very keen to engage in a dialogue. It also means that I do not have the solidity of a worked paper behind these thoughts, so please forgive the roughness of ideas and expression.

In recent years, due to teaching and editing responsibilities, I have found myself becoming more and more a legal philosopher. This, I think, is a good discipline to bring to a discussion of human autonomy, technology and law. My argument in what follows is that specific engagements with law and technology tend to be scripted by stories that posit a fundamental relationship between human autonomy, technology and law. There is direction to my narrative: I examine three of these stories – the ‘technology’ story, the ‘legal’ story and the ‘autonomy’ story – concluding with the autonomy story as exposing the truth of the task at hand.

The Technology Story

The technology story begins with the populist definition of the human as tool user. The origins of this story run deep in Western culture, but a specific beginning lies in the paleoanthropological theorising of the nineteenth and early twentieth centuries: that the evolution of the human – the specific chance relationship that accelerated natural selection – was tool use by distant apelike ancestors. It was claimed that the chipping of flint and the domestication of fire set the cortex alight. Tool use facilitated greater resource utilisation, which in turn gave stimulus to brain development, which in turn led to greater creativity and experimentation in tool use; and very rapidly (in evolutionary time), our hairy ancestors moved from flints and skins to not-so-hairy modern humans with Blackberries in Armani. In this story, what distinguished modern humans was tool use. The sub-text is autonomy: tools and brain freed humans from nature. In Bernard Stiegler’s nice phrase from Technics and Time, technology allowed ‘…the pursuit of the evolution of the living by other means than life’ (Stiegler 1998: 135). In this story technology fundamentally relates to autonomy.

What this story about technology and human autonomy does not tell is law. Indeed, law’s absence is telling. As a fundamental myth, the tool-using-free human (TUFH?) comes before law. Law emerges later, as a second order consequence, a supplement laid over the top of humanity’s essential nature.

The state of debate in contemporary paleoanthropology is that this story, as an account of the evolution of Homo sapiens, is problematic and simplistic. Further, deep ecologists have been keen to point out since the 1970s that humans share the planet with other tool-using species, and that a claim of superiority on the basis of tool use is anthropocentric. But it is a good story – a modern version of the myth in Plato’s Protagoras of Epimetheus, Prometheus and the gifts of traits – and it is an entrenched, and often repeated, narrative within Western culture.

The essential elements of this story are repeated again and again in the assumption of techno-determinism. It is the meta-form that scripts the arguments of those who enthusiastically embrace technological change as a good in itself. It is also the narrative that animates the legal mind when it turns casually to the question of technology and thinks that law must ‘catch up’ or that law is ‘marching behind and limping.’ In these phrases technology is placed at the core of what it means to be human, while law is located at the periphery. Its influence can also be seen in the ‘can’t’ or ‘shouldn’t’ regulate technological change arguments. Being technological is regarded as the essence of humanity, and artificial attempts to regulate the ever-flowering of this being will either fail (can’t) or end in debasement and corruption (shouldn’t).

The Legal Story

The legal story mixes the relationship of human autonomy, technology and law according to a different recipe. This story comes down to us from the social contract tradition of early modernity, and in it the roles of law and technology are reversed. The story goes that humans lived wretched (Hobbes) or simple (Locke) lives in the state of nature, living by their passions with only the spark of reason to distinguish them from animals. This state was the state of complete freedom. However, that spark of reason eventually led to the realisation that a compact between humans could secure a more peaceful (Hobbes) or propertied (Locke) existence. The social contract was formed and, bingo: government, law, economy, society and the global financial crisis followed. In the social contract some freedoms were sacrificed to preserve others. Here law is fundamentally tied to human freedom at two levels: first, it is the legal form of a contract that binds the natural human; second, freedom, reason and covenant combine to provide a justification for the posited legal system. One of the benefits, to use Hobbes’s phrase, of the ‘sovereign’s peace’ was technology. As humans were no longer in the ‘war of all against all’ (Hobbes) or worrying about where the next meal would come from (Locke), they could get on with learning about the world and making use of that knowledge. Hence technology emerges as a second order consequence.

Like the technology story, this story permeates Western culture. It remains law’s formal story of origin, and so ingrained is it in modern jurisprudence that explanations of legal orders that do not include such concepts as nature, reason, freedom, sovereign, contract and rights seem irrelevant. It shows its influence in law and technology scholarship. Fukuyama’s clarion call for law to ‘save’ humanity from biotechnology is an example. Driving Fukuyama’s argument is the social contract vision of the human as a reasoning being who is biologically vulnerable, and this combination – on which the Western apparatus for the expression of freedom (government and market) has been constructed – is under threat from technology. The core needs to, and it is legitimate for it to, secure itself against change. In this account technology, as a second order consequence, is a threat, but also a threat that can be met. There is a fundamental confidence in legal mastery of technology that is absent in the technology story.

To recap: the technology story posits human autonomy and technology as essential, with law a second order consequence. In the alternative, the legal story narrates human autonomy and law as essential, with technology a second order consequence. My argument has been that much of the scholarship on law and technology emanates (that is, draws its fundamental structure) from one of these narratives (and sometimes, in the guise of practicality, from both). What has happened in my telling of these stories has been a muffling of ‘autonomy.’ I moved from autonomy to freedom, and since treating these two words as synonyms is common, I should have got away with it. But perhaps I shouldn’t have. This opens the way to the autonomy story.

Introducing Kieran Tranter

We will hear next from Kieran Tranter of Griffith University.

I first came across Kieran's law and technology work in an article where he studied the complex historical processes that influenced the regulation of automobiles in early twentieth-century Australia. (Peter Yu and Greg Mandel have also written and posted views that discuss how history can drive law and technology developments.)

Kieran has also managed to cast a critical eye on the ways that philosophies of technology can assist with the development of law and technology theories, including a discussion of how Battlestar Galactica challenges Heideggerian views on the metaphysics of technology! More importantly, one wonders whether Kieran is enjoying this final season of Galactica ...

Kieran's earlier posts at this blog can be found here.

Thursday, February 12, 2009

Does technology make "an offer you cannot refuse"? Some thoughts on human autonomy and technology.

Autonomy is the state of freedom from external control and constraint on one’s decisions and actions. We are constrained by many things such as, for example, the earth’s gravity. Interestingly, many of our technologies increase our autonomy in the face of some of these constraints. For example, our experience of the constraining effect of gravity is greatly altered when we are on one of the thousands of airplanes circling the earth every day.

However, despite the range of decisions and actions that technologies open to us, there is a way in which we come to feel forced to adopt and use technologies, whether we like it or not. In some cases, this is because the technology becomes an indispensable part of the material or cultural infrastructure of a society and we must use it in order to participate in that society. For example, the widespread use of the automobile has led to styles of life and urban layouts that presuppose mechanical transportation.

In addition to the ways in which some of our technologies cause us to restructure society in a way that presupposes their use, the issues of human competition and equality are perhaps also at the heart of why we feel forced to adopt technologies.

In asking about the interaction of equality and technology, I am adopting the following understanding of human equality: I am interested here in the equality of resources, understood broadly to include not just external resources (e.g. wealth, natural environment, social and cultural resources), but also internal or personal resources (e.g. personality, physical and mental abilities). This is a provisional (“half-baked”) definition, and I am launching into this discussion with some trepidation. However, since this blog is a great opportunity to ventilate and develop ideas – here goes.

Technologies can be used to alter one’s endowment of both internal and external resources. Where there is a pre-existing inequality or disadvantage with regard to some resource (e.g. physical strength), a party may seek a technology to neutralize this disadvantage. Note, for example, that the 19th century nickname for the Colt handgun was “the Equalizer.”

Others may seek to go further with technologies and to create a positive advantage over others, whether they started from a position of pre-existing disadvantage or not. Frank discusses the competitive pursuit of technological enhancement in a fascinating post dealing with “positional competition.” It may be that the social pressure to neutralize disadvantages or to seize advantages is one reason why people feel obliged to adopt technologies.

Another reason why people may feel obliged to adopt technologies arises from a problem at the heart of using technological fixes for socially-constructed disadvantages. By “socially-constructed disadvantages” I mean human characteristics that do not entail any actual harm to an individual other than the negative social valuation of those characteristics. Paradoxically, attempts to neutralize socially-constructed disadvantages through technology merely strengthen that social construction. This has the effect of reinforcing the pressure on the disadvantaged group to “fix” itself to conform to the social expectation.

Several examples could be cited here. As Clare Chambers discusses, the availability of “virginity-restoring” surgery for women may enable them to elude the effects of a double standard applicable to men and women with respect to sexual freedom. At the same time, it strengthens the double standard that forces women in some places to resort to the surgery. In other words, the technological response and the discriminatory norm are in a mutually-reinforcing feedback loop.

In The Case Against Perfection, Michael Sandel discusses the government-approved use of human growth hormone as a height-increasing drug for healthy children whose adult height is projected to be in the bottom first percentile. This allows a few to gain in stature, leaving the rest to seem even more unusually short due to their decreased numbers. It does nothing to disrupt the socially-constructed disadvantage of being short.

In other cases, a technology offers an escape from what appears to be a real rather than a socially-constructed disadvantage. For example, the discovery of insulin and methods to produce it cheaply and efficiently have proven to be helpful in promoting equality at least with respect to pancreatic functioning and health. Interestingly, insulin is an excellent example of a technology that cannot fuel a technological enhancement arms race. As far as I know, insulin is of no use to non-diabetics. As a result, it can only close an inequality, without offering the possibility of seizing an advantage through supra-normal amounts of insulin.

All of this suggests to me that technology has a peculiar effect on human autonomy. The technologies offer us opportunities which, at first glance, would seem to promote autonomy. They expand the range of options open to the individual, and leave it to each person to adopt them or not.

However, there are various reasons that technologies become “offers you cannot refuse.” Society restructures itself to presuppose the use of certain technologies so that it becomes hard to exist in society without them. In addition, human competition for advantage maintains a continuous pressure to adopt technological enhancement. Finally, technologies offer the opportunity to people to neutralize socially-constructed disadvantages. This is most insidious from the perspective of human autonomy since the social expectations that fuel the demand for the technologies are reinforced by those very technologies.

Tuesday, February 10, 2009

"Science discovers, genius invents, industry applies, and man adapts himself..."

One of the slogans of the 1933 Chicago World’s Fair was the following: “Science discovers, genius invents, industry applies, and man adapts himself to, or is molded by, new things...Individuals, groups, entire races of men fall into step with science and industry."

This wasn’t a new idea. There is a long-standing strand in human thinking about technology that emphasizes the important (and sometimes apparently decisive) effect of our technologies on society. In 1620 Sir Francis Bacon wrote in the Novum Organum that:

“…it is well to observe the force and virtue and consequences of discoveries, and these are to be seen nowhere more conspicuously than in those three which were unknown to the ancients, …; namely, printing, gunpowder, and the magnet. For these three have changed the whole face and state of things throughout the world; the first in literature, the second in warfare, the third in navigation; whence have followed innumerable changes, insomuch that no empire, no sect, no star seems to have exerted greater power and influence in human affairs than these mechanical discoveries.”

Numerous subsequent writers have raised the same suggestion that the technologies that we create and use have profound effects on social structures and on human history. At the extreme, technology itself is viewed as a phenomenon that drives history and society. It seems likely that this idea is, in part, true. However, at the same time, technologies are produced and used by a given society and so are themselves determined by that society. In other words, the influence appears to flow in two directions between technology and society, and it is difficult to untangle the primary cause (if there is one).

And yet, the complexity of this interaction sometimes makes it seem as if technology calls the shots. As they said at the World’s Fair, society and individuals “fall into step with,” “adapt to,” or are “molded by” the technology. In this pair of blog postings, I would like to tackle two questions. First, have technology and technological ideology so pervaded the law and judicial thinking that it can be said that the law is determined by technology rather than that technology is controlled by the law?

The second blog posting will look at the effects of technology on the autonomy of the individual human being rather than the effects of technology on the collective self-determination of humans in a society. In that second post, I would like to explore the mechanisms by which individuals come to feel obliged to adopt a given technology, and how inequality (of power, natural or material resources) between humans drives this process. With this second posting, I am indebted to Frank Pasquale, whose excellent recent posts in this blog and previous writing on equality and technology have spurred my thinking in this direction. My discussions with my good friend and extremely insightful colleague at the University of Ottawa, Ian Kerr, on the complex effects of technology on human equality were both fun and deeply illuminating too!

Onward with the first posting!

A year or so ago I published an article that asked whether courts control technology or simply legitimize its social acceptance. I raised this possibility because I kept coming across judgments suggesting that either (1) our legal rules are biased in favour of technologies and against competing non-technological values, or (2) judges find ways to reframe disputes in ways that tend to favour technologies. This is a bold accusation, and it is possible that counter-examples could be proposed. However, let me give two examples to illustrate what I mean.

The doctrine of mitigation in tort law states that a plaintiff who sues a defendant for compensation cannot recover compensation for those damages that could reasonably have been avoided. So far, so good. It makes sense to encourage people to take reasonable steps to limit the harm they suffer. In practice, however, this rule has been applied by the courts to require plaintiffs to submit to medical treatments involving various invasive technologies to which they deeply objected, including back surgery and electro-shock therapy. Although plaintiffs have not been physically forced to do so, a seriously-injured plaintiff may face considerable economic duress. Knowing that compensation will likely be withheld by the courts if they do not submit to a majoritarian vision of reasonable treatment, they may submit unwillingly to these interventions. I think that this doctrine operates in a way that normalizes the use of medical technologies despite legitimate objections to them by individual patients.

In the trial level decision in the Canadian case of Hoffman v. Monsanto, a group of organic farmers in Saskatchewan attempted to start a lawsuit against the manufacturer of genetically-modified canola. The farmers argued that because of the drift of genetically-modified canola pollen onto their crops, their organic canola was ruined and their land contaminated. The defendants responded that their product had been found to be safe by the Canadian government and that it had not caused any harm to the organic farmers. Instead, the organic farmers had brought harm upon themselves by insisting on adhering to the organic standards set by organic certifiers and the organic market. The trial judge was very receptive to this idea that the losses flowed from actions of organic certifiers and markets in rejecting genetically-modified organisms, and not from the actions of the manufacturers. I find this to be a very interesting framing of the dispute. In essence, it identifies the source of harm as the decision to reject the technology, rather than the decision to introduce the technological modification to the environment. Once again, the technology itself becomes invisible in this re-framing of the source of the harm.

These judges do not set out to make sure that humans adapt to the technologies in these cases. Instead, I think these cases can be interpreted as being driven by the ideological commitments of modernity to progress and instrumental rationality. An interpretation of the facts, or a choice of lifestyle, that conflicts with these ideologies sits uneasily within a legal system that itself reflects these ideologies.

More recently, I have begun to explore a second question along these lines. If judges and our legal rules are stacked in favour of technologies and against other values, what happens when it is the judges themselves who are in conflict with the technologies? Do the judges adapt? Here I turned to the history of the polygraph machine (lie detector), and the attempts to replace the judicial assessment of veracity with evidence from the machine. The courts have generally resisted the use of polygraph evidence on two bases. First, they say, it is unreliable. Second, the assessment of veracity is viewed as a “quintessentially human” function, and the use of a machine for this function would dehumanize the justice system. While the judges appear to be holding the line against the attempted usurpation by the machine of this human role in justice, it is interesting to speculate about how long they will be able to do so. Will they be able to resist admitting reliable machine evidence, particularly given concerns about how reliable humans actually are at detecting lies? Novel neuro-imaging techniques such as fMRI, which purport to identify deception by patterns of activity in the brain, represent the next step in this debate. If these neuro-imaging techniques are refined to the point that they are demonstrably superior to human beings in assessing veracity, would it be fair to exclude this evidence in a criminal trial? The right to make a full answer and defence to criminal charges may say “no.”

I am currently researching neuro-imaging technologies and their use in the detection of deception in order to predict how our law may be affected by them. In the background is the continued question: Is it true that “Science discovers, genius invents, industry applies, and man adapts himself to, or is molded by, new things...Individuals, groups, entire races of men fall into step with science and industry"?