

Intention

An action is either intentional1.5 or not. Faden and Beauchamp define an intentional action as one ``willed in accordance with a plan, even though it may not be wanted'' (p. 243). They arrive at this definition by investigating what it means to act intentionally: an act of intending does not itself qualify as an intentional action, because one can intend to do something but never get to do it1.6, but an action performed under such an intention can be called an intentional action. In this section I follow Faden and Beauchamp, giving an exposition of their position, with qualifications to take into account when applying that position to information technology situations.

They identify the problem of accidental action following intention: an action performed by mistake or inadvertently cannot be intentional, even if it was preceded by an intention to perform it. This implies that in order to carry out an action intentionally, a person must have some preconceived notion of how it ought to occur. To examine the intricacies of this problem more closely, consider someone who drops their coffee cup, which strikes the left mouse button while they are filling out a form in their Web browser. They intended to click that button, but did not intend to drop the coffee cup in order to do so. The clicking of the button is type-identical with the action that was intended, but the dropping of the coffee cup was unintentional. We thus have a deviant causal chain whose end state is the same as that which would have resulted from the original intention. Faden and Beauchamp acknowledge that understanding plays a large role in intentionality, because one must understand that one is performing a particular action. In this case, because the intention was to click on the button, it is a happy accident that the user ended up being clumsy1.7.

Another concern for Faden and Beauchamp is acts that are performed merely because they allow another, wanted act to be fulfilled. An example of this in information technology might be installing advertisement-serving software in order to use another piece of software for free. Such an act may not be particularly desirable, but it may be tolerated in order to achieve the wanted act. This leads Faden and Beauchamp to base their account of intentionality on a ``model of willing'', in which tolerated actions are intended, rather than a ``model of wanting'', in which they are not, so that acts that are tolerated but not wanted still count as intentional. In this way, someone might view a Web page that carries advertisements, not intending to view the advertisements, but tolerating them in order to view the content in which they were interested. However, Faden and Beauchamp take pains to assert that ``acting under the condition of being willing is only one kind of intentional or willed action'' (p. 246), and illustrate this by contrasting a person who will tolerate an action with a person who is ``positively eager'' to do it. An example of this in information technology would be two users who install the same piece of software: one may be ``positively eager'' to install and use it, whereas the other may merely tolerate installing it because they are required to use it for work. Since many decisions in information technology (and elsewhere) involve tolerating unpleasant side-effects, toleration is an important concept to consider within a theory of informed consent for IT. I will elaborate on this in Chapter 2, when looking at some case studies, and in Chapter 3, when discussing the issue of numbness.

Faden and Beauchamp reject the idea that intention can be measured by degrees [Miller et al., 1971]. They counter the argument that the amount of will-power used in an act can be used to judge its degree of intention by showing that some actions, especially repeated actions, can start out well-thought-out and obviously intended but end up mindless and automatic, while the actor's intention remains the same. It would in any case be very difficult to tell whether a particular series of actions fits this pattern, because even though an action may be automatic, that does not show that any less will-power drives it. I could sit down to read a paper, automatically translating squiggles and dots into letters in my brain, but that does not mean I lack the will-power to do so. They also argue against the idea that the degree of intention can be derived from how closely the execution of an intended action fits the intended action plan. Because this idea relies on qualitative judgements about which variations matter to an action plan, they claim, it is ultimately difficult to judge what the degree of intention might be. They resolve the conflict by distinguishing deliberate from intentional acts, allowing for degrees of deliberation but not of intention. An example given to illustrate this is that automatically switching on a light when you enter a dark room is less deliberate than deciding to undergo major surgery, but both actions are equally intended. There could, though, be a great amount of deliberation that precedes an automatic action.
You could, for example, deliberate over the merit of turning on a light when you enter a dark room (perhaps you can see fairly well in the dark, or are environmentally conscious and so wish to be careful with your electricity use), and decide to turn on the light whenever you encounter such a room. At some point the action becomes automatic, despite the deliberation that preceded the decision. Note, however, that if Faden and Beauchamp's argument above is applied to deliberation itself (that is, because assessing degree relies on qualitative judgements, the degree of deliberation cannot reliably be judged), it would follow that there could be no degrees of deliberation either.

There is some disagreement about whether intention can be measured in degrees, since the metaphysics of intended action is contentious. The primary issue at stake, however, is that the law and other applied normative domains require an action to be classified as either intentional or not; for such practical purposes there can be no degrees of intention, regardless of the underlying degrees of deliberation that inform the action.

Therefore, intention is a necessary condition of an autonomous authorisation, and for our purposes it cannot be measured in degrees (that is, an action is either intentional or unintentional). Alongside an intended action there may be outcomes that are not wanted but are tolerated; this does not make the action any less intentional. The concept of intention is important for a theory of informed consent for information technology, because there are many subtle influences in computing that could lead a user to make a particular decision unintentionally. This will be elaborated on later in this dissertation.


Catherine Flick 2010-02-03