Trust is a concept often taken for granted in these sorts of informed consent situations. In medical settings, for example, a patient places a certain amount of trust in their doctor and, through that, an implicit trust in the institutions supporting the doctor, the schools that trained the doctor, and so on. Some references to trust in the medical informed consent literature compare the gaining of trust with the gaining of consent, in ``the modern clinical ritual of trust'' (O'Neill, 2003, p. 4). Others mention trust in the context of paternalism, where a patient cedes their autonomy to a doctor, trusting the doctor to decide what is best for them on their behalf. Usually, however, trust is associated with the particular type of relationship between a clinician and a patient or subject.
It is important to distinguish between three things often conflated as ``trust'': reliance, trustworthiness, and trust itself. For our purposes it makes sense to talk about reliance on an institution, such as the laws or standards governing the vendor or manufacturer above; trust as qualified reliance on the vendor or manufacturer to, say, abide by those laws or standards; and trustworthiness as a trait of the vendor or manufacturer itself (in this case). More generally, however, in information technology cases it is hard to identify a distinct relationship between a computer user (in the `patient' role) and some other entity in the `doctor' role. Should this be the computer itself? The manufacturer of the computer? The operating system vendor? The vendor of some individual piece of software? The software itself? All of these are institutions or things we rely on or trust, and it is hard to draw a definitive conclusion as to who should play the `doctor' role. In some situations, it is easy to choose who or what to trust: a user may well place their reliance in a piece of software to perform a particular task, such as an anti-virus guard protecting their computer from viruses, or a calendar application reminding them when it is their mother's birthday. Users can place reliance and trust in these things, just as they can place them in surgical equipment to perform its tasks correctly, but this is still not the same sort of relationship as the patient-doctor relationship.
The closest we can therefore get to a human-human relationship is with the humans involved in the manufacture of the artefacts: the user can choose whether or not to put their trust in hardware, software, and operating system vendors to provide trustworthy products. This situation obviously differs from the doctor-patient relationship, because users rarely get face-to-face time with vendors, and vendors usually mass-produce their products without individual user feedback. The nature of trust in these situations thus becomes more complicated because of the levels of abstraction of trust, or the chains of implicit trust that need to be formed. This is supported by the literature: Gambetta's definition of trusting someone includes the implicit meaning ``that the probability that he will perform an action that is beneficial or at least not detrimental to us is high enough for us to consider engaging in some form of co-operation with him'' (Gambetta, 2000). Although the same could be said for interactions between people and any aspect of computing, Rutter disagrees, claiming that ``trust is a good traded between individuals rather than between people and mediating technologies'' (Rutter, 2001). There is still trust in the technology, however; it is simply an embodiment of the trust placed in the developers and vendors of the technology as to its quality. A user may say, for example, that they trust a particular vendor, such as Apple or Microsoft, or that they trust a particular piece of software to handle their finances or filter their email for spam.
Computer users therefore need to approach their relationship with technology differently from that with a doctor or researcher, because the trust they have in vendors and software developers is implicit, often embodied in the trust the user has in the technology itself. The sort of trust implicit in informed consent situations relies a great deal on developing a personal relationship with the doctor or researcher, based on personal experience and the opinions of others (word-of-mouth recommendations, etc.), whereas it is very difficult for computer users to have personal relationships with computer vendors and developers. It is thus up to the vendor or developer to reach out to the user and establish a trusting relationship by conforming to the user's normative expectations. This leads back to the complex and intertwined relationship between trust and informed consent, and the importance of establishing a good communication framework between developers and vendors on the one hand and computer users on the other.
I realise that I have not presented a completely thorough discussion of the nature of and problems with trust, though I delve into these issues more deeply in Flick (2004). I have, however, established the relationship between trust and informed consent within an information technology context: vendors and developers need to establish a trusting relationship with their users by (at the very least) conforming to the normative expectations of the user, and, where any expectations are likely to need waiving, by providing a thorough informed consent policy by which they are waived. It is this relationship that I will use in my argument for the theory of informed consent set out in this chapter.