THE RATIONALITY OF VALUE CHOICES IN INFORMATION SYSTEMS DEVELOPMENT
 
September 26, 1996

Heinz K. Klein, SUNY at Binghamton
Rudy Hirschheim, University of Houston
 

ABSTRACT
 
Following Immanuel Kant's classic view of practical reasoning, it is suggested that the scope of information systems development (ISD) practices should not only embrace rules of skill, but also rules of prudence and rules for rational discourse about competing value standards. Without warranted value standards (design ideals) the choice among conflicting goals in the development of information systems cannot be based on reason. The paper examines the dilemma which results from the inevitability of value choices in the practice of ISD. It discusses some principles by which value choices can be approached in a rational way.
CONTENTS
  1. INTRODUCTION
  2. WHEN DOES THEORETICAL KNOWLEDGE BECOME "PRACTICAL"?
  3. THE INEVITABILITY OF VALUE JUDGMENTS IN DESIGN APPROACHES
  4. A RATIONAL APPROACH TO VALUE CHOICES
  5. CONCLUSIONS AND FURTHER IMPLICATIONS
  6. REFERENCES

1. INTRODUCTION

In carrying out activities, individuals typically follow a set of rules. Tennis players are taught a set of rules allowing them to hit the tennis ball in a fashion which will cause the ball to cross the net landing in the opponents' court. Students follow rules (some formal, others informal) which allow them to score well in their classes. Information systems developers, either implicitly or explicitly, follow a set of rules which help them to undertake their development tasks. Rules then, we would argue, are ubiquitous. They are, to a greater or lesser extent, a structured vehicle for representing knowledge. They can be used for passing on knowledge about how best to carry out some task, function or activity; about what we ought or ought not to do; about what ends are worthwhile and what are the best means for achieving them; and so on.

Rules are important in the practice of any profession. Consider the field of information systems development. Many rules have been proposed to aid the practice of ISD, ranging from very detailed rules on how to update a file to very comprehensive rule systems in the form of methodologies and standards for system development (cf. structured analysis and design methodologies). However, most of these deal with technical rules of skill and not with rules for determining value choices.

The intent of this paper is to show that in the practice of ISD, decisions about value choices should not be neglected; that indeed there are ways for rationally making value choices which we will present later. It is our contention that equal emphasis must be placed on three types of rules: Skill, Prudence and Categorical. Rules of skill are concerned with the physical propensity and dexterity to carry out certain operations to achieve specified ends. Rules of prudence are concerned with judgments to achieve ends which informed and reasonable people would not question as being worthwhile, such as designing systems which are acceptable to their intended users. Categorical rules are concerned with choices where the ends themselves are in question, such as a choice between developing systems which are acceptable to one set of stakeholders (e.g. workers on the shop floor) or another (e.g. a company's shareholders).

The classification of practical rules into rules of skill, rules of prudence and categorical (moral) rules was first proposed by Immanuel Kant (cf. 1785, 1788 - for translations see Kant 1964, 1929). It is our belief that what Kant described in the general context of practical rules also holds for the practice of ISD. Whenever information systems are applied, they serve some human interests; therefore, the design choices made to serve some interests at the expense of others involve moral value judgments. This means that practical advice concerning the design of computer-based information systems must not be limited only to technical rules of skill, but should also address what is good or bad, or right or wrong in any particular application. The key purpose of this paper is to suggest how such value judgments may be approached in a rational way. This necessitates the adoption of a wider concept of rationality, one which allows the insights of critical social science to be brought to bear on the question of how information systems can be used to the benefit of the greatest number of people.

The next section expands on the "Kantian" classification of practical rules to substantiate the claim that the current orthodoxy in the IS literature suffers from a "technical neutrality bias." Next, it is demonstrated that the design of every artificial (i.e. man-made) system is guided by a design ideal - a value concept which is the rationale for the system and provides the legitimation for its cost and operation; and that choosing among conflicting design ideals poses a practical dilemma. The last section explains how fundamental value choices can be made "rationally." At the core of this argument are concepts, principles and rules which could guide an informed moral discourse in which the value choices are decided by the force of the better argument rather than by power politics or blind appeal to social convention.

2. WHEN DOES THEORETICAL KNOWLEDGE BECOME "PRACTICAL"?

Theories can be understood as one form of knowledge representation, but not necessarily the only one (cf. Polanyi 1958). Theories become practical if they serve human interests and thereby become "a cause determining the will" (Kant 1964, p. 128). The practical implications of theories involve a deep concern with the "what" and "how" of human action. The orthodox model of science emphasizes how to do things but refuses to help people decide on the ultimate ends of their actions. In brief: the how preempts the what. Kant, however, suggested that science, as the instrument of human reason, can help to decide not only the how, but also the what. Rules which are only concerned with the how are called hypothetical rules. They are divided into rules of skill and rules of prudence.

2.1 Rules of Skill

Rules which are only valid under the condition that certain ends are accepted as "given" are called hypothetical. Hypothetical rules have the general form "in order to achieve X one needs to do Y." Rules which do not question the rationality or goodness of the end toward which they are applied are called rules of skill. They are entirely situation dependent. Rules of skill are often derived from mathematics and the empirical sciences. An example of a rule of skill given by Kant (1964, p. 83) is that of a doctor prescribing certain drugs to cure a patient. Similarly, engineers would use rules of skill in the design of buildings. Typical rules of skill in ISD are concerned with efficient programming and software engineering.

2.2 Rules of Prudence

A second type of hypothetical rule applies to situations where reasonable and well-informed persons agree that the ends are worthy of universal acceptance. The example given by Kant is that of human happiness. Ignoring for the moment the problem of defining happiness, one can note that in the tradition of utilitarian ethics it has, indeed, been held that every rational being would agree that the pursuit of happiness is a worthwhile end, whether it is his or her own (egoism) or that of the greatest number of people (maximization of average total utility). Rules which are said to further the end of happiness are called rules of prudence. Kant (1964, p. 83) writes: "... skills in the choice of means to one's own greatest well-being can be called prudence in the narrowest sense." In the realm of ISD, rules of prudence are largely limited to professional advice to analysts about how to succeed in their careers by implementing successful systems. Consider the implementation strategies recommended by Alter (1980): divide the project into manageable pieces, keep the solution simple, gain top management support, etc. Of course, similar advice exists for those opposed to new systems (cf. Keen 1981), because they feel that systems change may pose a threat to their interests. The widespread resistance to and dissatisfaction with information systems (cf. Lyytinen and Hirschheim 1987) might suggest that their designs have not been guided by rules of prudence.

2.3 Categorical Rules

As long as there is conflict about which human interests lead to the betterment of 'the human condition', rules of prudence must remain, by necessity, hypothetical. Hence Kant proceeds to the question of whether there are any rules which must be followed categorically, i.e. rules that describe an action which is "good in itself" (Kant 1964, p. 82) and therefore necessary. As an example Kant cites the laws of morality - believing that a single standard of morals can be found by human reason. In search of such a standard, he proposed the categorical imperative. Essentially it says that the principles of all our choices and actions should be such that we would accept them as a "reasonable" standard of general legislation which also applies to ourselves. In the practice of ISD, this would mean that analysts must not object to using the same type of systems in their work that they design for others. For instance, if a project team implements a system which forces people to sit long hours in front of CRTs or reduces challenge, variety, freedom, or security for users, then project team members should have to accept similar automated support in their own work.

The common observation that no universal moral law has been found to date does not relieve the practitioner of the need to make moral judgments "somehow" in the application of information systems. Hence a practical problem exists whether or not one admits the possibility of a unique system of values solely determined by human reason. Addressing this practical problem has been hindered by the myth that science is value free and therefore need not deal with value choices. Applied science has become almost synonymous with technical rules of skill, with some important exceptions (e.g. the application of practical psychological or sociological insights in psychotherapy and social counseling). The following section attempts to examine what categorical value choices must be faced in designing information systems, and how these choices may be guided through the exercise of human reason rather than the forces of blind convention or inertia.

3. THE INEVITABILITY OF VALUE JUDGMENTS IN DESIGN APPROACHES

Value judgments, while influenced by many factors such as emotions, traditions, and self-interest, aim at achieving some universal good. This is also true in the area of system design. Underlying every system design is a "design ideal": a description of the ultimate good, i.e. the desirable features and outcomes, to be achieved through systems change. The design ideal is important because it guides the design and implementation of IS applications.

3.1 Examples and Implications of Design Ideals

Typically a design ideal attempts to achieve a balance among several ends. This involves delicate trade-offs and judgments about values. The values may be more human in nature (e.g. need for social relations, peer recognition) or technical. For example, even the design of a simple thermostat is the result of a compromise between accuracy, cost, reliability, and ease of use.

Nor are design ideals new. In Plato's Republic there is an extensive discussion of the interrelationships between justice, wisdom or knowledge, social harmony and competitive strength. Tables 1, 2 and 3 summarize some of the design ideals that currently motivate IS development and IS applications. The classification is not meant to be exhaustive nor closed. In fact, social progress occurs through the construction of new social design ideals.

The design ideals of Table 1 were extracted from the study of electronic funds transfer systems (Rule 1975; Kling 1978). There exists some interdependence of design ideals at different levels of human action, for example, between the values that are seen to govern modes of social control in society in general and in the work place. This has been expressed most succinctly perhaps by the notion that constitutional democracy does not stop at the corporate gate. Due process, checks and balances, and participation are held to apply throughout. The design ideals of Table 2 were extracted from the work of Klein (1981), who attempted to highlight four primary design ideals in the information systems field. The design ideals of Table 3 come from Hirschheim (1986) who, following Burns (1981), sought to articulate a set of criteria for the social and ethical acceptability of new information technology.

TABLE 1: The Kling/Rule Core Design Ideals
1. Private Enterprise Ideal. "The preeminent consideration is profitability of the EFT systems, with the highest social good being the profitability of the firms providing or utilizing the systems. Other social goods such as users' privacy or the need of the government for data are secondary."
2. Statist Ideal. "The strength and efficiency of government institutions is the highest goal. Government needs for access to personal data on citizens and needs for mechanisms to enforce obligations to the state always prevail over other considerations." 
3. Libertarian Ideal. "The civil liberties as specified by the U.S. Bill of Rights are to be maximized in any social choice. Other social purposes such as profitability or welfare of the state are to be sacrificed if they conflict with the prerogatives of the individual."
4. Neopopulist Ideal. "The practices of public agencies and private enterprises should be easily intelligible to ordinary citizens and be responsive to their needs. Societal institutions should emphasize serving the common man."
5. Systems Ideal. "The main goal is that EFT systems be technically well organized, efficient, reliable, and aesthetically pleasing."

TABLE 2: Klein Design Ideals
A.) Socio-technical Ideal. The principal objective is to optimize the interrelationships between the social and human aspects of the organization and the technology used to achieve organizational goals. Both the quality of working life (satisfaction of human needs at work) and profitability or system efficiency are to be improved through a high degree of fit between job characteristics (as defined by work design) and a limited set of human needs (as defined by social and psychosomatic profiles).
B.) Decision Support Systems Ideal. The final criterion is "to help improve the effectiveness and productivity of managers and professionals." A necessary condition for achieving this is to gain user acceptance and actual systems use. In some parts of the literature this is seen as a worthwhile goal in itself. In any case, the emphasis is on tailoring the system design to the user's needs and preferences. Sophisticated techniques for analysis of personality traits or other psychometric variables such as cognitive style, locus of control, motivational type, attitudes etc. are proposed as design tools. Thorough "understanding" of the user's problems and a strategy of mutual trust building through highly participative systems development in a series of small, adaptive steps with rapid feedback through demonstrations are seen as the "right" approach.
C.) Dialectical Inquiry Ideal. Above all, information systems must be designed such that they produce maximal motivation to generate "objective" knowledge through the synthesis of the most opposing points of view, each supported by the best available evidence. Truth and objectivity are of prime importance and can only be found through an adversary system in which the competing points of view are confronted, such as in a court of law. Peace, willingness to compromise, or consensus create dangerous illusions which threaten the objectivity of knowledge and justice. (cf. Churchman 1970, 1981; Mason 1969; Cosier et al. 1978)
D.) Participatory Design Ideal. Emphasizes that the process by which systems are developed may be more important than the features of the final product, because "people should be able to determine their own destinies." Hence the ultimate moral value to be achieved through participation is human freedom, which then leads to such other goods as motivation to learn, genuine human relationships based on equality, and commitment to what one has freely chosen to accomplish. (Mumford 1983; Land and Hirschheim 1983)

TABLE 3: Hirschheim Design Ideals (after Burns 1981)
1. The technology or IS application and/or its function should be intelligible to the community as a whole. If not, the community should at least be willing to give tacit acceptance to those experts who do understand it. 
2. It must fulfill a socially useful purpose. The application of technology must address social needs that are recognized as justified by an informed democratic consensus. 
3. It should be under the operational control of the local work force. If not, the local work force should at least have a say in its control. 
4. It should, wherever possible, use indigenous resources and skills, and contribute to their growth.
5. It should not displace more desirable jobs than it creates but should, if possible, eliminate undesirable ones.
6. Its production and use should present no undue health hazards or risk. 
7. Wherever possible, it should be non-polluting, ecologically and aesthetically sound. 
8. It must not lead to external cultural domination. 
9. Its use should permit those elements of work which are recognized as being related to high job satisfaction to be improved (for example development of new skills, task variety, challenging tasks, and the like). 
10. It should not disturb the existing social order if this is widely accepted as just and fair, unless the new social order brought about by the change advances and improves society as a whole and can be defended by Rawls' principles of justice (Rawls 1971).

3.2 The Neutrality Myth

There is a widespread belief that the difficulties of making value choices should be avoided in the exercise of professional methods, implying, of course, a professional ethic of impartiality and objectivity. However, the inevitability of moral value choices in the application of information systems is supported by historical evidence which shows that technology is not "neutral" (cf. Braverman 1974; Briefs 1980; Cooley 1980; Kling 1980; Burns 1981). Mowshowitz (1984) argues this point vociferously when he notes that the use of computers is conditioned by:

  1. what they are (the inherent limitations of the current state of the art);
  2. how they are produced (by a technology which requires the mastery of a certain know-how and therefore favors the interests of certain groups in society at the expense of others);
  3. the techno-cultural paradigm which influences the way in which individuals and groups approach the development and deployment of computers. For example, the data collected by Kumar and Bjorn-Andersen (1990) suggest that the same piece of technology which is marketed in North America with the promise of cost savings is marketed in Denmark with the promise of a higher quality of working life. Whether either claim has any merit would seem to depend on the value choices which govern how this particular piece of technology is embedded into the larger application environment.
These three factors combine to favor some uses of computers and prevent others. "The mere fact that computers can be used to improve the quality of life or to increase citizen participation in government does not mean that they will be so used." (Mowshowitz 1984, p. 85). Mowshowitz (1984) concludes by noting: "The neutrality thesis is every bit as naive as the stork fable of human birth" (p. 86). However, the inevitability of difficult value choices must not be taken as an excuse for inaction for fear of potential abuses. Inaction favors the status quo, which could be worse than change. Therefore we advocate that reason should guide action, no matter how imperfectly, and in section 4 we address how this can be done.

3.3 The Dilemma of Choices Between Conflicting Design Ideals

Because of the myth of neutrality and the concomitant inevitability of value choices in the design of computer-based information systems, a practical dilemma exists for both the practitioner and the researcher. The official view of the systems analyst is that of an impartial change agent who, because he is supposedly acting for the common good of the organization, can expect voluntary cooperation. If the neutrality of information systems is a myth, analysts are forced to admit that in their work they are by definition interest bound. They are more like lawyers or partisan advocates. This analogy suggests that partisanship is not necessarily bad, because without lawyers it would be impossible to adjudicate justice. In general it is safe to assume that there is a certain amount of conflict between different groups in organizational life, and, as a consequence, analysts are forced to take sides. The disadvantaged parties will realize this and view the system developers' actions with suspicion. At best, they will not volunteer information and at worst they will provide false information in so-called counter-implementation strategies (cf. Keen 1981). On the other hand, some analysts might decide to side with user interests and oppose systems which threaten the quality of working life, employment, and the like (cf. Ehn 1988).

A dilemma also exists for the academic researcher. According to the orthodox or "received" view that science should be value free, the scientist must not engage in value judgments in his relationship to the object domain, i.e. the field of inquiry. (For a brief description of the received view of the nature of science, see Mattessich 1978, pp. 258-265.) But if our argument on the neutrality myth is valid, then this received view is in conflict with reality, because, for the reasons indicated, the ideal of a value free scientific method cannot be implemented in the application of ISD. Research into improving such applications is not value free because it is concerned with practical advice for designing "good" systems that will necessarily serve some interests, to a greater or lesser degree, at the expense of others.

These dilemmas can be resolved if the doctrine of an impartial professional practice based on a value free scientific method is abandoned in favor of a much broader concept of science. Radnitsky (1970, p. 1) has proposed such a broader view: "We conceive of 'science' essentially as a knowledge improving enterprise." Knowledge in this sense is not limited to what can be learned from empirical data collection or mathematical deduction, but includes all human insight and wisdom that can be exposed in moral discourse. In moral discourse, the competing value claims are interpreted, related to each other, and justified. This is taken up further in the next section.

4. A RATIONAL APPROACH TO VALUE CHOICES

If practical knowledge about how to approach value conflicts is accepted into the domain of science, it implies a revival of the kind of concerns which Kant discussed under the concept of categorical rules. In contemporary social theory such concerns continue to be investigated, see for example Churchman's (1970, 1981) systems approach, or Rawls' (1971) Theory of Justice. In this paper, we turn to selected elements of Habermas' (1984) critical social theory (CST) and critical rationalism to identify the conceptual issues that must be addressed if choices among conflicting design ideals are to be justified by human reason rather than by power or appeal to convention.

CST has proposed a number of principles by which the legitimacy of moral value choices can be checked, or as Habermas puts it, by which the claims underlying such choices can be "redeemed." These principles relate to the concept of rational discourse. A rational discourse can legitimize the selection of a design ideal because it assures that the arguments of all interested parties are heard, that the choice results in an informed consensus about the design ideal, and that the formal value choice is made only by the force of the better argument. Because of the importance of the rational discourse concept, its principles need to be considered in some detail.

4.1 The Concept of Rational Discourse

From the perspective of CST, design ideals must be validated by an informed and voluntary (authentic) consensus which has been achieved through a debate which satisfies the conditions of a rational discourse as stated below. The intent of these conditions is to ensure that all viewpoints and all arguments supporting and contesting each viewpoint have an equal chance to be heard. Basically, a rational discourse is defined by the ideal conditions which should characterize an informed, democratic, public debate. In such a debate no force should influence the outcome except the force of the better argument.

The concept of a rational discourse can be applied to the selection of design ideals at the organizational level. Contrary to the widespread belief that value choices are axiomatic, we will show how logical forms of reasoning can be used to construct arguments supporting and contesting alternative design ideals. It is important that the rational discourse about design ideals arrives at an informed consensus in the relevant community without undue pressures. This provides the prerequisite that the force of the better argument alone decides upon the preferred design ideal or ideals. (There are, of course, barriers to the implementation of the rational discourse concept and these are addressed in section 4.2.) If no unique design ideal emerges because of equally strong arguments for more than one design ideal, the tie must be broken by some democratically fair voting mechanism, such as might occur after a parliamentary debate.

According to Habermas (1973, pp. 255-256), the following four conditions must be met by a rational discourse. These conditions are also said to define an "ideal speech situation" or a communication community (cf. Apel 1973):

  1. All potential participants in a rational discourse must have an equal opportunity to begin a discourse at any time and to continue it by making speeches and rebuttals, and by questioning and answering. Habermas calls this an equal chance to use communicative speech acts.
  2. For all participants there must be an equal opportunity to interpret, to assert, to recommend, to explain and to justify as well as to question or to give evidence for or against the validity claim of any of these forms of speech. The purpose of this condition is to assure that in the long run, no presupposition or opinion can escape from becoming the center of discussion and criticism.
  3. All participants are presumed to be equally able to express their attitudes, feelings, and intentions. These Habermas calls representative speech acts. They serve as a guarantor against self-deceit, illusions, and insincerity of members of the speech community towards one another.
  4. All participants are presumed to be equally able to give and refuse orders, to permit and prohibit, to promise or ask for promises, to account and ask for accounting, etc. Habermas refers to these as regulative speech acts. They guarantee that the formal chance of equal distribution of opportunity to begin or continue a discourse is realized.
It has been widely recognized that the full realization of these conditions is not possible. However, there are typically two lines of reasoning to address the issue of rational discourse implementation. First, from a practical perspective, it would be sufficient if the implementation of a rational discourse eliminates the worst inequities and assures a reasonable amount of fairness in the arena of communal debate, such as might be realized in a well-functioning parliament.

Second, to deny the practical approximation of a rational discourse is self-defeating, because through the denial one is by definition already engaging in a discourse. As Apel (1973) points out, anyone entering a dialogue in principle presupposes the possibility of a rational discourse. By definition, a signal for dialogue, if sincere, entails the willingness to submit oneself to the counterfactual norms of ideal speech, i.e. not to use force, to listen to counter-arguments, etc. Hence the very attempt to start a dialogue presupposes that a form of communication is possible that is unimpeded by the usual cognitive, emotional and social barriers to rationality, at least to some extent.

"He who enters a discourse implicitly recognizes all possible claims of the members of the communication community which can be justified by reasonable arguments ... and at the same time he commits himself to justify his own claims against others by arguments. In addition, all members of the communication community (and that implicitly means: all thinking beings) in my opinion are also obliged to consider all virtual claims by all virtual members, i.e., all human 'needs' insofar as they could make claims to fellow human beings." (Apel, l973, p. 425)

When applying these principles to the justification of design ideals, the rational discourse would have to move through three steps: (a.) identification of possible design ideals (such as those provided in Tables 1, 2 and 3), (b.) improvement of the information available to all participants in a discourse through critical reconstruction and analysis of the implications of the alternative design ideals, which relates to overcoming motivational and organizational barriers to inquiry, and (c.) construction of arguments to form preferences in favor of the design ideal which can marshal the strongest evidence on its behalf. Such evidence might consist of an evaluation of its congruence with the values and democratic ideals accepted at the level of society. If these steps do not converge on one design ideal, then an additional step is needed to legitimize the final selection of one design ideal through some form of voting. In the following, we expand primarily on points b. and c.

4.2 Barriers to Rational Value Choices in Organizational Decision Making

In implementing a rational discourse, two key issues must be addressed: (1) the barriers to rationality which exist in the current practice of organizational decision making, and (2) the nature and principles of arguments by which one can reason about competing moral claims. (Each of these is discussed in some detail below.) If conventional logic is limited to deductive reasoning with factual premises and causal laws (as orthodox science suggests), then there is great difficulty in reasoning about moral claims. However, we hope to show that logical principles exist which allow the plausibility of value claims to be checked in a fashion similar to the reasoning rules in propositional and predicate calculus. These principles have the same axiomatic status as the deduction rules in propositional and predicate calculus.

4.2.1 Barriers to the Rationality of Organizational Decision Making

For a rational discourse to be effectively implemented, three types of barriers must be overcome:

  1. Social barriers exist because of inequalities in power, education, resources, etc. Social inequalities lead to bias of perception and presentation, blockage of information, and conscious or subconscious distortions.
  2. Economic and motivational barriers exist because time constraints make it impossible to deal with all possible participants, arguments, and counterarguments as suggested above. In practice, the psychological and economic cost of debate prevent this. People are simply not motivated to argue for a long time to deal with all possible objections and implications. Furthermore there are social norms that discourage disagreement and thereby breed conformity. Hence commitment to rational discourse implies changing existing social norms from those working against openness and sharing, to rules of discourse which favor good communication. This entails instilling a new, preferred type of social ethics with supporting customs.
  3. Linguistic barriers exist because the rationality of human communication tends to suffer from conflicting and ambiguous meanings, difficulty in expressing complex matters, limits of the human brain to comprehend lengthy reports and other factors which impede mutual understanding.
This latter barrier is particularly thorny because of the pervasiveness of language and the ease with which the rationality of the discourse is threatened by the limitations of language, producing communication gaps, unintentional misunderstandings, and possibly intentional misrepresentations or distortions. Therefore, any argument presented in the discussion of differing design ideals must be checked against four types of validity claims: (i) intelligibility, (ii) truth, (iii) veracity, and (iv) normative justification or legitimacy. (In ordinary, simple conversation these four claims are usually taken for granted, yet they tend to be highly problematic when addressing complex social issues. This explains why even well-meaning and well-informed people tend to have difficulty reaching the same conclusions on complex social issues.) Intelligibility refers to the assumption that the meaning of what is being said is clear to all concerned. Intelligibility must be satisfied before the other claims can be checked. Truth refers to the correspondence between factual claims and the actual state of affairs. Veracity refers to the sincerity of intentions of the speaker, that each is speaking honestly and without guile. Legitimacy implies that the claimed value statement is consistent with norms and principles which have been validated through a rational discourse. For instance, lower level laws are legitimate if they do not violate the norms of the constitution. The norms of the constitution are legitimate if they have been amended by due process. The same principles can be applied to the validation of design ideals by a rational discourse. The validation of values in this fashion we term "moral discourse."

In practice, these three barriers typically lead to a low level of rationality in organizational decision making. Nevertheless, and in spite of these real difficulties, it is possible to see how these barriers could be overcome. This would involve: (1) changing social attitudes, and (2) importing democratic checks and balances into organizations. In the former, the underlying challenge is to change the social attitudes of organizational actors so that principles of criticism and logical analysis are elevated above social norms of conformity and acceptance of customs and traditions. Insofar as social norms mutually stabilize and reinforce existing attitudes and beliefs they are undesirable from the perspective of a rational discourse and need to be constantly contested. In the case of the latter, there are some classic examples of the creation of social institutions and arrangements which facilitate a rational discourse. Historical examples include the separation of government powers, and the adversarial system represented by courts of law and bicameral parliamentary systems. They ensure that the social justification of decisions can be made rational, a principle which could also apply to value judgments in the context of management.

4.2.2 Overcoming the Barriers to Organizational Decision Making

In order to overcome the barriers to organizational decision making discussed in the previous section, we need to find organizational arrangements which encourage critical thinking, evidence collection and the formation of opinions in response to the best evidence, disregarding extraneous influences such as vested interests or social conformity. One of the first to recognize the need to overcome "organizational intelligence failures" arising from hierarchy, departmentalization (specialization) and dogmatic opinion was Wilensky (1967). Among the countermeasures he recommended (pp. 175-178) were: deploy a team or project organization, communicate out of formal channels, rely on informed outsiders, avoid consensual judgements or agreement by exhaustion, develop interpretive skills, minimize loyalty security criteria, insure media competition and diversity, avoid invasion of employee privacy, invest more in general orienting analyses, and use institutionalized adversary procedures or equivalents.

In overcoming the barriers to organizational decision making it is necessary to encourage the articulation of opinions and their sharing through participation in debate by appropriate organizational and technical designs. An example of such a design and its technological support is group decision room software (Dennis et al. 1988; Vogel et al. 1990). This type of software: protects each participant from authoritarian domination by ensuring anonymity; overcomes time and space barriers to fully informed debate through distributed conferencing and electronic mail; provides accessibility of knowledge through electronic libraries and indexing systems; and offers the facility of keeping track of the interlinked chains of arguments occurring during the decision conference, thereby improving the transparency of complex arguments. Such types of systems could also remove some of the time and cost constraints by maintaining continuous organization-wide discourse on all agendas of interest. Unfortunately, these systems have not systematically considered the issue of how to provide adequate protection from the many ways in which illegitimate uses of power may intrude upon communication. As it is very difficult to anticipate all subtle, illegitimate social uses of power, a social authority is needed to oversee the respect for the arrangements approximating the ideal speech situation. The institution of an information systems ombudsman could be proposed as such a social authority.

Another organizational arrangement for improving the rationality of decision making which recently has received some attention is the concept of quality circles. Their agendas could perhaps be adapted to include the legitimation of design ideals. In addition, other organizational arrangements are possible: using nominal group approaches; applying assumptional analysis; and using dialectical debate. In nominal grouping, proponents of different design ideals could agree to speak out in a fixed sequence similar to the arrangement of a panel discussion. Assumptional analysis could reveal the underlying presuppositions and beliefs of the different interest groups. Dialectical debate could focus attention on diametrically opposed views with supporting and contradicting bases of evidence. It would thereby help to reveal the relative merits of conflicting value choices. Through dialogue, these and other approaches can help to improve the understanding between different groups in the organization which otherwise would remain distant from each other.

Having now discussed the first of our two key issues associated with implementing a rational discourse (i.e. the barriers to rationality which exist in the current practice of organizational decision making and ways to overcome them), we now turn to the second issue: the logical structure of arguments by which one can reason about competing moral claims.

4.3 The Nature and Principles of Arguments about Value Choices

In the previous section we were concerned with overcoming the motivational and organizational obstacles to human inquiry which is needed to improve the information base for the construction of arguments. The following section is concerned with the construction of the arguments themselves and thereby explains how value conflicts can be addressed through logical reasoning instead of politicking.

4.3.1 The Concept of the Logical Structure of Arguments

Figure 1 shows the logical structure of arguments by which validity claims can be checked. It also applies to moral discourse. Figure 2 illustrates, via an example, how the logical argument structure can be used to represent the argument for participation. Another argument could be constructed for the claim that nonparticipatory approaches to system design are illegitimate. In principle it is possible to reproduce such arguments for and against participation as were presented in Mumford (1984), Land and Hirschheim (1983), and Davis (1982). From such arguments one aspect is selected for closer scrutiny, namely the question of how competing value claims can, in principle, be refuted.


Figure 1: The general schema of rational arguments in discourse (cf. Toulmin 1958)

EVIDENCE (typically causes for natural events, motives or intentions for human behavior)
  therefore: CLAIMS, QUALIFICATIONS
  because of: WARRANT (typically general laws for claims about nature; values, norms and regularities for claims about human affairs)
  unless: REBUTTAL
  as supported by: BACKING (the body of knowledge which is used to justify the WARRANT if challenged)
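
For readers who find a programmatic rendering helpful, the schema of Figure 1 can be expressed as a simple data structure. The following sketch is purely illustrative and not part of Toulmin's formulation; it is written in Python, and the class and field names are our own, chosen only to mirror the labels in the figure.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ToulminArgument:
        """Illustrative rendering of the Toulmin (1958) argument schema."""
        evidence: List[str]   # grounds: causes, motives or intentions offered in support
        claim: str            # the conclusion argued for, including any qualifications
        warrant: str          # the law, value or norm licensing the step from evidence to claim
        backing: str          # the body of knowledge invoked if the warrant is challenged
        rebuttals: List[str] = field(default_factory=list)  # "unless" conditions under which the claim fails

        def is_contested(self) -> bool:
            # An argument remains contested as long as rebuttals stand unanswered.
            return len(self.rebuttals) > 0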

4.3.2 Principles for Reasoning about Competing Value Claims

Albert (1968) has proposed a number of "bridging principles" which allow value claims to be refuted in a fashion similar to the way empirical theories or factual truth claims can be refuted by the scientific method. The basic approach the scientific method uses to refute a truth claim is to show that a theory leads to predictions which are not consistent with informed opinion of the facts. This model can be generalized by applying the following four rules:

  1. Ought implies can. From the classical modus tollens it follows that "cannot implies not ought", which is the same as the notion "ultra posse nemo obligatur" (a formal sketch of this inference is given after this list). Hence an ethical system or design ideal can be criticized for requiring or postulating things which cannot be done or are demonstrably contrary to natural laws. Care must be taken not to mistake unquestioned social conventions for "laws of human nature."
  2. The principle of congruency of the legitimation of norms with established - but still fallible - human knowledge. For instance, if an ethical rule is justified as being "God's will" (cf. St. Augustine's justification of slavery) and cannot be justified otherwise, it can be criticized for being incongruent with established scientific principles. The principle of congruency can be used to cross-check design ideals that operate at different levels of society. Slavery, for instance, is seen to be unconstitutional - one does not change the constitution to allow slavery, but forbids slavery. The basic structure of argument applying this idea is set out in Figure 2.
  3. New experiences. If it becomes obvious that an ethical system or design ideal leads to unforeseen undesirable consequences, or if someone uncovers evidence that in all likelihood it will lead to undesirable consequences, it is subject to review.
  4. New moral ideas. An example of this is Rawls' (1971) criticism of utilitarianism.

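For principle 1 above, the underlying inference can be written out formally. The notation is ours (O(A) for "A ought to be done", C(A) for "A can be done"); the step is simply classical modus tollens applied to "ought implies can":

    % Bridging principle 1 ("ought implies can") read contrapositively
    \[
      \bigl( O(A) \rightarrow C(A) \bigr), \;\; \neg C(A) \;\;\vdash\;\; \neg O(A)
    \]

In this form, a design ideal is refuted by showing that what it demands cannot in fact be done.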


Figure 2: Example of the application of the Toulmin schema
An argument for the justification of the participatory design ideal results if the following insertions are made in Figure 1:

EVIDENCE: observations about disgruntled users and the reasons for their dissatisfaction with IS, invisible backlog, red tape of system use, etc.
CLAIMS: we ought to adopt the design ideal D (participatory system development)
WARRANT: generally accepted reasons why democratic forms of social choice are supposedly superior
REBUTTAL: ?? (fill in preferred objections)
BACKING: historical data and philosophical arguments about the social conditions and quality of life in different life forms, e.g., democratic versus authoritarian.
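
For illustration only, the insertions of Figure 2 can be written down using the hypothetical ToulminArgument sketch given after Figure 1; the content strings simply restate the figure.

    participation_argument = ToulminArgument(
        evidence=["disgruntled users and the reasons for their dissatisfaction with IS",
                  "invisible backlog",
                  "red tape of system use"],
        claim="we ought to adopt the design ideal D (participatory system development)",
        warrant="generally accepted reasons why democratic forms of social choice are supposedly superior",
        backing="historical data and philosophical arguments about the quality of life in "
                "democratic versus authoritarian life forms",
        rebuttals=[],  # to be filled in with preferred objections during the discourse
    )

    print(participation_argument.is_contested())  # False until a rebuttal is entered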

Using Albert's (1968) "bridging principles", design ideals (or the value statements which they contain) are seen to be fallible and thereby become amenable to Popper's (1963) falsification approach, which in its original form was primarily meant for the refutation of descriptive theories by confronting them with factual evidence. The bridging principles extend this logic to ethical theories. A design ideal can be treated as a theoretical conjecture derived from some ethical theory (e.g. the competitive ideal from utilitarianism) on which various types of evidence can be brought to bear in order to refute or corroborate it. The bridging principles provide a way to overcome the gap between "is" and "ought" through critical reasoning. They show how the "naturalistic fallacy" can be avoided in moral discourse. The naturalistic fallacy is committed if someone tries to derive prescriptive or evaluative statements from purely factual (descriptive) evidence. Hume's thesis that one cannot derive "ought" from "is" is less cogent if it must be admitted that many statements have descriptive, evaluative and prescriptive content at the same time. This was shown by Hare (1952) in his analysis of "The Language of Morals." Another counterargument to Hume comes from Searle (1964). His concern with "How to Derive "Ought" from "Is"" led to the development of illocutionary logic, by which the common sense logic of everyday speech can be modeled in a much more comprehensive fashion than by classical predicate calculus (cf. Kutchera 1975; Searle 1979). It is now apparent that Albert's (1968) suggestion of constructing appropriate bridging principles can, indeed, serve as a starting point for a rational foundation of ethics. Due to the work of others, particularly based on Toulmin's (1958) "The Uses of Argument", an outline of a theory of informal logic which can deal with the logical structure of moral reasoning is now visible.

A somewhat simplified illustration of the logical structure of moral discourse using some of the bridging principles is presented in Figure 2. The purpose of this particular example is to demonstrate how the choice among competing design ideals can be approached with sound reasoning principles. This is accomplished by showing how a conflict between two ideals at the level of system design can be resolved by appealing to the first two of Albert's logical bridging principles.

Figure 2 depicts a situation where someone argues that the participatory design ideal (cf. Table 2) is too expensive and therefore not feasible. The argument would likely contend that it would diminish the profit rate, leading to an insufficient capital base, and ultimately lead to bankruptcies and widespread unemployment. Clearly, if this were true, the participatory ideal would not be in the general interest, as employees would lose their livelihood.

The counterargument might begin by pointing out that the above line of reasoning is essentially an application of bridging principle 1, in that it is claimed that the ideal of participation cannot be practiced in a market economy. This is then backed by the undesirable side-effects of bankruptcy and unemployment. This line of reasoning can be contested in two ways. First, by data which show that some companies or national economies do practice highly participatory, consensual forms of decision making and do not go broke. If such data are produced, it amounts to an application of bridging principle 3 - a new experience - in light of which the infeasibility argument cannot be upheld in its original form. Alternatively, if no such data can be produced, the appeal to bridging principle 1 can be contested with principle 2: if a nonparticipatory (authoritarian) system implementation strategy is incongruent with the Jeffersonian principles of a democratic life style, and the latter is held to be of higher moral status than economic profits, then it is justified to mandate industrial democracy by legislation for all companies, thereby restoring equal conditions for competition. In fact this has happened in Norway and to some extent in Germany (cf. codetermination legislation). This in turn raises the question as to whether or not the Jeffersonian ideal of democratic decision making ranks higher than profits. If there is a consensus on this issue, the argument is decided. If this in turn is contested, it must be dealt with as a new claim to be decided in a separate moral discourse.

In sum, this section presented an approach to deal with value conflicts through informed discourse consisting of three parts: (1) clear identification of the pertinent design ideals; (2) inquiry to collect information to shed light on the implications and tradeoffs of conflicting design ideals; and (3) construction of arguments to facilitate the convergence on a preferred design ideal. In the case where convergence is not complete, one would have to resort to voting to resolve the remaining conflict.

It is our contention, however, that the lack of convergence to a unique system of moral criteria cannot be turned into a counterargument against such an approach. We propose the following two logical analogies from physics and mathematics to support this. It is now generally acknowledged to be unclear whether physical theories converge towards some ultimate body of truth about the "real" structure of time, matter, and energy. Yet, this is not an argument to abandon the application of the theories of physics. Similarly, the incompleteness and undecidability of axiomatic systems of a certain complexity (cf. Gödel's proof) is not an argument for abandoning the application of mathematics altogether. By the same token, if ethical theories cannot be shown to converge, it does not mean that the application of the principles of moral discourse should be abandoned.

5. CONCLUSIONS AND FURTHER IMPLICATIONS

In this paper we have set out to show: (1) that the practice of ISD cannot escape value choices; and (2) how, at least in principle, such choices can be approached rationally. Although the literature appears to perpetuate the view that it is not the role of the systems designer to get involved in "politics", that value choices should be left to other stakeholders (cf. DeMarco 1978), we believe that systems designers should be involved in value choices, and moreover, that there is a way to rationally deal with them.

Simply put, it is our belief that ignoring or refusing to deal with the issue of value choices is an unacceptable position for the IS community to adopt. If value conflicts are not dealt with through democratic, rational means, they will be suppressed or dictated by managerial fiat or appeal to tradition, and ultimately lead to overt or covert forms of hostility. The consequences of this include: the growth of bureaucratic infighting; the loss of flexibility through the entrenchment of the status quo; rising alienation with the associated symptoms of high personnel turnover, apathy, scapegoating, low productivity, work-to-rule, and other forms of subtle sabotage as a means of expressing discontent. These kinds of reactions to ignoring value conflicts are not only undesirable but impose a heavy opportunity cost of foregone benefits.

In contrast, an approach to addressing value conflicts that builds on the principles proposed in this paper not only avoids the aforementioned opportunity costs, but is also likely to produce important externalities (i.e. secondary benefits). The identification of design ideals would have a clarifying effect on management policy. The open forum of the rational discourse encourages identification with and commitment to organizational goals. The organizational goals would be reformulated through the rational discourse because they would not be treated as given.

The many organizational and technical arrangements proposed above for overcoming the organizational barriers to improving inquiry (group decision room software, quality circles, etc.) would help to improve the general level of knowledge and communication in the organization. Hence all other organizational activities would benefit from implementing a rational discourse on value choices. These benefits would accrue because a rational discourse motivates participation, stimulates critical thinking yet also encourages the sharing of ideas, and puts a premium on collective problem solving, all of which stimulates creativity. Note how the resolution of value conflicts creates conditions similar to those embraced by the management strategy of social empowerment (Zemke and Schaaf 1989; Lawler 1986; Lawler et al. 1989).

We would like to conclude by speculating on what might happen if the approach advocated here spreads from the literature to leading edge organizations to the whole economy. There would be a macroeffect for society in the sense of producing a society which provides a higher quality of life for its entire work force. Surely this is an ideal whose time has come!

REFERENCES

Albert, H. (1968), Traktat über kritische Vernunft, J.C.B. Mohr (Paul Siebeck), Tübingen.

Alter S., (1980), "Decision Support Systems: Current Practice and Continuing Challenges," Addison-Wesley, Reading, Mass.

Apel, Karl-Otto, (1973), "The Apriori of the Communication Community and the Foundations of Ethics", in English, Apel, (1980).

Apel, Karl-Otto, (1980), "The Transformation of Philosophy", Routledge and Kegan Paul, London.

Braverman, H., (1974), "Labor and Monopoly Capital", Monthly Review Press, New York.

Briefs, U., (1980), "The Effects of Computerization on Human Work - New Directions for Computer Use in the Work Place", in Mowshowitz, A. (ed), Human Choice and Computers - 2, North-Holland, Amsterdam.

Burns, A., (1981), "The Microchip: Appropriate or Inappropriate Technology", Ellis Horwood, Chichester.

Churchman, C.W., (1970), "Challenge to Reason," Delta Books, New York.

Churchman, C.W., (1981), "The Design of Inquiry Systems," Basic Books, New York.

Cooley, M., (1980), "Architect or Bee? The Human/Technology Relationship," Hand and Brain Publication, Slough, England.

Cosier, R., Ruble, Aplin (1978), "An Evaluation of the Effectiveness of Dialectic Enquiry Systems", Management Science, Vol 24, No 14, October, pp. 1483-1490.

Davis, G., (1982), "Strategies for Information Requirements Determination," IBM Systems Journal, Vol 21, No 1, pp 4-30.

DeMarco, T., (1978), "Structured Analysis and System Specification," Yourdon Press, New York.

Dennis, A. R., George, J. F., Jessup, L. M., Nunamaker, J. F. and Vogel, D., (1988), "Information Technology to Support Electronic Meetings," Management Information Systems Quarterly, Vol 12, No 4, pp. 591-624.

Dewey, R. and Hurlbutt, R. (eds.) (1977), An Introduction to Ethics, Macmillan, New York.

Ehn, P., (1988), "Work-Oriented Design of Computer Artifacts", Arbetslivscentrum, Stockholm.

Habermas, J., (1973), "Theory and Practice," Beacon Press, Boston.

Habermas, J., (1984), "The Theory of Communicative Action - Reason and the Rationalization of Society", Vol 1, Beacon Press, Boston.

Hare, R. M., (1952), The Language of Morals, Oxford University Press, Oxford.

Hirschheim, R., (1986), "The Effect of A Priori Views on the Social Implications of Computing: The Case of Office Automation", Computing Surveys, Vol 18, No 2, June, pp 165-195.

Kant, I., (1964), "Groundwork of the Metaphysics of Morals", translated and analysed by H.J. Paton, Harper Torch Books, New York.

Kant, I., (1929), "The Critique of Practical Reason", Harper Torch Books, New York.

Keen, P., (1981), "Information Systems and Organizational Change", Communications of the ACM, Vol 24, No 1, pp 24-33.

Keen, P., and Scott-Morton M., (1978), "Decision Support Systems: An Organizational Perspective", Addison-Wesley, Reading, Mass.

Klein, H.K., (1981), "The Reconstruction of Design Ideals", paper presented at the TIMS Conference, Toronto, May.

Kling, R., (1978), "Value-Conflicts and Social Choice in Electronic Funds Transfer Developments", Communications of the ACM, Vol 21, No 8, pp 642-657.

Kling, R., (1983), "Value Conflicts in Computing Developments", Telecommunications Policy, March.

Kumar, K. and Bjorn-Andersen, N., (1990), "A Cross-Cultural Comparison of IS Designer Values," Communications of the ACM, Vol 33, No 5, May, pp 528-538.

Kutchera, F. von (1975), "Philosophy of Language," D. Reidel, Dordrecht.

Land, F. and Hirschheim, R., (1983), "Participative Systems Design: Rationale, Tools and Techniques", Journal of Applied Systems Analysis, Vol 10.

Lawler, E., (1986), "High-Involvement Management," Jossey-Bass, San Francisco.

Lawler, E., Ledford, G. and Mohrman, S., (1989), "Employee Involvement in America: A Study of Contemporary Practice," American Productivity & Quality Center, Houston.

Lyytinen, K., (1986), "Information Systems Development as Social Action: Framework and Critical Implications", Ph.D. thesis Department of Computer Science, University of Jyvaskyla, Finland.

Lyytinen, K. and Hirschheim, R., (1987), "Information Systems Failures: A Survey and Classification of the Empirical Literature", Oxford Surveys in Information Technology, Vol 4.

Lyytinen, K., and Hirschheim, R., (1988), "Information Systems as Rational Discourse: An Application of Habermas' Theory of Communicative Rationality", Scandinavian Journal of Management Studies, Vol 4, No 1/2, pp 19-30.

Lyytinen, K., and Klein, H., (1985), "The Critical Social Theory of Jurgen Habermas as a Basis for a Theory of Information Systems", in Research Methods in Information Systems, E. Mumford, R. Hirschheim, G. Fitzgerald, and T. Wood-Harper (eds), North Holland, Amsterdam, pp 219-236.

Mason, R. (1969), "A dialectical approach to strategic planning", Management Science, Vol 15, April, pp B403-B414.

Mattessich, R. (1978), Instrumental Reasoning and Systems Methodology: An Epistemology of the Applied and Social Sciences, D. Reidel Publishing Company, Dordrecht.

Mowshowitz, A., (1984), "Computers and The Myth of Neutrality", in Proceedings of the 1984 Computer Science Conference, Philadelphia, February 14-16, pp 85-92.

Mumford, E., (1984), "Participation - From Aristotle to Today," in Bemelmans, T. (ed.), Beyond Productivity: Information Systems Development for Organizational Effectiveness, North-Holland, Amsterdam, pp 95-104.

Ngwenyama, O., (1987), "Fundamental Issues of Knowledge Acquisition: Toward a Human Action Perspective of Knowledge Acquisition." Ph.D. thesis, Watson School of Engineering, State University of New York - Binghamton.

Polanyi, M., (1958), "Personal Knowledge: Toward a Post-Critical Philosophy," University of Chicago Press, Chicago, Illinois.

Popper, K., (1963), "Conjectures and Refutations," Routledge & Kegan Paul, London.

Radnitsky, G., (1970), "Contemporary Schools of Metascience," Akademiforlaget, Gothenburg, Sweden.

Rawls, J., (1971), "A Theory of Justice", Harvard University Press, Cambridge, Mass.

Rule, J., (1975), "Value Choices in an Electronic Funds Transfer Policy," Office of Telecommunications Policy, Executive Office of the President, Washington, D.C., October.

Searle, J. R., (1964), "How to Derive "Ought" from "Is"", Philosophical Review, Vol. 73, January. Reprinted in Dewey, R. and Hurlbutt, R. (eds.) (1977), An Introduction to Ethics, Macmillan, New York, pp. 452-463.

Searle, J. R., (1979), "Expression and Meaning," Cambridge University Press, London.

Toulmin, S., (1958), "The Uses of Argument", Cambridge University Press, Cambridge.

Vogel, D., Nunamaker, J.F., Martz, B., Grohowski, R. and McGoff, C., (1990), "Electronic Meeting System Experience at IBM," Journal of Management Information Systems, Vol 6, No 3, pp 25-43.

Wilensky, H., (1967), "Organizational Intelligence, Knowledge and Policy in Government and Industry", Basic Books, New York.

Zemke, R. and Schaaf, D., (1989), "The Service Edge: 101 Companies That Profit from Customer Care," New American Library, New York.