Chapter 2 - Literature Review
Peter H. Jones - The Union Institute
Chapter 2 – Review of the Literature
This discussion presents the literature covering the critical issues of the research, starting with a critique of technology in society and the values systems in science and engineering. Then, focusing closer to the research domain, I address philosophies of design process and management, current practices in information systems and software engineering, and problems of values in systems and organizations.
The Research Context
This research fits within an interdisciplinary convergence of three broad disciplines: organizational psychology, design studies, and information systems. At this intersection we find interdisciplinary fields informed by all three areas, specifically participatory design (PD), organizational informatics, and innovation management. Knowledge management, social systems design, computer-supported cooperative work (CSCW), and work practice research also inform this intersection of interests.
Interest in technology’s social impact has been a key motivator of the field of participatory design, as PD has historically supported emancipatory research interests. The other referent disciplines also support social research orientations, but do not foster action research toward improving social welfare. Organizational informatics (Orlikowski, 1992, Kling, 1996) contributes analyses and conceptual models for evaluating the relationship between organizational behavior and information systems design and deployment, and orients research toward understanding social processes in computing environments. Innovation management and design management contribute research, case studies, and practices that allow understanding of the actual practice of design work and innovation in organizations. Organizational studies contribute a vast periphery of research examining organizational cognition and values systems (Weick, 1979, 1993), learning organizations (Argyris, 1992), organizational culture (Schein, 1985), and organizational design (Staw, 1984).
Across these disciplines, no research trend describes specific management and design processes that embody organizational and personal values. Most research focuses on how systems design embodies organizational and designer values (Friedman, 1997, Kumar and Bjorn-Andersen, 1990), or how organizational structures and processes affect the acceptance and use of systems (Zuboff, 1986, Kling, 1996, Poltrock and Grudin, 1994). A related body of work from activity theory (Nardi, 1996, 1999, Engeström, 1996) analyzes the multiple users and owners in the organizational context, using interpretive approaches from case research. The activity theory approach and methodologies directly inform this research.
Few studies have analyzed issues of organizational process and embedded values. The closest research directly addressing the problem evaluates the social impact of embedded design decisions in computing infrastructure (Hanseth and Monteiro, 1997, Star and Ruhleder, 1994). These studies share similar organizational and group behaviors with those of the product innovation domain in this study. Hanseth and Monteiro (1997) described how the process for instituting design standards for network infrastructure “inscribed” or embedded a set of preferred values to enforce desired behavior and use of the standard. Their case study analysis also addressed the implications of standards “lock-in,” which fixes standards in place for many years, often benefiting those setting the standard, without the possibility of participation from the many users affected. Hanseth and Monteiro also reinforce the findings from research on classification and coding schemes, where specific organizational interests are inscribed within an information infrastructure (Bowker, Timmermans, and Star, 1995).
Interpreting technology and social values
Pippin (1995) warns of the “greater concentration of a new sort of social power in fewer and fewer hands” being inconsistent with democratic society and socially relevant decision making. He predicts a deskilling of the labor force through automation, and suggests the technological emphasis simultaneously supports the technically efficient administration of rigid hierarchy. Pippin identifies issues such as job simplification, risks to worker safety incurred to increase efficiency, and loss of autonomy as specific workplace costs, asserting these “organizational strategies” are “all relatively inconsistent with basic, post-Enlightenment ideals of self-respect, dignity, and autonomy.” To some extent, these issues arise from a misplaced emphasis on “policy issues as technical issues” denying public participation in the outcomes of far-reaching decisions (Pippin, 1995, pp. 43-44).
Feenberg (1995) further suggests we face significant limitations to democratic policy and process. In his view, technology decisions affect the conduct of all facets of contemporary life, and must be subject to evaluation and interpretation by those affected by the decisions.
“Technology is one of the major sources of public power in modern societies. So far as decisions affecting our daily lives are concerned, political democracy is largely overshadowed by the enormous power wielded by the masters of technical systems: corporate and military leaders, and professional associations of groups such as physicians and engineers. They have far more to do with control over patterns of urban growth, the design of dwellings and transportation systems, the selection of innovations, our experience as employees, patients, and consumers than all the governmental institutions of our society put together.” (Feenberg, 1995, p. 3)
Both writers offer sobering critiques, yet they also differ widely. While Pippin’s issues may not seem overwhelmingly moral on the surface, Feenberg’s position holds that democracy itself, or at least public policy, becomes obviated by technology decisions. This position echoes Ellul (1954), who warned that the values he referred to as “technique” held the potential to overcome all social conventions, including political and national institutions. Ellul held that the problem was more than moral: technique (essentially the unquestioned values of modern technological society) affected human activity on a deeply pervasive level. “Technical activity automatically eliminates every nontechnical activity, or transforms it into technical activity” (1954, p. 83).
In looking at organizational values, for example, we might agree that autonomy in the workplace does not call for interventions deemed appropriate for child or slave labor conditions. But we abdicate rightful authority in our unwillingness to address moral problems of technology in a democratic society. We leave both technology design and appropriate evaluation to the same body of “experts,” giving up public rights to solve ethical dilemmas in technology. Winner (1995) notes problems of moral evaluation of technology wherein ethical aspects are left to “experts” and are expected to be handled by some designated community. He cites the NSF’s programs on “ethical and value studies,” in which sponsors officially designated “values experts,” expecting them to “eventually provide ‘solutions’ to the kinds of ‘problems’ whose features are ethical rather than solely technical. This can serve as a final tune-up for working technological models about to be rolled out the showroom door. ‘Everything else looks good. What are the results from the ethics lab?’” (Winner, 1995, p. 66).
But what should we consider appropriate values for technology? Monsma (1986) relates a structure of normative principles or technology values that map closely to those developed in the research. These were identified as cultural appropriateness, openness, communication, stewardship, delightful harmony, justice, caring, and trust. Monsma promotes the design of “culturally appropriate” technology, and places the ethical onus on designers. Designers, in effect, serve as society’s first guard for realizing humane values in the technological objects of daily life and work.
Other thinkers have opened up the domain of values in technology and design to the larger circle of culture. The hermeneutic perspective (Ricoeur, 1981, Gadamer, 1976, Habermas, 1972, Dreyfus, 1995) allows for both cultural historicity and the continual construction of new modes of perception and interaction. Technology becomes redefined through use, and its embedded values become reinterpreted through application. Technology is not neutral – but neither are technologies definable on their surface. Many issues only show up through application in the human “lifeworld.” Ethical problems are not always “built in,” but are frequently emergent, and sometimes only interpreted as problems through the lens of history.
Feenberg further interprets technology as a social object, which “ought to be subject to interpretation like any other cultural artifact, but it is generally excluded from humanistic study. We are assured that its essence lies in a technically explainable function rather than a hermeneutically interpretable meaning” (Feenberg, 1995, p. 22).
His argument raises a distinction of “allowable interpretation,” a notion at the core of conflicts between engineering and socially-sensitive design. In essence, what are we “allowed” to interpret about technology that will make a difference? By bracketing technological objects to allow interpretation only by function, engineering holds power over social meaning. Feenberg claims two hermeneutic dimensions that assist interpreting technological objects, that of social meaning and cultural horizon. Social meaning enables interpretation of objects based on shared values, e.g., the bicycle’s meaning as alternative transportation in the age of the automobile. Cultural horizon allows interpretation of the social future claimed by the object, its orientation and eventual manifestation in cultural values. The cultural horizon requires acknowledging corollary impacts, such as the automobile’s environmental and social impact on the future of shared culture.
Values in science, technology, and engineering
Many traditional scientists and research managers consider scientific study a “values-free” practice of investigating fact through objective measurement and impartial use of scientific method. Our educational systems are largely to blame for this enfeebling of science, and because most scientists do not seek a public voice, they give the public little chance to better understand the values and social impact of science.
In Science and Human Values, Bronowski (1956) shows the values foundations from which science grows, and describes its essential quest to help humanity. In a statement appearing to contradict a humanist position, he asserts science has succeeded because “man masters nature not by force but by understanding.” The values of science derive not from its thoughtful members or from tradition, but from its very practice, which is the “creation of concepts and their exploration in the facts” (1956, p. 60).
Bronowski identifies several key values constructs found throughout science in history and practice. Throughout the treatise he identifies truth, honesty, respect for human dignity, trust, creativity, independence, sharing, and love of natural beauty and order as values apparent in scientific practice. Underlying this practice, he claims “first, of course, comes independence, in observation and thence in thought. … Independence, originality, and therefore dissent.”
The values of science afford a starting point for the discussion, as they inspire a research orientation. However, in the world of engineering we find a different notion of values. Its emphasis is decidedly on the practical, not the search for truth or “unity in variety.” Engineering disciplines train their practitioners to evaluate cost-benefit and functional necessity, and social values are typically considered only in the areas of safety and human performance. Although engineering societies have developed ethics statements and have actively promoted the adoption of social values within engineering since the 1980s (Didier, 1999), engineering as practiced remains culturally bound to the values of effectiveness, efficiency, and competence.
Winner (1995) critiques a commonly accepted premise of engineering, suggesting that rather than following codes of independent ethical choice, engineers’ values-in-use arise from employer business needs. On the surface, the technical professions enjoy an image of following a moral code for technology choice. The ethical codes of engineering societies certainly promote serving humanity and the public. Winner suggests it becomes a practical matter.
“The moral autonomy of engineering and other technical professions is highly circumscribed. The historical evolution of modern engineering has placed most practitioners within business firms and government agencies where loyalty to the ends of the organization is paramount. During the 1920’s and 1930’s there were serious attempts to change this pattern, to organize the various fields of engineering as truly independent professions similar to medicine and law, attempts sometimes justified as ways to achieve more responsible control of emerging technologies. These efforts, however, were undermined by the opposition of business interests that worked to establish company loyalty as the engineer’s central moral concern.” (Winner, 1995, p. 74)
Ricoeur (1979) developed the hermeneutic approach to social meaning in technology. Identifying the failure of engineering to interpret the meaning of technology, he suggests this failure is embedded in the engineering orientation itself.
“It might be objected that this is merely an initial disagreement over goals with no hermeneutic significance. Once the object is stabilized, the engineer has the last word on its nature, and the humanist interpreter is out of luck. This is the view of most engineers and managers; they readily grasp the concept of ‘goal’ but they have no place for ‘meaning.’” (1979, p. 9)
Carey (1990) further offers hermeneutic evaluation as enabling an understanding of how language constructs reality. Tools and technology are transformed by our use and interpretation as much as we are affected and transformed. When designing, speaking, or acting in organizational life we are not merely behaving from skill and routine. Our interpretations create new ways of seeing, and then transformation. “It is to constitute a world, to bring a world into existence, and to simultaneously constitute a self. The artifacts of communication differ, as do the social practices they engender, but they are linked in a chain of transformation: a process whereby the world and the self is reconstituted.” (Carey, 1990, p. 23)
Medhurst (1990) describes the enculturation of values through technology, placing responsibility on both designers and users in their shared culture environment.
“It is not the tools that humans have created that constitute the problem. Instead, it is the way humans have conceptualized, communicated, and created various cultures with these tools; cultures that body forth values, attitudes, and incipient belief systems; cultures that privilege some and disenfranchise others; … cultures that often take on a life of their own, apart from conscious, human decision making; cultures that are in need of examination, analysis, and criticism.” (1990, p. xi)
Dreyfus (1995) advises resistance to the unquestioning acceptance of technology and technical values. Human agency must be considered over technical effectiveness within our practices of engineering and design. Dreyfus encourages researchers to adopt Heidegger’s position, to “affirm the unavoidable use of technical devices, and also deny them the right to dominate us, and so to warp, confuse, and lay waste our nature.” (Heidegger, 1966, p. 54) Finally, Dreyfus offers us a way out of technological control by gathering the humanity of the artifact, an approach that we might call “better living through hermeneutics:”
“We can break out of the technological understanding of being whenever we find ourselves gathered by things rather than controlling them. When a thing like a celebratory meal, to take, pulls our practices together and draws us in, we experience a focusing and a nearness that resists technological ordering. Even a technological object like a highway bridge, when experienced as a gathering and focusing of our practices, can help us resist the very technological ordering it furthers.” (1995, p. 102)
Software development and management values systems
Software development has been largely influenced by the engineering model of technology development. It has inherited engineering’s values and mechanical “building” approaches to development (an approach contrasted with Nardi’s (1993) descriptions of cultivating approaches to software design). Software design and implementation are not given to interpretations of meaning, but only of function. It seems software practice has an inherited legacy that constrains its contribution to organizational and cultural development.
Software development practice emerged from information systems traditions, which themselves grew largely from the traditions of mathematics, accounting, and engineering (Dahlbom and Mathiassen, 1997). The philosophy, education, and practices emerging from the engineering model of software have led to an emphasis on technological growth: the “building” model of software that satisfies corporate needs with continued progress toward information management (Pullinger, 1989). Engineering to corporate requirements can be considered somewhat responsible for organizational disregard for values in development processes (Kling, 1996). But software design and programming are not like building bridges; new software alters the work tasks of thousands daily. People’s tasks are not given consideration in the engineering model, a model that emphasizes the building of “functions” and not the design of work. The predominant “building” model focuses on generic functions, and explicitly avoids the social or workplace impact. Even when “social” factors are considered, they become misapplied due to their immersion in the engineering model (Kling, 1996).
Dahlbom and Mathiassen (1997) analyzed the trends and discussions in computing and scoped computing worldviews into three categories, identifying the roles of engineer, facilitator, or emancipator. The engineering role, albeit the dominant one, focuses on the artifact, using the building model of design. Facilitators see a focus on culture, and aim to evolve culture and systems together. Emancipators see a focus on organizational power, and use an intervention model of design. They propose the growing importance of information technology and its social impact requires computing professions and education to integrate deeper knowledge of the social and human use aspects of technology. “Simply put, we argue that rather than envisioning a new engineer with social skills in addition to technical skills, we have to change what we consider technical competence.” (1997, p. 86)
Researchers in the social dimensions of computing have clearly framed the special responsibility of computer professionals to steward social values in information technology (Huff and Finholt, 1994). Before the shift to the Internet model of computing, ethical concerns in systems design were frequently raised by researchers in a growing body of socially-oriented research and claims (Johnson, 1991, Borenstein, 1991, Kling, 1996, Friedman, 1997, Shneiderman and Rose, 1997, Bausch, 1997). These advocates have raised ethical issues related to appropriate task design, software quality, social impact, and human values in systems design. They have significantly raised the visibility of these issues in the field, at least among researchers. However, we must continue to enculturate these issues through education, work experience, and challenges in real projects.
Other philosophical battles surface between the disciplines of software engineering and computer science. West (1997) distinguished a “hermeneutic computer science” opposed to the predominant formalist model. He asks researchers to claim meaning as a component of system design and research.
“Formalists constitute the majority culture. The formalist paradigm in both philosophy and computer science is characterized by a belief in the value of centralization, control, hierarchy, predictability, and provability. … When applied to systems and computer science, the minority hermeneuticist paradigm centers on concepts of autonomy, multiple perspective, negotiated and ephemeral meaning, interpretation, emergence, self-organization, change, and evolution. Holders of this position would argue that any formal syntax … will fail to capture the semantics of the natural world.” (West, 1997, p. 115)
In fact, West’s article questioned the use of the terms computer “science” and “software engineering,” a significant critique to appear in computer science’s foremost journal. The influences of computer science philosophers such as Dreyfus and Winograd have paved the way for acceptance of an alternative paradigm, one that aligns with the natural world instead of the deterministic world of mathematics and hardware. West proposes hermeneutic approaches for complex systems, which include human-computer interaction, social systems, organizational interfaces, and collaborative software. He suggests that maintaining a deterministic paradigm in a world of complex systems is unproductive and out of date.
Hermeneutic approaches have been introduced in various research and practice dimensions, including reflection-on-action (Schön, 1983), ontological design (Winograd, 1986, 1995), participatory design (Ehn, 1993, 1997), and interpretive frames of reference (Orlikowski, 1992, 1994). Winograd’s original hermeneutic critique of artificial intelligence criticized the ubiquitous planning and problem-solving view of systems design. Winograd and Flores (1986) forwarded a model for design based on analysis of the ontological grounding of communicative behavior. However, this approach was later revised, informed by interpretivist approaches to situated activity (Suchman, 1987). Recent work (Winograd, 1995, 1996, 1997) has fully integrated the interpretive hermeneutic perspective, critiquing formalisms in human computer interaction and software design.
Of the Winograd concepts, the notion of “breakdowns” in particular reaches into this domain of values inquiry. Breakdowns emerge in action when the flow of interaction stops, typically because a misdesigned element interrupts the ongoing flow of interaction. A breakdown becomes noticed when the tool itself, rather than the work, becomes the focus of attention. An analysis of breakdowns can reveal values conflicts; breakdowns occur when emergent values clash through the use of a system. Interpretive and contextual approaches to system evaluation (Holtzblatt and Beyer, 1998) have been developed from this approach to identify and design to appropriate contexts and user value systems.
System development methodologies embed the models of software developers. Traditional formal and structured methods maintain distance from users and separate the designing roles into specific task categories. Traditional methodologies embed these practices of specialization and rational analysis in a problem-solving model. Selecting the methodology almost guarantees conformity with the roles and techniques associated with that tradition.
Muller and Kuhn (1993) describe the inherently political nature of system design in a published defense of participatory design. To the charge that PD seems to subordinate IS effectiveness to a social agenda, they point out that information systems already are affected by and serve a social agenda, that of customer satisfaction, management control, etc. They respond that any information system development can be seen as “an intensely political process,” and that IS design should be part of the overall design of work, including the social construction of work. Their questions reveal significant considerations to address in any analysis of organizational effectiveness of design.
“Whose social agendas are being served? Can the design be improved so that it serves the diverse social agendas of all its stakeholders? How can we bring all the stakeholders into the design process? What social and technological process can facilitate this mutual exchange and education?” (Muller and Kuhn, 1993, p. 18)
Participatory design, in confronting inequities of power and choice inherent in the design process, calls researchers and designers to make choice visible. Innovation and design work has always been political – it has the power to change perceptions, values, and lives. But it can also be philosophical, with the power to change how we think and make meaning.
Reflecting on computing’s 50th anniversary, Lanier (1997) articulates design’s possibility for integrating higher values, including art and aesthetics.
“Computer science is, alas, the only engine of culture that has not concerned itself with beauty. Why should we have? We didn't know we were making culture. We thought we were making invisible tools. We've been granted a surprise franchise as culture creators. In the next fifty years we have an opportunity, and a responsibility, to contribute in ways we never anticipated.” (1997, p. 12)
Organizations, Values and Design
Within an interdisciplinary study, relevant research extends indefinitely – each field offers its own branches from the problem, offering many perspectives on the same research problem. A useful review scopes the available fields to a manageable inquiry, leaving aside many fundamental works and assumptions to focus on closely related emerging research. Rather than reviewing each contributing discipline as a separate review section, I weave together research from across the literatures to address the research problems. Applicable research is reviewed in three cross-disciplinary sections: organizational values diffusion, embedded organizational processes, and organizational models in innovation management.
Organizational Values Diffusion
Two research problems must be considered with values-oriented research. First, what are the appropriate research models of values systems, and second, how do these values systems explain organizational behavior? The first problem is one of choosing or developing the most applicable model to reflect on values issues. The second is one of studying values diffusion, the process of values creation, adoption, and transfer within organizations.
Several values models are referenced across the organizational literatures (Rokeach, 1973, England, 1967, Vickers, 1972), indicating their acceptance and applicability to continuing research. Many researchers adopt Rokeach's definition, and have developed upon this well-accepted model of human values (Rokeach, 1973, Braithwaite and Law, 1985, Schwartz, 1994, George and Jones, 1997). Some researchers have used this prior work as a basis for studying or developing “universal” approaches to human values (Schwartz, 1994, Ellis and Hall, 1994). As defined by Rokeach (1973), values are “an enduring organization of beliefs” that serve as “general plans employed to resolve conflicts and to make decisions.” Rokeach’s values model shows personal choice based on appropriate behaviors (instrumental) or end states (terminal), both of which support personal or socially directed values.
Maslow (1965, 1971) developed a values model from the psychological model of the instinctual hierarchy of needs. Maslow distinguishes between deficiency values (D-values) and the “higher” values of being, B-values, which motivate individuals beyond merely personal value. Many of the B-values refer to almost Platonic ideal states, while many others represent non-controversial human and social values such as honesty, justice, and autonomy. Maslow’s work extended the notion of values to embrace a “fusion of facts and values,” and left a legacy of research questions and testable propositions that remain unanswered today.
Kohlberg’s six-stage developmental model of ethical reasoning (Kohlberg, 1969) offers a view similar to Maslow’s, essentially a developmental model with higher states (post-conventional) reached through learning and integration. Kohlberg describes levels of moral reasoning tested in research across cultures, but is more behaviorally based, evaluating the reasoning process rather than describing a model of normative values per se. Kohlberg’s model became a foundation for contemporary thinking in moral development, and has been tested against the changing view of normative ethics. Although notable criticisms of Kohlberg have focused on the appropriate distinctions in his six-stage model (Gibbs, 1979, Habermas, 1979), the more leveling critiques address biases in the Kohlberg approach. Snell (1996), among others, notes Kohlberg’s cultural biases toward Western values orientations, showing higher moral development levels for Western cultures. A compelling critique by Gilligan (1982) reveals inherent gender biases, wherein Kohlberg’s moral reasoning fails to consider an “ethic of care” as an emotionally-valid counterpart to the more intellectually-based “ethic of justice.” Although Gilligan supports the core aspects of Kohlberg’s model, contemporary research has attended to these biases, and studies consistently find women scoring higher in assessments of moral development, suggesting inherent gender biases may not be differentiated in practice (Murray, 1986, Hunter, 1997, Kanny, 1997). Murray’s (1986) study suggested men and women alike tended to use a combination of justice and care in their moral reasoning. Thoma (1986) analyzed 56 studies, revealing that women outscore men at every age and education level on Kohlberg’s model. This was recently confirmed by White’s (1999) study of 480 Coast Guard personnel, where women significantly outscored men using Rest’s (1986) Defining Issues Test based on Kohlberg.
Snell (1996) also responded to criticisms against Kohlberg’s approach (cultural bias, does not reflect personal decisions in ethical dilemmas) with an inductive study. Evaluating criticisms of Kohlberg’s model, Snell derived a values model (Map of Theory-in-Use Values, or MTV) using a phenomenological approach to construct a new developmental model. Finding the MTV model closely resembling Kohlberg’s, Snell concluded that the criticisms against Kohlberg did not hold. However, ethical theories-in-use were found to be volatile and situational, where individuals would draw from ethical reasoning across the Kohlberg stages to resolve complex dilemmas.
These findings must also be resolved against the work of Argyris (1992), whose theory of organizational learning points to various defensive behaviors that obviate moral reasoning. Faced with high stakes and potential loss of control, managers consistently behave from values and theories in-use directly opposed to their espoused values. Research might resolve this inconsistency by testing whether managers that maintain congruence between espoused and in-use values tended toward the higher Kohlberg levels, and whether those with more incongruence rated within Kohlberg’s or Snell’s first three levels.
I reviewed these models for use in developing research instruments for the PDE, but also used them as standards for reflecting on the meaning of the current research. I discuss more specific values and organizational research in the following sections.
Values models in the organizational context
Maslow (1967) also pointed to the possibility of B-values driving an emerging form of organizational leadership, writing of B-leaders and B-organizations as a desirable future state. At its simplest, Maslow distinguished between a B-manager and a D (deficiency-valued) manager as the difference “between seeking for power over other people and for power to do the job well.” Maslow believed enlightened management was a necessary function to survive and compete, and that authoritative management handicapped an enterprise. Maslow respected the pragmatism required in the business environment, and considered the humanistic approach practical and even patriotic. “If democratic, political philosophy means anything at all, then enlightened management can be considered under the head of democratic philosophy applied to the work situation” (Maslow, 1967, p. 61).
In his evaluations of management behavior, England (1967, 1975) deployed a 66-item survey instrument to assess the personal value systems of corporate managers. The instrument assesses responses to the 66 pre-defined values positions, which are analyzed across five groupings relevant to organizational values. These five clusters assess values relating to: Goals of business organizations, Ideas associated with people, Groups of people, Personal goals of individuals, and Ideas about general topics. Organizational research adopting England’s model (Oliver, 1999, Lusk and Oliver, 1972) has continued to replicate England’s findings, showing extraordinary stability of managers’ personal values. Managerial attitudes and personal values systems have remained essentially unchanged since England’s 1967 study, indicating how American management may refer as much to a set of values as to skills and behaviors. Oliver (1999) concludes, “the enduring personal values of managers will continue to influence the economic future of corporate America. This in turn will influence the role of the American society in the global society” (p. 147).
Maslow’s enlightened management and England’s management values research portray a gulf in management thinking that has remained unresolved for over 30 years. Even with the increase in values discussions in management journals, research in North American technology organizations reflects established values and authoritarian power structures.
The organizational culture literature notes the relationships between organizational structure and values. Schein (1985) showed how an organizational culture is described by the shared values and assumptions that underpin its structure and actions. Further developing this relationship, Meyerson and Martin (1987) distinguished three approaches to organizational culture - integration, differentiation, and fragmentation - showing values alignment in only some organizations, with many cultures exhibiting differentiation and ambiguity. Katsioloudes (1996)
Hinings, et al. (1996) show a strong relationship between archetypal organizations (those fitting one of the consistent forms of organization) and the values of their leadership. They found leadership values were consistent with the archetypal form, although they were not necessarily held unanimously among the elite. Their research suggests that when an organizational design fits a known archetype, we might predict the prevailing values of the organization to match those of that archetype.
Collins and Porras (1996) discuss core ideology and core values in organizational management. Understanding an organization as a complex social system requires adopting a contextual view to understand the core ideology in practice, and the core values of the organization. Formal statements of organizational values are insufficient to identify the actual values systems in use by actors within organizations. Collins and Porras describe a model for vision-building that endorses two components: core ideology and envisioned future.
Core ideology embraces core values and core purpose. “Core values are an organization’s essential and enduring tenets - the values it would hold even if they became a competitive disadvantage; core purpose is the organization’s fundamental reason for being” (Collins and Porras, 1996, p. 65). An envisioned future then grows from these commitments, which in turn constrain the organization’s execution of this vision. The following two sections discuss research on values systems in the processes of innovation, design, and inquiry.
Values systems in organizational work practice
Multiple values systems converge in the social world of the workplace. We bring personal values into work environments that hold institutional values as primary. An organization’s fundamental values-in-use influence relationships in the work environment, reinforcing the social environment’s perpetuation of these values (Argyris, 1992). Organizations espouse “people are our highest value,” yet in daily practice the value-in-use of “getting the job done” overshadows relationships. We should expect workers to interact with workplace systems as if they embodied these values-in-use, since corporate information systems are designed as mediating tools for managing tasks and jobs. While no manager has to state that the system is for “getting the job done,” “communicating rationally,” and “doing what you’re told,” the rigidity and specificity of most information systems communicate such purposes.
Faced with the values, norms, and standards of professional community, collegial workgroup, project team, department, management group, organization, and enterprise, we continually negotiate and anticipate situations. Although we can identify such a pluralism of values systems in the organizational environment, these “systems” do not exist as a nexus of agreement in the world. Our understanding of values remains subject to our biases of interpretation, and we have few social tools for evaluating behaviors against a framework of values. Instead, values are tacit, revealed through actions, choices, everyday behaviors, and assumptions. They show up within interaction, negotiation, communication, in the conflicts of individual and organizational pursuits. Like other forms of personal knowledge (Polanyi, 1966) they can be observed, but not easily articulated. In organizations, they emerge in contexts where individuals and small groups interact, make choices, and share concerns.
Organizations confront our personal values in numerous ways. Over most of the twentieth century, management practice followed the tenets of Taylor’s scientific management, the historical remnant of industrialization. Work organizations following this general approach “rationalize” work processes, fragmenting skilled work into discrete tasks that can be measured and monitored (Hill, 1981). Human problems of this approach result from “a frustration of basic human need for work that fosters autonomy and self-expression” (1981, p. 48). Although high-technology workplaces may by necessity enlarge and enrich jobs to embrace a range of skilled knowledge work, their cultures foster little in the way of autonomy, self-expression, or other social values (Poltrock and Grudin, 1994, Greenbaum, 1993, Zuboff, 1988), as the current research also shows.
Morgan (1986) showed that organizations can metaphorically reflect any of our social institutions, and how participants experience organizations in these ways at different times. Many of these metaphors are experienced as dominating, inhumane, and unsatisfying, such as the organization as machine, as prison, as political system, and even as a “brain.” Morgan holds “the emphasis placed in Western management on the achievement of specific objectives or ends forces the role of values as standards or guidelines for action into the background.” (1986, p. 93). This explains some of the mechanistic orientation of the North American organization compared to the Asian approach, which shows significant integration of intuition and cooperation.
This organizational values system can be seen in modern high-tech companies, even within the most innovative and intellectual workgroups in these companies. Sachs (1995) showed that the predominant work analysis and system development approaches at NYNEX followed values and principles extended from scientific management. Contrasting work analysis and design approaches used in a large-scale project, Sachs distinguished the differences in values (assumptions) between the “organizational” view and the “activity” view of work. The dominant “organizational” model of work held assumptions that designed for routine fragmented work, standard tasks and environments, automated routines, and reduced social interaction. A reengineering approach to system design using these values resulted in a system (the Trouble Ticket System) that reduced effective productivity and diminished transfer of skill and knowledge, in large part due to increased monitoring and control. Sachs identifies the implications for work process and design, suggesting system designers must pay attention to the social constitution of the workplace environment.
Values in information systems and software products
Across numerous areas of research, Kling (1996, 1980) has pointed to the effects of computerization in organizations, and has contributed to the advancement of research on values issues in information systems. Kling notes the values choices implicitly made by organizations to maximize profit and efficiency rather than enabling social goods such as jobs, customer service, or good working conditions. As a response to these social concerns within the organizational context, Kling (1996) proposed the study of Organizational Informatics, an interdisciplinary study of the relationship between the design and use of information technology (IT) and behavior and management in organizations.
Kling and Allen (1996) extended this notion, pointing out the inherent problems in systems design in diverse organizations, where cultural, political, and organizational activities comprise more of the work than technical work. They focus attention on two key hypotheses. Organizational behavior affects the design and implementation of IT, where values, interests, and positions in the organization affect how problems are framed, design choices, and available resources. Effective use of IT in organizations depends on more than adequate technology – job design, reward systems, and culture merge with technology to affect the use of IT in practice. These two hypotheses or “insights” carry the potential to uncover the operational values in technology choices.
As software systems evolve in sophistication, with more functions and “features,” their impact on work practices and business process expands. Highly integrated systems (multiple systems interdependently linked to coordinate large processes) automate corporate management and business processes such as accounting, inventory, human resources, and supply chains in product engineering. These information systems directly affect the tasks of users, managers, and “customers” of these systems, requiring more dependency on the systems to conduct work across the entire network. As systems grow to encompass more functions in their management of the business and in related jobs, the standardization of these systems progresses, by necessity. Contributing systems become infrastructure, and therefore hidden to users (Hanseth and Monteiro, 1996, Star and Ruhleder, 1994). Cost management systems once used just in accounting are now integrated with sales databases, human resources, and customer information. Enterprise Resource Planning (ERP) systems based on normalized software-based business process templates have become standard across global corporations, tying together human resources, sales, order, accounting, supply chain, and management information.
Once information systems evolve to integrate a number of business functions, the templates and the original business practices and assumptions remain built in for the duration of all systems that use this information. Standardization and integration lock in the original design, making it difficult to change any one part of the tightly interwoven system (Frenkel, 1995). If the original design was based on accounting models, faulty assumptions, or outdated policies, these design elements will remain and will even be propagated through the integration of systems. In effect, many systems, even those developed for positive social purposes, embed and lock in the social hierarchy of the organization (Perin, 1991, Bowker, Timmermans, and Star, 1995).
Broader standardization of large software and information systems will inevitably lead to a greater proliferation of these built-in values systems, designed in by software engineers and tacitly adopted by the organization purchasing the system. Standardization of software systems and their protocols for communication ensures that large data systems become locked into standard operating procedures in organizations (Smith, 1991). New software systems are then adapted to the existence of the dominant systems that coordinate the whole process (Lanier, 1995). Once the investment is made in software development and integration on this larger scale, only changes that affect relatively minor tasks are made. The business is now forced to operate in line with the new systems, and fundamental changes to business process become difficult to implement, or even to propose. Such change would now represent a major new development investment and a recognition that the original effort was, perhaps, flawed. This phenomenon shows up in the same way that many mature businesses evolve to “invisibly adapt” to their accounting procedures, to the extent that their business operations are driven by the assumptions and regulations of their accounting. The constraints built into the business process (accounting) therefore limit the evolution of organization and opportunity.
A form of standards lock-in identified by Wegenroth (1992) shows a widening gap between “professional” and “trivial.” Deeper, more complex levels of technical design and operation (as in microprocessor and functional chipsets) are accessible to and acted upon only by a handful of technology professionals. The design values and potential functions are hidden by the user interface, not revealed by it.
“The same technologies are, however, restructured at the level of the user interface and present themselves in a deceptively friendly form. If a new technology is met by suspicion and resistance in society, its acceptance is not won by reducing its complexity to make it intelligible and thus controllable by the general public, but by reengineering its interface to trivialize it.” (Wegenroth, 1992, p. 81)
Wegenroth also suggests the same values might orient the interfaces of information systems, not just embedded computers. A critique of participatory design suggests that user interfaces might be “retailored” through cooperative design to appeal to workers’ interests, yet still retain inherent constraints in the deeper structure. However, this notion remains speculative, since examples such as the Scandinavian UTOPIA project show deep analysis of design values in relation to work practice. The UTOPIA team evaluated a software package developed by an American firm, but rejected it because it “contained entrenched forms of hierarchical work organization,” which were considered deskilling and anti-democratic. “Rather than try to weed out the deep-seated authoritarianism of American computer programs, the UTOPIA project elected to start from scratch” (1992, p. 81).
Participatory design is especially concerned with the social relevance of design processes, and much of the research from this field demonstrates the concern for the broader social meaning of systems and the inclusion of users in the design of systems (Greenbaum, 1993, Ehn, 1993, Greenbaum and Kyng, 1991). One of the tenets of PD is that system workers should be given better tools for their work instead of having their work mechanized (Bannon, 1991, 1995, Greenbaum, 1996). Another is that the user’s perceptions about technology in their work are as significant as the technical requirements for the technology.
The mid-1980s UTOPIA project (Ehn, 1993, Bodker, Greenbaum, and Kyng, 1991) is frequently cited as an example of integrating workers fully in the design process for work-oriented technology. Democratic design values were incorporated from the outset, and the design process engaged union workers in designing tools for computer imaging, considered as much as possible a direct extension of their trade orientation toward “tools” that enable skill. Ehn (1993) describes their orientation as “mutual learning,” wherein system designers learn about the domain and workers learn about the technology affordances appropriate for their design participation. Learning from the UTOPIA research continues, and its general success as a process exemplifies the possibility for cooperative design approaches. However, few new projects of UTOPIA’s scale appear in the literature, indicating a possible reluctance of organizations to embrace participatory or democratic approaches. Rational and economic interests predominate in system design, and evidence shows that values questions are ignored (Herbsleb and Kuwana, 1993).
Human values in system design have recently become a broader concern in design research, in computing and information systems design (Friedman, 1996, 1997), organizational issues in computing (Kling, 1996), and analysis and design processes (Wood-Harper, Corder, Wood, and Watson, 1996). Friedman’s case studies have analyzed the effects of bias on constituents, where both obvious and very subtle values biases show up in the use of systems. Kling’s research on the social design of computing shows how various organizations are affected when information systems are deployed without regard to social impact, and how existing organizational behaviors affect the design and use of systems. The participatory design orientation, as described previously, develops soft methods for engaging user participation throughout design as a means of democratic social involvement in the work processes affected by computing.
Friedman and Nissenbaum (1996) identified three distinct types of bias found within systems and design, termed preexisting, technical, and emergent bias. Preexisting biases show up in systems design when values biases identified within individuals, communities, or organizations carry over into the system, such as racial or socio-economic biases built into loan approval systems. Technical biases draw from biases inherent in the technology that propagate through systems using the technology, such as algorithms, programming imperfections, or formalizing human interpretations and thus removing their context. Emergent biases show up in use, such as when a system designed for one class of users is transferred to a population with significantly different values, or new social knowledge changes the meaning of an original intent. A case published by Tang (1997) shows how a technical bias became inserted into a workstation product based on a preexisting bias of the design team. An economic choice was made to use a software-based microphone switch instead of a physical on/off switch, with the design team knowing the costs to privacy and trust. All three types of bias can be shown in this example.
Where Friedman’s orientation has studied the impact of bias and social morality in computing products, and to some extent in design practice, Pullinger (1989) looked at the ethical practice of computing professionals. He identified five areas of relationship between computing and ethical practice, including ethical practice itself, ethical problems arising with computer design and use, and moral problems arising from computing but not related to it. Particular to the current research is the notion of ethical problems in the computing environment that are not specific to the system design itself. Ethical problems in organizations will reflect themselves in design issues, but may not show up in the product.
Smith (1991) shows how design values show up in both process and product. In a case study of banking systems, Smith faults the vendors of new banking information systems for widely distributing systems with inherent deskilling processes incorporated. Banks in the U.K. adopting these systems reorganized branch offices, changing the practices of both managers and tellers, increasing specialization of jobs and tasks, and reducing access to the public whom they served. Even as many industries (automotive, process, manufacturing) were moving away from the “Taylorization” of jobs, the banking industry shifted toward the fragmentation and specialization of work characterized by scientific management. Smith argues much of this was caused by the adoption of banking automation, and concludes that the banking industry missed the opportunity to integrate new systems with effective organizational practices.
“Contrary to scientific management, efficiency actually improves and control is made easier if the ‘labor process’ is as coherent as possible. There should be a presumption in favor of skills, pride in the job, staff flexibility, apprentice-based careers, … and intuitive knowledge.” (Smith, 1991, pp. 389-390)
To summarize, values problems in information systems and products show up in several areas. Values are inherent in business processes and through software design are carried into new systems. Management biases control the types, format, and distribution of information, biasing views of organizational performance. Staff professionals managing process decisions influence the overall design practices used in developing management information systems. These experts should be considered invested with special obligations to consider all organizational stakeholders. The emergence of values conflicts therefore starts with the software design process itself, and propagates with teams working together to develop software products and systems. These considerations led to my offering two broad propositions guiding this research into values in innovation and design processes.
Systems are designed with explicit and implicit goals and values. Some values and goals originate with the designers, some with producers, and some with purchasers.
Management software incorporates or inscribes value systems that embed these goals. Value systems may be biased, favoring some goals over others. These include efficiency behaviors over exploratory or interpretive behaviors, productivity (output over time) over autonomous tasks, production over creativity, alignment with rules over alignment with personal goals, and management values over worker values.
Approaches to values inquiry
A wide range of perspectives on design process and values appears across the literature. Many researchers appear to be in agreement, and yet little convergence has formed around any favored models or theory. Ecological and activity models appear less criticized than structural or traditional development process models, yet ecological approaches to organizations cannot be tested with “formal” methods. Assuming a research interest in values embedded in product design and product organizations, how might we interpret values congruence within designed artifacts?
Values congruence can be understood as the alignment of a system’s design goals with explicitly stated values - values drawn from the customer’s desired experience or a shared organizational vision. Congruence might also be derived by extracting the behaviors and values found in system goals or the interface and specifying the organizational values driving the requirement. Values congruence may not be an external measure, but may instead be based on the design rationale for product features and affordances.
Values congruence can be related to the process of design evaluation (McDonnell, 1997) where the fit of a design alternative is assessed. Alexander (1964) noted the problem described as “misfit” between a designed artifact and the known requirement. McDonnell furthers this by explaining how “what is known to be required refers not only to the problem or need as framed by the designer in explicit terms…” which includes design documentation, strategy, and plans. The required evaluations for fit also “refers to the demands which are implicit in the designer’s norms and professional practices.” McDonnell develops a theory for congruence based on implicit values entrained through the influence of practice and discourse. “Implicit influences stem from norms and agreements among the professional group about their aims and objectives, and their expectations and values. These are termed their design commitments” (1997, p. 468).
Broadbent (1988) addresses values congruence by framing design itself as an ethical program, wherein the designer “filtrates” or tests design outlines against the program. Broadbent requires designers to adopt five ethical design commitments: 1) the design of form must suit the function; 2) materials must create a comfortable environment for people; 3) symbolism must be appropriate; 4) costs must be reasonable; and 5) the environmental effects must be positive. Such simple ethical guidance envisions the larger issues involved when making design decisions, yet should also focus attention on the core ideology and core values of the team, customer, and product. This serves product design by aligning these values with both the designing organization and the customer.
Deep and intractable misfits in social systems can be revealed in values incongruence, ranging from the professional differences between members of different organizations to the social differences between organizations with divergent cultures. Those experiencing such conflict will express it through interpretation of a situation, and almost certainly not as a values conflict. Few professional standards and methods enable designers to document the impact of values misfits, although the methods of Ehn et al. (1997) seem to address this problem. These methods may not yet be viewed as development metrics or even management tools. However, few frameworks for addressing values conflicts in design have been brought forth in academic or systems management literature for serious discussion.
Along these practical lines for inquiry, Muller, Wharton, McIver and Laux (1997) urged the inclusion of socially responsible research in the research agenda for human-computer interaction (HCI). While HCI has focused extensively on problems of effective user-oriented design and software usability, less attention has been given to socially oriented (and non-technological) problems. They point out accessibility, and information and communication poverty, as two areas requiring research attention. They recommended advancing design values other than productivity and efficiency in systems development. Alternative values systems endorsed by Muller et al. included 1) quality of work product, 2) quality of work life, and 3) quality of communication. These values systems present new benefits to organizations and society not offered by productivity or efficiency.
“More broadly, our field may be concerned with the quality of methods or practices we bring to the design process, and especially to working with users. In this regard, we may wish to characterize our own practices in terms other than productivity, such as human-to-human communications clarity, communications accessibility, process safety (e.g., for low status workers to contribute their ideas to a design or an evaluation), or quality of democratic processes.” (Muller, Wharton, McIver, and Laux, 1997, p. 155)
They further point to the lack of disciplinary agreement on theory and frameworks for values-based inquiry, suggesting the use of activity theory or grounded theory to construct domain-related theories for design and inquiry. They suggest that new design research approaches might also avoid the unintentional incorporation of the values and needs of mainstream users, which may exclude other, non-mainstream users.
The social systems design approach integrates forms of values inquiry, as seen in published processes such as Interactive Management (Warfield, 1994) and Ackoff’s (1974) systems approach. The social systems approaches explicitly define relevant outcomes of the design process, which are not merely focused on deliverables or conventional notions of success. Christakis (1997) points to values convergence and conflict as design functions, considering both the values at stake for participants and the values orientation of the inquiry process itself. An emancipatory interest in design assumes participants will reveal their values at stake and collaborate to foster solutions that support a shared social interest. Christakis identifies Habermas’s (1987) concept of communicative action as enabling stakeholders to build a shared lifeworld in their common domain. When design solutions draw from shared design and decision responsibility, emancipation derives from solutions that avoid narrowly defined interests.
Social systems design incorporates values inquiry throughout the design process, as described by Christakis (1997). These include 1) democracy in the design and decision process, in which all participants have a voice and a vote, 2) full inclusion of all those with matters at stake, enabling true participation and consensus informed by pluralism and the knowledge from all key roles, 3) making “transparent to the stakeholders the normative content of their decisions,” (p. 4, and Ulrich, 1983) thereby evaluating the values implications of courses of action, and 4) balancing idealism and action, within the orientation that solutions have a “basically human bias.”
Christakis also notes the thread of normative evaluation in the systems theory literature, referring to Jackson (1995), where three categories of beliefs (and therefore, beliefs of normative value) emerge within the stakeholders’ group process: unitary, pluralist, and conflictual beliefs. “Stakeholders can be in a unitary relationship if they share values and interests; they can be in a pluralist relationship if their values and interests diverge but they share enough in common; and they can be in a conflictual relationship if their interests diverge significantly” (Christakis, 1997, p. 20). Social systems design seeks to accommodate all three situations, and is oriented to the most typical, the pluralist relationship.
The embedded functions of values and tacit processes are reviewed in the following sections.
Embedded Functions in Organizations and Design
Organizations have been studied as social networks (Staw, 1984), as networks of commitment (Flores, 1982), as collective cognitive networks (Weick, 1979), as knowledge networks (Spender, 1994, Blackler, 1995), as cultures (Schein, 1985), as individuals within a social framework (Argyris, 1992), as ecologies (Trist, 1976, 1983, Nardi and O’Day, 1999), and as social systems (Ackoff, 1974, Banathy, 1997). Organizations provide a valuable resource for social research; for many of us, the modern organization constitutes our primary place for social life and collective activity. Studies of organizational phenomena therefore encompass a wide range of interests, perspectives, and models.
An interpretive review of organizational literature should consider the independent contributions of disciplines, as well as interpretations that cross or combine disciplinary perspectives. Morgan’s organizational metaphor concept (1986) identifies and expands on eight perspectives for organizational analysis, each of which can be traced to a separate discipline and literature. The following review identifies relevant contributions from organizational theory, organizational psychology, information systems, innovation and design management, social systems theory, and labor studies that apply to each topic. References are therefore drawn from these literatures to bring out the interdisciplinarity of the concept.
Embedded organizational processes
Organizations “embed” the activities of their members, and organizational participation requires engaging the embedded context of economic and social activity (Dacin, Ventresca, and Beal, 1999). The sociology of embeddedness refers to collective knowledge and activity (Polanyi, 1944) that become embedded within the organizational context, or, as commonly termed, “institutionalized.” Theories of embeddedness explain social activity in economic networks (Granovetter, 1985, Oliver, 1996), the contingency of social, political, and economic forces within organizations (Zukin and DiMaggio, 1990), embedded authority in organizational structure, embedded organizational knowledge (Lam, 1997), and constraints against organizational learning (Oliver, 1996).
Although the embeddedness concept has been exploited in organizational sociology, it has not been developed in the same terms within information systems and organizational behavior. The closest developments appear in organizational and management theory, addressing the phenomena of organizational attachment (Lee and Mitchell, 1994), organizational defensive routines (Argyris, 1992), and organizational routines (Nelson and Winter, 1982), with other research building on these foundations.
Dacin, Ventresca, and Beal (1999) reviewed embeddedness in organizational management, identifying almost 250 references published between 1991 and 1998. The rapid growth of this literature also shows an interdisciplinary interest in the embedded functions of organizations. Embeddedness contains and crosses the perspectives of management strategy (Kogut and Zander, 1996), social structure (Granovetter, 1985), cognitive embeddedness (Zukin and DiMaggio, 1990), nested organizations (Van de Ven, 1992), and organizational ecology (Hannan and Freeman, 1989, Amburgey, Kelly, and Barnett, 1993, Baum, 1999). However, even this extremely broad review neglected the contributions of organizational psychology, design studies, organizational knowledge research, and distributed cognition. There appears to be much latitude for the explicit involvement of these disciplines and their contributions to understanding the group behavior and individual psychology of embeddedness.
The central paradox of organizational embeddedness arises when we consider both its competitive advantage and its constraint on organizational effectiveness. Research shows embedded capabilities and knowledge as a productive facility of competitive advantage (Teece, Pisano, and Schuen, 1997). Dynamic capabilities of the organization become embedded into organizational processes, enabling the firm to share and recreate unique knowledge that extends and maintains strategic advantages against competitors. Saviotti (1998) shows how organizations cumulatively embed knowledge, increasing its path dependency and creating barriers to its expropriation. Knowledge embedded as tacit components or processes reduces its “appropriability” by competitors. Organizational routines, as embedded practices, also improve productivity by encoding practices into routine sequences that can be performed with partial knowledge, yet contribute substantially to productivity and value creation (Nelson and Winter, 1982).
However, embedded knowledge and processes resist deconstruction. Organizational learning (Argyris, 1992, Chawla and Renesch, 1995) and situated learning theories (Lave and Wenger, 1991, Van Oers, 1998) both suggest the need to evaluate learning contexts and to deconstruct local learning. Without the capability to “unlearn” old routines and “relearn” new capabilities, embedded knowledge and processes become significant organizational constraints (Oliver, 1996). Embedded organizational routines are notoriously resistant to change, due to their indefinable yet significant tacit components and their distribution across multiple individuals and social groups. Furthermore, organizational processes become part of the status quo of the organization, and become appropriated by the hierarchy (Zuboff, 1988) to maintain authority. Zuboff suggests embedded organizational processes facilitate the maintenance of power relationships, strengthening the status quo and reinforcing continued use of the routines.
Ciborra (1998) critiqued the management science approach to organizational systems, urging researchers to inquire deeply into the daily phenomena of organizations, to understand truth as opposed to applying method and structure. Through the “unveiling of what lies hidden behind the current phenomena of work, organization, and information,” responsible research must acknowledge the limits of models and methods. This view suggests that any framework or model of organizational values must be suspected of idealization, or at least may turn attention away from actual phenomena by focusing on abstractions. Embedded values may not be amenable to direct observation, but may be accessible through “questioning and thinking,” to cope with “the management of complex organizations and technologies,” a research style suggested by the findings (Ciborra, 1998, p. 15).
Ecological models of human and organizational behavior address embeddedness from social and developmental perspectives, including organizational ecology (Trist, 1989, Hannan and Freeman, 1989), nested environments (Bronfenbrenner, 1989), and inscription of behavior (Hanseth and Monteiro, 1997, Latour, 1991).
Ecological systems theories approach the problem of embedded organizational context from developmental perspectives (Bronfenbrenner, 1979, 1989) and from human-computer interaction research (Flach, Hancock, Caird, and Vicente, 1995, Vicente, 1999). Bronfenbrenner’s ecological systems model considers a system of interaction between the human and the environment. Nested relationships occur among actors within their local social environments (microsystems), link across environments (mesosystems), and extend to external environments (macrosystems). These levels create the context for an organizational ecology, with societal customs and values creating the context for culture. Organizational networks manifest as nested environments, consisting of both hierarchy and communities, establishing independent paths for actors within hierarchies of formal power relationships and within informal community participation.
Ecological psychology theory advanced the concept of affordances (Gibson, 1979), originally proposed as characteristics of objects that call attention to preferred behaviors, enabling appropriate interaction with the physical world. A door handle is said to “afford pulling,” and a door plate visually affords pushing. Vicente (1999) published a standard definition as “a goal-relevant description of the world that describes an opportunity for Action defined with respect to the capabilities of a particular Actor” (1999, p. 238). Vicente’s research in human-computer interaction (Vicente, 1999, Flach, Hancock, Caird, and Vicente, 1995) notes the complexity of analyzing complex tasks and designing affordances into human-machine systems that are “psychologically relevant.” Variances in domain knowledge in complex systems allow high variability in the features that can be considered affordances for appropriate and relevant action.
Extending the ecological notion of affordances further, Gaver (1991) presents affordances in software user interface design, suggesting that ecological theory may also apply to social and cognitive interaction:
“The actual perception of affordances will of course be determined in part by the observer’s culture, social setting, experience and intentions. Like Gibson I do not consider these factors integral to the notion, but instead consider culture, experience, and so forth as highlighting certain affordances.” (1991, p. 81).
Therefore, affordances are highly embedded in nature, and are perceptible to some extent based on experience and culture.
Activity theory (Engeström, 1999, Leont’ev, 1978, Vygotsky, 1978) distinguishes embeddedness in activity structure, mediation, and operational activity within a subject’s environment. Activity structure refers to the three levels of activity described by Leont’ev (1978) as activity, action, and operation, which embed intentionality through the corresponding orientations of motive, goal, and instrumental condition. Actors pursue goals intrinsic to each level; these goals are embedded in that they are not consciously distinguished in action. Mediation refers to instrumental behavior using tools and artifacts to effect desired work results, specifically allowing actors “by the aid of extrinsic stimuli, to control their behavior from the outside” (Vygotsky, 1978, p. 40, italics in the original). Using instruments to mediate activity embeds the goals of desired control over the environment, using “extrinsic stimuli” as affordances for action. Operational activity refers to the level of operation within the activity hierarchy, which embeds knowledge and behavior tacitly, as operations are low-level actions performed without conscious specification of the task.
Nardi and O’Day (1999) further link ecological psychology to activity theory through the concept of information ecology. Defining ecologies (or interdependent environments) of information use, they show how technologies coevolve with actors and activities in information spaces. The ecological approach provides a means of understanding the role of core values in the workplace or other environments. Values are considered “not as immutable, clearly defined objects, but as negotiated processes.” They are embedded in technologies and in the environment of social action. “We bring our values to bear in designing and using technology. The key constituents of an information ecology – people, practice, values, and technology – exist in relations of interdependence.” (1999, p. 60). This activity theory-based approach toward design and social engagement calls for understanding the fit of technology in the ecology of work and social life.
“Information ecologies are systems of parts that fit together well – and the idea of ‘fit’ must be understood in terms of social values and policies, as well as tools and activities. If the practices that evolve in a sociotechnical system are efficient and productive, but fail to uphold the ideals or ethics of the people involved, the system will be subject to considerable stress.” (Nardi and O’Day, 1999, p. 68)
Morton (1996) summarizes practical findings focused on information systems design in the organizational context. Citing six implications of research in organizational transformation, Morton specifies several findings with relevance to embedded process. One, information technology enables fundamental changes to work process, such as removing distance and time barriers to communication and coordination. Two, IT facilitates integration of business functions within and between organizations, creating massive networks of interdependent organizations and resources. Within these networks we find the opportunity for innumerable embedded processes, which cannot all be articulated or made visible. Networks, as with infrastructure, create affordances for the inscription of values and interests as expressed through those managing the network (Hanseth and Monteiro, 1997). Three, IT creates strategic opportunities for organizations to redesign their operations and mission. In other words, the opportunity for IT deployment offers organizations choices that direct future work practice and promote some functions of culture while perhaps negating other possibilities in the organization. Although Morton presents a fairly benign view of this finding, other cited research (Zuboff, 1988, Sachs, 1995) suggests consequences when opportunities for organizational reassessment are avoided in traditional organizations, allowing the hierarchy by default to inscribe or embed power relationships further into the new systems and networks. Four, successful IT deployment requires changes to organizational structure and management. New forms of organizational structure and process have yet to be institutionalized in the large corporations that make most of the major IT investment.
The matrix organizational form continues as the most progressive organizational option considered in most companies, a form that creates new power relationships as much as it maintains existing ones. Internal IT departments do not typically work with organizational specialists to design appropriate organizational structures and interventions to better enable the intention of the technology. Since IT managers make the deployment decisions with senior managers, the consideration of organizational impact and change is usually given only superficial treatment, if considered at all. Managers trained in a metrics-driven, Taylorist tradition see the opportunity for change as one that reinforces productivity and decreases costs and resources. And IT systems are often considered sufficient in themselves to change the organization in some expected direction, regardless of the total lack of evidence for the claim (Kling, 1996).
The next section follows with research showing how values become embedded within organizational practices for innovation and design.
Embedded values and power in organizational practice
Activity theory, situated action (Suchman, 1987), and distributed cognition (Hutchins, 1994) approaches all address organizational embeddedness in the context of work practice and use of technology. Situated action understands embeddedness as the unplanned, undetermined interaction between people and their actions within material and social circumstances. References embedded in language are “indexical,” that is, understood within a shared context but often meaningless outside of the situational context. Skilled behaviors arise within situations, unrehearsed, but drawing from experiences embedded as tacit knowledge. To some extent, behavior itself is embedded in the situation, unavailable to reflection during the time of acting.
Schön’s (1983) distinctions of reflection-in-action and reflection-on-action acknowledge this situational embeddedness of expert knowledge in context. Reflection-in-action surfaces tacit knowing while in the performing context. Knowledge reveals itself as embedded in the context, brought to the reflective actor in the situation. Reflection-on-action affords learning from the situational performance, by revealing to conscious reflection the knowledge and skill extracted from the embedded context.
Embedded knowledge should not be seen just as skilled performance and functional or content knowledge. Values and attitudes are also considered embedded knowledge structures (Polanyi, 1967, George and Jones, 1997), and are embedded in both personal and collective contexts. Although espoused values are by definition not embedded, the values in-use (Argyris, 1992) embedded in daily organizational and personal activity constitute the more useful construct in our analysis. The values in-use concept affords explanation of inconsistent behavior, allowing analysis of conflict, learning, and other organizational behavior.
Analysis approaches such as contextual design (Holtzblatt and Beyer, 1998) specifically analyze values within an organizational context for information system (IS) design, and recommend representing values constructs as analyzed by a “cultural model.” Contextual design focuses on design constraints from values of individuals and workgroups, and represents both formal organizational policy and implicit values. This approach recommends orienting design to values choices, based on a positive or negative assessment of the value to the organization.
However, values choices have long-range and far-reaching implications that are not all evident within the scope of IS projects (Friedman, 1997). Positive and negative assessments of values will not foresee and resolve emergent values conflicts that arise in system use. Values-sensitive design considers the various contexts of values impact. Values can be considered embedded within technological objects (Ehn, Meggerle, Steen, and Svedemar, 1997), within the organizational structure (Crosby, Bitner, and Gill, 1990), within human actors only (Searle, 1983), and distributed among people, tools, and organizational environments (Nardi, 1999). In each distinction, values are “located” where they are studied and observed, so we might consider that values fundamentally persist in, and are negotiated through, all social interaction.
The organizational disruption caused by IT intervention is a significant design consideration. Attempts to realign values risk organizational discord, particularly “behind the scenes.” Hodas (1996) noted how technologies, as social constructions, are necessarily value-laden. “Any practice (and a technology is, after all, a set of practices glued together by values) that threatens to disrupt this existing structure will meet tremendous resistance at both adoption and implementation stages” (p. 200).
Kumar and Bjorn-Andersen (1990) theorize how methodologies inscribe behaviors within IT organizations, by “incorporating into the design process the ontological assumptions about what constitutes reality” (p. 530). Their research traces the values systems of system designers through their choice of methodology, and suggests values are embedded into the organization through socialization of methodology. They further suggest the values systems embedded by methodology might lead to systems embedding these values, which “may not be acceptable in cultures with value orientations different from the one in which the system was designed” (Kumar and Bjorn-Andersen, 1990, p. 535).
Bodker and Pedersen (1991) focused on workplace cultures, the distinct cultural environments formed by workgroups within larger organizations. Reinforcing how organizational reality is socially constructed, they evaluated the artifacts used by workplace cultures in design, and asserted that core values of the culture manifested in these objects and artifacts of the workplace. Physical manifestations such as office layout, decoration, work tools, and dress contribute to the meaning of the culture. Symbols such as stories, expressions, and anecdotes reinforce the culture, and work practices and routines “dramatize” the culture. Bodker and Pedersen point out relevant phenomena and problems of studying culture, including the tendency to become “culturized” during study or participation, whereupon the outside perspective is lost and the culture becomes invisible. They suggest the ethnomethodology approach of Garfinkel (1965) to focus on the anomalies and exceptions, and to observe how the organization responds to these deviations of meaning.
The literatures cite numerous instances of embedded organizational power relationships in systems development. Tsivacou (1997) distinguishes a model of power based on the capacity to exercise command, rooted in language. From an organizational perspective, power is managed through communicative action, generated within a shared organizational understanding of power.
“Human beings in organizational systems are also subject to this dual ability of communicative distinctions. The results are as follows: a) due to commands – received during daily organizational life – human beings are deracinated from their social background and they are reduced to individualized subjects charged with specific tasks; b) at the same time, as participants, they shape a new wholeness, that of an organization” (Tsivacou, 1997, p. 27).
Markus and Bjorn-Andersen (1987) identify how the various activities and values held in IT departments maintain power over end users, and how maintaining control over the tools of technology manages this relationship. Attewell (1996) discusses how the management of information systems has affected management culture, perpetuating an illusion of control while maintaining power relationships over user communities. The costs for the perceived productivity and sense of control can be enormous, with the paradox of productivity showing up in the trade-off between automating manual practices and handling the burden of increased information management. “MIS give managers the sense that they are in control, that they know what is going on within their jurisdiction without actually having to look at the shop or office floor” (Attewell, 1996, p. 233).
Although power relationships may be built into the process, traditional success is not guaranteed by maintaining control of IT development. Saetnan (1991) describes the failure of hospital IT systems where organizational factors were disregarded. Sachs (1995) again refers to the clumsy approaches of reengineering that favor control to such an extent that a new automated work process is rendered less productive than the original manual process. Schaiken (1991) goes further in showing the human costs of managerial control through information systems. Referring to the industrial management literature, which suggests that worker productivity relates to workers’ motivation and ability to use their abilities, he states “These abilities are especially important in the case of computer-based technologies, which – in reality if not in theory – depend intimately upon workers for smooth functioning” (Schaiken, 1991, p. 299).
Values and Organizational Models in Innovation Management
Sharrock and Anderson (1996) show how the organizational setting for innovation and design affords the availability of a design space, or constrains the ability of designers to pursue innovation. Evaluating a fast-paced project within a large product organization, their case showed how innovation of the design process enabled creation of the design space. Faced with external demands for a product design, the project team improvised on the formal product development processes. Acknowledging embedded organizational and design values, they described the organizational culture as, “with any work group, composed of the patterns of normative activity and the value systems espoused by those who identify with it.” Yet their improvisation and problem-solving were based not on organizational or project management goals. Instead, embedded activity and knowledge drove the practice. “In the course of actual designing, …, this knowledge is deployed not as procedural rules or even as rules of thumb, but as ways of making design sense of the issues on hand, and therefore deciding just what to do” (Sharrock and Anderson, 1996, p. 440).
The case study reveals how innovation of the design space through “loosening up” the normal design practices benefited the product innovation process. By cutting corners, trading off problems against opportunities as conditions changed, creatively assessing requirements, and adopting other informal practices, the team continuously assessed the design space (or ecology) for decisions and informal approaches, yet always toward meeting project goals.
Creating the space for design practice can be related to the ecological design of organizational environments. In effect, Sharrock and Anderson made use of “affordances” in their environment, in other words, their available opportunities for adopting different practices that fit the design problem. By experimenting with new practices, such as informal workarounds to standard procedures and ignoring formal processes, they established a precedent for changing practices to improve innovation, as well as for innovating the process itself. McDonnell and Gould (1998) articulate a similar process in their study on emergent strategy. Their case focused on organizational strategy for information systems, in which they designed systems and supporting conditions to enable strategic thinking to coevolve with emergent situations in the business. They used a high-level definition of vision, values, and goals as a framework for design, and developed prototypes of information systems specifically meeting those goals in the framework. They described this approach as “the encouragement of a culture which could sustain continual improvement through ongoing development of information systems” (1998, p. 159). In this approach to strategy, they rejected static master plan approaches, instead engaging participation in creating a framework of shared values. “This is used to inform short term choices about what to do and where to focus resources, but avoids suppressing the adaptive behavior necessary for longer term well-being.” (McDonnell and Gould, 1998, p. 161)
The purpose of system design should be assessed against an envisioned future and core ideology (Collins and Porras, 1996, Christakis, 1997). Systems designed to fix workers in assembly-line automation, for example, can be viewed as misaligned with an organization that values human experience, social responsibility, and excellence in service. Social systems designers specifically evaluate designs within both the social context of development and the context of system usage. Banathy’s framework for social system design options brings attention to the relationship of values alignment across the four dimensions of scope, type of system, intra-system relationships, and external systems. Within any configuration of system design, Banathy raises the consideration of triggering questions for design such as: “what are particular design configurations that respond to the vision we created? What values do we want to realize in the system we want to design?” (Banathy, 1996, p. 130)
“The collectively agreed upon values, called core values, will guide the designers to make design choices throughout the inquiry. For example, values, motivating beliefs, desires, and moral concerns may express aspirations of attaining higher inner quality of life, human dignity and human betterment, social and economic justice, and individual, social, and ecological ethics. In summary, designers ask: 1) what values would support/justify a particular option configuration? 2) what would be the implications of selecting a particular design configuration? 3) what core values lead to the selection of a particular option configuration?” (Banathy, 1996, p. 131)
Information systems can have as their purpose the furthering of management control into specific areas of work, or they can purposely eradicate repetitive, dispiriting work. Each management decision for design and deployment of IT can be seen to entail a concomitant purpose and related values construct. These issues within social systems are key to understanding the social impact of a design (Kling 1996). Therefore, innovation and design decisions should be made with caution toward organizational impact and the potential embedding of values that diminish the organization’s potential for innovation in the long run.
Organizational and processual research shows how values orientations surface within organizational processes. Processual research studies in the domain of information systems and innovation management offer a context for understanding phenomena in software and systems development processes. Orlikowski (1991, 1993) and Orlikowski and Robey (1991) have shown how information systems methodologies and development tools require incremental or radical organizational change. When information technology is introduced within organizations, the process change is experienced as an intervention, requiring adjustment of values related to technology use.
Interventions of process or technology have been identified as “encounters” (Robey and Newman, 1996), group interactions that punctuate design and development over long-term projects. Encounters are accepted and established points in time where organizational members meet and expect to make decisions or review contributions. Some encounters are designed into the innovation process, such as formal design and review sessions and technical walkthroughs. Other encounters arise as required by the project or management. Robey and Newman identify encounters as “opportunities for actors to challenge established practices” (1996, p. 33).
Research has shown differences in interests and values of project managers in information technology organizations, so values orientations should not be viewed as having general application. Tractinsky and Jarvenpaa (1995) evaluated values differences for global and domestic (geographically-specific) IT project managers. Significant agreement was found among all project managers for the importance of decision making and control in projects, with economic issues ranking toward the middle and cultural diversity and balance of power ranking universally low in importance. These values stem directly from the requirements of the project management role, and suggest a common adherence to a management paradigm of rational decision making and central project control.
Both participatory design and design studies focus inquiry on design process as an important research direction. Design researchers recognize the responsibility in design activity to surface questions of value, from understanding authentic user needs to ensuring that designs further positive social purposes. Social meaning informs the design process and designed artifacts, and is shown to contribute to stakeholder goals and economic returns.
However, development teams in practice typically consider systems development as a pure technical effort, with an implicit assumption of values-free design. Implicit goals and embedded value systems will not normally be surfaced or questioned, as they are not considered relevant to the requirements and to the satisfactory completion of the development task. This approach characterizes the predominant state of practice, termed by Kling (1980) as rational, by others as technical or Taylorist (Smith, 1991).
Kling (1996) discussed the social design choices available in the design process, and identified “social arrangements” in the domains of technology selection, work organization, equipment access, infrastructure, and control. He contrasted the social design approaches of business process reengineering (BPR) with sociotechnical systems (STS) (Mumford, 1983, Zuboff, 1988), noting published organizational successes using STS approaches (Sachs, 1993, Jewett and Kling, 1990). These two approaches to design processes embed significantly different assumptions regarding the nature of work, of business value, organizational effectiveness, and systems design. The apparent values in these design processes transfer directly from the methodology to the designed system, showing up in the organization of work, job design, task allocation, and user interfaces of the systems. The approaches significantly differ in organizational participation. BPR, for example, is recognized as a top-down process design approach involving minimal user participation (Jones, 1998). STS and participatory design facilitate all aspects of design with user and stakeholder participation, and are characterized by a work practice rather than a business process perspective. Design practices require attention as significant influences toward engaging appropriate participation, or conversely for hiding biases and maintaining control.
Participatory design and related approaches (ethnomethodology, contextual design, activity theoretical approaches, design rationale) recommend broad orientations toward designing as well as specific methods to facilitate interpretive design. These orientations support the social context of design, by designing directly with users and facilitating participation from all stakeholders. These socially-oriented processes encompass plurality, and inherently confront the values emergent in the pluralistic community of participants.
Participatory design principles are based on humanistic values that differ significantly from traditional business and engineering models. Miller (1993) identifies two essential assumptions. One, front-line workers and customers are considered intelligent contributors to the design process when the organization facilitates their involvement, respects their expertise, and shifts responsibility to them for their actions. Two, contrary to traditional (Taylorist) beliefs, effective design emerges from the bottom up as well as from top down. Customers and workers understand the work context, and are better able to identify designs that fit their practices, and to recognize features that may have failed in the past.
Participatory design underpins these methods with the socially responsible involvement of system users as full stakeholders and participants in the design process. This approach considers how system design and delivery must not only meet users’ needs, but also afford satisfactory work lives when systems are joined to jobs. Greenbaum and Kyng (1991) discuss this distinction of PD as opposed to other design orientations.
“This focus on world view reminds us that the issue of selecting problems within the workplace is heavily laden with cultural, political, and economic values. As computer system designers we can no more jettison our past than we can ignore the traditions of computer system users.” (1991, p. 9)
Greenbaum and Kyng further frame the distinctions of PD by contrasting its cooperative approach with traditional models of problem-solving and development. These pairs of distinctions highlight differences between PD and traditional design and development approaches (Greenbaum and Kyng, 1991, p. 16):
Group interaction, teams
Naturalistic studies also show that design teams working in the context of their practice often ignore goals and values and focus on technical requirements. Herbsleb and Kuwana (1993) describe the activities of software teams in acquiring knowledge and building shared models of reference in three development organizations. The primary focus of most design inquiry was to clarify the requirements of participating managers and clients. Designers also pursued how requirements might be realized by asking about user scenarios, to understand practical purposes of desired functionality. Questions regarding values or even rationale were rarely raised. Designers rarely sought to understand why decisions or requirements were constructed as they were, or to evaluate alternative approaches. One explanation for this unexpected finding was that although questions of value might be important, their answers may have been inferred from the requirements and scenarios, and understood tacitly from the context of the design meeting.
Research focusing on the values carried by system designers through the design process reveals the significance of individual designers in advocating effective processes that accommodate values inquiry (Walz, Elam and Curtis, 1993). Kumar and Bjorn-Andersen (1990) theorized that the prevailing values systems of system designers contributed to overly rationalist and technical values in information systems. System designer values stem from their professional and organizational contexts, as well as educational and individual backgrounds. They found differences in design and development practices and associated values systems between North American and Scandinavian organizations, attributing the differences largely to culture. The values systems tested were categorized as technical, economic, and socio-political, and then related to information systems products (information systems and supporting artifacts) and development processes (work design, task allocation, and social participation in design). Their results suggested that the social-democratic value positions of Scandinavian designers did not diminish technical and economic concerns. Kumar and Bjorn-Andersen further stated that design methodologies carried built-in biases reflecting the values systems of their originating cultures, a position consistent with sociotechnical systems (Mumford, 1983) and social systems design (Ackoff, 1974, Banathy, 1996).
Singley and Carroll (1996) explored the relationship of the psychological context to system design, based on a process articulated as the “task-artifact cycle” (Carroll and Rosson, 1991). This process uses a utilitarian approach to endorsing humane values. “Within the constraints of the design task, try to eliminate or mitigate the claims involving negative psychological consequences while preserving or enhancing the claims involving positive psychological consequences.” (1996, p. 243)
In this heuristic for assessing design claims, Carroll shows how new designs often change the psychological context of the task itself. Therefore, claims for the design become based on the new design artifact, and not on the original task it revised. As each iteration moves further away from the original task (or non-automated task), the relationship to the psychological context becomes more remote. Carroll’s design rationale approach provides modes of reasoning about the impact of psychological principles and assessment on design, and also describes methods useful for guiding design in light of the psychological context and related values.
Erickson (1995, 1996) shows how stories encapsulate the richness of experience and bring with them multiple levels of meaning found in everyday life situations. Values are embedded in stories reflecting a shared experience with the design team. Without directly expressing the value meanings, accepting the intent of a story accommodates the storyteller’s value. Knowledge management practice has also recognized the inherent power of stories and informal conversation as accepted practices for sharing tacit knowledge and reinforcing cultural values (Nonaka, 1991, Leonard-Barton, 1995).
Research in design studies draws attention to the problems of design values in a consumer and market-driven culture. Papanek’s (1996) interdisciplinary research discusses the importance of starting with values and designing from an understanding of sustainable values. Drawing from community planning, human sustainability, psychology, and industrial design, Papanek proposes a holistic engagement of the living, organic world with the requirements of industrial and social design. A similar orientation can be found in Margolin (1998), whose action research in design promotes creating the culture of sustainability. Margolin cites obstacles to sustainability as “crisis of will” and “crisis of imagination.” “Too few examples of projects that are socially directed serve to stimulate or inspire designers. While such projects do exist, they are, for the most part, closed out of academic design courses and professional publications.” (Margolin, 1998, p. 89)
Jones (1992) argues that industrial designers should focus on the design process more than the end use of the product, and recommends software designers adopt this approach. Jones endorses the responsibility of designer knowledge, and advocates designers building a large repertoire of methods to afford a range of approaches to design problems, increasing the likelihood of drawing forth meaningful design solutions. From a more academic perspective, Krippendorf (1996) also views the design process as a “making sense of things,” and shows how meaning integrates with artifact through the understanding of contexts. In his analysis of affordances in design, Krippendorf distinguishes how contexts must be created to afford visual perception and user interaction. His design “process” starts with affordances, not values. Instead, Krippendorf sees values systems as referring to unalterable, fixed dispositions, and argues they cannot be considered reliable indicators of motivation for individual behavior. His modeling of ecological design as process integrates functions of the natural world – competition, cultural complexes (dependencies), and autopoiesis (self-regulation). However, Krippendorf’s approach to product semiotics does allow for the understanding of design values as symbolic and contextual, facilitated by cognitive models and models of the environment. “Both models realize that form follows meaning, which is shorthand for saying symbolic strategies, not physics, govern the collective use and assembly of artifacts into cultural systems.” (Krippendorf, 1996, pp. 182-183)
Ehn, Meggerle, Steen, and Svedemar (1997) analyzed the semiotic models behind designed artifacts, developed from a theory of knowledge encompassing “objective,” “social,” and “subjective” worlds. Further analyzing by structure, function, and form, they intertwine these common dimensions with the values dimensions of control, ethics, and aesthetics. Designed artifacts and IT systems were evaluated through this model, revealing novel approaches for identifying values and evaluating software usability. As a metaphorical analysis, they compared software design with automobiles, the reference implied in the article’s title. Expanding the perspective to include design methods supporting the semiotic orientation, they showed how methods for user engagement, such as participative observation, concept interviews, and semi-structured interviews, drew forth values statements describing the experience of use.
Bratteteig and Stolterman (1997) focus on design supporting a desired vision and its values in a social world. The creating of vision is considered the most important process in design, as this creates the context for appropriate values and meaning. They suggest that IT systems must necessarily follow from a social understanding of designed artifacts and environments.
“Design of information systems is similar to many other design area because the purpose of the design is not just the designed artefact itself, but changes in the range of possibilities for action in the social organization that will use the artefact.” (1997, p. 292)
Drawing from the domain of consumer products, Allen and Ng (1999) found that product choices were positively influenced by human values. Their model of human values coupled Rokeach and other traditional models with recent research about product meanings and their judgment (e.g., McCracken, 1988, Richins, 1994). When products afforded symbolic meaning and judgment was affective, human values directly influenced consumer decisions. An indirect relationship was found when products were utilitarian and judgments were made by rating components, or piecemeal judgment. Consumer software products could apply to either category, but certainly corporate software products would be considered utilitarian by most users, and would theoretically be indirectly influenced.
Mapping anthropomorphic qualities and human traits to objects, especially interactive artifacts such as computer systems, may be a very common occurrence. People even interact with computer systems following human social conventions, as shown in experiments where computers simulate human interaction (Nass, Steuer, and Tauber, 1994). Designed artifacts, tools, and computer software are shown to be experienced as possessing “character,” in the sense of attributing a unified set of qualities to the interaction (Janlert and Stolterman, 1997). However, attributes or “traits” of objects are believed easier to identify than values or beliefs, although the attribution of values may be embedded in language used in describing interactions. “In those cases where a stable belief or values actually does have a very broad impact, we are likely to find closely associated characteristics that will summarize. The practical difference between valuing forcefulness and having forcefulness as a characteristic, will diminish as the impact of the value on behavior broadens and deepens” (Janlert and Stolterman, 1997, pp. 306-307). This presents a useful insight into the relationship between familiarity (breadth and depth of impact) and attribution of value. As users become more familiar with software by using it, their propensity for attributing qualities such as hostility or control to the product may decrease.
Toward Theory and Intervention
The power of a values-based design approach shows up in several areas. The values approach clearly positions the organizational purposes of the system, enabling the system to be held accountable to meeting the larger purposes agreed to and acknowledged by stakeholders. This approach reflects upon the ethics of design, which is typically unsupported by North American development practice. A values-based design should allow a natural diversity of values to emerge and be integrated within the development team and larger organization. Although a values-based design does not guarantee humane design, it might enable organizations to understand the impact of design decisions from a human values perspective. And if humane values are seen as important within the organization, they can be made visible to influence system design. As Grudin (1993) pointed out, organizational factors guide and constrain the development process more than design and development issues. And Norman (1998) offered the position that corporate reorganization might be a necessary precondition for ensuring user-centered design, by fostering horizontal teams and organizational structures to manage innovation processes. Following Norman, if we want values-centered design, we might reorganize the company to fit the claimed values.
Multiple values systems and multiple motivations collide in any design effort. However, to establish more general awareness or even acceptance of values-based design, and for deploying a values orientation into the larger concerns of business, different approaches must be planned. Business planners and project managers are not traditionally concerned with methodology, per se, except to the extent that standard methods support efficient work practices and enable consistent quality. However, key to the concerns of product industries are (at least) two values-based activities - customer acceptance and public brand image. These two channels can be used to leverage the desirability of values thinking within the product organization, and between them and the customers.
By focusing more attention on a product’s “higher purposes” and intentional values, organizations can build customer awareness of those values, creating a feedback loop that strengthens the desirability of maintaining them. For example, automobile manufacturers that value safety, minimal environmental impact, and comfort can clearly specify how these goals are to be met in new cars. By exceeding world-class safety standards and promoting high-efficiency engines and recyclable materials, Volvo and Subaru both can fairly use these themes within advertising and back up the promises in the products themselves. I find it interesting that cars easily lend themselves to this type of values analysis, while software remains elusive to clear interpretation - perhaps computing remains too technologically rooted, and not yet customer-driven to the point of simpler understanding of these relationships. Rather like Ford’s mass-produced cars in the 1930s - the technical values of reliability and affordability were much more important than social values such as environmental impact or traffic safety. Once the car moved from utilitarian appliance to the standard conveyance for working citizens around the world, design values became more important to the buyer, and identifying the product with values systems became more apparent. Perhaps software will evolve in much the same way, especially with the heightened interest and attention to Microsoft during the antitrust cases moving through the courts. Interviews and surveys with my PDE participants show how readily Microsoft software was identified with questionable values and suspect intentions. With a more critical public awareness of Microsoft’s market aggression, consumers may be less inclined to identify with the control aspects of a Microsoft “software philosophy,” and will begin to identify values characteristics in many software packages.
With a clearer understanding of the relationships between organizations, values problems, and system design processes we might identify the values positions of Microsoft and other providers of information and communications technologies. Models of valuing will enable choices based on values other than efficiency and control. Software and product designers will have tools for making values impact assessments stemming from their design decisions. Awareness of values impacts in the design process will conceivably migrate to other processes in organizational management, leading to awareness of organizational and social impacts of values broadly across institutions.
Copyright © 2000, Peter H. Jones