Category: Systemic thinking

  • Systemic Problem Solving

    Wicked problems need a systemic approach to problem-solving that investigates and models problems from a human-activity perspective before attempting to define solutions. In IT design, we tend to define requirements for a solution with very little inquiry into the problem. Systemic approaches spend more time on inquiry – the solution just follows from the definition of the current problem. I emphasize “current problem” because a systemic approach recognizes that wicked problems are too complex to define or resolve all at once. We therefore need a divide-and-conquer approach to problem-solving, as shown in Figure 1.

    Figure 1. A Systemic Problem-Solving Approach

    Any effective approach to solving wicked problems must be iterative. It must identify subsets of the problem, investigate these, model how various elements of the problem are related, and then develop a shared vision across stakeholders for how to resolve this subset of problems, before taking action to implement the negotiated solution. At each stage, you may need to revisit aspects of the problem determined at a previous stage:

    • As you explore and model problem elements, you may need to perform more investigation in order to understand these elements better;
    • As you develop a shared vision for the solution and agree how to resolve these aspects of the problem, you may need to explore problem elements in more detail and revise the models of how they are related, both to other problem elements and to the business environment and goals;
    • As you take action to resolve the problematic situation, you may need to revise your scope or focus, as the feasibility or implications of change become clear. At this point, you may need to revisit the shared vision and priorities with stakeholders, to resolve these constraints;
    • Following implementation of the agreed solution, you will need to investigate and appreciate the situation as it now stands. Making changes means that the “big picture” problem will have changed.
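    The loop above can be sketched in code. This is a minimal sketch of the divide-and-conquer cycle, not a prescribed method: the function and field names, and the toy “urgency” scoring, are all invented stand-ins for the much richer inquiry, modeling, and negotiation the text describes.

```python
# Sketch of the iterative, divide-and-conquer problem-solving loop.
# All names and the "urgency" scoring are illustrative assumptions.

def solve_wicked_problem(problems, cycles=3):
    """Each cycle carves off a subset of the problem space, models it,
    agrees a shared vision, implements it, then re-appraises what is left."""
    resolved = []
    for _ in range(cycles):
        if not problems:
            break
        # Stage 1: identify a tractable subset (here, the most urgent problem)
        subset = [max(problems, key=lambda p: p["urgency"])]
        # Stage 2: investigate and model the subset (a trivial stand-in)
        model = {p["name"]: p["urgency"] for p in subset}
        # Stage 3: negotiate a shared vision and implement the agreed changes
        resolved.extend(model)
        # Stage 4: re-appraise -- resolving one subset changes the big picture
        problems = [p for p in problems if p["name"] not in model]
        for p in problems:
            p["urgency"] += 1  # unresolved problems tend to grow more urgent
    return resolved, problems
```

    The essential point is the last step of each cycle: the remaining problem space is re-appraised after every change, rather than being fixed once at the start.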

    Any systemic changes have waves of consequences, only some of which can be anticipated and modeled in advance. Other consequences only become apparent after a time-lag, at which point remedial action may be required. We therefore need an iterative planning approach – one that models how the organization works now, at the start of each cycle of change. This is shown in Figure 2.

    Figure 2. An Iterative Approach to Systemic Change Management

    The spiral follows the stages of analysis shown in Figure 1. Each cycle breaks off a different subset of problems, which are modeled, defined, and refined in the second stage. Of course, there is feedback between stages – as shown in Figure 1 – because you need to explore stakeholder perspectives in more detail as you produce problem cause-and-effect models, or rework the clustering of problems into different “sub-systems” of work activity, as stakeholders explore the implications of clustering problems in this way. The third stage produces a set of IT and work-design prototypes that allow stakeholders to explore the implications of defining problems in this way. Once they agree that this is how they wish to change things, the fourth stage implements changes across the organization and evaluates the outcomes. We then come full circle, to investigate what problems we face now.

  • Thinking Systemically

    Why are systems changes counter-intuitive?

    The human mind is not adapted to understanding the consequences of a complex mental model of how things work. Internal contradictions between future structures and their future consequences become difficult to balance when multiple mental models are involved. Most people are “point thinkers” who see only part of the big picture, and it is easy to misjudge the effects of change when you base your arguments on a subset of causes and effects. Systemic thinkers analyze interactions between the various factors affecting a situation, to understand the cycles of influence that affect our ability to intervene in changing the situation.
    An example of two counter-intuitive sets of influences is shown in the “systemic” view of a state-run social welfare system, given in Figure 1.

    [Image: interconnections between elements of local society affected by raising social welfare payments: business attractiveness, the response of the wealthy, the effect on the tax base, and the impact on the number of people seeking work]
    Figure 1. A Systemic View of A State-run Social Welfare System

    We have two opposing “vicious cycles” of influence, in this model:

    • The left-hand cycle reinforces the argument that raising welfare payments encourages “entitlement” and disincentivizes work.
    • The right-hand cycle supports a counter argument, that raising welfare payments attracts job-seekers and increases employment, raising tax base revenue, to provide a net benefit (as well as humane assistance).

    Unless a complete view of the whole system of interrelated processes is obtained, well-intentioned changes have unintended consequences. It is necessary to understand the system as a whole, rather than individual effects between factors, to understand these “vicious cycles” of cause-and-effect. Tweaking one element will affect others, with knock-on effects as the two cycles interact. The only way to ensure a specific outcome is either to intervene to break a cycle of influence (changing the relationship between factors), or to appreciate the interaction effects well enough to predict the outcome.
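    The interaction between the two cycles can be made concrete with a toy simulation. The coefficients below are invented purely for illustration; they are not empirical claims about any real welfare system.

```python
# Toy simulation of two opposing influence cycles acting on one quantity.
# Coefficients are invented assumptions, not empirical estimates.

def net_employment(payment_rise, disincentive=0.5, attraction=0.8, steps=10):
    """Trace employment under a welfare-payment rise, with the left-hand
    cycle shrinking it and the right-hand cycle growing it at each step."""
    employment = 100.0
    for _ in range(steps):
        employment -= disincentive * payment_rise  # left-hand cycle
        employment += attraction * payment_rise    # right-hand cycle
    return employment
```

    With these (hypothetical) weights the right-hand cycle dominates, so the outcome of a payment rise is the opposite of what the left-hand cycle alone predicts; swap the weights and the prediction reverses. A “point thinker” reasoning from either cycle in isolation cannot tell which way the system will move.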

    Typically, systems requirements analysis fails for two reasons:

    1. The analysis is not sufficiently systemic – it reduces a complex system down to a subset of activities and work-goals that can be understood.
    2. The analysis is too focused on what can be computerized. It does not analyze what needs to change, but what the IT system should do to support an [implicit] set of changes.

    Using Systems Thinking

    We can start by analyzing the systems of human-activity: what people do to achieve various purposes in their work. The weakness of typical work or IT analysis methods is that they over-simplify the analysis, picking one business goal or individual purpose to focus on and attempting to merge in all the other purposes that people pursue when evaluating their work. For example, when I worked with a UK charity to evaluate their use of IT, I discovered that the managers administering the charity stores had multiple objectives, many of which conflicted in the detail of how they were achieved (the following is just a subset):

    1. To maximize income for the charity by selling goods through the charity stores – this is used for both UK and foreign charity assistance
    2. To ensure consistent pricing of goods, so customers did not complain
    3. To assist residents of poorer areas by ensuring a supply of reasonably-priced clothes, especially warm coats and footwear in winter, and cool workwear in summer (charity is most effectively supported at the source)
    4. To maximize donations of high-quality goods
    5. To minimize the handling cost of donated goods that are poor quality and cannot be sold
    6. To provide a community-oriented work environment in the stores and donation-sorting facilities
    7. To support projects in other countries that assist low-income communities by importing and selling hand-crafted goods.

    It can be seen that objectives 1, 2, and 3 conflict. Maximizing income often meant differential pricing, so residents of affluent areas paid more, and residents of poorer areas could afford the clothes they needed. Low-quality or damaged clothes were often donated and cost the charity a lot of effort to dispose of, or sell to clothing recyclers, who would shred and recycle the fibers into new clothes or other products. Support of community development projects in low-income countries meant that the charity often had to subsidize imported craft goods. So this set of objectives required a lot of nuanced decision-making around pricing, distributing, and selling goods in the stores. There was no single algorithm that could be applied to guide the charity’s IT systems. In almost every case, the answer to “how do you decide how to do X” was “it depends … .”

    There is no answer that frustrates systems analysts more, as IT requirements are predicated on a single goal!
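    To see what “it depends” looks like when you try to pin it down, here is a hypothetical sketch of pricing rules that balance objectives 1 (maximize income), 2 (consistent pricing), and 3 (affordability in poorer areas). The income bands and multipliers are entirely invented; the point is that even this crude version is a tangle of conditions, not a single formula.

```python
# Hypothetical pricing rules for the charity stores. Bands and multipliers
# are invented for illustration; the charity had no such single algorithm.

def price_garment(base_price, area_income, is_winter_essential=False):
    """Return a store price that flexes with local income (objectives 1, 3)
    while staying close enough to the base price that customers do not
    perceive inconsistency (objective 2)."""
    if area_income == "affluent":
        price = base_price * 1.25        # objective 1: charge what the market bears
    elif area_income == "low":
        price = base_price * 0.75        # objective 3: keep goods affordable
    else:
        price = base_price               # objective 2: the default, consistent price
    if is_winter_essential and area_income == "low":
        price = min(price, base_price * 0.60)  # warm coats must stay within reach
    return round(price, 2)
```

    Every extra objective from the list above (handling costs, subsidized craft imports) would add further branches, and some branches would contradict others, which is exactly why the managers’ honest answer was “it depends.”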

    Soft Systems Analysis

    Typically, we define a single goal that conflates the multiple, often conflicting, objectives of the system of organizational work. This complicates, rather than simplifies, the design of work-systems, as it excludes support for the multiple other purposes that people aim for in their work. Many times, purposes conflict with each other (like a healthcare system aiming to both manage costs and optimize health improvement outcomes). We need to be nuanced in designing systems that balance support for various objectives. This nuanced design requires a systemic approach, where we consider what human-activities need to be performed for each outcome, before reconciling these with support for other change objectives. This requires a recursive (spiral) approach to design, where we periodically “complicate” our thinking – and then ask “so what do we do now, understanding this new information?”

    To deal with nuanced decisions, and systems of work that have multiple, conflicting objectives, we can use Soft Systems Analysis. Soft Systems are related processes of human activity, divided up in ways that make the processes within each subsystem appear to accomplish a single purpose.

    By breaking up work processes in this way, we will end up with multiple systems that contain the same processes. Typically, systems analysts attempt to prioritize objectives, designing each process for one purpose of work. The whole point of thinking systemically is to hold these multiple purposes in mind, supporting the real-world conflicts that managers and others face in their day-to-day work by giving them the information they need to make nuanced decisions.

    Soft Systems Methodology (SSM) takes a divide-and-conquer approach to analyzing a problem-situation. We represent the problem-situation with as little extraneous structure as possible, as a set of interactions between people-doing-things.

    • We separate the subsets of activity performed for an identifiable purpose
    • We model each subset as an “ideal world” process-flow.
    • Comparing each to the real world allows us to define actionable changes, which recognize organizational, political, and economic constraints.
    • Finally, we prioritize the resultant changes, to produce a plan of action.
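    A minimal way to see why the same processes recur across soft systems is to model each purpose as a set of activities. The purpose and activity names below are invented, loosely following the charity example above; a real analysis would derive them from stakeholder inquiry, not assert them.

```python
from collections import Counter

# Invented purposes and activities, loosely following the charity example.
# Each "soft system" groups the activities needed for one identifiable purpose.
soft_systems = {
    "maximize-income":   {"price-goods", "sell-goods", "sort-donations"},
    "assist-poor-areas": {"price-goods", "stock-winter-clothes", "sell-goods"},
    "minimize-handling": {"sort-donations", "recycle-unsellables"},
}

def shared_activities(systems):
    """Return activities that appear in more than one purposeful system:
    the places where nuanced, multi-objective decisions have to be made."""
    counts = Counter(a for activities in systems.values() for a in activities)
    return {a for a, n in counts.items() if n > 1}
```

    Here pricing, selling, and sorting each serve more than one purpose, so designing any of them for a single objective (as conventional analysis does) would silently break the others.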

    There is a more complete explanation of Soft Systems Analysis on my design website, ImprovisingDesign.com

  • On Realizing The Relevance of Actor-Network Theory


    A recent emphasis on sociomateriality appears to have entered the IS literature because of discussions by Orlikowski (2010) and the excellent empirical study of Volkoff et al. (2007). Now that people have been sensitized to the literature on material practice, actor-network theory is classified as “tired and uninformative” [1]. Which leads me to wonder: just how many IS academics have actually read the actor-network theorists, or pondered how their work applies to technology design?

    Long before people started discussing socio-material “assemblages,” Bruno Latour (1987) and John Law (1987) were discussing how technology developed by means of “heterogeneous networks” of material and human actants, the combination of which directs the trajectory of technology design and form. Latour (1999) suggests that he should recall the term “actor-network,” as this is too easily confused with the world-wide web. Yet actor-networking – in the sense of a web of connectivity, where heterogeneous interactions occur between diverse individuals, between virtually-mediated groups, and between individuals and material forms of embedded intentionality – is exactly what is going on in today’s organizations.

    In addition, Michel Callon (1986) showed how “problematizing” a situation in ways that align the interests of others leads to their enrolment in a network of support for a specific technological frame. Once support has been enrolled, such networks confer irreversibility, which makes changes to the accepted form of a technology solution incredibly difficult. So we have paradigms that are embedded in a specific design. Akrich coined the term “script” to define the performativity of technology, and the term was adopted by the other leading actor-network theorists [2]. This thread of work articulates, in great depth, the ways in which technology design directs its users (and maintainers) into a set of roles and worldviews that are difficult to escape. We must “de-script” technology to repurpose it for other networks and other applications – which is much more difficult than one would suppose, given the embedded social worlds that are carried across networks of practice with the use of common technologies (Akrich 1992).
    So what does actor-network theory give us? It provides a conceptual and practical approach to understanding and modeling why design takes specific forms – and what needs to be “undone” for a design to be conceived differently than in the past [3]. It provides a rationale for understanding technology as a network actor in its own right, influencing behavior and constraining discovery. The assumptional frameworks for action embedded in – for example – a software book-pricing application will direct the evaluation of price alternatives in ways that reflect the model of decision-making adopted by the software’s author. This results in the type of stupid automaticity that recently saw an Amazon book priced at $23,698,655.93 (plus $3.99 shipping). The cause of this pricing glitch was traced back to an actor-network of two competing sellers, unknowingly connected via their use of the same automated pricing software [4].
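    The book-pricing incident is easy to reconstruct as a sketch. The repricing multipliers below are close to those Michael Eisen reports in the post linked in note [4] (one seller pricing just below its competitor, the other well above); treat the exact figures as illustrative rather than authoritative.

```python
# Sketch of the runaway actor-network behind the $23M book. Multipliers
# approximate those reported in the post cited in note [4]; illustrative only.

def runaway_prices(p1, p2, days):
    """Seller 1 reprices just below seller 2; seller 2 reprices well above
    seller 1. Because 0.9983 * 1.270589 > 1, both prices grow without bound."""
    for _ in range(days):
        p1 = 0.998300 * p2   # seller 1: undercut the competitor by a sliver
        p2 = 1.270589 * p1   # seller 2: price above, banking on reputation
    return p1, p2
```

    Neither rule is irrational on its own; the absurd outcome emerges only from the interaction of the two scripts – precisely the kind of network effect that actor-network theory directs us to look for.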

    Finally, I want to observe that a lot of the recent “materiality of practice” literature has identified new phenomena and new mechanisms of actor-networks. For example, Knorr Cetina (1999) has sensitized us to how epistemology is embedded in socio-technical assemblages, Rheinberger (1997) has demonstrated how some technical objects are associated with emergence while others enforce standardization, and Henderson (1999) demonstrates how the use of specific representations can conscript others around an organizational power-base. But I would argue that these effects can be understood by using Actor-Network Theory as one’s underpinning epistemology – and that exploring actor-network interactions continues to reveal ever newer mechanisms that are relevant to how we work today. I would strongly recommend Bruno Latour’s latest book, Reassembling The Social.

    Notes:
    [1] I have to declare an interest here – this comment was contained in a review of one of my papers … 🙂
    [2] As Latour (1992) argues: “Following Madeleine Akrich’s lead (Akrich 1992), we will speak only in terms of scripts or scenes or scenarios … played by human or nonhuman actants, which may be either figurative or nonfigurative.”
    [3] One of my favorite papers on the topic of irreversibility in design is ‘How The Refrigerator Got Its Hum,’ by Ruth Cowan (1995). Another good read is the introduction to the same book by MacKenzie and Wajcman (1999).
    [4] The amusing outcome is recounted by Michael Eisen, at http://www.michaeleisen.org/blog/?p=358

    References:
    Akrich, M. 1992. The De-Scription Of Technical Objects. W.E. Bijker, J. Law, eds. Shaping Technology/Building Society: Studies In Sociotechnical Change. MIT Press, Cambridge, MA, 205-224.
    Callon, M. 1986. “Some elements of a sociology of translation: domestication of the scallops and the fishermen of St Brieuc Bay.” J. Law, ed. Power, Action, and Belief: a New Sociology of Knowledge? Sociological Review Monograph 32. Routledge and Kegan Paul, London, 196-233.
    Cowan, R.S. 1995. “How the Refrigerator Got its Hum.” D. Mackenzie, J. Wajcman, eds. The Social Shaping of Technology. Open University Press, Buckingham UK, 281-300.
    Henderson, K. 1999. On Line and on Paper: Visual Representations, Visual Culture, and Computer Graphics in Design Engineering. MIT Press, Cambridge, MA.
    Knorr Cetina, K.D. 1999. Epistemic Cultures: How the Sciences Make Knowledge. Harvard Univ. Press, Cambridge, MA.
    Latour, B. 1987. Science in Action. Harvard University Press, Cambridge MA.
    Latour, B. 1992. “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts.” W.E. Bijker, J. Law, eds. Shaping Technology/Building Society: Studies In Sociotechnical Change. MIT Press, Cambridge MA.
    Latour, B. 1999. “On Recalling ANT.” J. Law, J. Hassard, eds. Actor Network and After. Blackwell, Oxford, UK 15-25.
    Law, J. 1987. “Technology and Heterogeneous Engineering – The Case Of Portuguese Expansion.” W.E. Bijker, T.P. Hughes, T.J. Pinch, eds. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. MIT Press, Cambridge MA.
    MacKenzie, D.A., J. Wajcman. 1999. Introductory Essay. D.A. Mackenzie, J. Wajcman, eds. The Social Shaping Of Technology, 2nd. ed. Open University Press, Milton Keynes UK, 3-27.
    Orlikowski, W. 2010. “The sociomateriality of organisational life: considering technology in management research.” Cambridge Journal of Economics 34(1) 125-141.
    Rheinberger, H.-J. 1997. “Experimental Systems and Epistemic Things.” Toward a History of Epistemic Things: Synthesizing Proteins in the Test Tube. Stanford University Press, Stanford CA, 24-37.
    Volkoff, O., D.M. Strong, M.B. Elmes. 2007. “Technological Embeddedness and Organizational Change.” Organization Science 18(5) 832-848.

  • Organizational Coordination


    I have been working for a while on comparing the results from some very complex research studies of collaborative design, in groups that span disciplines or knowledge domains. I was stunned to realize that I was seeing different types of group activity, depending on the sort of organization involved.

    By “organization,” I mean the way in which work is organized, not the sort of business the group is in. I noted three types of organization, which seem to respond to collaboration in different ways:

    • Tightly-coupled work organizations rely on well-defined work roles and responsibilities to coordinate tasks across group members. When people in this sort of group have to make decisions, they partition these decisions based on expertise. Because they all know each other’s capabilities and roles, they don’t have to think about who-knows-what: this is just obvious. This type of organization falls down when people don’t perform their role reliably. For example, if the whole system relies on accurate information coming into the group, someone who misinterprets what they observed can undermine the whole group system.
    • Event-driven organizations rely on external crises and pressures to coordinate group action. People in this sort of group have strongly-defined roles in the wider organization that take precedence over their role in the group — for example, in management taskforce groups, business managers tend to prioritize their other work over problems that the group needs to fix. When people in this sort of group make decisions, they partition these decisions according to who-claims-to-know-what, who has time to do the work, and who knows people connected to the problem. They get to know each other’s capabilities over time, but this is a slow process, as priorities and decisions are driven by external events rather than a shared perception of what needs to be done. This type of organization falls down when decisions or actions that were put on a back burner because of another crisis inevitably become a crisis themselves, because they were not followed through.
    • Loosely-coupled organizations rely on ad hoc work roles and cooperation among group members. This type of group is commonest in business process change groups, professional work-groups, and community groups, where people are there because they share an interest in the outcome. When people in this sort of group make decisions, they partition these decisions according to who can leverage external connections to find things out and who has an interest in exploring what is involved. People often share responsibilities in these groups, comparing notes to learn about the situation. This type of organization falls down because it is hard to coordinate: shared tasks are performed badly when someone knows something vital that they fail to communicate back to the group.
    Why would we care about these different types of organization? Well, these structures affect how we approach problem-solving and design. If we (process and IS analysts) need to work with one of the tightly-coupled work-groups, we need to identify who has the decision-making capability for what. It would not occur to a tightly-coupled group member that anyone would not realize who to go to for what. If we need to work with an event-driven group, we have to realize that our work will not be a priority for them – it must be made a priority by gaining an influential sponsor who can kick a$$ within the group(!). If we work with a loosely-coupled group, we need to engage the interest of the group as a whole. Working with individuals can lead to failure, as this type of group makes decisions collaboratively, not on the basis of knowledge or expertise.
    [Image: coordinating group work can be like taming wild horses]


    I have a fair amount of evidence for this line of thought and I am pursuing other factors that make these groups different. More to follow …

  • Double Loop Learning in Design


    Double-loop learning occurs when we question the values, assumptions, and recipes-for-success that we typically apply to a situation. This type of paradigm-shift is essential when the business environment, or the context of work, changes.
    Typically, we learn how to do something well and we keep on applying that recipe-for-success. It is called expertise. We are proud of the knowledge and experience that led to our becoming an expert and so we tend not to question this. But when things change, expertise can become a handicap.

  • Design as a trajectory of goal-definitions


    The focus of IS design has moved “upstream” of the waterfall model, from technical design to the co-design of business processes and IT systems. This focus requires an improvisational design approach, because IT-related organizational innovation deals with wicked problems.

    Wicked problems tend to span functional and organizational boundaries as business process and information management problems are intertwined.  There are clusters of interrelated problems:  these cannot be defined objectively because the problem is defined differently, depending on who you ask.  IS designers cannot analyze this type of problem in isolation – we need to involve diverse groups of stakeholders in negotiating suitable problem definitions and boundaries for change.  But wicked problems also involve distributed knowledge, where understanding of the problems is stretched across (rather than shared between) stakeholders. 

    So design goals evolve, as designers and stakeholders learn more about the context and the problems facing the organization by engaging in incremental change.   This is often approached by means of agile design methods. But our lack of understanding about how to establish a “common language” for this type of design means that information system innovation tends to be pretty hit-and-miss. Most design initiatives spend more time arguing about process definitions than achieving change. We need a new approach that focuses on the co-design of business (process) and IT systems: a collaborative process that involves problem stakeholders as collaborators in analyzing change. This is the basis of improvisational design.

    Goal Emergence in Design

    The collaborative design of system solutions for wicked problems seems to follow a trajectory of goals, as the group’s understanding of the design progresses. The key to making (and evaluating) progress is understanding what triggers the changes in goal-direction.

    From my research studies, it seems that goal changes are triggered by breakdowns in individual buy-in to the group’s consensus definition of the design vision. Both the breakdowns and the most important parts of the vision are concerned with how the design problem is structured and defined — not (as we usually assume) how the designed system will work. Of course, the solution is important: individual group members constantly test their understanding of the problem against the emerging solution, then realize that the design goals need to change. But it is the consensus problem-vision that drives design goals.

    An important implication of this design model concerns how to manage design effectively. We need to keep influential decision-makers in the loop when design goals are redefined; otherwise they see only the start and end points, and the natural response is “what took you so long?” Managing external expectations is key to design success.

    This blog discusses how we design information system solutions for real-world problems.