Le blog d'Archiloque

Book review: “The unaccountability machine: why big systems make terrible decisions — and how the world lost its mind”

The book’s title is a bit odd, and the subtitle could make you expect a catastrophic or even messianic tone, but it’s not that kind of book.

Dan Davies mentions that at first he thought he had found a great idea around unaccountability and wanted to write a book about it, but, as he worked on it, he realized he was wrong and changed his mind.

Instead, it’s a solid book about cybernetics and systemics: solid but witty, and sad with a dash of hope, which means it feels strongly related to The Systems Bible[1].

The book is about three topics:

  1. The work and life of Stafford Beer, involving operational research and management cybernetics

  2. How large human systems like companies or organisations work, from a systemic point of view

  3. The economy, economic theories and economists

Dan Davies rotates from one topic to another through the chapters, slowly increasing the complexity of the systems he describes. At first the transitions are sometimes a bit stiff, but as the book goes on the different parts become intertwined.

Actually, cybernetics is more than a feedback loop

Before reading the book, I knew cybernetics involved decision trees and feedback loops and little robots, but I didn’t know that some cyberneticians worked on management.

Stafford Beer’s model of organisation involves five “systems”, one above the other. Here is a crude summary:

  • System 1 is the day-to-day operations

  • System 2 enforces rules

  • System 3 tries to coordinate things toward a purpose

  • System 4 takes policy decisions

  • System 5 is in charge of defining the identity of the organisation

As I’m spending too much time with software, I’m tempted to see these systems as layers, and indeed they have some parallels with hierarchical organisation charts, but they are not identical to them because the same people should be involved in several systems.

The idea behind these systems is how to deal with the quantity and variety of information: just enough of the right kind of information circulates through each system, so the organisation is able to take the decisions it needs without being flooded.
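
Since I see everything through software, here is a minimal sketch of that filtering idea in Python. It is my own illustration, not Stafford Beer’s formalism, and every name in it is made up: each level only passes up an aggregated summary of what happens below it, so the level above can decide without drowning in raw detail.

    from statistics import mean


    class Operations:
        """Roughly "System 1": day-to-day work producing lots of raw measurements."""

        def __init__(self) -> None:
            self.measurements: list[float] = []

        def record(self, value: float) -> None:
            self.measurements.append(value)


    class Coordination:
        """Roughly "Systems 2 and 3": rules and coordination toward a purpose."""

        def __init__(self, target: float) -> None:
            self.target = target

        def summarise(self, ops: Operations) -> dict[str, float]:
            # Attenuate variety: many raw measurements become three numbers.
            average = mean(ops.measurements)
            return {
                "average": average,
                "worst": min(ops.measurements),
                "gap_to_target": self.target - average,
            }


    class Policy:
        """Roughly "Systems 4 and 5": policy decisions and organisational identity."""

        def decide(self, summary: dict[str, float]) -> str:
            # The top level never sees the raw data, only the filtered summary.
            if summary["gap_to_target"] > 10:
                return "reallocate resources to operations"
            return "stay the course"


    ops = Operations()
    for value in (80, 95, 110, 70, 90):  # raw operational detail
        ops.record(value)

    summary = Coordination(target=100).summarise(ops)
    print(Policy().decide(summary))  # -> "reallocate resources to operations"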

Black boxes, doing what they do

Stafford Beer coined an expression famous among people working with systems: “The purpose of a system is what it does”, aka POSIWID. POSIWID means that the best way to understand a system is to look at what it does.

It can sound trivial or even stupid, but it means that looking at what a system does is a better approach to understanding it, and to improving it, than studying the intentions of the people who operate — or think they operate — the system.

An example of this is studying a system’s components as black boxes, meaning that their inner workings are not visible: the focus is on what comes in and out of the box, instead of on how things work inside the box.
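
In software terms, the black-box stance can be sketched as an interface that only declares what goes in and what comes out; this is my own illustration with made-up names, not something from the book.

    from typing import Protocol


    class OrderFulfilment(Protocol):
        """What we observe from outside: orders go in, delivery estimates come out."""

        def handle(self, order_id: str, quantity: int) -> str:
            """Returns a delivery estimate; how it is produced stays inside the box."""
            ...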

Accountability sinks

The author defines an accountability sink as something (a person or an automated system) in charge of taking decisions, plus a policy that prevents appealing these decisions.

These sinks absorb negative emotions triggered by the decisions, and thus break the information flow, preventing feedback loops.
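
Again through a software lens, a hypothetical sketch of such a sink could look like this (the names are invented for the example): decisions flow out, but the appeal channel is a dead end, so no information ever flows back to whoever wrote the policy.

    from typing import Callable


    class AccountabilitySink:
        def __init__(self, policy: Callable[[dict], bool]) -> None:
            self.policy = policy  # written elsewhere, by someone else

        def decide(self, case: dict) -> bool:
            return self.policy(case)  # "computer says no"

        def appeal(self, complaint: str) -> str:
            # The negative feedback is absorbed here: nothing is recorded, nothing
            # is escalated, and the policy can never be revised through this channel.
            return "We are sorry, but the decision is final."


    sink = AccountabilitySink(policy=lambda case: case.get("score", 0) >= 700)
    print(sink.decide({"score": 650}))                   # False
    print(sink.appeal("But my circumstances changed!"))  # the complaint goes nowhere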

The book’s title, “unaccountability machine”, refers to the kind of thinking or system design that produces these accountability sinks at large scale.

This is where the author links cybernetics and economists, or at least some kinds of economists, who provided organisations — for-profit companies but also public institutions — with arguments to justify becoming unaccountability machines.

Stafford Beer, on the other end of the spectrum, worked for Salvador Allende’s Chile to try to create a public decision system based on his theories. Pinochet’s coup stopped the experiment.

What am I leaving the book with?

The book provides a set of theoretical tools, but no ready-to-use model that you can apply to an organisation to make it functional; that would go against the core idea of cybernetics.

The book was not groundbreaking to me, but it provided me with a cohesive framework where I could fit lots of observations and insights. I just hope I won’t be insufferable about it.

From a professional perspective, I was very saddened to discover Stafford Beer’s model only now.

Saddened because I think the book’s model should be easy to teach to software people, while what agile people are taught about feedback loops and organisations is nearly always limited to doing a team-sized retrospective after each iteration. What a waste.

Some quotes (actually lots of them)

Systems don’t have motivations, so they don’t have hidden motivations. If the system consistently produces a particular outcome, then that’s its purpose. But on the other hand, systems don’t make mistakes. Just as it’s impossible to get lost if you don’t know where you’re going, a decision-making system does what it does and then either lives with the consequences or dies of them.

Nearly all the commands and constraints which afflict the modern individual, the decisions which used to be made by identifiable rulers and bosses, are now the result of systems and processes.

For a while, in the early days of this profound social change, people tried to understand what was going on. The study of decision-making systems was a big thing — they called it “cybernetics”. But for a number of reasons, it never took off.

The most important reason was that the new science was a victim of its own technological success. The key figures in the early days of cybernetics — people like Norbert Wiener, John von Neumann and Claude Shannon — are almost all much more famous as pioneers of computing. In trying to invent a mathematical language to describe their problem, they quickly made a lot of discoveries relating to the representation of information and the operations that could be performed on it. These discoveries turned out to be extremely useful in the design of electronic circuits. Consequently, there was a huge and rapid brain drain away from the abstract study of decision-making systems and into the new industry of manufacturing computers.

The way in which the economics profession ignored most of the work done in information theory is striking. It ignored its own traditions relating to uncertainty and decision-making, instead ploughing ahead with a frighteningly simplistic view of the world in which everything could be reduced to a single goal of shareholder value maximisation.

That’s the purpose of making policies — to reduce the amount of time and effort spent making decisions on individual cases. However, it’s also the root cause of this sort of problem. When you set a general policy, you either need to build in a system for making exceptions (and make sure that it is used), or you need to be confident that all the outcomes of enforcing that policy will be acceptable.

Academic politics is notoriously vicious, and academic careers tend to intersect a lot — what goes around comes around, and people need to collaborate. In that sort of environment, a system in which academics directly assessed each other’s promotion cases would cause all sorts of interpersonal problems; it would be difficult to work productively with someone if you were known to have previously judged their research to be less excellent than one of their peers.

So although the citation index is in all probability a bad measure that seems to lock the universities into an expensive and unsatisfactory publishing model, the outsourcing of the academic performance measurement system is a solution rather than a problem. It redirects potentially destructive negative emotions to a place where they can be relatively harmlessly dissipated.

If you trace back many important decisions of the last few decades, you will regularly come up against the uncomfortable sensation that the unacknowledged legislators are relatively junior civil servants who put placeholder numbers in spreadsheets, which are later adopted as fundamental constraints; to do otherwise would mean someone having to risk being criticised for making a decision.

The principle of diminishing accountability: unless conscious steps are taken to prevent it from doing so, any organisation in a modern industrial society will tend to restructure itself so as to reduce the amount of personal responsibility attributable to its actions. This tendency will continue until crisis results.

If a management consultant is capable of achieving anything, doing so will involve explaining to a group of people that the way in which they are organised is stopping them from achieving the goals that they are aiming for as individuals.

This management theory is based neither on control nor on delegation, but on accountability between the parts of a business. People were able to make decisions and change their mind, as long as they could justify those decisions to anyone else who was affected. And it was a system of accountability based on the flow of information; the higher functions of the system were responsible for creating the goals, ensuring their consistency with each other and with the resources available. They then communicated their sense to the operating levels and set the system in motion. And then the system repeated, with the results of the initial plans forming the information set used to revise them.

Knowing a great deal of detail about a subset of a system has a habit of increasing your confidence in your opinions disproportionately from their reliability.

Here’s the thing: working inside a corporation (or any large organisation) is the quickest way to realise that you have only a partial understanding of how it works. You find yourself involved with decisions, but you know that you make them on the basis of collective understanding in line with policies, with regard to the sensitivities of other divisions, based on the information provided, and so on. There are amazingly few occasions in everyday business life when someone makes a specific and important decision as a creative act. The daily grind of working life is the selection of the option that looks least obviously disastrous, according to a set of criteria laid out in a plan that was produced elsewhere.

  • Information only counts if it’s being delivered in a form in which it can be translated into action, and this means that it needs to arrive quickly enough.

  • Systems preserve their viability by dealing with problems as much as possible at the same level at which they arrive, but they also need to have communication channels that cross multiple levels of management, to deal with big shocks that require immediate change.

And this story repeats itself through the history of management science; almost every classic of the literature seems to have described a way of adapting systems to a more complicated world, and then to have become obsolete itself. If you look past the slogans and think about what things like “management by objectives”, “focus on core competences” and so on actually mean, they are all different ways of advising executives to restructure their businesses so that they don’t generate complexity faster than it can be managed. Meanwhile, the management consultancy industry prospered by selling the myth that it might be possible to economise on management capacity by renting it when you needed some rather than paying it a salary.

But because the solutions are often simple, the work is surprisingly unpleasant. An effective consultant is likely to spend most of their time telling people obvious things that they don’t want to hear. That’s a difficult combination; while not particularly intellectually stimulating, it’s emotionally taxing. It’s not surprising that so many people find doing this intolerable, and consequently let their ethics slip. Telling your client what they want to hear is a better way to get repeat business; the problem won’t go away and the person commissioning the work will still like you.

To recapitulate, the basic problem is that systems in general need to have mechanisms to reorganise themselves when the complexity of their environment gets too much to bear.

Outsourcing is a contractual relationship, and contracts (like debts) are typically all-or-nothing affairs. Either a requirement has been fulfilled or it hasn’t. The typical outsourcing contract narrows the bandwidth of the communication channel to either “everything is going more or less as anticipated” or “it’s stopped working and we need to find out why”. If it’s the first of these two cases, everything’s fine. If it’s the second, the continued stability of the system will depend on how much information-handling capacity can be brought to bear to address whatever problem has arisen.

Since the outsourcing communication channel is designed to spend most of its time transmitting “everything’s OK”, it’s difficult to guess how much additional bandwidth needs to be allocated to carry messages like “but the following conditions are changing which might affect things in the near future” — let alone how much might need to be allocated at short notice when it starts to say “everything’s no longer OK”. If you have targets to make, it will always be tempting to cut out a bit of spare capacity.

A funny thing about ideology is that it’s difficult to confine it to where it’s useful.

A company that sells goods and services for profit can never completely sever the connection which takes information from its customers; the people who buy the thing have the ability to refuse to do so. In many cases, people who interact with the state don’t even have the ability to transmit that single bit of information because they can’t shop elsewhere; they can complain if they like, but they interact with the service representative, the paradigmatic accountability sink.

Every decision-making system set up as a maximiser needs to have a higher-level system watching over it.


1. Or maybe all books written by people working on systemantics are witty and sad with a dash of hope?