Auditing culture, and how metrics proliferate
Because “independence” is a desired trait in auditors, the more important the audit, the more likely it is that the auditor is not, and cannot be, part of the organisation they are auditing. Their unfamiliarity with the organisation’s politics is considered a virtue, since they are neither influenced by nor part of those politics. The cost is that they cannot account for the social dynamics inside the audited organisation, which makes them likely to act to the benefit of those who already hold structural and social power.
This means they have to fall back on auditing against industry “best practices”, and local adaptations are viewed at best with suspicion, if not as an outright flaw. It also means that the audited organisation, if it wants to pass an audit, needs to adopt these best practices, regardless of whether they make sense for that particular organisation. An external auditor who arrives and finds well-made, locally-adapted processes can, at most, tell that these are not the standard practices that are considered good.
This is exacerbated by the time limits the auditor faces. The auditor’s neutrality has a shelf life: the longer they are embedded in the organisation, the more likely they are to be accused of bias and of having compromised their impartiality. They have to hurry, so they cannot understand the processes as fully as would be required. Therefore both the auditor and the audited default to “safe” processes and practices, whether or not these are any good for the organisation.
This problem also plays out with executive hiring and senior leadership. Often, these people are not brought up from inside the company but hired from big, respected firms that have a reputation for Knowing Their Stuff. They are frequently hired specifically to fix a situation, which places them in exactly the same role as an external auditor: they have limited time, limited resources, and, most crucially, limited understanding, yet they are under acute pressure to act and to be seen as deserving of their reputation as competent and decisive.
So they standardise. Now in a position of structural power, they re-organise and restructure until the organisation resembles what they know and know how to manage. This is the second prong of how auditing culture propagates, and, so my hypothesis goes, why most large companies have very similar corporate cultures.
Transparency is surveillance
Onora O’Neill has argued that transparency forces people to conceal their actual reasons for action and invent different ones for public consumption: transparency forces deception. I work out the details of her argument and worsen her conclusion. I focus on public transparency — that is, transparency to the public over expert domains. I offer two versions of the criticism. First, the epistemic intrusion argument: the drive to transparency forces experts to explain their reasoning to non-experts. But expert reasons are, by their nature, often inaccessible to non-experts. So the demand for transparency can pressure experts to act only in those ways for which they can offer public justification. Second, the intimate reasons argument: in many cases of practical deliberation, the relevant reasons are intimate to a community and not easily explicable to those who lack a particular shared background. The demand for transparency, then, pressures community members to abandon the special understanding and sensitivity that arises from their particular experiences.

Transparency, it turns out, is a form of surveillance. By forcing reasoning into the explicit and public sphere, transparency roots out corruption — but it also inhibits the full application of expert skill, sensitivity, and subtle shared understandings.

The difficulty here arises from the basic fact that human knowledge vastly outstrips any individual’s capacities. We all depend on experts, which makes us vulnerable to their biases and corruption. But if we try to wholly secure our trust — if we leash groups of experts to pursuing only the goals and taking only the actions that can be justified to the non-expert public — then we will undermine their expertise. We need both trust and transparency, but they are in essential tension. This is a deep practical dilemma; it admits of no neat resolution, but only painful compromise.
When we trust experts, we permit them to operate significantly out of our sight. There, they can use the full extent of their trained sensitivities and intuitions — but they are also freed up to be selfish, nepotistic, biased, and careless. On the other hand, we can distrust experts and demand that they provide a public accounting of their reasoning. In doing so, we guard against their selfishness and bias — but we also curtail the full powers of their expertise. Transparency pressures experts to act within the range of public understanding. The demand for transparency can end up leashing expertise.
According to the empirical research, both incentivized guidance and internalized guidance do occur — most clearly in cases with metrified targets. Espeland and Sauder say that when the USN&WR started publishing its rankings, administrators were often incentivized to pursue the rankings, but treated doing so as a complex trade-off, weighing the financial importance of the rankings against their loyalty to other legal values. Within a decade, however, most administrators seem to have fully internalized the rankings as their primary goal. The rankings set the terms of their success.

Elsewhere, in my discussion of gamification, I’ve offered an account of why we might internalize such metrics, in a process I call “value capture”. When we internalize such metrics, we shield ourselves from the painful complexities of dealing with values in their true, subtle, and inchoate form. Life becomes something like a game, with a clarity of value and purpose. And we gain not only internal clarity but external clarity — it becomes effortless to justify ourselves to others when we have all gotten on the same value standard. This is why gamification is so effective. When something like Twitter offers us a simple score for our communicative acts, we can get the bliss of games by taking that score as our main goal in the activity. These explanations work for any sort of metric or gamification, not just ones created for transparency. But we can see at least one pathway by which public transparency could lead to internalized guidance: transparency generates metrics for public use, and then experts internalize those metrics for the pleasures of value clarity.
Bring your own disaster
After my last post, someone suggested that having employers be able to restrict keys to machines they control is a bad thing. So here’s why I think Bring Your Own Device (BYOD) scenarios are bad not only for employers, but also for users.
There’s obvious mutual appeal to having developers use their own hardware rather than rely on employer-provided hardware. The user gets to use hardware they’re familiar with, and which matches their ergonomic preferences. The employer gets to save the money that would otherwise go towards buying new hardware for the employee. From this perspective, it’s a clear win-win outcome.
But once you start thinking about security, it gets more complicated. If I, as an employer, want to ensure that any systems that can access my resources meet a certain security baseline (eg, I don’t want my developers using unpatched Windows ME), I need some of my own software installed on them. And that software doesn’t magically go away when the user is doing their own thing. If a user lends their machine to their partner, is the partner fully informed about what level of access I have? Are they going to feel that their privacy has been violated if they find out afterwards?
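To make the point concrete, here is a minimal sketch of the kind of baseline check an employer-installed agent might run before letting a device reach corporate resources. Everything in it (the thirty-day patch window, the dpkg-log heuristic, the standalone script rather than a real MDM agent) is an illustrative assumption, not a description of any particular product.

```python
# Hypothetical sketch: the sort of posture check an employer-installed agent
# might run on a BYOD machine. The policy values and the patch heuristic are
# illustrative assumptions, not any real MDM product's behaviour.
import platform
from datetime import datetime, timedelta
from pathlib import Path

MAX_PATCH_AGE = timedelta(days=30)      # assumed policy: patched within 30 days
DPKG_LOG = Path("/var/log/dpkg.log")    # Debian-style hosts only, as a stand-in probe


def last_patch_time() -> datetime | None:
    """Best-effort guess at when packages were last touched on a dpkg-based host."""
    if not DPKG_LOG.exists():
        return None
    return datetime.fromtimestamp(DPKG_LOG.stat().st_mtime)


def is_compliant() -> bool:
    """Return True if this device appears to meet the (assumed) security baseline."""
    if platform.system() != "Linux":
        return False  # rules out, say, that unpatched Windows ME box
    patched = last_patch_time()
    return patched is not None and datetime.now() - patched < MAX_PATCH_AGE


if __name__ == "__main__":
    print("compliant" if is_compliant() else "non-compliant")
```

Even a check this small only works because the agent can see what OS the machine runs, when it was last patched, and, in any real deployment, a good deal more. That visibility is exactly what the machine’s other users never agreed to.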
But it’s not just about monitoring. If an employee’s machine is compromised and the compromise is detected, what happens next? If the employer owns the system then it’s easy — you pick up the device for forensic analysis and give the employee a new machine to use while that’s going on. If the employee owns the system, they’re probably not going to be super enthusiastic about handing over a machine that also contains a bunch of their personal data.
There is no “software supply chain”
There is a lot of attention on securing “software supply chains.” The usual goal is to prevent security issues in your underlying components from impacting customers of your product, and, when they do, to be able to respond quickly and fix the issue. The people who care about this class of problem are often software companies. The components that most concern these companies are the ones where unpaid hobbyist maintainers wrote something for themselves, with no maintenance plan.
This is where the supply chain metaphor — and it is just that, a metaphor — breaks down. If a microchip vendor enters an agreement and fails to uphold it, the vendor’s customers have recourse. If an open source maintainer leaves a project unmaintained for whatever reason, that’s not the maintainer’s fault, and the companies that relied on their work are the ones left to solve the resulting problems. Using the term “supply chain” here dehumanizes the labor involved in developing and maintaining software as a hobby.
Securing the supply chain of nothing
If you read this guidance and implement it in your enterprise organization, you will end up securing the supply chain of nothing. Your engineering organization will dismiss you as an ideologue, a fool, or both; or else you will have constrained software engineering activities to such a degree that it makes more sense to stop delivering software at all. In fairness, the most secure software is the software never written. But in a world whose progress is now closely coupled with software delivery, we can do better than this document suggests.