Chosen links

Links - 14th May 2023

Interhooking, interlocking, nonmaterial

Social platforms and communities and sub-community interactions online are a messy combination of pseudo-place and repeating event, and this can make them difficult to think about clearly. I’m finding it useful to spend time wrangling with the way interpersonal patterns and physical/architectural patterns self-replicate and buzz and either reinforce or dampen each other. Alexander and brown’s adjacent senses of pattern and replicating order have been simultaneously clarifying and complicating for me, in ways that feel fruitful for longer-term work.

The cult of the founders

Since I’m on the topic of Max Weber, religion and technology already, here’s a half-developed theory of Elon Musk that I’ve been nurturing for a while. I’ve trotted it out informally at a couple of meetings, and I’m not completely convinced it is right, but it’s prima facie plausible, and I’ve gotten some entertainment from it. My argument is that Musk is doing such a terrible job as Twitter CEO because he is a cult leader trying to manage a church hierarchy. Relatedly — one of SV’s culture problems right now is that it has a lot of cult leaders who hate the dull routinization of everyday life, and desperately want to return to the age of charisma.

The underlying idea is straightforward, and is stolen directly from Max Weber — see this handy Weber on religion listicle for the background. Weber thinks that many of the stresses and strains of religion come from the vexed relationship between the prophet and the priest.

Prophets look to found religions, or radically reform them, root and branch. They rely on charismatic authority. They inspire the belief that they have a divine mandate. Prophets are something more than human, so that some spiritual quality infuses every word and every action. To judge them as you judge ordinary human beings is to commit a category error. Prophets inspire cults — groups of zealous followers who commit themselves, body and soul, to the cause. Prophets who are good, lucky, or both can reshape the world.

The problem with prophecy is that ecstatic cults don’t scale. If you want your divine revelation to do more than rage through the population like a rapid viral contagion and die out just as quickly, you need all the dull stuff. Organization. Rules. All the excitement — the arbitrariness; the sense that reality itself is yielding to your will — drains into abstruse theological debates, fights over who gets the bishopric, and endless, arid arguments over how best to raise the tithes that the organization needs to survive.

Deskilling on the job

Years ago, Madeleine Elish decided to make sense of the history of automation in flying. In the 1970s, technical experts had built a tool that made flying safer, a tool that we now know as autopilot. The question on the table for the Federal Aviation Administration and Congress was: should we allow self-flying planes? In short, folks decided that a navigator didn’t need to be in the cockpit, but that all planes should be flown by a pilot and copilot who should be equipped to step in and take over from the machine if all went wrong. Humans in the loop.

Think about that for a second. It sounds reasonable. We trust humans to be more thoughtful. But what human is capable of taking over and helping a machine in a fail mode during a high-stakes situation? In practice, most humans took over and couldn’t help the plane recover. The planes crashed and the humans got blamed for not picking up the pieces left behind by the machine. This is what Madeleine calls the “moral crumple zone.” Humans were placed into the loop in the worst possible ways.

This position for the pilots and copilots gets even dicier when we think about their skilling. Pilots train extensively to fly a plane. And then they get those jobs, where their “real” job is to babysit a machine. What does that mean in practice? It means that they’re deskilled on the job. It means that those pilots who are at the front of every commercial plane are less skilled, less capable of taking over from the machine as the years go by. We depend structurally on autopilot more and more. Boeing took this to the next level by overriding the pilots with their 737 MAX, to their detriment.

To appreciate this in full force, consider what happened when Charles “Sully” Sullenberger III landed a plane in the Hudson River in 2009. Sully wasn’t just any pilot. In his off-time, he retrained commercial pilots in how to fly if their equipment failed. Sully was perhaps the best-positioned pilot out there to take over from a failing system. But he didn’t just have to override his equipment — he had to override the air traffic controllers. They wanted him to go to Teterboro. Their models suggested he could make it. He concluded he couldn’t. He chose to land the plane in the Hudson instead.

Had Sully died, he would’ve been blamed for insubordination and “pilot error.” But he lived. And so he became an American hero. He also became a case study because his decision to override air traffic control turned out to be justified. He wouldn’t have made it to Teterboro. Moreover, computer systems that he couldn’t override prevented him from making a softer impact.

Sully is an anomaly. He’s a pilot who hasn’t been deskilled on the job. Not even a little bit. But that’s not the case for most pilots.

And so here’s my question for our AI futures: How are we going to prepare for deskilling on the job?

Sensenmann: code deletion at scale

Automatically deleting code may sound like a strange idea: code is expensive to write, and is generally considered to be an asset. However, unused code costs time and effort, whether in maintaining it or cleaning it up. Once a code base reaches a certain size, it starts to make real sense to invest engineering time in automating the clean-up process. At Google’s scale, it is estimated that automatic code deletion has paid for itself tens of times over, in saved maintenance costs.

The implementation requires solutions to problems both technical and social in nature. While a lot of progress has been made in both these areas, they are not entirely solved yet. As improvements are made, though, the rate of acceptance of the deletions increases and automatic deletion becomes more and more impactful. This kind of investment will not make sense everywhere, but if you have a huge mono-repository, maybe it’d make sense for you too. At least at Google, reducing the C++ maintenance burden by 5% is a huge win.
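The Sensenmann write-up describes the approach at a high level rather than in code, but the technical core — flag a build target for deletion when nothing alive depends on it — can be sketched as a simple reachability walk over a dependency graph. The sketch below is illustrative only: the graph shape, target names, and entry-point list are assumptions for the example, not Sensenmann’s actual data model.

```python
# Illustrative sketch: treat "dead code" as build targets that no live
# entry point (binary, test) transitively depends on. The graph format and
# target names are assumptions for this example, not Google's internals.
from collections import deque


def unreachable_targets(deps, entry_points):
    """Return the targets in `deps` that no entry point transitively reaches."""
    reachable = set()
    queue = deque(entry_points)
    while queue:
        target = queue.popleft()
        if target in reachable:
            continue
        reachable.add(target)
        queue.extend(deps.get(target, ()))
    return set(deps) - reachable


if __name__ == "__main__":
    # Toy monorepo: //lib:old is referenced by nothing that still ships.
    deps = {
        "//app:main": {"//lib:core"},
        "//lib:core": set(),
        "//lib:old": {"//lib:core"},
    }
    print(unreachable_targets(deps, {"//app:main"}))  # {'//lib:old'}
```

Even with a walk like this, the post is clear that the harder problems are as much social as technical: the generated deletions still have to be reviewed and accepted by the code’s owners, and no graph traversal solves that part.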

Re: openSUSE Release Engineering meeting 03.05.2023 - MicroOS KDE

I will point out that contributors don’t generally come out of nowhere. Most contributors start out as users (barring a few exceptions). If you’re antagonistic to user feedback, that discourages the user-to-contributor conversion funnel, which is already small to begin with.

That is a concept that has absolutely zero empirical data to support it.

If you look at openSUSE’s own statistics, there is absolutely no evidence that more users can lead to more contributors.

Our highest contribution numbers were when we had the fewest users. All of our periods of user growth have seen either a decline or stagnation in our contributor numbers.

Looking outside of our little bubble, this is not an isolated phenomenon.

Fundamentally, projects that work hard to appeal to contributors gain contributors.

This is totally separate and unrelated to growing a userbase.

Let’s stick with the MicroOS Desktop as an example.

I know that starting with GNOME as the default/recommended/most developed Desktop Environment immediately turned off a significant number of the openSUSE userbase.

Regardless of that, the GNOME flavour of MicroOS Desktop has a happy and healthy growing contributor base, because we worked on that.

We haven’t really worked on gathering new users (I kinda plan that push once the thing is Release quality), but regardless the GNOME MicroOS Desktop has a happy and healthy growing userbase, and is being touted in places far outside of the regular openSUSE bubble as being “the future of the Linux Desktop”.

This experience would actually imply the opposite of our tautology — the experience of the MicroOS Desktop is that focusing on contributors, and giving them the environment to build precisely what they want to build, is now what is attracting users.

Testing a new encrypted messaging app’s extraordinary claims

How I accidentally breached a nonexistent database and found every private key in a “state-of-the-art” encrypted messenger

Early computer art in the 50’s & 60’s

Computing and creativity have always been linked. In the early 1800’s when Charles Babbage designed the Analytical Engine, his friend Ada Lovelace wrote in a letter that, if music could be expressed to the engine, then it “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.

My original vision for this article was to cover the development of computer art from the 50’s to the 90’s, but it turns out there’s an abundance of material without even getting halfway through that era. So in this article we’ll look at how Lovelace’s ideas for creativity with a computer first came to life in the 50’s and 60’s, and I’ll cover later decades in future articles.