Chosen links

Links - 22nd January 2023

Producing HTML using string templates has always been the wrong solution

There are broadly two approaches used to generate (X)HTML in web applications today:

  • string templating systems;

  • AST-based representation, transformation and serialization.

Historically, the majority of server-side web applications have tended to produce HTML output using string templating systems. String template approaches remain vastly more popular than AST-based approaches.

This is unfortunate, as I came to the conclusion some years ago that the use of string templating systems to generate HTML is fundamentally the wrong approach.
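To make the distinction concrete, here is a minimal sketch of my own (not from the linked article; the names greetingTemplate, HtmlNode and serialize are made up for illustration). The string-templating version splices values straight into markup, so escaping and well-formedness rest on programmer discipline; the AST version builds a tree of nodes and lets the serializer handle both.

```typescript
// Hypothetical sketch, not from the linked article.

// String templating: the page is assembled by splicing untyped strings, so
// correct escaping and well-formed nesting depend on every call site.
function greetingTemplate(name: string): string {
  return `<p>Hello, <b>${name}</b>!</p>`; // a name containing markup is injected as-is
}

// AST-based: represent the fragment as a tree, then serialize it; escaping and
// nesting are properties of the representation, not of individual templates.
type HtmlNode = string | { tag: string; children: HtmlNode[] };

function escapeText(text: string): string {
  return text.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function serialize(node: HtmlNode): string {
  if (typeof node === "string") return escapeText(node); // text nodes are escaped here
  const inner = node.children.map(serialize).join("");
  return `<${node.tag}>${inner}</${node.tag}>`;
}

const greeting = (name: string): HtmlNode => ({
  tag: "p",
  children: ["Hello, ", { tag: "b", children: [name] }, "!"],
});

console.log(greetingTemplate("<i>Mallory</i>")); // raw markup leaks through
console.log(serialize(greeting("<i>Mallory</i>"))); // <p>Hello, <b>&lt;i&gt;Mallory&lt;/i&gt;</b>!</p>
```

The same tree could also be transformed (say, rewriting links or stripping tags) before serialization, which is the other advantage the bullet above attributes to AST-based approaches.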

So you want to solve Python packaging: a practical guide

Fundamentally these users do not want to think about this shit, and they’re a huge group of users. You know who does think about this shit? Web developers, and every time someone comes along with "Python packaging sucks and someone should fix it" they’re a web dev.

That’s because web devs have different expectations. They expect to work with a packaging tool. They expect to find and install dependencies. They don’t expect to work with a ton of native dependencies. They don’t have the same problems.

The great divide

The divide is between people who self-identify as a (or have the job title of) front-end developer, yet have divergent skill sets.

On one side, an army of developers whose interests, responsibilities, and skill sets revolve heavily around JavaScript.

On the other, an army of developers whose interests, responsibilities, and skill sets are focused on other areas of the front end, like HTML, CSS, design…

Game design coding perverts love Dwarf Fortress

The reason all the game design coding perverts love Dwarf Fortress is because Tarn Adams succeeded where every single one of us has failed: he made a weird procgen system that does solely what he wants to do and is never focused on User Friendliness and rather focused on “can I simulate an entire world’s history” and cares more about what weird shit he wants to add than about best coding practices and is the sort of project where you spend 3 months researching plants to try and figure out what makes them edible so that you can decide how to generate a garden better without ever considering how the user will interface with the garden

and people loved it and bought it

everyone wants to do exactly this sort of project where they never have to consider "game design" or "win conditions" or such dumb things and instead be called cool and sexy for the way that they made weather randomly come about because of their atmospheric conditions simulation

only he, god among game devs, has succeeded at making his weird personal experiment sexy like that and we’re all jealous and in awe

I spent my New Years Eve in VR (unsuccessfully)

This will be relevant in the story I’m about to tell. You can’t just enter a VRChat space. Even events intended to be public involve misdirection, Discords pointing to other Discords, and fliers with coded language and implied access mechanisms. I think of the elaborate rituals gay subcultures used in eighteenth-century cruising routes to ensure the person they were propositioning was a fellow queer and not a police informant, except instead of trying to solicit gay sex, they are trying to have a normal human interaction. Maybe some of them have gay sex afterward, I don’t know. It doesn’t seem to be the focus.

Use the wrong tool for the job

I’m inclined to think that the benefits of familiar tooling are enormous, such that you’re often better off using a wrong-but-familiar tool than a right-but-exotic tool. And there’s a problem with that: what you’re familiar with is highly circumstantial. If a team wants to make a slick web app, but all they know is Fortran, then they have a good argument for writing the app backend in Fortran.

…And then, if they need a YAML parser or a PDF generator or a job queue, those should be in Fortran too. Over time, all language ecosystems will develop all possible language tooling, even if the tooling is for a purpose the language is completely unsuited to. Because no matter how wrong a tool is for the job, proper familiarity can turn it into the right tool.

(Obviously there’s a limit to this, which is why some webtech companies have migrated their Ruby and Python codebases to things like Go or TypeScript.)

This also changes what the right tool for the job is. If you ask me, Python isn’t that good as a language for scientific computing, both due to performance impedance and because of how hard it is to pipeline data. But because it has the large data ecosystem, it’s more appropriate now for that kind of work. Other languages now have more impedance because you’d need to reproduce those libraries.

(Almost) everything about storing data on the web

Imagine you’re building a web application and you want to store some data locally on the user’s device. This could be because the data is only needed on the client side, or because you want to offer some kind of offline access to your app’s content.

To do this, you’ll need to store data on the device that’s running your app. You might be asking yourself:

  1. Can I actually store all the data I need?

  2. Can I trust the browser to really persist it?

  3. How much space do I have?

The answers are better than you might think:

  1. Can I actually store all the data I need? Sure, you can store all kinds of data, and a lot of it too.

  2. Can I trust the browser to really persist it? Yes you can.

  3. How much space do I have? A lot, in most cases!

But as always, the devil is in the details! Let’s dive into them.
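For concreteness, here is a small sketch (mine, not the article’s) of the standard browser calls behind those three questions: navigator.storage.persist() to ask that the origin’s data not be evicted, navigator.storage.estimate() to see usage and quota, and IndexedDB for actually storing structured data. The database name "my-app" and store name "items" are hypothetical.

```typescript
// Hypothetical sketch of the standard browser storage APIs; the names
// "my-app" and "items" are made up for illustration.

async function inspectStorage(): Promise<void> {
  // 2. "Can I trust the browser to really persist it?" Request persistence so
  //    this origin's data is not eligible for automatic eviction.
  const persisted = await navigator.storage.persist();
  console.log(persisted ? "storage marked persistent" : "storage is best-effort");

  // 3. "How much space do I have?" Estimate current usage and the quota the
  //    browser is willing to grant this origin.
  const { usage, quota } = await navigator.storage.estimate();
  console.log(`using roughly ${usage} of ${quota} bytes`);
}

// 1. "Can I actually store all the data I need?" Structured data typically
//    goes into IndexedDB; this opens (or creates) a database with one store.
function openDatabase(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open("my-app", 1);
    request.onupgradeneeded = () => request.result.createObjectStore("items");
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

inspectStorage().then(() => openDatabase());
```

Browsers differ in when they grant persistence and in how they compute quota, which is exactly the kind of detail the article goes on to cover.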

Meetings for an effective eng organization

Some engineers develop a strong point of view that meetings are a waste of their time. There’s good reason for that perspective, as many meetings are quite bad, but it’s also a bit myopic: meetings can also be an exceptionally valuable part of a well-run organization. If you’re getting feedback that any given meeting isn’t helpful, then iterate on it, and consider pausing it for the time being. It may not be useful for your organization yet, but don’t give up on the idea of meetings.

Good meetings are the heartbeat for your organization.

If you look hard enough, you can find narrow exceptions, but the vast majority of well-run organizations anchor around a few valuable meetings, and they’re likely the best starting point for your organization as well. (As an aside, if you want to read a second take on meetings, I highly recommend Steven Sinofsky’s, which is exceptionally good.)

The most effective communication within an organization is between peers or between managers and their direct team. Your technology strategy is best communicated in a written document. The clearest plan is tracked in a ticket tracker or a document. None of these ideal approaches are large meetings, which isn’t too surprising: large meetings are rarely the best communication solution for any particular goal. However, they are a remarkably effective backup solution when there are gaps in your default approaches.

Finally, I find that many organizations split meetings prematurely because there are one or two individuals who behave badly in important meetings. For example, you may have an antagonistic engineer who joins every technical spec review meeting to advocate for a particular technology. It’s extremely valuable to solve that problem by holding that engineer accountable to a higher communication standard. You cannot scale large meetings without holding participants responsible for their behavior.

Reaching peak meeting efficiency

When you bring together a team of talented and diverse individuals, the only way they will come to operate as a team is by spending time talking, listening, and understanding the perspectives individuals bring to contribute to a larger whole. Unless everyone hired shares the same background and experiences, there’s no way a group of people can converge into a high-performing team without meeting, sharing, and learning together. No amount of ping-pong, email, or shared docs can substitute for meeting.

If you believe that you reached a decision on a really contentious topic in a meeting by jumping to “close” at the very end of the allotted time, you can say with near certainty that the decision will be revisited shortly. Chances are someone with key input wasn’t present or didn’t get a chance to contribute and will find a way to either re-open the decision or provide information to someone who will.

Nothing good ever came from voting at a meeting. Just don’t ever vote. Companies are not democracies, and you also do not want to memorialize winning and losing “sides” of an issue. If a person’s position isn’t clear, ask questions, but cornering them only raises the stakes and reduces accountability.

Complex acts of knowing – paradox and descriptive self awareness

The first age, prior to 1995, sees knowledge being managed, but the word itself is not problematic; the focus is on the appropriate structuring and flow of information to decision makers and the computerisation of major business applications, leading to a technology-enabled revolution dominated by the perceived efficiencies of process reengineering. For many, reengineering was carried out with missionary enthusiasm as managers and consultants rode roughshod across pre-existing “primitive” cultures with the intent of enrichment and enlightenment that too frequently degenerated into rape and pillage. By the mid-to-late 1990s a degree of disillusionment was creeping in: organisations were starting to recognise that they might have achieved efficiencies at the cost of effectiveness, having laid off people with experience or natural talents, vital to their operation, of which they had been unaware. This is aptly summarised by a quote from Hammer and Champy, the archpriests of reengineering: “How people and companies did things yesterday doesn’t matter to the business reengineer”. The failure to recognise the value of knowledge gained through experience, through traditional forms of knowledge transfer such as apprentice schemes, and the collective nature of much knowledge, was such that the word knowledge became problematic.

Their work derived in the main from the study of innovation in manufacturing processes, where tacit knowledge is rendered explicit to the degree necessary to enable that process to take place; it did not follow that all of the knowledge in the designers’ heads and conversations had, should or could have been made explicit. In partial contrast, early knowledge programmes attempted to disembody all knowledge from its possessors to make it an organisational asset.

Most knowledge management in the post-1995 period has been to all intents and purposes content management.

We can always know more than we can tell, and we will always tell more than we can write down. The nature of knowledge is such that we always know, or are capable of knowing, more than we have the physical time or the conceptual ability to say. I can speak in five minutes what it will otherwise take me two weeks to get round to spending a couple of hours writing down. The process of writing something down is reflective knowledge; it involves both adding to and taking away from the actual experience or original thought. Reflective knowledge has high value, but is time consuming and involves loss of control over its subsequent use.

We only know what we know when we need to know it. Human knowledge is deeply contextual; it is triggered by circumstance. In understanding what people know, we have to recreate the context of their knowing if we are to ask a meaningful question or enable knowledge use. To ask someone what he or she knows is to ask a meaningless question in a meaningless context, but such approaches are at the heart of mainstream consultancy method.

The issue of content and context, which runs through all three heuristics, is key to understanding the nature of knowledge transfer. To illustrate this we can look at three situations in which expert knowledge is sought:

  1. A colleague with whom they have worked for several years asks a question; a brief exchange takes place in the context of common experience and trust, and knowledge is transferred.

  2. A colleague who is not known to the expert asks the same question. The discourse is now more extensive as it will take longer to create a common context, and when knowledge transfer takes place it is conditional: “phone me if this happens” or “let’s talk again when you complete that stage” are common statements.

  3. The expert is asked to codify their knowledge in anticipation of potential future uses of that knowledge. Assuming willingness to volunteer, the process of creating shared context requires the expert to write a book.

At the highest level of abstraction, where I share knowledge with myself, there is a minor cost; I may keep notes, but no one else has to read them. On the other hand, if I want to share with everyone, the cost becomes infinite, as the audience not only needs to share the same language, but also the same education, experience, values, etc. In practice there is a very narrow zone between the lower and upper levels of acceptable abstraction in any knowledge exchange.

For many years stock was held on the factory floor in anticipation of need at a high cost and risk of redundancy. Eventually it was realised that this was a mistake and significant levels of stock were pushed back to suppliers entering the factory on a just-in-time (JIT) basis, thus minimising costs. Second-generation knowledge management made all the same mistakes. In the third generation we create ecologies in which the informal communities of the complex domain can self organise and self manage their knowledge in such a way as to permit that knowledge to transfer to the formal, knowable domain on a JIT basis.

Gigantic attack — resolving combat in one roll

So tonight I’m going to be writing the first draft of a mechanic that is meant to replace an entire combat round with a single roll.