As a brief aside, I would argue that simply knowing about Hyrum’s Law turns it into a self-fulfilling prophecy. In other words: As the number of developers aware of Hyrum’s Law increases, so too will the number of developers using it as an excuse to exploit every observable behavior. If that number ever reaches 0, assume I am dead.
The lesson I found most striking is this: there’s a direct correlation between how skilled you are as a chess player and how much time you spend falsifying your ideas. The authors find that grandmasters spend longer falsifying their idea for a move than they do coming up with the move in the first place, whereas amateur players tend to identify a solution and play it shortly after, without trying their hardest to falsify it first. (Often amateurs find reasons for playing the move — “hope chess”.)
Call this the “falsification ratio”: the ratio of time you spend trying to falsify your idea to the time you took coming up with it in the first place. For grandmasters, this is 4:1 — they’ll spend 1 minute finding the right move, and another 4 minutes trying to falsify it, whereas for amateurs this is something like 0.5:1 — 1 minute finding the move, 30 seconds making a cursory effort to falsify it.
Complacency hits especially hard in king and pawn endgames, where king moves that seem “good enough” can in fact be losing or drawing, and you really have to calculate each move exactly to find a win. You have to be unreasonably thorough and check every single move, to a degree that seems quite pedantic.
I notice this in people who are good at science, too: they are really, really thorough and check every edge case.
They conclude, tentatively, that perhaps it is only possible to accelerate proficiency between senior apprentice and junior journeyman levels (or between senior journeyman and junior expert levels). They present the following stylised growth curve:
Perhaps, the authors say, overall mastery still takes 10 years in the field, and there’s nothing we can do about that.
But what we do know is this: the set of successful accelerated training programs that currently exist enable accelerated proficiency, not accelerated mastery.
Learning is the active construction of knowledge; the elaboration and replacement of mental models, causal stories, or conceptual understandings.
All mental models are limited. People have a variety of fragmentary and often reductive mental models.
Training must support the learner in overcoming reductive explanations.
Knowledge shields lead to wrong diagnoses and enable the discounting of evidence.
Reductive explanation reinforces and preserves itself through misconception networks and through knowledge shields.
Flexible learning involves the interplay of concepts and contextual particulars as they play out within, and are influenced by, cases of application within a domain.
Therefore learning must also involve unlearning and relearning.
Therefore advanced learning is promoted by emphasising the interconnectedness of multiple cases and concepts along multiple conceptual dimensions, and the use of multiple, highly organised representations.
Complexity, in Luhmann’s model, is the number of aspects or concerns that have to be considered for any potential action an actor could take at any one point. Outside the system this is generally considered infinite, and the whole point of the system boundary is to define a space where it is manageable, by reducing the number of possible actions to a point where it is possible to act.
This is generally done by agreements on goals, values, functions, delegation, and so on: if we agree that I’m cooking dinner tonight, the number of potential actions for what I’m doing this evening is now much smaller, and most of them should involve cooking dinner.
It’s important here that complexity inside a system is variable and influenceable: you can choose how complex the inside of your system is. You can make it very simple, so that the next action is always very clear; the trade-off is that it can’t represent the environment (the rest of the world) very well, because you have disregarded many of its dimensions and aspects to get to your simple system.
The reward for taking on more complexity is closer alignment with the environment, and therefore the ability to react to changes in it (i.e. having fewer adaptation problems); the reward for taking on less complexity is the ability to decide on a reasoned course of action.
Google’s Core Web Vitals initiative was launched in May of 2020 and, since then, its role in Search has morphed and evolved as roll-outs have been made and feedback has been received.
However, to this day, messaging from Google can seem somewhat unclear and, in places, even contradictory. In this post, I am going to distil everything that you actually need to know using fully referenced and cited Google sources.
While most of us are used to this system and its quirks, that doesn’t mean it’s without problems. This is especially apparent when you do user research with people who are new to computing, including children and older people. Manually placing and sizing windows can be fiddly work, and requires close attention and precise motor control. It’s also what we jokingly refer to as shit work: it is work that the user has to do, which is generated by the system itself, and has no other purpose.
The dichotomy of Humbleness and Expressiveness dominates esolang aesthetics, with the emphasis on personal style, virtuosity of code, and elegance, all of which are discouraged in mainstream code. By sidelining practicality, the dominant motivation of most language design, esolangs become a space for experimentation and play with pure idea, challenging the seemingly unalterable, unquestionable givens of traditional programming conventions. However, not every language adopts this Less Humble aesthetic. Some esolangs point to other possibilities of the medium.
Esolangs as a medium began with a series of languages that realise the potential for personal expression and for elegance within chaos. This basis has given newer esolangers a foundation on which to raise new questions and challenge base assumptions about who languages are designed for and how they should be used. As more programmers, poets, and artists move into this medium, the questioning and confrontational spirit of the early esolangs finds new articulation.
Here’s how I draw pictures in my text editor. One thing to notice is that there are no menus, dialogs or conventional UI elements. I’ve been trying to mimic the feel of paper and pen. I want to be able to draw at a moment’s notice, but I don’t want any reminders that I could draw. I don’t want any widgets constantly on screen just for the moment when I might start drawing.
Have you ever heard someone say that a disk or memory is a “bunch of bits”?
I’m not sure of this idea’s origin, but it’s a pretty good one. It reduces the mystery of computers. For example, it rules out the theory that inside my computer is a very flat elf.
No, inside are bits, encoded on electrical components.
Yet, computers are still pretty mysterious. What are these bits? What do they mean? Can we play with them, parse them, make sense of them?
In this post, I will show you that, yes, absolutely we can! For your entertainment, I am going to stick my hand into my computer, pull up a bunch of bits, and we will examine and make sense of them.
What bits, exactly, should we explore? For this exercise, let’s pick apart how a disk-backed file is represented on disk.
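To make the spirit of the exercise concrete, here is a minimal sketch in Python (not from the original post — the filename `hello.txt` and its contents are invented for illustration) that writes a small file, reads its raw bytes back off disk, and renders them as a hex dump so you can “examine and make sense of them”:

```python
def hexdump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset, hex pairs, and printable ASCII."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexed = " ".join(f"{b:02x}" for b in chunk)
        # Non-printable bytes are shown as "." in the ASCII column.
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexed:<{width * 3}} {text}")
    return "\n".join(lines)

# Write a small file, then pull its bytes back up off disk.
with open("hello.txt", "wb") as f:
    f.write(b"hi, bits!")
with open("hello.txt", "rb") as f:
    data = f.read()

print(hexdump(data))
```

This only shows the file’s *contents*, of course; how the filesystem itself lays those bytes out on disk (inodes, extents, directory entries) is exactly the deeper question the post goes on to explore.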