Anything can be a message queue if you use it wrongly enough
In theory, all you’d need to do to save money on your network bills would be to read packets from the kernel, write them to S3, and then have another loop read packets from S3 and write them back into the kernel. It’s just a matter of wiring things up the right way.
Hoshino is a system for putting outgoing IPv6 packets into S3 and then reading incoming IPv6 packets out of S3 in order to avoid the absolute dreaded scourge of Managed NAT Gateway. It is a travesty of a tool that does work, if only barely.
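For flavour, here is a minimal sketch of what the outgoing half of that loop could look like, assuming a Linux TUN device, root privileges, and a made-up bucket name. This is not Hoshino’s actual code, just the shape of the idea:

```python
# Toy sketch of the outgoing half: read raw IP packets from a Linux TUN
# device and upload each one to S3. Bucket and key scheme are hypothetical.
# Needs root (or CAP_NET_ADMIN) and `pip install boto3`.
import fcntl
import os
import struct
import time
import uuid

import boto3

TUNSETIFF = 0x400454CA  # Linux ioctl to configure a TUN device
IFF_TUN = 0x0001        # raw IP packets, no Ethernet framing
IFF_NO_PI = 0x1000      # no extra packet-info header on reads

BUCKET = "example-packet-queue"  # hypothetical bucket


def open_tun(name: str = "tun0") -> int:
    """Open a TUN device; each os.read() then returns one IP packet."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    ifr = struct.pack("16sH", name.encode(), IFF_TUN | IFF_NO_PI)
    fcntl.ioctl(fd, TUNSETIFF, ifr)
    return fd


def main() -> None:
    s3 = boto3.client("s3")
    tun = open_tun()
    while True:
        packet = os.read(tun, 65535)  # one outgoing packet per read
        key = f"outbound/{time.time_ns()}-{uuid.uuid4()}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=packet)


if __name__ == "__main__":
    main()
```

The incoming half would be the mirror image: list and fetch objects, then os.write() each payload back into the TUN device. One put_object per packet also makes the economics of the joke obvious: S3’s per-request pricing becomes a per-packet fee.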
Modern software quality, or why I think using language models for programming is a bad idea
Bugs, shoddy UX, poor accessibility — even when accessibility is required by law — are non-factors in modern software management, especially at larger software companies.
The rest of us in the industry then copy their practices, and we mostly get away with it. Our margins may not be as enormous as Google’s, but they are still quite good compared to non-software industries.
We have an industry that’s largely disconnected from the consequences of making bad products, which means that we have a lot of successful but bad products.
C isn’t a programming language anymore
My problem is that C was elevated to a role of prestige and power, its reign so absolute and eternal that it has completely distorted the way we speak to each other. Rust and Swift cannot simply speak their native and comfortable tongues — they must instead wrap themselves in a grotesque simulacrum of C’s skin and make their flesh undulate in the same ways it does.
C is the lingua franca of programming. We must all speak C, and therefore C is not just a programming language anymore — it’s a protocol that every general-purpose programming language needs to speak.
So actually this kinda is about the whole “C is an inscrutable implementation-defined mess” thing. But only insofar as it makes this protocol we all have to use an even bigger nightmare!
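The point generalises past systems languages. As a small illustration (assuming a POSIX system): even Python, with no C code in sight, has to load libc and restate a C signature by hand before it can talk to the operating system.

```python
# Even Python speaks the C protocol: to call the OS we load libc and
# describe the C signature ourselves. POSIX-only sketch.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# ctypes cannot read a header file; we must assert what the C type is.
libc.getpid.restype = ctypes.c_int

print("pid, via the C ABI:", libc.getpid())
```

Every value that crosses that boundary has to be described in C’s terms, which is exactly the protocol the post is complaining about.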
The curse of scalable technology
Trying to work out how much advice can be generalised is extremely hard. This is compounded by the fact that people with experience across a lot of different projects often lack in-depth knowledge, or knowledge spanning a really long time period. I know from experience that the conclusions I’ve come to after 2 or 3 years on a project are different from those I’d reached after 1 year, and they might change again after 5 or 10 years. So it may be that the most experienced people (judging by breadth) are actually the least qualified to advise others, due to lack of depth, but also the least aware of that!
And then you have the problem that many people with a lot of experience are pretty silent about it, and you have no idea how many of them there are (because they are not vocal about their existence either!). Further, the most vocal might not be the best qualified to help with your situation. For example, I know from at least 2 data points that it’s entirely possible to run a multi-million-dollar business whose main database contains much less than 100 MB of data. But I don’t know how common that is, and I suspect you will hear a lot more from companies that have a massively different profit-to-data ratio.
When I think too much about this, I feel I am stuck between two extremes: wild and unjustified extrapolations from the tiny bit of experience I have gained so far on the one hand, and failing to learn from anything on the other. The latter seems much worse — none of us would be alive today if we reasoned “well just because a lion ate my friend, it would be unscientific and unjustified to jump to the conclusion that this lion might eat me”.
Rewriting the Ruby parser
The tree redesign has ended up being one of the most important parts of the project. It has delivered something that Ruby has never had before: a standardized syntax tree. With a standard in place, the community can start to build a collective knowledge and language around how we discuss Ruby structure, and we can start to build tooling that works across all Ruby implementations. Going forward, this can mean more cross-collaboration between tools (like Rubocop and Syntax Tree), maintainers, and contributors.
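By analogy (in Python rather than Ruby, since Python has shipped a standardized tree in its ast module for years), this is what a shared tree buys: unrelated tools can walk the exact same structure without maintaining private parsers.

```python
# Analogy in Python, not Ruby: with one standardized syntax tree, unrelated
# tools can consume the same structure.
import ast

source = "def greet(name):\n    print('hi', name)\n"
tree = ast.parse(source)  # the one tree every Python tool agrees on

# "Tool" one: list the functions defined.
defined = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]

# "Tool" two: list the functions called.
called = [n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]

print(defined, called)  # ['greet'] ['print']
```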
Reduce friction
Some leaders are not worried about wasting time, but instead worry that devoting brains to this work will slow teams down. They admit that current processes are full of friction, but claim that they have to finish whatever they’re in the middle of before they can try to fix things. They think that reducing friction is a distraction from the real work. This approach is short-sighted. The best time to reduce friction for your team was the moment it came into being, and the second-best time is now.
Is this worth the cost? It depends on the situation! Your goal is to examine your team’s work habits and work environment, and to identify the things that are slowing everybody down without buying you anything worthwhile.
Work on internal tools is highly leveraged: every one of your developers will write better software when their tools are good. It is worth devoting senior engineering brains to them. It is worth devoting your brain to them if there is nobody else. Your job, O fellow technical leader, is to make your team successful at building the widgets your organization wants to build. We must do the things nobody else can do.
Lessons from Metafont
When we consider Knuth’s reasons for writing Metafont the way he did, and compare them to the reasons for Metafont’s failure, a couple of tradeoffs emerge.
The first tradeoff is between experimentation and fun on the one hand, and the tendency of tooling to obscure its purpose on the other. Building systems that are more powerful than they need to be is not only fun, but often a prerequisite for devising experiments that can take us in a new direction. The difficulty lies in deciding when, and how far, to automate beyond immediate business requirements.
The second tradeoff is between Knuth’s prioritisation of analysis and idealism on the one hand, and the needs of his users on the other: ignoring those users is what doomed the model. The insistence on capturing the “intelligence” of a letter in abstract formulae is what made his program so difficult to use. In other words, Knuth’s approach was the exact opposite of user-centric design. He did not want to conform his software to his users; he wanted to transform the way that his users thought about font creation.
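To make that concrete, here is a toy sketch (nothing like Metafont’s actual language) of what “a letter as formulae” means: the glyph below is nothing but named parameters fed through equations.

```python
# Toy sketch, far simpler than Metafont: an "o" defined purely by formulae
# over named parameters, rather than by hand-drawn outlines.
import math


def glyph_o(width: float, height: float, slant: float, steps: int = 64):
    """Yield outline points of an 'o' computed from its parameters."""
    for i in range(steps):
        t = 2 * math.pi * i / steps
        x = (width / 2) * math.cos(t)
        y = (height / 2) * math.sin(t)
        yield (x + slant * y, y)  # slanting is just one more formula


# Nudge one parameter and the whole letter changes consistently: the power
# of Knuth's approach, and exactly why users had to think like analysts.
outline = list(glyph_o(width=10.0, height=14.0, slant=0.2))
```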