mkcert: valid HTTPS certificates for localhost
mkcert is a simple-by-design tool that hides all the arcane knowledge required to generate valid TLS certificates. It works for any hostname or IP, including localhost, because it only works for you.
Here’s the twist: it doesn’t generate self-signed certificates, but certificates signed by your own private CA, which your machine is automatically configured to trust when you run mkcert -install. So when your browser loads a certificate generated by your instance of mkcert, it will show up with a green lock!
The “developer experience” bait-and-switch
The “developer experience” bait-and-switch works by appealing to the listener’s parochial interests as a developer or manager, claiming supremacy in one category in order to remove others from the conversation. The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.
Shifting the conversation away from actual user experiences to team-level advantages enables a culture in which the folks who receive focus and attention are developers, rather than end-users or the business. It naturally follows that teams can then substitute tools for goals.
The state of software security in 2019
Static checkers (compilers) and dynamic checkers (e.g. Address Sanitizer and the rest of the LLVM sanitizers) have advanced very far in the past 20 years. What was once bleeding-edge research now comes for free with off-the-shelf compilers. This is fantastic!
And unfortunately, the problems that I find the most vexing — the abuse category generally — are not in my area of greatest expertise. My heart is really in the language problem: meaningful interfaces, ergonomic and safe libraries, memory safety, and type safety. But it’s the abuse that makes my heart sick.
STAMPing on event-stream
“Who did this?” is the wrong question. “How did this happen?” is the wrong question. A better question is “why was this possible in the first place?”.
An accident isn’t something that just happens. Accidents aren’t isolated failures. Accidents aren’t human error. Accidents aren’t simple. Accidents are complicated. Accidents are symptomatic of much deeper, more insidious problems across the entire system.
This is the core insight of Nancy Leveson’s work. Instead of thinking about accidents as things with root causes, we think of them as failures of the entire system. The system had a safety constraint, something that was supposed to be prevented. Its controls, or means of maintaining the constraints, were in some way inadequate.
The purpose of a postmortem should be to prevent future accidents. We don’t just stop the analysis once we find a scapegoat. Sure, we can say “Tarr transferred it over”, but why did that lead to an accident? Why did he want to abandon it? Why was he able to transfer it over? Why did nobody notice he transferred it? Why was a single dependency able to affect Copay? Why was a random internet dev so critical in the first place?
Leveson aggregated all of her safety approaches under the umbrella term STAMP. We’re going to analyse the attack via STAMP and see if we can get better findings than “Tarr don’t software good”.
This is what most people focused on, even though it is the most superficial bit. Problem: Tarr gave access rights to an internet rando. Solution: tell people to vet internet randos. This would presumably be enforced by demanding that maintainers have better discipline.
This places additional responsibilities on the open source maintainer. One law we see time and time again is “you cannot fix things with discipline”. First of all, discipline-based approaches simply don’t work: see all the data breaches at professional, “responsible” companies. Second, they do not scale. This problem happened because a single contributor to a single package made an error. At the time of the attack, Copay had thousands of package dependencies. That means that thousands of maintainers cannot make any mistakes, or else the system is in trouble. And even if they all have perfect discipline, this still doesn’t prevent dependency attacks. A malicious actor could seed a package and use it later, or steal someone else’s account.
Abstraction tiers of notations (part 1)
Abstractions from different tiers have different learning and usage costs. Higher-tier abstractions are more taxing to use and more difficult to learn than those of lower tiers, but they allow decomposing more complex tasks into manageable pieces. Lower-tier abstractions have lower learning and usage costs, but they support less complexity. Depending on the situation, these factors can carry different weight.
Thus, targeting the highest tier possible is not a sure-win strategy.
One good solution to this trade-off is designing languages that support abstractions from different tiers. For example, Java forces the use of the class abstraction (tier 4) even for the simplest programs. Groovy, on the other hand, allows writing a program as a script, a plain sequence of actions (tier 2-3 at the top level). So it is possible to choose the abstraction tier suited to the specific task and avoid paying the cost of higher-tier abstractions when they are not needed.
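To make that contrast concrete, here is a minimal sketch (my own illustration, not taken from the article): the smallest runnable Java program already commits you to the class abstraction, while the equivalent Groovy script stays at the “sequence of actions” tier.

// Tier 4: even a trivial Java program must be wrapped in a class with a static entry point.
public class Hello {
    public static void main(String[] args) {
        System.out.println("hello");
    }
}

// Tier 2-3: the equivalent Groovy script is a single top-level statement:
//     println 'hello'
// No class or main method is needed until the task grows complex enough to want them.

The boilerplate here is trivial, but it shows the floor each notation sets on the cost of even the simplest program.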
Readings in database systems, 5th edition: large-scale dataflow engines
In a sense, MapReduce was a short-lived, extreme architecture that blew open a design space. The architecture was simple and highly scalable, and its success in the open source domain led many to realize that there was demand for alternative solutions and the principle of flexibility that it embodied (not to mention a market opportunity for cheaper data warehousing solutions based on open source). The resulting interest is still surprising to many and is due to many factors, including community zeitgeist, clever marketing, economics, and technology shifts. It is interesting to consider which differences between these new systems and RDBMSs are fundamental and which are due to engineering improvements.
Today, there is still debate about the appropriate architecture for large-scale data processing. As an example, Rasmussen et al. provide a strong argument for why intermediate fault tolerance is not necessary except in very large (100+ node) clusters. As another example, McSherry et al. have colorfully illustrated that many workloads can be efficiently processed using a single server (or thread!), eliminating the need for distribution at all. Recently, systems such as the GraphLab project suggested that domain-specific systems are necessary for performance; later work, including Grail and GraphX, argued this need not be the case. A further wave of recent proposals has also suggested new interfaces and systems for stream processing, graph processing, asynchronous programming, and general-purpose machine learning. Are these specialized systems actually required, or can one analytics engine rule them all? Time will tell, but I perceive a push towards consolidation.
Finally, we would be remiss not to mention Spark, which is only six years old but is increasingly popular with developers and is very well supported both by VC-backed startups (e.g., Databricks) and by established firms such as Cloudera and IBM. While we have included DryadLINQ as an example of a post-MapReduce system due to its historical significance and technical depth, the Spark paper, written in the early days of the project, and recent extensions including SparkSQL, are worthwhile additional reads. Like Hadoop, Spark rallied major interest at a relatively early stage of maturity. Today, Spark still has a ways to go before its feature set rivals that of a traditional data warehouse. However, its feature set is rapidly growing and expectations of Spark as the successor to MapReduce in the Hadoop ecosystem are high; for example, Cloudera is working to replace MapReduce with Spark in the Hadoop ecosystem. Time will tell whether these expectations are accurate; in the meantime, the gaps between traditional warehouses and post-MapReduce systems are quickly closing, resulting in systems that are as good at data warehousing as traditional systems but that can also do much more.
Looking back at Postgres
The highest-order lesson I draw comes from the fact that Postgres defied Fred Brooks' “Second System Effect”. Brooks argued that designers often follow up on a successful first system with a second system that fails due to being overburdened with features and ideas. Postgres was Stonebraker’s second system, and it was certainly chock full of features and ideas. Yet the system succeeded in prototyping many of the ideas, while delivering a software infrastructure that carried a number of the ideas to a successful conclusion. This was not an accident — at base, Postgres was designed for extensibility, and that design was sound. With extensibility as an architectural core, it is possible to be creative and stop worrying so much about discipline: you can try many extensions and let the strong succeed. Done well, the “second system” is not doomed; it benefits from the confidence, pet projects, and ambitions developed during the first system. This is an early architectural lesson from the more “server-oriented” database school of software engineering, which defies conventional wisdom from the “component-oriented” operating systems school of software engineering.
Another lesson is that a broad focus — “one size fits many” — can be a winning approach for both research and practice. To coin some names, “MIT Stonebraker” made a lot of noise in the database world in the early 2000s to the effect that “one size doesn’t fit all”. Under this banner he launched a flotilla of influential projects and startups, but none took on the scope of Postgres. It seems that “Berkeley Stonebraker” defies the later wisdom of “MIT Stonebraker”, and I have no issue with that. Of course there’s wisdom in the “one size doesn’t fit all” motto (it’s always possible to find modest markets for custom designs!), but the success of “Berkeley Stonebraker’s” signature system — well beyond its original intents — demonstrates that a broad majority of database problems can be solved well with a good general-purpose architecture. Moreover, the design of that architecture is a technical challenge and accomplishment in its own right. In the end — as in most science and engineering debates — there isn’t only one good way to do things. Both Stonebrakers have lessons to teach us. But at base, I’m still a fan of the broader agenda that “Berkeley Stonebraker” embraced.