Translating Java to Kotlin at scale
Contrary to popular belief, we’ve found it’s often safer to leave the most delicate transformations to bots. There are certain fixes we’ve automated as part of postprocessing, even though they aren’t strictly necessary, because we want to minimize the temptation for human (i.e., error-prone) intervention. One example is condensing long chains of null checks: The resulting Kotlin code isn’t more correct, but it’s less susceptible to a well-meaning developer accidentally dropping a negation.
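To make that concrete, here is a minimal sketch (the domain types and function names are invented for illustration, not taken from the migrated codebase) of the kind of condensation such a postprocessing step might perform:

```kotlin
// Illustrative domain types, assumed only for this sketch.
data class City(val name: String)
data class Address(val city: City?)
data class User(val address: Address?)

// The shape a mechanical Java-to-Kotlin translation tends to produce: every
// link in the chain is checked explicitly, and a single flipped or dropped
// negation would silently change behavior.
fun cityNameVerbose(user: User?): String? {
    if (user != null) {
        val address = user.address
        if (address != null) {
            val city = address.city
            if (city != null) {
                return city.name
            }
        }
    }
    return null
}

// What an automated condensing pass could reduce it to: a safe-call chain
// with no negations left to get wrong.
fun cityName(user: User?): String? = user?.address?.city?.name
```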
Migrations: the sole scalable fix to tech debt.
As a result, you switch tools a lot, and your ability to migrate to new software can easily become the defining constraint on your overall velocity. Despite their importance, we don’t talk about running migrations very often; let’s remedy that!
Interim note 3: text-based media in the age of showmanship
Microblogging creates meaning through showmanship and performance.
Design token-based UI architecture
Design tokens are design decisions as data and serve as a single source of truth for design and engineering. Utilizing deployment pipelines, they enable automated code generation across platforms, allowing for faster updates and improved consistency in design. Organizing tokens in layers — progressing from available options to tokens that capture how they are applied — ensures scalability and a better developer experience. Keeping option tokens (e.g. color palettes) private reduces file size and supports non-breaking changes. These benefits make design tokens particularly well-suited for organizations with large-scale projects, multi-platform environments or frequent design changes.
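As a rough illustration of that layering (the token names and values here are invented, not from the article), the option layer holds the raw palette and stays private, while the public decision layer records how those options are applied:

```kotlin
// Option layer: the raw palette of available values. Kept private so the
// palette can change without breaking consumers or bloating the public
// token surface.
private object ColorOptions {
    const val blue500 = "#2962FF"
    const val gray900 = "#212121"
}

// Decision layer: the tokens design and engineering actually reference,
// named for how a value is used rather than what it is.
object ColorTokens {
    const val actionPrimary = ColorOptions.blue500
    const val textDefault = ColorOptions.gray900
}
```

Generated platform artifacts (CSS variables, Android resources, and so on) can then be derived from the decision layer alone, which is what keeps palette changes and renames non-breaking.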
How Stripe architected massive scale observability solution on AWS
Stripe formulated a four-phase approach to this migration:
Dual-write metrics to both the legacy time-series database (TSDB) and Amazon Managed Service for Prometheus (AMP); see the sketch after this list
Translate assets (alerts and dashboards) from the source language into PromQL and Amazon Managed Grafana dashboards
Validate the results of the translated assets and the dual-written metrics in an iterative fashion
Migrate users' mental models to a Prometheus-compatible mental model
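A minimal sketch of what the first phase could look like in application code (the interface and class names are hypothetical; the article doesn't show Stripe's implementation): every metric is recorded against both backends, with the legacy TSDB remaining authoritative until validation completes.

```kotlin
// Hypothetical metrics abstraction used only for this sketch.
interface MetricsBackend {
    fun record(name: String, value: Double, tags: Map<String, String> = emptyMap())
}

// Phase 1: dual-write. Every emission goes to the legacy TSDB and to AMP, so
// the translated alerts and dashboards (phases 2 and 3) can be validated
// against identical data before anyone cuts over.
class DualWriteBackend(
    private val legacyTsdb: MetricsBackend,
    private val amp: MetricsBackend,
) : MetricsBackend {
    override fun record(name: String, value: Double, tags: Map<String, String>) {
        legacyTsdb.record(name, value, tags)
        // Failures on the new path are swallowed so the migration can't
        // degrade the existing, still-authoritative pipeline.
        runCatching { amp.record(name, value, tags) }
    }
}
```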
A practitioner’s guide to wide events
The tl;dr is that for each unit of work in your system (usually, but not always, an HTTP request/response) you emit one “event” with all of the information you can collect about that work. “Event” is an overloaded term in telemetry, so replace it with “log line” or “span” if you like; they are all effectively the same thing.
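As a loose sketch of what that looks like in practice (the attribute names are just plausible examples, not the article's list), the handler accumulates context into a single map and emits it exactly once, however the request ends:

```kotlin
// Minimal illustration of "one wide event per unit of work" for an HTTP-ish
// handler; in a real system the map would go to a logging/tracing pipeline
// rather than stdout.
fun handleRequest(requestId: String, userId: String?): Int {
    val event = mutableMapOf<String, Any?>(
        "request_id" to requestId,
        "user_id" to userId,
        "service.version" to "1.42.0",   // deploy/build metadata
    )
    val startNs = System.nanoTime()
    var status = 200
    try {
        // ... do the real work, enriching the event as facts become known ...
        event["cache.hit"] = false
        event["db.rows_returned"] = 17
        return status
    } catch (e: Exception) {
        status = 500
        event["error.message"] = e.message
        throw e
    } finally {
        event["status_code"] = status
        event["duration_ms"] = (System.nanoTime() - startNs) / 1_000_000
        println(event)   // emit exactly one wide event for this unit of work
    }
}
```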
This is where I find a lot of developers get tripped up. The idea sounds good in theory, and we should totally try that one day! But I have this stack of features to ship, that bug that’s been keeping me up at night, and 30 new AI tools that came out yesterday to learn about. And like… where do you even start? What data should I add?
Like anything in software, there are a lot of options for how to approach this, but I’ll talk through one approach that has worked for me.
We’ll cover how to approach this in tooling and code, walk through an extensive list of attributes to add, and then I’ll respond to some frequent objections that come up when discussing this approach.
The fascinating security model of dark web marketplaces
The boom-and-bust cycle has triggered a sort of evolution, with each new marketplace learning from the flaws of the previous one. This market is the culmination of that evolution, at least for now. The intent of this article is to shed light on its security model as a technical curiosity, without romanticizing or otherwise commenting on the products it sells.
Programmer migration patterns
With all that said, now I can finally make a point about python 2 vs 3. They are very similar languages, yet somehow not the same. In my opinion, that’s because they occupy totally different spots in this whole programmer migration chart.
Python 2 developers came from a world of C and perl, and wanted to write glue code. Web servers were an afterthought, added later. I mean, the web got popular after python 2 came out, so that’s hardly a surprise. And a lot of python 2 developers end up switching to Go, because the kind of “systems glue” code they want to write is something Go is suited for.
Python 3 developers come from a different place. It turns out that python usage has grown a lot since python 3 started, but the new people are different from the old people. A surprisingly large fraction of the new people come from the scientific and numerical processing world, because of modules like SciPy and then Tensorflow. Python is honestly a pretty weird choice for high-throughput numerical processing, but whatever, those libraries exist, so that’s where we go. Another triumph of python’s easy integration with C modules, I guess. And python 3 is also made with the web in mind, of course.