Hootsuite: in pursuit of reactive systems
Internally, we’re still trying to define what a microservice is and what its scale ought to be. Today, most of our services focus on accomplishing just one set of highly related tasks. Our goal is to have each of these services own some part of our domain model. Data services, for example, would each control their own data, and nothing else would have access to that data directly. To get to it, you would have to go through the data service itself.
Then we would also have functional services, which essentially glue business logic onto the data services. But in terms of the size of these things, I’d say we’re still trying to figure that out; we haven’t come up with any hard-and-fast rules so far.
Joining data from multiple Postgres databases
With big data in its heyday and people running lots of Postgres databases, one sometimes needs to join or search data from multiple perfectly ordinary, independent PostgreSQL databases (i.e. no built-in clustering extensions or the like in use) and present it as one logical entity. Think sales-reporting aggregations over logical clusters, or matching click-stream info with sales orders based on customer IDs.
So how do you solve such ad-hoc tasks? You could of course handle it at the application level with some simple scripting, but let’s say we only know SQL. Luckily, PostgreSQL (plus its ecosystem) provides some options out of the box, and there are also third-party tools for cases where the Postgres options are off the table (for example, when you have no superuser rights or can’t install extensions). So let’s look at the following four options (a short sketch of the first two follows the list):
dblink extension
postgres_fdw extension
Presto distributed SQL engine
UnityJDBC virtual driver + SQuirreL SQL client
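To make the first two options concrete, here’s a minimal sketch of the dblink and postgres_fdw approaches. The connection parameters, table names (orders, customers) and columns below are hypothetical stand-ins for the click-stream/sales-order example above, not anything prescribed by the extensions themselves.

```sql
-- Option 1: dblink – fire ad-hoc queries at a remote database.
-- dblink() returns an anonymous record set, so the column list
-- has to be declared at every call site.
CREATE EXTENSION IF NOT EXISTS dblink;

SELECT c.customer_id, c.name, o.order_total
FROM dblink('host=saleshost dbname=sales user=report',
            'SELECT customer_id, order_total FROM orders')
       AS o(customer_id int, order_total numeric)
JOIN customers c USING (customer_id);

-- Option 2: postgres_fdw – declare the remote table once as a
-- foreign table, then join against it like any local table.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER sales_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'saleshost', dbname 'sales');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER sales_srv
  OPTIONS (user 'report', password 'secret');

CREATE FOREIGN TABLE remote_orders (
  customer_id int,
  order_total numeric
) SERVER sales_srv OPTIONS (schema_name 'public', table_name 'orders');

SELECT c.customer_id, c.name, sum(o.order_total) AS total
FROM customers c
JOIN remote_orders o USING (customer_id)
GROUP BY c.customer_id, c.name;
```

Both extensions require installation rights on the database, which is exactly the situation where the Presto and UnityJDBC options start to look attractive.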
IncludeOS: a unikernel for C++ applications
IncludeOS is a project to create a C++ API for the development of unikernel-based applications. When an application is built using IncludeOS, the development toolchain will link in the parts of the IncludeOS library required to run it and create a disk image with a bootloader attached. An IncludeOS image can be hundreds of times smaller than an Ubuntu system image running an equivalent program. Start times for the images run in the hundreds of milliseconds, making it possible to spin up many such virtual machines quickly.
When an IncludeOS image boots, it initializes the operating system by setting up memory, running global constructors, and registering drivers and interrupt handlers. In an IncludeOS unikernel, virtual memory is not enabled, and a single address space is used by both the application and the unikernel library. Therefore there is no concept of system calls or user space; all operating system services are called with a simple function call to the library and all run in privileged mode.
The unikernel is also single-threaded, and there is no preemption. Interrupts are deferred when they happen, and attended to at every iteration of the event loop. The design suggests user programs also be written to follow the asynchronous programming model, with callbacks installed to respond to operating system events. For example, a TCP socket can be set up in a user program and a callback inside the application handles the connection when a third party attempts to connect.
An advantage of IncludeOS’s minimalist design is the reduction of the attack surface for the application. With a self-contained application appliance, there are no shells or other tools that would be helpful to an attacker if they manage to compromise the application. Additionally, the stack and heap locations are randomized to discourage attackers.
IncludeOS does not implement all of POSIX. The developers intend to implement only those parts of POSIX that prove necessary, as needs arise; full POSIX compliance is unlikely ever to be pursued as a goal. Currently, there are no blocking calls implemented in IncludeOS, as the event-loop model is the favored way to use it. IncludeOS also lacks a writable filesystem at this point.
Networks all the way down & Networks all the way down, part 2
This is not an isolated incident: if you go over the list of peripherals from last time, you’ll find something similar for every single one of them. Ethernet technology has essentially been completely redesigned between 1989 and 2014, yet the only part of the Ethernet standard that’s both device-independent and directly visible to software – namely, Ethernet frames – has remained essentially unchanged (except for the addition of jumbo frames, which are still not widely used). SAS and SATA may use a very different encoding and topology from their parallel counterparts, but from a software perspective the changes are evolutionary, not revolutionary. Regular PCI and PCIe are essentially the same from the perspective of the OS. And so forth. Probably the most impressive example of all is the x86 architecture; a modern x86 processor can (and will) go to substantial lengths to pretend that it’s really just a fancy 8086, a 286, a 386, or a slightly extended AMD Athlon 64, depending on the operating mode. The architecture and implementation of current x86 processors are nothing like those of the original processors they emulate, but while the implementations may change substantially over time, the interfaces – protocols, to stay within our networking terminology – stay mostly constant over large time scales, warts and all.
Note how much this runs counter to the common attitude that software is easy to change, and hardware difficult. This might be true for individuals; but across computing as a whole, it’s missing the point entirely. The more relevant distinction is that implementations are easy to change, and protocols difficult. And once a protocol (or API in software terms) is firmly established and widely supported, it’s damn near impossible to get rid of. If that means that USB thumb drives have to pretend they are hard disks, and need the capability to understand the file system that’s stored on them to do a good job; if that means that Ethernet devices have to pretend that it’s 1989 and all nodes on the network are connected to one wire and identified by their MAC address, rather than connected by point-to-point links via network switches that understand IP just fine; if that means that “x86” processors have to pretend they’re not actually out-of-order cores with an internal RISC-ish instruction set, hundreds of registers and sophisticated “decoding” (really, translation) engines; then so be it. That is the price we pay to keep things more or less in working order – except for the occasional instance when all our sophisticated abstractions break, the underlying reality shines through, and that mounted network share is revealed to not actually be on the local machine at all, by producing errors that no app knows how to handle.
William Gibson has a theory about our cultural obsession with dystopias
To what extent do you see World War II as a real-life apocalypse? We think of it as a victory, but nothing in human history can match its devastation, after all. Are we therefore living in a post-apocalypse?
I suppose that’s a matter of perspective. In many and various ways, WWII was the change-driver for much of the world we live in. But traditionally, in our culture, apocalypse has been imagined as a single, unitary, short-term event. In that sense, I don’t think something that also results in 70-some years of unprecedented socioeconomic well-being for millions of people can count as an apocalypse, even though it was also, at the same time, quite apocalyptic for many of the less fortunate.
Dishonored’s party level rewrote the rules of stealth games
Unfortunately, non-lethal or ghost playthroughs have largely come to be treated as the way to play stealth games. Assassin’s Creed forces a restart if you’re caught. Deus Ex: Human Revolution grants far more XP to non-lethal players than to others. Sneaky Bastards, a magazine for stealth enthusiasts, claims that there is no stealth genre, merely action games with stealth, because nothing lives up to their ideal of what stealth should be (Justin Keverne’s great counterpoint in the comments of the linked article is well worth a read). The genre is in danger of becoming homogeneous, because people increasingly treat it as if there were only one “true” way to play. An entire genre can’t sustain itself on shooters with just one kind of gun or action games with just one kind of attack; likewise, stealth can’t sustain itself on non-lethal or ghost runs alone.
Dishonored works so well because it rejects the notion of there being a “right” way to play a stealth game. It’s a game about tools, a game about choices. It celebrates you. Dishonored treats stealth as a matter of motive rather than physical awareness. Because the essence of video games is interactivity, it stands to reason that the best games are the ones that give the player the most options, and Dishonored is all about those options. Keep your true purpose hidden, eliminate the target assigned to you, and leave, but feel free to sign the guestbook and leave your mark, saying “I was here”.