The Superoptimisation Trifecta

Penned on the 1st day of October, 2021. It was a Friday.

Back in May of this year, I wrote about a novel concept called syndication, where the content of a digital social space is openly duplicated in order to decouple it from the infrastructure over which it is transmitted. Like most of my good ideas, it has since evolved into a more complete, workable model. This is what you will read about today: a general prescription for sustainable internetworking of information which I have come to call superoptimisation.

Background & problem

This work began in my journal, when I gained my first exposure to Atlassian products such as Confluence. I did not find much power in these tools at all: things were very poorly organised even as we tried to keep them together; the Google-like searchability that was supposed to answer the organisation problem never worked well and offered no programmability; and WYSIWYG was the only mode of editing.

When I was younger I gained a good volume of experience editing MediaWiki sites, and setting up hosting to run my own. I still manage a couple of wikis to this day. 2010 was a very different time for software! Bots were a staple of any large wiki project. People still strove for competency in writing wikicode, at least if they wanted to be decent editors. The whole thing was easy to integrate with IRC, RSS, and other open web standards. Anybody could program it, and in places like Wikipedia, they did.

I don’t know where all of this power disappeared to, but I do know that it used to be this way and it most definitely is not like that now. This is a problem for companies that thrive on individual competency and skill: everything is slowly being reduced to a gruelling lowest common denominator that makes even early-stage startups as sclerotic as Microsoft. I don’t want to work this way.

The idea underpinning solutions to this is fairly straightforward. Data on the internet needs to be divided into two categories, according to its nature: heavy data, and light data. An example of light data is Git repository information – it is light because it is practical to syndicate, duplicating and decentralising the data to maximise its reach in the network. By contrast, heavy data could be something like stars or follows on a major website – it is heavy because it is more practical to aggregate than to syndicate: adding the data together and softly decentralising it to maximise human visibility of the information on the network.
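The light/heavy split can be sketched as two dissemination strategies. This is a minimal illustration of my own devising – the function names, the `peers` shape, and the example data are all illustrative assumptions, not part of any real protocol:

```python
# Hypothetical sketch of the light/heavy split. Every name here is
# an illustrative assumption, not a real protocol or API.

def syndicate(light_datum: str, peers: list[dict]) -> None:
    """Light data (e.g. a Git commit) is cheap to copy in full,
    so every peer keeps its own complete replica."""
    for peer in peers:
        peer[light_datum] = light_datum  # full duplication at each node

def aggregate(heavy_counts: dict[str, int]) -> int:
    """Heavy data (e.g. stars or follows) is impractical to copy
    wholesale, so peers exchange summaries that are added together."""
    return sum(heavy_counts.values())

peers = [{}, {}, {}]
syndicate("commit:abc123", peers)          # every peer now holds the commit
total_stars = aggregate({"host-a": 41, "host-b": 7})  # one combined figure
```

The asymmetry is the point: syndication multiplies copies across the network, while aggregation collapses many partial views into one figure that everyone can see.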

With light data syndicated and heavy data aggregated, the only thing left to do is integrate all of the data. This is where MediaWiki bots like the ones I described above come into play. This is where IRC bots can help. It’s also where protocol bridges between IRC, Discord, Matrix, and other protocols come into use. It is very practical to bridge this information everywhere, so that the most people can see it.
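The bridging step reduces to a simple relay: a message arrives in one space and is repeated, with attribution, into every other connected space. As a loose sketch only – the `Room` class and its methods are my own stand-ins, not the API of any real IRC or Matrix bridge:

```python
# Hypothetical relay sketch. Room, receive, and bridge are illustrative
# stand-ins, not any real IRC/Discord/Matrix library.

class Room:
    """Stands in for one protocol endpoint: an IRC channel,
    a Matrix room, a Discord channel, and so on."""
    def __init__(self, name: str):
        self.name = name
        self.log: list[str] = []

    def receive(self, text: str) -> None:
        self.log.append(text)

def bridge(source: Room, sinks: list[Room], text: str) -> None:
    """Deliver a message to its origin room, then relay it into every
    other connected space, tagged with where it came from."""
    source.receive(text)
    for sink in sinks:
        sink.receive(f"[{source.name}] {text}")

irc = Room("irc")
matrix = Room("matrix")
bridge(irc, [matrix], "meeting at noon")
```

Real bridges must also translate formatting and identity between protocols, but the core of the integration step is exactly this fan-out.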

This model is the complete opposite of the typical data-centric startup model that monetises personal information and tries to sell ads. No more walled gardens where troves of user information are sold to FAANG at bid, and later compromised by hackers. That model is completely disempowering; superoptimisation is completely empowering. It is of interest to any company or group that values competency, knowledge, and transparency with itself about its own processes. It is light shed upon byzantine bureaucracy.

Until next time,
Άλέξανδερ Νιχολί