The Injustice of Complexity in Computing

Penned on the 4th day of April, 2020. It was a Saturday.

There has been a troubling trend in the way we do computing over the last few decades. Our computers have grown exponentially more complicated for a long time, and the benefits expected in exchange for the cost of that complexity have not manifested at all. In their stead, the layman sees computers becoming increasingly simple and toy-like. Creativity and productivity with computers seem to follow the inverse of their increasing complexity. This is an absolute disaster for everyone who uses computers.

For at least the last ten years, computers, tablets, smartphones and TVs alike have all been approaching the same boring singularity of Star Trek: magic tablets and big screens. This is imaginatively bankrupt, but it is also a gross injustice to the public, because it has been forced onto them, denying computers any other meaningful form. No hobby electronics project can reach critical cultural mass without answering to the very market that started this mess to begin with. We’re not as inspired to dream, because the market has been reconfigured to make experimentation cost too much. We must be able to experiment without excessive risk if we are to innovate and achieve new things.

Ten years ago, a ThinkPad had more IO ports than you could count on two hands; even an experienced user might not be able to name them all. Nowadays, it’s just a MacBook by any other name. The IO spread is minimal. You might still have an Ethernet port if you’re lucky. Everything goes through the sacred USB-C Thunderbolt ports, no matter what it is. If you dug into it, you’d notice that USB-C and Thunderbolt are vastly more complex than any single-purpose standard like Ethernet ever was on its own. Why is everything becoming the same overcomplicated, interchangeable goop?

Those developing these products are not concerned with the possibilities their customers have for using the computer creatively. Instead, they’re completely preoccupied with what new imaginary things their latest programming idea is going to be able to do. They have become singularly focussed on compute and chassis, at the blatant expense of everything else. Most people do not know what they are missing, because what they are missing is the things that never got made.

This perversion of computing towards the bottom lines of their progenitors is materially obvious in hardware. Less obvious, and perhaps more lethal to any dream, is how this same phenomenon takes place with software.

Software development has become so ridiculously complex that the kinds of programming in vogue today seem like child’s play to veteran programmers. Upon closer inspection, they become baffled, as the documentation for the API they were trying to use never seems to run out. Eventually they resign themselves, as the ‘solution’ is so hopelessly complicated they could never use it. As someone who isn’t a month past the age of 22, I can’t describe how silly it feels to experience this myself. I didn’t get a bachelor’s in the 1980s. I never worked in aerospace or anything like it. But that’s how dated I felt after reading about Ethereum for a couple of hours.

The main entrée of this fetishization of complexity is none other than the World Wide Web. At the beginning, things were simple. Network programmers were expected to use TCP/IP, and to deal in terms of ports, sockets, sending data as octets, and so on. But complexity came on fairly quickly. Some of it was justified; the technical challenges of scaling and distribution were real problems that nobody had definitively solved until Web 2.0 came into its own. Other aspects of it were there simply because they could be. These days, ‘because we can’ complexity comprises the vast majority of programming on the Web. It’s so out of hand that we’re repeating the history of Internet Explorer, as Google engineers sit atop their golden mountain and let personal preference dictate the behaviour of Chromium over the standards every other browser uses. They know it doesn’t matter, because nobody can overcome the system they built and defy their will. It’s too complex to co-opt. It’s too complex to fix. It’s too complex to hand off to a standards body – have you seen what the W3C has been like lately? This is all deliberate. The only winning move is not to play.
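To make the contrast concrete, here is a sketch of network programming the old-fashioned way: plain BSD sockets in C, with one process sending octets to itself over the loopback interface. The function name and the trimmed-down error handling are my own illustration, not from any particular codebase; the calls themselves are the standard POSIX socket API, and this is the whole conceptual surface a network programmer once had to learn.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One process talks TCP to itself over loopback: open a listening
 * socket on an ephemeral port, connect to it, and pass octets
 * through. Error handling is trimmed for brevity. */
ssize_t send_octets_loopback(const unsigned char *msg, size_t len,
                             unsigned char *out, size_t outlen)
{
	int lsock = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr;
	memset(&addr, 0, sizeof addr);
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = 0; /* port 0: let the kernel pick one */
	bind(lsock, (struct sockaddr *)&addr, sizeof addr);
	listen(lsock, 1);

	/* ask which port we actually got */
	socklen_t alen = sizeof addr;
	getsockname(lsock, (struct sockaddr *)&addr, &alen);

	int csock = socket(AF_INET, SOCK_STREAM, 0);
	connect(csock, (struct sockaddr *)&addr, sizeof addr);
	int ssock = accept(lsock, NULL, NULL);

	send(csock, msg, len, 0);                  /* octets out... */
	ssize_t got = recv(ssock, out, outlen, 0); /* ...and back in */

	close(csock);
	close(ssock);
	close(lsock);
	return got;
}
```

Ports, sockets, octets – that really was the whole vocabulary.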

This is a new paradigm of human rights in software, one that has eclipsed the previous era of Free vs. Proprietary Software. The coming decades will be coloured by a struggle for the freedom, not of our developers, but of our very computers, to do what we expect them to do. All of the major technology companies are staking their fortunes on an ocean of code that is impossible not to maintain continuously. The fact that you have to go to them when something breaks is how they are winning this struggle. The fact that even programmers, the people who supposedly command the computer, are as shanghai’d by all this as the general public is how they are winning. They treat computer programs like living things and cash in on the de facto custody they have over them.

Computers are not living things. Computers are machines. If we are not the ones telling them what to do, somebody else is. If we are not knowledgeable enough to be able to tell them what to do, somebody else is. Look closely at who those people are, and you may find that they are startlingly powerful. Our computer software is deliberately, if unwittingly, engineered to be perpetually unstable, creating a class of middleman developers-turned-politicians. Technical leads, or whoever leashes them, hold a proprietorship over their software that transcends licensure entirely.

Computers are complicated enough as it is. Have you ever tried to program the Game Boy Advance? To be frank, anything more complex than that probably can’t be understood by a single person. It’s a fascinating console I’ve studied for years. I stuck with it because it empowers me to an extent that no graphics framework or game engine on the PC ever could.

The big selling point behind game engines is their wide array of platform support. Under the hood, they’re veritable jungles of spaghetti code that barely run, and nobody who has ever worked on the development team in the product’s entire history could definitively tell you how they do. This might be called a catastrophe, but thanks to the market, people have been convinced to overlook the price of this poor craftsmanship. They’ll download an 80-megabyte mobile app to play a clicker game, because they don’t care and don’t have a choice. But they’re still paying for it.

The thing nobody finds intuitive about the GBA is that it’s just as cross-platform, and vastly simpler to reason about. Think about it: emulators have brought it to every platform you can think of and more. Of course it has controller support. You never have to worry about saves. Sure, it’s limiting, but a lot of games don’t need more than it can provide. Come to think of it, the emulators are relatively simple, too. They’re implementing a pretty simple console, after all.
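What that simplicity looks like in practice: on the GBA, drawing a pixel is a couple of memory writes, with no framework in between. The sketch below shows mode-3 bitmap plotting. The register addresses in the comments are the real ones from the GBA memory map, but the framebuffer is passed in as an ordinary buffer so the snippet stands alone off-hardware; the helper names are my own.

```c
#include <stdint.h>

typedef uint16_t u16;

/* On real hardware these are fixed addresses in the GBA memory map:
 *   REG_DISPCNT at 0x04000000 selects the video mode;
 *   VRAM begins at 0x06000000.
 * Writing (MODE3 | BG2_ENABLE) to REG_DISPCNT turns on the
 * 240x160, 15-bit-colour bitmap mode. */
#define MODE3      0x0003
#define BG2_ENABLE 0x0400

#define SCREEN_W 240
#define SCREEN_H 160

/* Pack a 15-bit BGR colour: 5 bits per channel, 0..31 each. */
static u16 rgb15(unsigned r, unsigned g, unsigned b)
{
	return (u16)(r | (g << 5) | (b << 10));
}

/* Plot one pixel into a mode-3 framebuffer. On the console itself,
 * `vram` would simply be (u16 *)0x06000000; here it is any
 * 240x160 buffer of u16, so the logic runs anywhere. */
static void plot(u16 *vram, int x, int y, u16 colour)
{
	vram[y * SCREEN_W + x] = colour;
}
```

That is the entire graphics ‘stack’: no driver, no context creation, no shader pipeline. The whole machine is documented well enough for one person to hold it in their head.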

If there were a modern retake on a minimally usable console for game development, it could conceivably handle 99% of the games that have ever existed. It could provide a basic 3D environment. It could provide save files. It could provide removable media. It could even connect to the internet. And it wouldn’t be a thousand times more complicated than the GBA in doing so. It might seem strange in that it may not have an OS, and its deliberately primitive APIs might befuddle some, but people would carry on. This doesn’t exist because most employable programmers in the world would never make something so inconvenient for themselves.

Most developers in major firms, and indeed many developers in the wild, have a chronic problem with problem solving. They are infamous for creating tools that are essentially solutions looking for problems. Ethereum is the grand pinnacle of such a problem-free solution. The thing they never seem to be able to reckon with is that the general public doesn’t care about their code. At all.

Many stabs have been taken at making a competitor to YouTube. Some failed for a lack of competency or skill. The rest failed because they had too much of it. They had developers who were so turnt, so full of themselves, that they were convinced by God that the YouTube killer must exist on the blockchain, because of course it must, and six months later out comes a website that was supposed to be a video sharing service but is labelled a “digital content marketplace” instead. This actually happened. At the end of the day, people don’t care about the technicals. All they ever wanted was a better-managed platform for their content.

But what’s in it for a game developer to target the Game Boy Advance instead of some other cross-platform tool, anyway? Well, nobody forces a choice onto you between having good code and having a product that still runs 20+ years from now. I can make a game that you will like, I can do it quickly, and it will not fall apart next week, for either of us. I can ship physical copies without worry. You can have a game that is truly yours to cherish, explore, complete, disassemble, and most importantly dream about, as I once did with Pokémon when I was a teenager. I see this as an absolute win.

An apprehensible machine lets us write better code more quickly, simply because it’s simpler. This is a win for human understanding, which is a win for creativity too, because it’s a win for the human imagination. It’s a stake in the heart of the vampyre of the Star Trek magic-tablets-and-nothing-else future we’re headed towards. Computers can be tools for humans first and foremost, if we truly want them to be.

Until next time,
Alexander Nicholi