
The most important computer-related innovations in the last 20ish years (post-internet) - in my view

Published Sat Feb 25 2023

Tags: opinions


Today I want to share with you my favorite computer-related innovations in our post-internet world. With all the bad things happening in the world, it is good to look back at the good that has happened, and looking at innovations in computing is something that cheers me up. This will of course be heavily opinionated, so your list might look very different.

Some of you are probably expecting to see Cloud Computing, Blockchain and AI on this list. Those people will be disappointed (though AI is briefly mentioned). The reason is that while Cloud Computing and Blockchain might be innovations in some respects, I don't see them as being as big as what I mention in this article. Cloud computing is basically just Timesharing 2.0, as we just run our stuff on other people's computers. Yes, it empowers us to create things without having to take on all the hardware and operations responsibility ourselves, so it will probably be useful for many businesses, big and small, in the coming years. I just don't see it as the big silver bullet some people do. Similarly, blockchain is more of an economic and financial innovation. I hope we can one day use it to limit government intervention and remove central banks, but there are also other tools for doing that. (My views here should probably not come as a shock after seeing that I have an article on Ayn Rand on this blog.)

Highly programmable GPUs - massively parallel floating point operations for the masses

Graphics Processing Units (GPUs) started their life as fixed-function state machines. You sent them fixed commands, and they obeyed. With their parallel nature, it wasn't too long until someone decided to make them more programmable. What if we could run custom programs on GPUs? That is how shaders came to be. With shaders, developers could program custom logic for how their vertices (points) and pixels were calculated. This still went through a pipeline of operations, but parts of it became programmable. (This pipeline included calculating which 3D points were visible, removing non-visible ones, etc.) If you want to learn more about how graphics processing pipelines work, I suggest reading this article (it is older, but still valid). The programmability increased through the years, which led to using GPUs for general-purpose applications (i.e., not just graphics). This is called General-Purpose GPU computing (GPGPU). Simulations and massively parallel computations suddenly became possible for more people at home. This led to innovations in simulation algorithms (e.g., fluid simulations using Smoothed Particle Hydrodynamics), graphics algorithms (e.g., screen space ray tracing and volume rendering), and more. It also empowers AI (which we discuss later), and has made it faster to train neural networks and do massive calculations. (Much of AI and machine learning is calculation, not conditional logic.)

Some of you may wonder: why don't we just do the same thing with our CPUs if GPUs are so fast? The answer is that the devices work a bit differently, even though they both have some degree of parallelism. CPUs run multiple threads as well, but they are designed to make each individual thread as fast as possible, not necessarily to maximize throughput (completing as many threads' worth of work as possible overall). This can be a bit confusing at first. To me, it helps to look at the hardware. A CPU core has access to memory (RAM) and has its own cache, control logic and arithmetic units (for math calculations). A GPU has access to its own memory and has a scheduler, but other than that it is pretty much just floating point units. In essence, each CPU core is a bigger beast in terms of functionality than a GPU "core". (This is of course an overly simplified explanation!) Low Level Learning on YouTube has a nice video on this topic if you are interested.
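
To make this a bit more concrete, here is a minimal GPGPU sketch in Python. It assumes an NVIDIA GPU and that the CuPy library is installed (the library choice is my own illustration, not the only way to do this); the same element-wise math runs on the CPU with NumPy and on the GPU with CuPy, where it is executed by a large number of parallel threads.

```python
# Minimal GPGPU sketch (assumes an NVIDIA GPU and that CuPy is installed).
# The same element-wise expression runs on the CPU with NumPy and on the GPU
# with CuPy, where it executes as massively parallel kernels.
import numpy as np
import cupy as cp

n = 10_000_000
x_cpu = np.random.rand(n).astype(np.float32)
x_gpu = cp.asarray(x_cpu)                 # copy the data into GPU memory

y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0        # runs on a handful of CPU cores
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0        # runs across thousands of GPU threads

print(np.allclose(y_cpu, cp.asnumpy(y_gpu)))  # results agree within float tolerance
```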

Container technology - Process separation to utilize computing resources more effectively

While Virtual Machines (VMs) have been around for a while, they had one serious drawback: you had to emulate or virtualize several operating systems on the same machine. This provided a separation of processes, and therefore let us run a lot of different programs on a single computer, but having to run multiple operating systems wasted a lot of resources. Containers, on the other hand, share the operating system kernel itself, but have their own file system and process isolation (each container can only see its own processes). In Linux this is done through mechanisms like cgroups and namespaces. In essence, a container is just a process with extra steps to make it more isolated. This is very lightweight, and has made it super easy to avoid the classic "works on my machine" issues. You pack all your dependencies, like a Debian file system and/or OpenSSL, into the container, and the recipient runs exactly the same versions as you. That lets you set up complete working servers, database setups and similar, which others can then download and run. These days containers come in two main flavors: Linux containers (the original and most popular), and Windows containers (if that is your poison).
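
To illustrate the "process with extra steps" point, here is a rough sketch of my own (not taken from any container runtime) that pokes at Linux namespaces directly from Python via ctypes. It assumes Linux and root privileges, and only touches the UTS (hostname) and PID namespaces; real container runtimes also set up mount and network namespaces, cgroups, and an isolated file system.

```python
# Rough sketch (assumes Linux and root privileges): move this process into a new
# hostname (UTS) namespace and put its children into a new PID namespace by calling
# unshare(2) through ctypes. A container is essentially this idea taken much further.
import ctypes
import os

CLONE_NEWUTS = 0x04000000   # separate hostname namespace
CLONE_NEWPID = 0x20000000   # separate PID namespace (applies to new children)

libc = ctypes.CDLL(None, use_errno=True)
if libc.unshare(CLONE_NEWUTS | CLONE_NEWPID) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (are you root on Linux?)")

pid = os.fork()
if pid == 0:
    # The child is PID 1 inside the new PID namespace and sees its own hostname.
    name = b"my-little-container"
    libc.sethostname(name, len(name))
    os.execvp("sh", ["sh", "-c", "hostname; echo PID=$$"])
else:
    os.waitpid(pid, 0)
```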

That being said, if you use Windows or Mac OS X, you still need a Linux virtual machine in the background to run Linux containers.

If you are new to containers and find the above a bit confusing, there is an awesome article from WeaveWorks you can read.

Affordable computing devices and microcontrollers

I want to begin with a quote from Jack Tramiel, the founder and former boss of Commodore. His goal with the computers they made was to make them affordable enough that every home could have one. It is possible to make money and pursue socially good causes at the same time! (Just in case you doubted it for a second.)

Computers for the masses, not the classes – Jack Tramiel, founder of Commodore Business Machines

In the last 10-15 years (or so) we have gotten many small, affordable computer and microcontroller devices. These include small computers like the Raspberry Pi, NVIDIA Jetson Nano, and Odroid, but also microcontrollers like the Arduino and Raspberry Pi Pico. What these devices have done is empower individuals to make their own software and hardware projects in a much more affordable and easy way. Programming an Arduino for various home automation and robotics projects (e.g., measuring CO2 in your home, or building small robots) is way easier than having to do everything from scratch. The Raspberry Pi has also made it far easier and more affordable for lots of people to make their own home servers, retro game emulation machines, etc. It even makes for a great entry point to GNU/Linux and programming, so go buy your kids/grandkids/nephews/nieces a Raspberry Pi or similar!
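
As a small taste of how approachable these devices are, here is the "hello world" of microcontroller projects: blinking an LED. The sketch below assumes a Raspberry Pi Pico running MicroPython, where the onboard LED sits on GPIO 25 on the original board (other boards and firmwares differ).

```python
# Minimal MicroPython sketch (assumes a Raspberry Pi Pico, where the onboard LED
# is wired to GPIO 25): blink the LED once per second, forever.
from machine import Pin
import time

led = Pin(25, Pin.OUT)

while True:
    led.toggle()
    time.sleep(0.5)
```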

3D printers - Everyone can make products at home (given some basic computer skills)

This is probably the section I have to explain the least! Being able to make your own 3D models (CAD) or download them, and then print them yourself, is a huge innovation. It makes it possible, to a whole new degree, for individuals to produce their own products and thereby compete in the marketplace in a whole new way (both by selling digital products and physical ones). You will notice this in everything from home decor and cases for computers like the Raspberry Pi, to guns. Maybe we will also print food in the future, who knows? While 3D printer prices are still a barrier to entry, they are slowly declining. More competition in the free(ish) market is amazing like that! Speaking for myself, I would love to own one! Hopefully I will have room for one in the near future.

Artificial Intelligence (AI) as a service - For those of us who don't have petabytes of data to train our own models

Artificial Intelligence (AI) is not a new field, and has been with us in one shape or form since the 50s. In my view, it is thanks to growing computing power and related innovations (see the GPU section above) that we have been able to do more research faster. This has caused a surge of new algorithms, use cases, and more. What I consider the biggest innovation, though, is making it available to everyone. Sure, you can program and train your own models, but you probably don't have the amount of data that a company like OpenAI, Google, or Microsoft has. By providing their trained models as APIs, these companies have made it possible for more people to use advanced AI technology (neural networks, machine learning, etc.) in their applications. You may not have noticed it much until recently, but many of these APIs have been used in various web and mobile apps for years.
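
As an example of what "AI as an API" looks like in practice, here is a rough sketch of calling a hosted text-generation model over HTTP. The endpoint, model name, and request format follow OpenAI's text completion API roughly as it looked around the time of writing, but treat the details as illustrative; provider APIs change, and other vendors have similar offerings.

```python
# Rough sketch of using a hosted AI model as a service (here OpenAI's text
# completion endpoint, roughly as it looked in early 2023 - details vary between
# providers and over time). Requires the `requests` library and an API key in
# the OPENAI_API_KEY environment variable.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-davinci-003",
        "prompt": "Explain what a GPU is in one sentence.",
        "max_tokens": 60,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"].strip())
```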

Honorable mention: Digital distribution and better accessibility

You may wonder: what is so special about digital distribution? It has, after all, been done since the 90s for various media like games and software. Thanks to increasing internet speeds, you can now do all of it way faster, which has in itself enabled platforms like Steam (for games), Netflix (for movies), and more. Still, that is not what I want to highlight here. Digital distribution has also made it easier than ever to access audiobooks, books, courses, etc. The accessibility here is two-fold: actually getting hold of the material, and making it usable for people who may not have had the option earlier. One example is text/subtitles in movies and courses, where you can now increase the font size in most services. The same goes for audiobooks, which have become popular and now let you adjust the playback speed if you need to (some may want to slow down, others may want 2x speed to not get bored). Or for dyslexic people, who may not have had the chance to enjoy the book as much otherwise! Having material in digital format has also enabled screen readers, which make everything from code to web pages accessible to blind people. To me, accessibility improvements like these are probably the most important innovations that have come from digital distribution. My hope for the future is that we can continue to make our media (both entertainment and educational) available to more people.



