One of the greatest widely believed falsehoods is that the current technology of the world is good. The basic premise of this document is that our technology is extremely bad in its design, function, efficiency and other important aspects and does much more harm than good. The goal of the document is to show why it is so and what good technology would look like instead.
How is it possible to say, in a world of interconnection where anything can be achieved by pressing a button or speaking a voice command, that our technology is bad and harmful? To answer this we have to draw a distinction between two kinds of technology: technology made for the benefit of people and technology made for the benefit of business.
These two kinds are antagonistic: technology that's good for business benefits a company to the detriment of people, and vice versa.
Technology made for the benefit of a company -- capitalist technology -- prevails in our extremely capitalist society. It has been proven beyond any doubt that a corporation has no conscience and, without restrictions (which it at all times tries to bypass), becomes a disastrously powerful, merciless, fascist entity aiming solely for its own benefit: destroying all competition, becoming an eternal monopoly and, by the right of the strongest, enslaving all people under its absolute power. Capitalist technology is always made towards this goal, and by the laws of market competition and evolution it keeps getting ever better at achieving it, which we can clearly see happening right before our eyes at this very moment in history.
Because our technology is capitalist, we don't have to guess what it leads to -- we can simply look around. The following are just some features of our capitalist technology:
- surveillance and privacy violation
- planned obsolescence
- DRM
- closed technology (e.g. Apple)
- loss of user freedom and control
- ads
- immense bloat
- unnecessary and forcefully imposed "killer features"
- wasting resources
- consumer technology and artificial creation of needs
- anti-repairability and anti-moddability
- focus on the short term to the detriment of the long term
- focus on good looks to the detriment of good design
- wasting work (not sharing source code, reinventing wheels)
- unethical practices (e.g. loot boxes in games, i.e. gambling aimed at children)
- purposeful incompatibility with competitors' technology
- creating the illusion of being helpful
- enslavement of humans to technology (users by dependencies, developers by maintenance)

and many more.
This is an alarming price to pay for what we mostly don't even really need ("smart" light bulbs, social networks like Instagram etc.).
On the other hand, what does good technology look like? It is that which helps people without discrimination. Being helpful (enabling making money or otherwise benefiting) only to a specific person or group of people (e.g. the authors of given software) is discriminatory and doesn't mean helping people at large -- on the contrary, this falls under the definition of capitalist technology.
A typical computer would most likely be based on a 32 bit architecture, because that seems to be the minimal value for practical usability that also stays nicely aligned. It can address 4 GB of RAM, which is several times more than an average user should ever need, and it allows making most practical computations without having to worry about overflows. 32 bits are also enough to store a pixel in RGBA8 format, a float etc.
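To illustrate why 32 bits align so nicely with common data, here is a minimal sketch in C showing how one RGBA8 pixel packs exactly into a single 32 bit word, and how far 32 bit addressing reaches (the function name is just illustrative):

```c
#include <stdint.h>

/* Pack one RGBA8 pixel into a single 32 bit word: each of the four
   channels gets exactly 8 bits, so a 32 bit type fits it with no
   wasted space. */
uint32_t rgba8_pack(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
  return ((uint32_t) r << 24) | ((uint32_t) g << 16) |
         ((uint32_t) b << 8)  |  (uint32_t) a;
}

/* Number of bytes a 32 bit address can reach: 2^32 = 4 GB. */
unsigned long long addressableBytes(void)
{
  return (unsigned long long) UINT32_MAX + 1; /* 4294967296 */
}
```

For example `rgba8_pack(255, 128, 0, 255)` (an opaque orange) yields the word `0xff8000ff`.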
Our OS would indeed support GUI, as that is inarguably very useful in many cases (e.g. interactive image editing). However, the GUI system wouldn't be based on the abstraction of windows, as is common.
Why? Because windows are an unnecessary abstraction layer that serves little to no purpose and adds a great amount of complexity: programs have to constantly be ready to dynamically resize their GUI, designers have to set various size hints, and programmers need to implement and maintain windowing systems that manage such things as overlapping decorated windows etc. The concept of windows was invented as a shiny killer feature of early operating systems; unfortunately users got used to it and so it became the norm. But there really is no need for it -- we can achieve a similar environment in a much more elegant way.
Instead of windows we would have just a screen. This would still be an abstraction, but a much thinner, simpler and more understandable one than a window. Programs needing a GUI would simply be provided a screen to draw pixels to. The parameters of the screen (resolution, color format etc.) would be fixed, so the program wouldn't have to dynamically respond to changes in them.
As we're still talking about a screen abstraction, the screen would really be virtual, abstracted from the hardware, providing a simple interface for writing pixels -- most likely a screen buffer memory to write to. Whether this memory would in reality be the hardware's real screen buffer or just memory to be processed and copied to it would be left for the GUI library implementer to decide.
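A minimal sketch in C of what such a fixed-parameter screen interface might look like (all names and sizes here are illustrative assumptions, not an existing API):

```c
#include <stdint.h>
#include <string.h>

/* Screen parameters are fixed compile-time constants, so a program
   never has to react to resizes or format changes. */
#define SCREEN_WIDTH  640
#define SCREEN_HEIGHT 480

/* The virtual screen: a plain buffer of RGBA8 pixels. Whether this
   is the real hardware buffer or a copy is up to the GUI library. */
uint32_t screen[SCREEN_WIDTH * SCREEN_HEIGHT];

void screenSetPixel(int x, int y, uint32_t color)
{
  screen[y * SCREEN_WIDTH + x] = color;
}

/* Example program: clear to black, draw one white pixel in the
   center of the screen. */
void demo(void)
{
  memset(screen, 0, sizeof(screen));
  screenSetPixel(SCREEN_WIDTH / 2, SCREEN_HEIGHT / 2, 0xffffffff);
}
```

Note there is no resize callback, no size hint and no window handle -- the whole interface is one buffer and its two fixed dimensions.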
The virtual screen would often correspond to the real physical screen in resolution, so that only one of the potentially multiple simultaneously running GUI programs would be displayed at a time, taking up the whole screen -- just as is the case on most mobile devices nowadays.
However, this wouldn't have to be the case -- running GUI programs side by side on a single screen can often be beneficial (e.g. a terminal and a web browser) and would be achievable with a screen manager program able to split the physical screen into multiple virtual ones, enabling side by side views. Note that a screen manager is a much simpler and more lightweight program than a window manager (no dynamic resizing, no overlapping, no decorations etc.), and it needn't even run by default, only when needed. By this we achieve, for a much smaller price, what we mostly use complex window systems for: displaying two programs side by side.
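A hypothetical screen manager could then compose two virtual screens into the physical buffer with nothing but row copies -- a sketch in C under assumed, illustrative sizes:

```c
#include <stdint.h>
#include <string.h>

#define PHYS_W 640          /* physical screen size */
#define PHYS_H 480
#define VIRT_W (PHYS_W / 2) /* each virtual screen gets half the width */

uint32_t physical[PHYS_W * PHYS_H];           /* real screen buffer */
uint32_t left[VIRT_W * PHYS_H];               /* program A's screen */
uint32_t right[VIRT_W * PHYS_H];              /* program B's screen */

/* Compose: copy each virtual screen's rows into its half of the
   physical buffer. No resizing, overlapping or decoration logic is
   needed -- just straight memory copies. */
void screenManagerCompose(void)
{
  for (int y = 0; y < PHYS_H; ++y)
  {
    memcpy(physical + y * PHYS_W, left + y * VIRT_W,
      VIRT_W * sizeof(uint32_t));
    memcpy(physical + y * PHYS_W + VIRT_W, right + y * VIRT_W,
      VIRT_W * sizeof(uint32_t));
  }
}
```

Each program still sees only its own fixed-size buffer; the split is invisible to them, which is exactly what keeps the manager this small.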