Aral Balkan


Optimal automation

We keep seeing the following cycle in growth-based systems:

  1. A technology is introduced that improves a manual process via automation.

  2. The technology is iteratively improved, eventually reaching an optimal state of automation.

  3. Instead of stopping there[1], due to infinite growth requirements, we move past the optimal state to excessive automation.

Excessive automation can remove the positive aspects of the human experience from the equation, introduce error states that would not otherwise exist, erode human rights unnecessarily (e.g., the privacy implications of ubiquitous surveillance and mass centralised data gathering), and/or lead to excessive consumption of resources, needless environmental damage, and habitat loss.

And all this, for negligible additional value, if not a negative return, over optimal automation.

Automation, in other words, obeys the law of diminishing returns.

Automatic transmissions and cruise control are examples of optimal automation, while self-driving cars are an example of excessive automation.[2]

Small language models and on-device applications of machine learning such as translation and personal knowledge management are examples of optimal automation, while the global rush to implement large language models/generative AI – glorified autocomplete that spews bullshit, poisons our global network of knowledge, and is scaled at great environmental cost and misleadingly marketed as a pathway to artificial general intelligence – is an example of excessive automation.

Where optimal automation augments human abilities, excessive automation removes the human from the equation altogether.

Capitalism is incapable of optimal automation and will always overshoot it

Capitalism – growth-driven as it is – is unable to stop at the optimal level of automation and careens head-first into excessive automation time and time again.

This is a natural outcome of its inherent hype/boom/bust/bail-out cycles, fuelled by venture-capital-to-exit pipelines.

The emergence of excessive automation signals the arrival of the bust stage of such a cycle. At such time, profits have already been privatised, losses are queued to be socialised, and the cycle prepares to begin anew.

Nothing lasts forever

Of course, such a process cannot continue ad infinitum in a closed system like our habitat here on Earth.[3]

At each iteration of the cycle, resources are depleted in tandem with ever-increasing energy requirements (see proof-of-work cryptocurrencies, then NFTs, now AI…).[4]

Given this, we have two options: a rapid evolution towards a different system whose main success criterion is not infinite growth with finite resources (a criterion that amounts to extinction), or extinction.

Such a macro analysis and damning long-term prognosis – although both simple and logically sound – might seem alarmist. The events we are experiencing as I write this in 2024 should remind us that they are anything but.

Optimal automation and degrowth

Optimal automation is intimately related to degrowth.

It is an existential challenge for our species to evolve the success criteria of our social systems so that we can identify the optimal level of automation and stop there.

And if we have rushed past this point in certain areas, we must either turn back or, at the very least, stay where we are without falling deeper into excessive automation.

This necessitates that we dismantle the cult of infinite growth with finite resources that currently has us on a collision course with the extinction of our species.

This isn’t merely a technical problem but a social one.

It is not one we can tackle independent of the problem of systemic inequality.

If we are to solve it, we must embrace post-capitalist approaches and adopt a degrowth mindset.

Optimal automation can be a useful conceptual lens for analysis and policymaking as we tackle the self-imposed existential threats facing our species.


  1. Stopping at optimal automation doesn’t necessarily mean we stop improving the tool or service we’re building (although it is entirely legitimate to do so if it has reached a point where it works well). It can mean, for example, focussing our energy on improving other aspects of the experience (beyond automation) through iterative design. ↩︎

  2. Of course, we might argue that cars themselves are not an optimal solution to the problem of transportation (and that improved public transportation infrastructure and 15-minute cities are). ↩︎

  3. And no, at this rate, it’s highly unlikely that we will find a second habitable planet and get enough of us onto it to halve our risk of extinction. The billionaires who are driving the bus off the cliff will not magically build a bridge underneath it as it falls, no matter what they tell you. In other words, no, technology will not save us. ↩︎

  4. As our valid needs (and even wants) are met in earlier cycles, the bullshit coefficient of each cycle necessarily increases, and the wants that are manufactured become increasingly fantastical and removed from reality. ↩︎