Technological Autonomy

Written by Chad Haag. Support Chad Haag on Patreon, or check out his YouTube channel.

Introduction

Technological autonomy refers to modern technology’s tendency to develop itself in accord with its own goals rather than any goals which humans might posit for it. Defined negatively, technological autonomy refers to humans’ inability to maintain control over the long-term trajectory of technological development. Put another way, technology is not merely a passive instrument which humans are free to use however they see fit, because such a state of affairs would presuppose that modern technology could be constrained by some extrinsic set of humanistic moral values, such as the need to uphold human dignity, freedom, or happiness. Insofar as the technological system may seem compatible with these moralistic values in the short term, this is an illusion which masks the technological system’s total indifference to any values beyond the pursuit of its own growth in power and scope of influence. The only values which determine which changes will be implemented in the development of modern technology are technology’s own self-reinforcing goals of achieving greater levels of efficiency, productivity, and predictability.

Etymology

From the standpoint of etymology, “technological autonomy” can be broken down into four Greek terms which, taken collectively, mean that technology rules itself. The Greek term “auto” means “self” and “nomos” means “law,” while “techne” refers to the knowledge required to create artificial things that would not spontaneously appear within wild nature[1] and “logos” refers to a rationality of a purely abstract nature.[2] Taken together, these four terms reveal that “technological autonomy” names the technological system’s tendency to develop ever-finer levels of artificial rationalization over time, but only in accord with its own laws.

History

The historical origins of the idea of technological autonomy go back at least as far as Karl Marx, whose “Fragment on Machines” examined the irony that as Capitalism progresses, there is a strange shift from the worker’s use of tools to technology’s use of the worker.[3] We can rephrase this to say that as modern technology develops, it inevitably reduces the human person to a tiny appendage in a much larger but ultimately impersonal system of machines. Technological autonomy logically follows from Marx’s claim that over time Capital develops into a “moving power that moves itself” with no need for any human to press the figurative “start” button to launch its processes of production into motion. Instead, Capital recursively transposes its own source of motion back into the same system of production, which then recycles that power to produce more change within itself without any need for human thinkers to guide the process in accord with their own plans or intentions: “[t]he science which compels the inanimate limbs of the machinery to act purposefully . . . does not exist in the worker’s consciousness but instead acts upon him as an alien power.” Put simply, under capitalistic conditions the production process is no longer a labour process in any traditional or human sense of the term, because labour itself comes to be fully subsumed under technology in a way which renders each human worker superfluous. Paradoxically, humans become less and less important to the production process because technical knowledge comes to matter far more than labour, yet the knowledge which accounts for capitalism’s ability to maximize its productivity and continue to achieve finer grades of progress lies outside the tiny minds of the human cogs incorporated into the System. Marx calls this knowledge which matters far more than labour the “technological application of Science,” but implies that this knowledge is located in the technological system itself and is only ever accidentally, belatedly, and incompletely disclosed in the minds of a handful of particularly well-educated human scientists or engineers. No individual human could hope to control an impersonal process of capitalistic advancement which paradoxically “thinks for itself” despite lacking a human mind.

It is very important to note, however, that Marx’s view of technological autonomy was incomplete at best, for he only ever saw technology as being conditionally rather than absolutely autonomous. This is because Marx assumed that although technology had gained a certain level of freedom over the human workers among the proletariat, it remained under the control of the humans among the capitalist elite. In this sense, Marx understood technology’s power over the human worker to be just a belated manifestation of the capitalist’s control over the proletariat. Marx rested assured that this was a temporary arrangement which would supposedly be overturned in the coming Communist Revolution and the shift to a whole new mode of production that was no longer capitalistic in nature. For this reason, one must be careful not to conflate anti-technological thought as such with Marxism, because the two really are quite different and ultimately incompatible systems of thought.

Jacques Ellul greatly improved upon Marx’s insights by emphasizing technology as such rather than subordinating it to the economic notion of Capital. In The Technological Society, Ellul explicitly cited “The Autonomy of Technique” as the fifth and final characteristic differentiating modern technology from more traditional forms of Technique.[4] In his own treatment of technological autonomy, Ellul used the following analogy: if we consider a factory to be something like an artificial organism, we will realize that the factory is a world enclosed within itself with no need for an external agent to control it from some outside position. In much the same way that a natural organism like a dog is a closed system which can regulate its own conditions through a hard-wired orientation to maintain homeostatic norms with regard to temperature, hydration, nutrition, and so on, Modern Technique has its own homeostatic tendency to favour anything that increases its efficiency and predictability while resisting anything that inhibits these goals. In more pragmatic terms, this means that modern technology’s ongoing attempts to find and implement the most efficient method available could never be constrained by any supplementary system of rules with non-technical values, such as human beings’ demands to uphold vague ideals like morality, justice, or human sentiment.

Ted Kaczynski addressed the idea of technological autonomy in a number of different writings, dating at least as far back as his 1972 unpublished essay “Progress versus Liberty,” in which he warned that it is impossible for humans to control the technological system because this would involve a contradiction in terms.[5] Just as any increase in technological advancement necessarily reduces human freedom, so too humans can only hope to gain more freedom by reducing the level of technologization under which they live. The reason why “human freedom” and “technological advancement” are inverse terms which a priori exclude one another is that the very condition for modern technology to advance is for it to discover new ways to expand its scope of operation or increase its regulation of the elements brought under its control. Technology advances only by finding new ways to make humans act and think in those ways which are fully predictable and fully compatible with the technological system’s needs.

For this reason, in Industrial Society and Its Future Kaczynski compared modern technology to a malicious neighbour who comes back once a year to steal half of his neighbour’s remaining land in order to show that humans must also sacrifice more and more of their remaining freedom with each new technological advance.[6] Phrased in less metaphorical terms, human iterations through the Power Process are inevitably reduced to the status of mere surrogate activities which the technological system allows because they do not conflict with its monopoly over really serious issues such as its control of food, water, or living spaces. In fact, surrogate activities actually reinforce the system’s power through very subtle means such as leftist political activism which demands things the System already needs such as open borders, gender fluidity, or a loss of religious or ethnic identity.

Analogies and Examples

In one particularly memorable example, Ellul noted that it would be hopelessly naïve to imagine that one could get nuclear energy without first developing nuclear bombs, because the bomb had to appear before the power plant as the simpler technology which was easier to implement.[7] The humanistic demand to circumvent the destructive military technology of weapons of mass destruction in order to go straight to the non-violent technology of power plants generating electrical energy for consumeristic purposes misses this point entirely. Expecting Modern Technique to skip such a crucial step for the sake of humanistic moral values such as the need to maintain world peace or prevent human deaths ignores the fact that modern technology could never be constrained by any motivation except that of seeking finer grades of efficiency, productivity, and predictability for their own sake. It is simply indifferent to any other moral standard.

Likewise, there is always a mismatch between the way big changes appear from the standpoint of humans and the way they appear to the technological system itself. For example, from the standpoint of humans, the agricultural revolution was a technological change whose only goal was to reduce world hunger by generating unnaturally large food surpluses through the use of modern pesticides, fertilizers, tractors, and GMO crops. From the standpoint of the technological system, however, the goal was to outcompete traditional farming methods in order to increase its own scope of operations and its control of human living conditions. In addition, dissolving peasant social networks was necessary to “modernize” those people still living in traditional conditions by incorporating them into the system as modern workers and consumers who would be more easily influenced by technologies of propaganda.[8] Large food surpluses were therefore never anything more than a temporary side effect, not the goal, of the agricultural revolution, because the mandates to feed the hungry and to eliminate poverty are moral values drawn from religious or philosophical systems of thought which predate the rise of Modern Technique.

One might also consider how the internet exemplifies this mismatch between human perception and technological reality. In the short term, it may seem that the internet exists for the sole purpose of giving humans whatever they desire, such as free communication with any location in the world, free content related to one’s personal interests, or free channels of entertainment (including more risqué forms of pleasure-inducing distraction such as pornography). These consumeristic uses of the internet, however, are all merely accidental in comparison with the internet’s essential purpose of expanding technology’s scope of operations for its own sake by relocating virtually every human activity into a virtual medium over which the System can maintain constant and total surveillance.

Why the Concept is Important

Technological autonomy is a particularly important and underappreciated concept because it is virtually always overlooked or dismissed as an impossibility in discussions of technological advancement. Because it is almost always assumed that some rational human agent must be in control of technological advancement, commentators rest assured that someone’s humanistic moral values will guide it to a destination which seems beneficial or desirable to human beings; on this view, the question is not whether humans control it but which humans do so. This Utilitarian Fallacy, however, leads one to gravely misunderstand the trajectory of technological advancement, which continues to progress for reasons that have nothing to do with any human person’s moral values. Under this assumption it becomes all but impossible to entertain the thought that the ultimate destination might not include humans at all, even though there is no a priori reason why technological advancement and human extinction would be incompatible with one another. In fact, Kaczynski speculated in the second chapter of Anti-Tech Revolution: Why and How that this is the most likely outcome of pushing the process to its logical conclusion.


___________

NOTES:

[1] Martin Heidegger, “The Question Concerning Technology,” in The Question Concerning Technology and Other Essays, trans. William Lovitt (New York: Harper Colophon, 1977), page 11.

[2] See Plato’s designation of Logos as belonging to the immaterial soul rather than the physical body or the social self in his dialogue Phaedrus, in The Collected Dialogues of Plato, ed. Edith Hamilton (Princeton: Princeton University Press, 1989), page 485.

[3] See Karl Marx’s “Fragment on Machines,” in #Accelerate: An Accelerationist Reader, ed. Robin Mackay and Armen Avanessian (Cambridge: MIT Press, 2014), page 51.

[4] Jacques Ellul, The Technological Society, trans. John Wilkinson (New York: Vintage Books, 1964), page 133.

[5] Ted Kaczynski, “Progress versus Liberty,” unpublished essay, page 1.

[6] Ted Kaczynski, Industrial Society and Its Future, para. 125.

[7] Jacques Ellul, The Technological Society, trans. John Wilkinson (New York: Vintage Books, 1964), page 99.

[8] Jacques Ellul, Propaganda: The Formation of Men’s Attitudes, trans. Konrad Kellen and Jean Lerner (New York: Vintage Books), page 93.

Copyright 2024 by Chad Haag. All rights reserved. This is published with the permission of the copyright owner.