Wilderness Front

Review of “Life 3.0”

Book review by qpooqpoo

Life 3.0

by Max Tegmark

Alfred A. Knopf, New York, NY, 2017.[1]

[Image: Painting depicting a cutaway view of a space colony designed by NASA in the 1970s. NASA ID number AC76-1089.]

“The techies’ belief-system can best be explained as a religious phenomenon… [and] already has the earmarks of an apocalyptic and millenarian cult…. Historically, millenarian cults have tended to emerge at ‘times of great social change or crisis.’ This suggests that the techies’ beliefs reflect not a genuine confidence in technology, but rather their own anxieties about the future of the technological society—anxieties from which they try to escape by creating a quasi-religious myth.”


—Theodore John Kaczynski, Anti-Tech Revolution[2]


This book is essentially a religious testament, an ode, so to speak, to the faith of technological progress. This may come as a shock to those who are familiar with the background of this work but have not read it, as it was written by a prominent professor at MIT and ostensibly deals with what should be very sober and serious issues of fact—issues with grave existential implications for the fate of humanity and the biosphere. Nevertheless, its core positions are premised on fundamental axioms taken on faith, and the flavor of the book betrays an almost giddy, whimsical, and cavalier attitude toward the material—the kind of excitement that a child gets from playing a game—and it’s clear the author takes satisfaction not only in thinking purely for thinking’s sake, but also a kind of smug satisfaction in himself and his co-religionists. On the surface this book provides a happy and inspiring look into the future of artificial intelligence and its implications for humanity, along with some feel-good, optimistic advice. Beneath the surface, it’s a completely worthless, vain, impractical, and ultimately masturbatory exercise; its ponderings are either wild speculation for speculation’s sake, or prescriptions for society so naïve you would think a grade-schooler wrote them. It would all be very funny if what is at stake for our world in the face of rapid technological growth were not so deadly serious. If someone in this author’s position doesn’t provide convincing answers for how to deal with the social implications of artificial intelligence, then where on Earth is the person who does? Is anyone thinking seriously about this matter?

First, let’s deal with the religious overtones. Tegmark embodies the conventional technocratic techno-optimist worldview. This worldview takes for granted a certain teleology in the progress of technology, assuming it to be inevitable and insurmountable. While many who share this view see humanity as a happy passenger on the journey—continually improving and benefiting from the changes that technological advances bring—that perspective is so obviously problematic that it has been relegated to the status of a religion for the masses. Slightly more sophisticated thinkers, however, see that humanity itself may have no place in the technological future.[3] Instead, so their thinking goes, humans will be the founders of a great endowment they will never be around to witness but, alas, should take pride in knowing that they ushered in. The scientific and technician class loves this narrative, of course, because in their insufferable arrogance and pride it casts them in particular as the progenitors of this supposedly wonderful future. Tegmark incessantly preaches his faith. The universe has the potential (with modern technology) to “wake up” (p. 23), he tells us (whatever that means); technology will allow “our Universe to finally fulfill its potential and wake up fully” (p. 29). The process of increasing complexity and increasing technologization represents an “amazing awakening…a relentless 13.8-billion-year process that’s making our Universe ever more complex and interesting.” (p. 23). Life “has the potential to flourish on Earth and far beyond for many billions of years, beyond the wildest dreams of our ancestors” (p. 248); it is our “cosmic endowment” (p. 191)—and, by direct implication, our responsibility to see this process carried through. “[A]ided by technology, life has the potential to flourish for billions of years, not merely here in our solar system, but also throughout a cosmos far more grand and inspiring than our ancestors imagined. Not even the sky is the limit.” (p. 203). A great awakening! A rapture! The zealot is giddy with grandiose visions of his heaven. His concern is with “making our observable Universe come alive in the future.”[4] Yet there are serious issues facing humanity at its current stage of technological development. Humanity, we are told, is at a critical juncture at which it must align the technologies just right, so that this wonderful “flourishing” can spring forth on the glorious path rather than plunge us into hell. So, in order to fulfill the prophecy of humanity’s flourishing for “billions of years,” we have to ensure, now, that advanced new technologies, and AI in particular, are developed just right. As he puts it, it is “our responsibility for ensuring that this future potential of life to flourish isn’t squandered.” (p. 233).

Let’s start by looking at the most pronounced pattern in the book: Tegmark acknowledges certain intractable problems in reconciling modern technology with human society, wrestles with each problem for a moment, and ultimately just throws up his hands and says it’s something that should be considered. A perfect example is the problem of providing humans with meaning and purpose in a world dominated by AI, where—in the best-case scenario—everything is provided and everything is run and organized by AI. What will humans do in this world? Specifically, how will humans lead meaningful lives that won’t inevitably descend into boredom, depression, and despair?[5] He acknowledges that this is a likely outcome: “Although people can create artificial challenges, from scientific rediscovery to rock climbing, everyone knows that there is no true challenge, merely entertainment. [Everyone]…eventually succumbs to ennui and requests annihilation.” (p. 172). Yet he contradicts himself when he states, “[I]t should be possible to find alternative ways of providing both income and purpose without jobs.” (p. 126). But what makes him believe it should be possible? Apparently, it’s just something that is possible because in theory anything is possible. He never zeroes in on what the fundamental dynamics of meaning and purpose are and what conditions they spring from. If he did, it would make his “possibility” statement glaringly empty and baseless. In actuality, humans have a concrete need—derived from biological evolution—for serious, practical, life-and-death goals toward which they must exert serious effort, exercising their intelligence, creativity, courage, resourcefulness, etc., in pursuing them, and they need a great degree of autonomy in that pursuit. There is simply no conceivable way humans can fulfill these conditions in a high-tech environment, let alone an environment dominated by AI. Given the widespread psychological problems we already see as a result of people being unable to exert serious effort toward life-and-death goals, imagine how much more commonplace depression and despair will be in a world run by AI.

Although he says nothing about the gradual decline in the quality of new jobs and the satisfaction they offer, Tegmark understands that the historical trend of new jobs being created to replace outmoded ones will not continue indefinitely. His only response to this problem is to fall back on the tired, pathetic old cliché of working to create “fulfilling” activities for people to do. On page 129 he lists a number of what are, essentially, hobbies—surrogate activities[6]—on which he thinks humans should be happy to fritter away their time in hedonism. “[W]e therefore need to understand how to help such well-being-inducing activities thrive.” (p. 129). This obviously won’t work, as he has already admitted. He talks about the need for concerted efforts by scientists, engineers, economists, etc. to find meaningful surrogate activities for people to do. This is the technical mindset at work: every social problem caused by modern technology must have a technical fix! Yet he still can’t escape the problem he himself admitted: the ennui, purposelessness, and despair that arise when humans are left to squander their lives on mere games and entertainment—a problem we are already facing today. He seems to sense that humans need more, but this instinct never crystallizes into a rational understanding of this aspect of human nature and the need for purposeful work. In reality, humans will have to be engineered, biologically or psychologically, to remove their need for purposeful work—their drives for power and control over their environments—just to prevent them from becoming hopelessly depressed. And at that point, why bother with the trouble of creating fulfilling hobbies? Why not just give them happy pills? And what would the implications be for human dignity in this Brave New World? The whole issue is treated with an inexcusable superficiality given its extreme importance and massive implications—the problem is acknowledged and then quietly shoved under the carpet of optimistic platitudes. Because it’s possible in theory, it can be done, we’re assured. We can solve the problem of purpose! “[W]e have the potential to create a fantastic future with leisure and unprecedented opulence for everyone who wants it.” (p. 118). Of course, leisure is not the issue. Leisure is a value of civilization; the freedom to spend all of one’s life in pleasure-seeking is just another industrial product. Again, people don’t need mere fun, entertainment, or “opulence”; they need purposeful work, and they need to be in control of it. They need freedom. Here we’ve touched on only one—though the most significant—aspect of the social implications of this line of thinking. There are certainly other costs. Think, for example, of the enormous cost to the natural world if everyone enjoyed a high level of “opulence” and leisure. But Tegmark is in peak salesman mode here, pitching the technological society in the terms that technological man has been preprogrammed to desire.

Most telling is the indifference Tegmark shows toward the inevitable erosion of individual freedom and dignity in his technological future. For Tegmark, science is his surrogate activity, the pursuit that gives him his personal fulfillment and self-worth: “As a scientist, I take pride in setting my own goals, in using creativity and intuition to tackle a broad range of unsolved problems, and in using language to share what I discover.” (p. 82). But he couldn’t care less about the debasement of the average person’s practical activities that modern technology has already wrought: the destruction of their fulfillment and self-worth, and the erosion of the quality and nature of modern employment. So long as it doesn’t affect him. “Personally, it doesn’t bother me that today’s machines outclass me at manual skills…” (p. 82). Say what you want about the technocratic class’s lack of empathy for their fellow humans; at least Tegmark is being honest here. “[W]ill the rise of AI eventually eclipse also those abilities that provide my current sense of self-worth and value…?” (p. 83). Yes, it will, Dr. Tegmark. At which point I’d like to see how you view this technological adventure. Just how faithful are you?

But all this doesn’t get to the real core of the problem, which is Tegmark’s utter naivete about society—unfortunately a defining attribute of the technocratic class. When entertaining wild or fantastic speculations on possible future outcomes for mankind and the “Universe” in the face of rapid technological growth, he assures us that although they may sound like science fiction, they are plausible outcomes because they are physically possible. “Although these ideas may sound like science fiction, they certainly don’t violate any known laws of physics, so the most interesting question isn’t whether they can happen, but whether they will happen.” (p. 155). But just because they don’t violate any known laws of physics does not mean they don’t violate other laws, known or unknown—in particular, laws that may govern the development of human societies and the evolution of natural systems. Natural selection, for example, is not a law of physics per se, but it is still a valid law that reliably predicts the dynamics of biological evolution, and there is no reason to believe the same dynamics are not operative on artificial systems, whether human societies or machines. The author of this essay thinks there are indeed natural laws guiding the development of societies and complex systems, and that these laws determine a process endemic to technological growth that inevitably leads to disaster:[7] a plausible explanation for the “Fermi Paradox,” which Tegmark acknowledges but never attempts to explain. But regardless of one’s opinion on this matter, Tegmark’s notion that the limits imposed by the known laws of physics are the only limits to contend with in future scenarios betrays either his insular, physics-focused academic background, his general naivete, or his great capacity for self-deception.[8] As we will see, it’s most likely the last.

“If a technologically superior AI-fueled civilization arrives because we built it, on the other hand, we humans have great influence over the outcome—influence that we exerted when we created the AI. So, we should instead ask: What should happen?” (p. 159). How ridiculous! We will have great influence over a process simply because we started it? It’s said that Gottlieb Daimler and Karl Benz invented the automobile. To what degree have they “influenced” the expansion of that technology and its ultimate, system-wide impact on society over the last hundred years of complex interaction and coevolution with society and the biosphere? Of course, such things expand and interact over time within a complex network of material, political, economic, and cultural relationships, and respond to natural processes both known and unknown, such as human psychology and natural selection among competing material, political, economic, and cultural arrangements. Here Tegmark embodies the ultimate expression of the arrogance and immorality of the technocratic class.

First, he assumes that the impacts of any given technology can be rationally predicted and controlled. Yet no society can manage or control its own development over the long term even in theory, and no society has ever managed to do so in practice. In view of our own past record, and of even a rudimentary understanding of how societies evolve (to say nothing of the problems of complexity, chaos, the competition for power among organizations operating under natural selection, competing wills, and so on), to think that we can control the development of society—and the correct “use” of any given technology in society over time, let alone a whole host of technologies and their multifaceted, evolving interactions—is wild utopian dreaming.

“We’re assuming, among other things, that the problems of complexity, chaos, and the resistance of subordinates, also the purely technical factors that limit the options open to leaders, as well as the competitive, power-seeking groups that evolve within a society under the influence of natural selection, can all be overcome to such an extent that an all-powerful leader will be able to govern the society rationally; we’re assuming that the “conflicts among many individual wills” within the society can be resolved well enough so that it will be possible to make a rational choice of leader; we’re assuming that means will be found to put the chosen leader into a position of absolute power and to guarantee forever the succession of competent and conscientious leaders who will govern in accord with some stable and permanent system of values. And if the hypothetical possibility of steering a society rationally is to afford any comfort to the reader, he will have to assume that the system of values according to which the society is steered will be one that is at least marginally acceptable to himself—which is a sufficiently daring assumption. It’s now clear that we have wandered into the realm of fantasy.”[9]

Apparently Tegmark thinks it’s sensible to talk about preserving our values and priorities indefinitely. The ultimate outcome of our technological future will “depend on the prevailing goals and values,” and we can “explore our ability to influence these goals and values of future life.” Once again: the development of a society has always been shown to be, and is in principle, beyond long-term prediction and control. A corollary is that a society’s values and priorities are likewise beyond long-term prediction and control, since these values are a product of social evolution and material conditions and change in complex, multifaceted ways, just as every other aspect of a society changes through time. Yet Tegmark thinks it’s sensible to talk about “preserving” values and goals for billions of years. Here again we’ve wandered either into the realm of total fantasy or into simple, lazy showmanship: a sop to conventional attitudes that serves more to placate the public’s unease about radical technological growth extrapolated into the indefinite future than to reflect any real thinking. Setting aside the strictly ethical debate about goals, values, and priorities: social and political scientists, philosophers, and other thinkers don’t even understand how society works now—there are no established theoretical models that can accurately predict social developments over the next decade, let alone into the far future. Yet Tegmark expects us to have a reasonable chance of predicting and controlling the development of AI far into the future, including the preservation of its values and goals (because it’s physically possible, of course!). So much for Tegmark’s practicality.

Second, he is wrong that we should be asking what “should” happen. This is a meaningless and idle question: any agreement on what the appropriate application and use of technology “should” be would be useless, since, as explained above, any attempt to implement that application in the real world would be impossible. Add to this the philosophical problems of arriving at an agreement on what should happen that would satisfy all people on Earth, of who should decide what happens, and of how the decision is to be made and according to what system of values, and we’ve gone further down the rabbit hole of worthless utopian dreaming. In fact, this doesn’t even rise to the level of utopianism, since utopianism implies at least a rough approximation of future arrangements. Here the thinking is so vague and ill-conceived that it’s more accurate to call it gibberish—complete and utter nonsense. “We need to start thinking hard about which outcome we prefer and how to steer in that direction…” [emphasis added] (p. 160). Throughout the book he repeats this false premise—on which his entire enterprise as a public policy advocate and social thinker rests—that “we” can guide, steer, control, influence, or direct the development of these new technologies. “Throughout human history,” Tegmark tells us, “we’ve relied on the same tried-and-true approach to keeping our technology beneficial: learning from mistakes.” (p. 93). Just look around you and judge for yourself how this method is working, and whether it alone can deal with the colossal new powers and risks of the new technologies. Once a problem gets big enough, it is often too late to do anything about it. Often a problem cannot be fixed because its exact cause is not known. Even when a problem can be solved, the solution is often offensive to human dignity and freedom. And many problems are unsolvable because of the very nature of technological society. Take just one example: the dominance of individuals by large organizations. How does Tegmark expect us to “learn from our mistakes” here? Or take environmental degradation in the form of massive species die-offs. Tegmark’s approach to these problems is a sort of non-think—a brisk, casual platitude. Given the seriousness of the problems we face, this lazy approach from someone in his position rises to the level of gross irresponsibility. A careful review of his Future of Life Institute’s (FLI) “Asilomar AI Principles”[10]—his manifesto on the “ethical” development of AI—reveals again and again a shockingly useless and naïve program, prattling on about “steering” society amid endless self-righteous preaching. It’s clear that this book was not penned to convince anyone skeptical of technological progress; rather, it aims merely to pander to those who have already bought into the myth of progress.

There’s something else going on here, though. When Tegmark prioritizes the question of what “should” happen (with modern technology), he betrays the technocratic class’s self-serving moral ideology: that technology is neutral, both in the abstract and in the concrete; what matters is “how” it is used—the decisions about what “should” be done. This allows them to absolve themselves of any responsibility for creating new scientific and technological discoveries that are then simply “misused.” But the question to ask is not, as we have argued, how technology “should” be used; it is how technology will be used. Since it cannot be determined beyond all doubt exactly how a given technology will be used, and how it will interact with and impact human society and the biosphere over the long term, creating that technology in the first place is grossly irresponsible. Time and again, scientists invent new technologies—enjoying all the power, fulfillment, excitement, status, and money they get from the corporate and government entities that fund their research—and then wash their hands of any responsibility for how those technologies are ultimately used, instead preaching, with smug sanctimony, about how they “should” be used.

Getting back to the naivete: Tegmark, like most of his technocratic class today, devotes a good portion of the book to what has come to be known as the “goal alignment problem.” In order for AI to be developed “safely” and to the “benefit of humanity,” “we need to ensure that its goals are aligned with ours.” (p. 43). You don’t need to think about this for more than a few seconds to see how ridiculous the attempt is. First, who is this “we” he is talking about? Second, what exactly should the goals be, and who decides? Finally, how does one ensure that the goals remain aligned—if they ever can be—indefinitely into the future as society evolves? Tegmark admits that the “AI goal-alignment problem has three parts, none of which is solved and all of which is now the subject of active research.” (p. 268). In other words, he admits we are nowhere close to solving the very thing that, by his own standard, must be solved if AI is to be “beneficial,” whatever that means. But notice the other problem here. These are primarily social and philosophical issues, not technical ones, and they affect everyone. Yet is the average person involved in this “research”? Are we all invited to these conferences and seminars? Are we all at the table when these issues are considered and policy is ultimately made? Of course not. It’s in the hands of a small, select group of “researchers”—working for the “benefit of humanity,” of course…

“But who are ‘we’? Whose goals are we talking about?...both this ethical problem and the goal-alignment problem are crucial ones that need to be solved before any superintelligence is developed.” (p. 269). Yet Tegmark concedes that “the only consensus that has been reached is that there is no consensus.” (p. 269). Let’s take stock of what he is saying here: humanity must solve all of the most fundamental problems of philosophy—problems that have plagued civilization for thousands of years and are still no closer to resolution—before it can safely create AI that would be “beneficial,” and it must do all of this in the mere decades before AI is developed. This assumes that research and development of AI should continue simultaneously with humanity’s efforts to solve these problems. But wouldn’t it only be prudent to solve all of these crucial social problems first, before AI is developed, and indeed before any further research is done to develop it? Wouldn’t it be grossly irresponsible to continue AI research and development while these problems persist? Of course it would. But this is a dangerous notion for Tegmark, since it places moral responsibility in the hands of his technocratic class, and that cannot be tolerated; so the notion itself is quietly sidetracked. We must charge ahead to ensure that humanity “awakens”! Better to risk total annihilation in this race to the future than to delay the coming rapture in any way.

There is one last thing that is particularly telling: nowhere does Tegmark contemplate the possibility of a collapse of technological society, whether occurring naturally or brought about by human will. He outlines a number of possible scenarios, from AI becoming a benevolent zookeeper for humanity to AI exterminating all humans. To be clear, all of them—even those Tegmark seems to regard as better outcomes than others—are terrible. One scenario stands out, however: “technological reversion.” Tegmark posits that humanity might never reach AI because it decides to revert technologically, something akin to the Amish. The problem is that “reversion” as he describes it is a conscious, systematic, and planned de-technologization. He thinks such a coordinated effort would necessarily require a totalitarian society to implement—a fair assumption. “I think that the only viable path to broad relinquishment of technology is to enforce it through a global totalitarian state.” (p. 192). And he paints a picture of a kind of hyper-technological North Korea, a 1984 dystopia of horrific proportions. Even on his Future of Life Institute website, where he surveys[11] people to find out which of his preconceived alternative scenarios they would most hope for, a crucial scenario is omitted: that the global technological system may collapse spontaneously, or that it might be brought down through the coordinated efforts of anti-tech revolutionaries. There could be several reasons why he doesn’t mention this scenario: it’s both horrifying and politically taboo for someone of his class, or perhaps he is simply so steeped in the technological worldview that he can’t conceive of it. There is a more cynical interpretation, however: by listing it as a possible scenario, he would have to acknowledge that there are people living right now who may actually want this outcome, and recognizing these people—who are by definition diametrically opposed to him, his class, and his worldview—would simply be too dangerous. So instead he poisons the well: he associates, by tacit implication, the ideology of technological reversion with the horrors of technological totalitarianism. At best this is an ideological blind spot; at worst, an insidious political ploy. Regardless, reversion of technology is bad, according to Tegmark, because “unless our technology advances far beyond its present level, Mother Nature will drive us extinct long before another billion years have passed.” (p. 195). This is a common theme within the technocratic worldview: we must continue technological growth because only continued growth holds out the possibility that life can go on for “billions of years,” whereas, so the thinking goes, a non-technological future holds the certainty that life will cease at some point before then. The problem with this thinking is that it assumes, a priori, that there is no process fundamental to technological growth itself that ultimately and inevitably leads to disaster for life. There may well be such a process, in which case it is a certainty that continued technological progress leads to disaster for life—for “billions of years.” Already the global technological system has done enormous damage to the natural world, and it now threatens vital aspects of the biosphere’s functioning. The preservation and flourishing of life—for billions of years—may instead require the organic processes that evolved over billions of years without collective human or machine attempts at management or control.

Gloating insufferably over its own elitism (the author, rather embarrassingly, never forgets to remind you of his celebrity “friends,” his sushi dinners, cocktail parties, and dances on resort beaches), the book is worth reading only for its insights into the techie mindset. It oscillates between wild speculation (Dyson spheres, harnessing energy from black holes, expanding AI through the universe via an intergalactic system of self-replicating machines, and so on) and blind, unthinking optimism on the hard social and philosophical questions. The mindset of the techie is the logical conclusion of the mindset of civilization: conquest. Conquest of the planet, conquest of the universe, the shaping of the world into an ideal, the “improvement” of nature. It is as old as civilization, but its dangers are now greater than ever. Whereas an ancient civilization could threaten only its local ecology, modern technological civilization seeks to conquer and control the entire universe. “[T]he matter in our machines, roads, buildings and other engineering projects appears on track to soon overtake all living matter on Earth. In other words, even without an intelligence explosion, most matter on Earth that exhibits goal-oriented properties may soon be designed rather than evolved.” (p. 258). If you think there’s a place for human freedom, autonomy, and dignity in this “designed” world, or if you think there’s a place in it for humanity at all for much longer, then, like Tegmark, you’re dreaming.


___________

NOTES:

[1] This review is based on the 2017 hardcover edition.

[2] Kaczynski, Theodore John, Anti-Tech Revolution: Why and How, Fitch & Madison Publishers, Scottsdale, Arizona, 2020, p. 83.

[3] See, e.g., Joy, Bill, “Why the Future Doesn’t Need Us,” Wired Magazine, April 1, 2000.

[4] Opposed to these hopes of a wonderful technological future is the dread of a non-technological future: “Had our Universe never awoken, then, as far as I’m concerned, it would have been completely pointless—merely a gigantic waste of space. Should our Universe permanently go back to sleep due to some cosmic calamity or self-inflicted mishap, it will, alas, become meaningless.” (p. 23). This mirrors the typical duality of most contemporary religions: good and evil, virtue and sin, heaven and hell.

[5] It should be noted that feelings of purposelessness, emptiness, and anomie are already prevalent in modern society at rates far beyond those observed in premodern societies. 

[6] Artificial goals created by a society to fill the need people have for goals, as opposed to natural goals directly linked to the survival of the individual. See, e.g., Kaczynski, Theodore John, Technological Slavery, Fitch & Madison Publishers, Scottsdale, Arizona, pp. 36-50, 154, 155, 195, 279.

[7] See, e.g., Anti-Tech Revolution (2020), pp. 50-64.

[8] Is it possible Tegmark is consciously deceiving his readers?  Such a cynical position is—as Tegmark himself would say—not beyond the realm of physical possibility.

[9] Anti-Tech Revolution (2020), p. 36.

[10] https://futureoflife.org/open-letter/ai-principles/

[11] https://futureoflife.org/ai/superintelligence-survey/


Copyright 2024 by qpooqpoo. All rights reserved. This is published with the permission of the copyright owner.