Essay

Philosophy Against the Present: The Foundations and Critique of Longtermism

Fig. 1.1: Images taken by Curiosity's Navigation Camera (Navcam) on Mars. Credit: NASA/JPL-Caltech

1. Caring for the Long Term

In his book Crowds and Power, Elias Canetti writes of the “noble” concern with posterity in the following terms: “In the universal anxiety about the future of the earth, this feeling for the unborn is of the greatest importance.”[1] Today, too, it seems intuitive to be increasingly concerned for future generations: with the climate crisis, the aftermath of the Covid-19 pandemic, nuclear threats in the wake of the ongoing war in Ukraine, or recent assertions of a sentient artificial intelligence (AI),[2] phenomena that pose existential threats to human life—or are at least perceived as doing so—are on the increase. It is along these lines that this text encounters the philosophy of longtermism, promoted mainly by a group of scholars from Oxford, most prominently Nick Bostrom and William MacAskill.

In longtermism, Canetti’s intuitive concern is translated into simple math: Bostrom and others seek to quantify human life. They consider each human life (the living and the not-yet born) to be of equal value[3] and estimate the number of possible future “human-brain-emulation subjective life-years” on the planet at roughly 10⁵⁴.[4] They accordingly declare that “the ultimate potential for Earth-originating intelligent life is literally astronomical.”[5] The care for these future lives thus becomes far weightier than Canetti’s remark might suggest—it becomes a quasi-imperative. Bostrom argues that, since it is difficult to predict how to trigger good outcomes in the future, one should instead focus on decreasing so-called existential risks, “where an adverse outcome [an existential catastrophe] would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”[6]

Averting existential risks can thus be quantified in longtermist thinking, and this has far-reaching consequences. Bostrom goes on to take the “most conservative” estimate of 10¹⁶ possible future human lives on Earth (it “entirely ignores the possibility of space colonisation and software minds”)[7] that could be lost to an existential catastrophe. He states that reducing such an existential risk “by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.”[8] For some longtermist scholars, this has quite alarming implications. In his dissertation On the Overwhelming Importance of Shaping the Far Future (2013), supposedly “one of the best texts on existential risks,”[9] Nicholas Beckstead meditates on the “ripple effects” a human life might have for future generations and concludes “that saving a life in a rich country is substantially more important than saving a life in a poor country” due to the higher levels of innovation and economic productivity attained in such countries.[10] In longtermism, risk reduction is thus equated with saving (particular) human lives in the future. This conclusion alone calls for an examination of longtermism in light of its histories and consequences. Following Émile P. Torres, a former longtermist turned vocal critic, longtermism cannot simply be equated with caring about the long term or the wellbeing of future generations.[11] The idea and configuration of humanity’s potential is crucial. Hence, this essay will not undertake a critique of long-term thinking as such, but rather an unfolding and critique of the ideas that inform longtermism. This will become particularly evident in its stances towards systemic issues and climate change.
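
To make the reasoning behind Bostrom’s claim explicit: one millionth of one percentage point is a factor of 10⁻⁸, so on his most conservative estimate the expected number of lives saved by such a risk reduction is

10¹⁶ × 10⁻⁸ = 10⁸,

that is, a hundred million lives, or exactly a hundred times the value of a million human lives.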

 

2. Existential Risks and Humanity’s Potential

Longtermist think tanks like the Future of Humanity Institute (Oxford) or the Centre for the Study of Existential Risk (Cambridge) enjoy vast amounts of funding from influential people in the technology sector (e.g. Elon Musk).[12] William MacAskill, co-author of “The Case for Strong Longtermism”[13] and author of the recent, heavily publicized book What We Owe the Future (2022), is president of the Centre for Effective Altruism, which has secured $46 billion in committed funding and promotes a longtermist philosophy.[14]

This network of philosophy departments, think tanks, and Silicon Valley companies is important for understanding the course of action longtermism facilitates. The network’s synergies become evident in the conclusions of the respective philosophers. In the aforementioned paper “The Case for Strong Longtermism,” Hilary Greaves and MacAskill assume a number of 10²⁴ possible future lives and, following this assumption, argue for the funding of AI research as a form of risk prevention: they speculate that $1 billion of targeted spending on AI safety could reduce the probability of an AI-driven catastrophe in the coming century by 0.001 percentage points, and conclude that $100 spent could thus save one trillion lives—“far more than the near-future benefits of bednet distribution.”[15] AI research is thereby deemed more “effective” than the distribution of antimalarial bednets today. This justification through speculative long-term benefits is reminiscent of Beckstead’s troubling claim. Setting aside the mathematical tools for the moment, these calculations hint at longtermism’s potential to justify cruel political decisions[16] and show how it negotiates issues of global health and AI alignment within a framework of capital allocation. This underlines the importance of examining its philosophical foundations. Since longtermism is promoted by a wide range of disparate (albeit connected) individuals and institutions, this essay will mainly follow the ideas of Bostrom, “the father of longtermism,”[17] who coined the term “existential risk”[18] and directs the Future of Humanity Institute,[19] in order to examine the underlying assumptions that lead to the promotion of such problematic directives.
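
Spelled out (the steps are reconstructed here from the figures just cited), the expected-value arithmetic runs:

10²⁴ lives × 10⁻⁵ = 10¹⁹ expected lives saved per $10⁹, i.e. 10¹⁰ lives per dollar, or 10¹² (one trillion) lives per $100.

How such multiplications of astronomical stakes with minuscule, unmeasurable probabilities acquire their apparent authority is taken up in section 4.1.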

 

3. The Foundations of Longtermism

When invoking humanity’s potential, longtermism tends to draw from transhumanism, space expansionism, and total utilitarianism[20]—fields that gain particular urgency from what longtermists refer to as “the grand battle”: the idea that right now is a pivotal point, a moment where certain values in these fields are about to be locked in, in ways that might become unchangeable in the future.[21] So, which values are perpetuated by longtermism?

Fig. 2: Images taken by Curiosity's Navigation Camera (Navcam) on Mars. Credit: NASA/JPL-Caltech

 

3.1 Software Minds

Bostrom employs the notion of “Earth-originating intelligent life.” He does not limit what the human might become in the future and often invokes the figure of the posthuman. Consequently, the number of possible future lives in his calculation skyrockets as soon as scenarios of uploaded human minds without biological bodies are taken into account. Regarding humanity’s potential, Bostrom considers the permanent foreclosure of any possible scenario leading to such a transformation of the human to be an existential catastrophe in itself.[22] Imaginaries of enhanced or body-less humans belong to the philosophy of transhumanism. Greaves and MacAskill, for instance, refer to “leading theories of philosophy of mind” which support the idea of non-biological, digitally instantiated consciousness.[23] Interestingly, though perhaps unsurprisingly, this stance is also common among technologists in Silicon Valley and is reflected in information monism, the belief that everything can be broken down into information as the smallest unit of calculation,[24] and in computationalism, the belief that the human brain works just like a computer.[25] Corporeality, as something integral to life, is dismissed in these spheres. Or, as Bostrom poetically puts it: “The mind’s cellars have no ceilings,” whereas “your body is a deathtrap.”[26]

The idea of the posthuman transcending bodily restrictions and the conceptualization of humanity’s extinction share a history; I will return to the latter below. In transhumanism, the posthuman is something to be achieved through human enhancement. Critics of transhumanism generally connect its origins to eugenics. Julian Huxley proposed the term “transhumanism” in 1957 for the aspiration of humanity to transcend itself. He considered the search for a eugenic policy to meet these ends “not only an urgent but an inspiring task.”[27] Transhumanists usually counter this narrative by pointing to the grounding of contemporary transhumanist human enhancement in liberalism[28] and by expanding the history of human striving towards enhancement altogether. The latter move is, for instance, undertaken by Bostrom when he asserts that humans have always searched for methods of self-improvement. He cites the Epic of Gilgamesh, one of the oldest surviving written texts, in which the eponymous king is on a quest for immortality.[29] Bostrom furthermore draws a line to the Renaissance—when, in his estimation, the self-optimization of humans became a central theme.[30] Bostrom thus positions the aspiration for enhancement as a constant in human history, and this historical continuity is then employed to naturalize transhumanists’ contemporary aspiration to further enhance humanity. All of this is in line with longtermist thinking, even more so when one highlights transhumanism’s elitist and exclusive character.[31] According to Janina Loh, transhumanism needs this “trivial anthropology” in order to justify and naturalize its aspirations for enhancement.[32] In this way, humanity and its evolution become controllable in transhumanist thought. To counter arguments pointing to a more multifaceted view of the human and the difficulty of objectively assigning value to specific enhancements, transhumanists usually introduce what Bostrom calls the “technological completion conjecture”—the belief that “if scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.”[33] This technologically determinist vision of inexorable progress, in conjunction with Bostrom’s anthropology, solidifies the perceived inevitability of human enhancement (as an essential fulfilment of humanity’s potential).

While such enhancement is supposed to lead to posthumanity at some point, transhumanists believe that human enhancement is a continual process without end.[34] This raises the question of when humanity’s potential is ever going to be realized. When will humans cease to be instrumental to this quest and not only strive for, but actually live in, the moment of this invoked potential? According to Bostrom, at least not until mind uploading becomes a reality. This illustrates how subjective the concept of humanity’s “potential” is. Moreover, as Christopher Coenen and Reinhard Heil argue, transhumanism thereby misunderstands its Enlightenment origins in a self-reflexive, autonomous ethics when it employs the human as a mere means to the end of the teleological subjugation of nature. The emancipatory moment technology could facilitate vanishes, and humanity becomes a mere appendage of technology.[35]

 

3.2 “Who Cares About Earth?”

Apart from the virtual realm into which humans could expand as software minds, space is another realm that opened up for the human in the twentieth century.[36] Transhumanists see humanity’s “confinement to planet Earth” as something that should be “overcome.”[37] Seen the other way around, for them every delay of space colonization amounts to an “astronomical waste” of value. In Bostrom’s terms: “The potential for approximately 10³⁸ human lives is lost every century that colonization of our local supercluster is delayed.”[38] Space colonization becomes not only imperative; the failure to achieve it becomes an existential catastrophe. Space colonization moreover serves as an insurance against any extinction event on Earth and as the only way out before the dying Sun eventually scorches the planet—a deus ex machina providing “existential redundancy.”[39] It also supposedly provides unlimited resources for techno-salvation beyond an otherwise finite planet Earth. The destruction caused on the way to space is a calculated price for future-facing longtermists.[40]

 

Fig. 3: Peter Paul Rubens, The Fall of Phaeton, ca. 1604/05. The story of Phaeton crashing the chariot of his father, the Sun god Helios, and burning the Earth is also used by Plato to allegorically illustrate what he thought to be periodic conflagrations of the planet due to cosmic constellations.[41]

 

When interviewed on this topic, Elon Musk fittingly exclaimed: “Fuck Earth! […] Who cares about Earth?”[42] Taking the lack of knowledge of extraterrestrial life as a hint at “a whole lot of dead, one-planet civilisations,” Musk understands space exploration as a way of saving humanity’s future, which assumes priority over earthly matters such as the alleviation of global poverty.[43] (Notably, global poverty is still thought of as something to tackle through Musk’s personal capital allocation rather than systemic change.) On the other hand, even entertaining Musk’s vision, space technologies might pose an existential risk themselves,[44] space regimes might subordinate the individual due to novel threats,[45] and a state of Hobbesian “warre” might emerge between space civilizations, leading to “astronomical amounts of [...] suffering.”[46] Then again, this arguably might be no problem for the post-biological posthuman.[47] Moreover, as Thomas Moynihan argues, “everything other than expanding outward is [...] a definite death sentence.”[48]

In longtermism, to use a simple image, humanity and its potential are invoked as analogous to a teenager with most of their life ahead of them.[49] Why then, and this goes for longtermists’ concerns generally, does this young human have to start packing their things now, leaving their “cradle” already?[50] On another note, one could ask whether initial colonization would even be desirable for the individual. Setting aside escape from livelihoods worsened by climate change, who would want to live on a planet devoid of the natural diversity facilitated by Earth’s atmosphere and of the vast possibilities for human connection its global population makes possible? Consider William Shatner, who, once aboard Jeff Bezos’ Blue Origin spaceship, did not recall his space trip with awe: looking back at Earth from the “vicious coldness of space,” he instead felt an “overwhelming sadness.”[51]

 

Fig. 4: Victor Hugo’s symbolist Planet from around 1857 echoes the cosmic loneliness articulated by Shatner and also strikingly resembles the famous “Earthrise” photo taken a hundred years later. It also shows how images are laden with projections: the “planet” is a coin imprint.[52]

 

Fig. 5: Earth photographed by Buzz Aldrin from the Apollo 11 Lunar Module. Toby Ord presents this and other Earthrise-like images on his website in order to convey a sense of urgency about the continued existence of human life on Earth. Of course, this universalizing depiction omits social struggles and sets the non-human life-world apart from the human, which implies its own set of politics.[53]

 

3.3 Total Utilitarianism

Longtermists refrain from such individualistic considerations and instead employ a totalizing view of the world, one which calculates the overall outcome. Their moral philosophy is accordingly that of total utilitarianism.[54] It follows a maximization structure (of aggregate well-being) and is a consequentialist ethics, meaning it judges actions solely by their expected outcome. Hence, utilitarians always seek to forecast.[55] Total utilitarians (henceforth “utilitarians”) are concerned with the overall increase of value in the world rather than with individual lives.[56]
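
In schematic terms (a gloss on the position, not a formula taken from the longtermist literature itself), the total utilitarian injunction is to choose whichever action maximizes expected aggregate well-being, summed over all lives, present and future:

choose a so as to maximize E[ Σᵢ wᵢ(a) ],

where wᵢ(a) is the well-being individual i (born or unborn) would enjoy given action a. The criticisms that follow, from Rawls’s worry about aggregation to Williams’s worry about integrity, target features of precisely this maximand: value enters only as a sum, and persons only as its carriers.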

John Rawls argues that utilitarianism tends to discriminate against minorities when it employs simple aggregation to determine total value, equalizing every quantum of value.[57] In the case of Beckstead and others, “minorities” denotes those who are expected to affect the smallest number of future people—without regard for the systemic effects that shaped this position. Moreover, Bernard Williams argues that utilitarianism deprives people of their own integrity. Since utilitarianism entails the idea of “negative responsibility” (a responsibility for everything that one allows or fails to prevent) and is only concerned with outcomes, the line of the individual’s actions and projects becomes blurred, eventually leading to an “unlimited responsibility” in which one has to take into account how one’s own actions affect those of other people. Integrity is lost when the identification of the individual with their projects and attitudes collides with the frayed scope of unlimited responsibility.[58] The utilitarian agent becomes a mere “channel between the input of everyone’s projects, including his own, and an output of optimific decision.”[59] They become a container of value. As such, utilitarianism goes well together with transhumanism, the computationalist dream of uniformly uploading minds, and the teleological drive towards the posthuman. Since value is individualized in units of lives, utilitarian perspectives also provide no framework for cultures or communities.[60] This approach of ostensibly equal aggregation of value nevertheless ends up catering to a specific group of people (as transhumanism also appears to do) when climate change is dismissed as a non-existential risk[61] and increased value is attached to certain lives merely because of their expected effects on future people (Beckstead).[62] Taking a step back from these “moral mathematics,” Kieran Setiya emphasizes the nature of morality itself: while employed here as a supposedly “detached, impersonal theorizing about the good,” it might instead be thought of as a way to govern ourselves or, more profoundly, as an expression of human nature.[63]

 

4. Against Longtermism

Transhumanism and the drive towards space colonization, combined with utilitarian population ethics, constitute what longtermists regard as humanity’s “potential”—the ultimate objective of all their efforts. The potential of software minds and space colonies dwarfs every other pressing issue of the present, and so it becomes morally permissible, by this logic, to completely ignore such issues (as long as they do not develop into existential risks).[64] Longtermism needs to be understood under this aspect, and not as mere long-term thinking.

 

4.1 Probability

Calculating the benefits of AI funding over antimalarial bednets is just one example of how mathematical tools are used to render moot any argument against the concern with the far future. Crucial to these calculations, and a consequence of longtermism’s utilitarian foundations, is the use of Bayesian epistemology, according to which probabilities express subjective expectations rather than measured frequencies. In longtermism, these expected probabilities are used to feign objectivity, but in reality they can be arbitrary, or even shaped by the interests of the person invoking them. This has led to areas of risk with a shallow empirical basis (AI) being heavily foregrounded, whereas empirically grounded risks such as climate change are dismissed altogether as non-existential.[65] Vaden Masrani criticizes this lack of data. Even with data, however, fully predicting the future is impossible, as the future depends on future knowledge not yet attained.[66] Deriving moral obligations from such expected probabilities therefore loses its force. In his criticism of longtermism, Masrani draws on Karl Popper, who dismisses contemplations of the far future as “dreams from our poets and prophets” and argues against the instrumentalization of one generation’s suffering as a means to the end of realizing a later one’s happiness. Popper further emphasizes that the present is the only point in time that can be reliably affected.[67] One can help those who are suffering only in the now, whereas longtermism promotes an indefinite disregard for the present and near-future concerns of others.[68]
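
To see how much work the chosen probabilities do, here is a minimal sketch of the expected-value comparison in Python; every figure is an illustrative placeholder of my own, not a number taken from any longtermist text:

```python
# Minimal sketch of the expected-value reasoning criticized above.
# All figures are illustrative placeholders, not numbers from the
# longtermist literature.

def expected_lives_saved(lives_at_stake: float,
                         risk_reduction_per_dollar: float,
                         dollars: float) -> float:
    """Expected lives saved = lives at stake x absolute risk reduction bought."""
    return lives_at_stake * risk_reduction_per_dollar * dollars

BUDGET = 100.0  # dollars to allocate

# Empirically grounded intervention: antimalarial bednets.
# Placeholder: roughly one life saved per few thousand dollars donated.
bednet_lives = BUDGET / 5_000

# Speculative intervention: AI risk reduction. The astronomical stake
# (10^24 future lives) is multiplied by a per-dollar risk reduction that
# no data constrains; sweeping that free parameter flips the conclusion.
for reduction_per_dollar in (1e-14, 1e-22, 1e-30):
    ai_lives = expected_lives_saved(1e24, reduction_per_dollar, BUDGET)
    verdict = "beats" if ai_lives > bednet_lives else "loses to"
    print(f"assumed reduction {reduction_per_dollar:.0e} per dollar: "
          f"AI funding {verdict} bednets "
          f"({ai_lives:.1e} vs {bednet_lives:.1e} expected lives)")
```

The point of the sketch is not the particular numbers but that the ranking between an empirically grounded and a speculative intervention is decided entirely by a free parameter that no data constrains; this is precisely what Masrani’s appeal to Popper targets.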

Another concern with the employment of expected probability lies in the destruction “of the means by which we make progress,” as Ben Chugg argues. He defines progress as “solving problems and generating the knowledge to do so,” wherein the solutions chosen beget new problems, which are in turn corrected, allowing better ideas to be implemented. He sees this at work in morality, the arts, and the sciences. Ignoring short-term issues thwarts this ability to advance. It also makes it impossible to receive feedback on the efforts directed at solving problems (or reducing existential risks).[69] Who could evaluate how and when to reduce the likelihood of an existential risk in the distant future and, in addition, derive instructions for action from this evaluation?

 

4.2 Democratizing Risk

This grappling with Bayesian epistemology goes to show that longtermism cannot ensure that the biases of its researchers do not channel into the expectations used in complex risk assessments. Longtermism blurs the line between certainty and uncertainty, between different quantities of evidence, and between the plausibility of made-up scenarios and that of already existing issues. Moreover, longtermism does not answer the question of how it incorporates “the diversity of human preferences and visions of the future,” as Carla Zoe Cremer and Luke Kemp note.[70] As far as this is statistically tractable, it has been shown that only a minority of people subscribe to utilitarian or transhumanist views.[71] This would be dismissible if longtermists only pursued scholarly endeavors; however, they also have a growing political influence.[72] A tradeoff between different interests is compromised by the overrepresentation of a few.[73] Cremer and Kemp warn that risk prevention might “justify violence, [lead to] dangerous technological developments, or drastically constrain freedom in favor of (perceived) security.”[74] Drawing historical connections, they propose democratic constraints on such endeavors not as a form of collective self-interest but as a proven way to reduce violent conflict.[75] Moreover, they emphasize the nature of collective decision-making concerning human futures: Whose visions, desires, and values are being incorporated in the shaping of those futures? One case is that of climate change. Supposedly not likely to cause extinction, it is not considered an existential risk.[76] Its cascading effects in intricate global systems remain contested.[77] Cremer and Kemp propose shifting attention away from identifying singular existential risks and towards identifying processes that feed into an overall likelihood of human extinction: “A field looking for the one hazard to kill them all will end up writing science fiction. More speculative risks are prioritized because a seemingly more complete story can be told.”[78]

 

4.3 Eugenics Revisited

In his 2020 book X-Risk, Thomas Moynihan, a researcher at Oxford University’s Future of Humanity Institute, traces the intellectual genealogy of human extinction as an idea. He describes how, in the western tradition, before Kant awakened from his dogmatic slumber, the world was considered to be good—“the best of all possible worlds,” following Leibniz.[79] Kant showed that values are human-made and thus do not exist independently of the human species; humanity alone became responsible for the things it cherishes.[80] Then, in the 1800s, the idea of a universally inhabitable cosmos collapsed, and, in the 1900s, so did the idea that every inhabitable planet would eventually become home to rational life forms.[81] With Charles Darwin’s theory of evolution, humans eventually understood extinction not as an unprecedented incident but as a common feature of evolution and as sheer contingency.[82] Prior to Darwin, extinction had been understood as immoral;[83] with Darwin, it was found that “life is essentially something amoral,” as Nietzsche subsequently put it.[84] This novel idea of human extinction went on to spark undertakings toward its prevention. Perverting Darwinian theory and believing they were working against “civilizational decay,” the eugenicists promised to safeguard humanity by means of guided evolution.[85] Huxley even saw in the non-application of eugenics the assured gradual destruction of humanity.[86] The eugenicists’ incorporation of Darwinian thought remains a blank in Moynihan’s text, although it constitutes an early confrontation with, and supposed answer to, the threat of extinction.[87] As mentioned above, transhumanism likewise fails to adequately confront this eugenic past. Interestingly, it is here that Darwin is once again essential. Moynihan states: “The Darwinian order is a dynasty of death; the only way beyond it is artifice.”[88] Bostrom contemplates the need for “intellectually talented individuals” in the future and the seemingly “negative correlation in some places between intellectual achievement and fertility.” Employing the term “dysgenic pressures,” by which he denotes the outbreeding of “talented individuals” by those with “lower IQs,” Bostrom considers this to constitute an existential risk. Allaying these concerns, he points to the possibilities of enhancing human intellect on the horizon.[89] Recalling the great suffering caused by eugenic endeavors (and the fact that they were once meant to avert humanity’s demise) should lead one to reflect on the political and social dimensions of the actions facilitated by longtermism.

 

5. Conclusion: Longtermism and Progress

In his essay on “Progress,” Adorno writes that “Progress occurs where it ends.”[90] With this, he points to the emancipatory potential that emerges when progress as a form of domination is overcome.[91] Arguably, this gesture of domination is present in the aspirations of space colonization and in many related dreams of human enhancement. Recalling Chugg’s notion, progress should also entail political and social change. While Bostrom considers a “misguided world government” stabilized by “advanced surveillance or mind-control technologies”[92] an existential risk, the praxis of longtermists today does away with questions of larger systemic change. Right now, longtermists are, as Alice Crary argues, simply “perpetuating the institutions that reliably produce the ills they address.”[93] Progress, on the longtermist stance, is considered only in terms of technological development.[94] But such dreams are hardly compatible with the climate crisis.[95] Why not embark on grander political action, beyond the individualistic, non-binding reorientation of the charitable pursuits of a small, affluent group (as in the “Effective Altruism” movement)? Especially if the lock-in of values is of concern. In line with its utilitarian roots, longtermism propagates an individualism that can never be practically redeemed, given that its calculations concern the invisible mass of distant lives. Against the background of the assumptions outlined above, a particular worldview is thus universally imposed.

On another note, this also makes sense through the lens of “criti-hype,” a term proposed by Lee Vinsel. “Emerging technologies,” as one might call AI today, have often failed to have the impact projected for them; yet those studying and criticizing their potential harms have frequently adopted and amplified the very grand narratives promoted by the technology companies. Economic conditions recede into the background in favor of these socio-technical imaginaries.[96] In the web of heavily funded Oxbridge think tanks and tech billionaires pursuing space travel, this dynamic mutually assures further academic research and creates an action plan for the products needed along the journey. The tech industry can cast itself as a “savior” while ignoring its short-term effects.[97] But the critique of longtermism is not restricted to the tech industry. In longtermism, as Christine Emba argues, the future becomes a “clean slate” onto which techno-utopian fantasies can be projected, while the concerns of existing communities about the very systems that allowed longtermists to thrive are abandoned.[98] Longtermism and the risks it deems existential lack a framework of global justice and history; moreover, they perpetuate and produce global injustice and inequality while the status quo remains unquestioned. The alleviation of global poverty and the fight against the climate crisis are only secondary goals of longtermist charities, yet they could never be achieved through charity in the first place: here, systemic change is needed. And as the discussion of probability has shown, only the present can be reliably affected. Kathryn Yusoff and Jennifer Gabrys note that the identification of risk configures the sphere of the actionable and thus of the imagination.[99] So, with the demand for systemic change in mind, I finish with a quote from Toby Ord (Future of Humanity Institute):

In ages to come, when global poverty is no more, people will look back at our time and be dumbfounded by the moral paralysis of those who had the resources to help. Even more shocking will be the fact that so many theories failed to accord global poverty a central place—indeed, that they found it advantageous not to demand much sacrifice from their adherents. For a moral theory to demand that we make large sacrifices in righting these wrongs is not too demanding, but just demanding enough.[100]

 


[1] Elias Canetti, Crowds and Power, Continuum, 1978, p. 46f.

[2] See J. Porter, “Google suspends engineer who claims its AI is sentient,” The Verge, 2022, https://www.theverge.com/2022/6/13/23165535/google-suspends-ai-artificial-intelligence-engineer-sentient. Accessed 26 Oct. 2022. Most recently, this includes the debate around Large Language Models such as GPT-4, their ostensible “intelligence,” and an open letter published by the Future of Life Institute (Cambridge) that calls for a halt on further AI research. According to the letter, which is co-signed by high-profile people like Elon Musk, Jaan Tallinn, and also Yuval Noah Harari, “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute: “Pause Giant AI Experiments,” https://futureoflife.org/open-letter/pause-giant-ai-experiments/, own emphasis). Emily M. Bender has criticized how the letter misleadingly highlights possible or actual negative effects of AI while actually engaging in “#AIhype” with no regard for further questions of power and the general use of technology (Bender, “Policy makers: Please don’t fall for the distractions of #AIhype,” https://medium.com/@emilymenonbender/policy-makers-please-dont-fall-for-the-distractions-of-aihype-e03fa80ddbf1).

[3] Nick Bostrom, “Existential Risk Prevention as Global Priority,” Global Policy, vol. 4, no. 1, Feb. 2013, pp. 15–31, https://doi.org/10.1111/1758-5899.12002, p. 16; Bostrom naturally includes “human-brain-emulations” in “human lives” as he is an advocate of human enhancement (see for example Bostrom: “Why I Want to be a Posthuman When I Grow Up”).

[4] 10⁵⁴ equals 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.

[5] Nick Bostrom, “Existential Risks FAQ,” 2013. https://existential-risk.org/faq.pdf, p. 4.

[6] Bostrom: “Existential Risks,” p. 2.

[7] Bostrom: “Existential Risk Prevention as Global Priority,” p. 18.

[8] Ibid., p. 18f (italics in original); Bostrom takes the number of 10¹⁶ from Derek Parfit’s 1984 book Reasons and Persons, Clarendon Press, 1987, p. 453f.

[9] According to Toby Ord, The Precipice: Existential Risk and the Future of Humanity, Hachette Books, 2020. Chapter 6, Footnote 38.

[10] Nicholas Beckstead, On the Overwhelming Importance of Shaping the Far Future, Rutgers, The State University of New Jersey, 2013, p. 11.

[11] Émile P. Torres, “Against Longtermism,” Aeon, 2021, https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo. Accessed 26 Oct. 2022.

[12] The latter was co-founded by Skype co-founder Jaan Tallinn (Zaitchik: “The Heavy Price of Longtermism”). Also among the top funders of this cause are Silicon Valley millionaires like Facebook co-founder Dustin Moskovitz and cryptocurrency entrepreneur Sam Bankman-Fried (Stöcker: “Ist »Longtermism« die Rettung—oder eine Gefahr?”). In the context of the spectacular collapse of Bankman-Fried’s crypto exchange FTX and the ensuing billion-dollar losses for its customers, the movement around effective altruism and longtermism received broad critical attention anew. After Bankman-Fried’s cryptic statements about his philosophy and motives, it is questionable to what extent longtermism was for him a mere, very high-profile marketing tool, or whether he also employed it to justify morally and legally questionable actions. Clearly, Bankman-Fried serves here only as one recent, very public example of an advocate of longtermism (Zeeshan Aleem: “Opinion | How Sam Bankman-Fried Exposes the Perils of Effective Altruism”).

[13] Throughout this essay, I will consider the “strong” version of longtermism that regards the far future as not one but “the” area of present concern. This view is also taken by Bostrom and MacAskill (Greaves and MacAskill: “The Case for Strong Longtermism (GPI Working Paper No. 5-2021)”; Nick Bostrom, “Astronomical Waste: The Opportunity Cost of Delayed Technological Development,” 2003, http://www.nickbostrom.com/astronomical/waste.pdf, p. 6).

[14] Benjamin Todd, “Why despite Global Progress, Humanity Is Probably Facing Its Most Dangerous Time Ever,” 80,000 Hours, 2017, https://80000hours.org/articles/existential-risks/. Accessed 26 Oct. 2022; Sigal Samuel, “Effective Altruism’s Most Controversial Idea,” Vox, 6 Sept. 2022, https://www.vox.com/future-perfect/23298870/effective-altruism-longtermism-will-macaskill-future. Accessed 26 Oct. 2022.

[15] Hilary Greaves and William MacAskill, “The Case for Strong Longtermism,” Global Priorities Institute, GPI Working Paper No. 7-2019, September 2019, https://globalprioritiesinstitute.org/wp-content/uploads/2020/Greaves_MacAskill_strong_longtermism.pdf. Accessed 29 Oct. 2022, p. 15.

[16] Olle Häggström contemplates the political justification of genocide (Olle Häggström, Here Be Dragons: Science, Technology and the Future of Humanity, first edition, Oxford University Press, 2016, p. 240).

[17] Torres, “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk,’” Current Affairs, 28 July 2021, https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk. Accessed 26 Oct. 2022.

[18] Bostrom: “Existential Risks.”

[19] This means the examination will mainly deal with the “techno-utopian approach” of existential risk studies, as is highlighted by Cremer and Kemp (Carla Zoe Cremer and Luke Kemp, “Democratising Risk: In Search of a Methodology to Study Existential Risk,” 2021, pp. 1–34, https://ssrn.com/abstract=3995225, p. 1).

[20] Torres: “Against Longtermism”; Cremer and Kemp: “Democratising Risk,” p. 2f.

[21] Torres: “The Dangerous Ideology of the Tech Elite w/ Phil Torres,” 20:00.

[22] Ibid., 22:30; Torres: “Against Longtermism.”

[23] Greaves and MacAskill: “The Case for Strong Longtermism (GPI Working Paper No. 5-2021),” p. 7.

[24] Janina Loh, Trans- und Posthumanismus zur Einführung, Junius, 2019, p. 27.

[25] Arthur I. Miller, The Artist in the Machine: The World of AI Powered Creativity, The MIT Press, 2019, p. 296.

[26] Bostrom, “Letter from Utopia,” Studies in Ethics, Law, and Technology, vol. 2, no. 1, Jan. 2008, https://doi.org/10.2202/1941-6008.1025, p. 3f; Elon Musk has a similar stance towards human bodies as “hideous sacks of meat” (Guerrero: “I once fell for the fantasy of uploading ourselves”).

[27] Julian Huxley, New Bottles for New Wine, Chatto & Windus, 1959, p. 306. By this time (post-World War II), Huxley was already repositioning eugenics socially and biologically. Dropping the term “race” and concerns with “racial deterioration,” and propagating “evolutionary humanism,” Huxley bridged “old eugenics” and “new eugenics” based on molecular biology, while also paying attention to its public appeal (Weindling: “Julian Huxley and the Continuity of Eugenics in Twentieth-century Britain”).

[28] Stefan Lorenz Sorgner, Übermensch: Plädoyer für einen Nietzscheanischen Transhumanismus, Schwabe Verlag, 2019, p. 103.

[29] Bostrom: “Why I Want to be a Posthuman When I Grow Up,” p. 20.

[30] Like other transhumanists, he cites Pico della Mirandola’s Oration on the Dignity of Man (1486) according to which human dignity might be enhanced through self-optimization. (Bostrom: “Dignity and Enhancement,” p. 11f).

[31] Cremer and Kemp, “Democratising Risk,” p. 21.

[32] Loh, Trans- und Posthumanismus, p. 83; Interestingly, the transhumanists herewith employ a maneuver also found in their counterpart of bioconservatism: through this trivial anthropology, the supposedly natural is invoked to veil ulterior political and ideological motives (see Lorraine Daston, Against Nature, MIT Press, 2019, p. 3).

[33] Bostrom: “The Future of Humanity,” p. 5.

[34] Max More, “The Philosophy of Transhumanism,” in Max More and Natasha Vita-More, editors, The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, Wiley-Blackwell, 2013, pp. 3–17; p. 14.

[35] Christopher Coenen and Reinhard Heil, “Historische Aspekte aktueller Menschenverbesserungsvisionen,” JAHRBUCH FUR PADAGOGIK 2014. Menschenverbesserung Transhumanismus, edited by Sven Kluge et al., Peter Lang GmbH, 2014, pp. 35–50, 10.3726/978-3-653-05104-9, p. 45.

[36] Similar to Bostrom’s notion of the body as a “deathtrap,” the Russian cosmist and precursor of transhumanism Nikolai Fyodorovich Fyodorov regarded Earth as a “cemetery” (Fyodorov: Philosophy of the Common Task, p. 189, quoted in Moynihan: X-Risk, p. 389).

[37] Doug Bailey et al., “The Transhumanist Declaration,” Humanity+, 2009. https://www.humanityplus.org/the-transhumanist-declaration. Accessed 7 Sept. 2022.

[38] Bostrom: “Astronomical Waste,” p. 3.

[39] Thomas Moynihan, X-Risk: How Humanity Discovered Its Own Extinction. Urbanomic, 2020, p. 389.

[40] Jean Guerrero, “I Once Fell for the Fantasy of Uploading Ourselves. It’s a Dangerous Myth,” Los Angeles Times, 10 Oct. 2022, https://www.latimes.com/opinion/story/2022-10-10/longtermism-climate-change-elon-musk. Accessed 26 Oct. 2022.

[41] Plato, Timaeus and Critias, p. 10.

[42] Musk quoted in Andersen: “Exodus.”

[43] Andersen: “Exodus.”

[44] Torres, “Space Colonization and Suffering Risks,” p. 2.

[45] Daniel Deudney, Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, Oxford University Press, 2020, p. 350.

[46] Torres: “Space Colonization and Suffering Risks,” p. 16; Due to the vast distances between colonies, Torres assumes an all-encompassing Leviathan in the Hobbesian sense to be impossible to implement (ibid., p. 10f).

[47] Moynihan: X-Risk, p. 407f; Moynihan argues that in the context of space expansion there will also be a leaving behind of the “baggage of natural selection” which at present still leads to aggression or preference for “one’s own kind.” Believing that the human future is defined by such present “sins” is regarded as anthropocentrism by Moynihan (Ibid.).

[48] Ibid., p. 408.

[49] Throughout his new book, William MacAskill also uses the figure of the teenager as metaphor for humanity (William MacAskill, What We Owe the Future. First edition, Hachette Book Group, Inc, 2022).

[50] “The Earth is the cradle of reason, but one cannot forever live in the cradle.” Moynihan quotes Konstantin Tsiolkovsky when emphasizing that remaining “on the earth is precisely the evil option, and even the selfish one, in that it amounts to tacit support for a universe where extinction and wasted opportunity is the rule” (Moynihan: X-Risk, p. 411).

[51] William Shatner, “My Trip to Space Filled Me With ‘Overwhelming Sadness’,” Variety, 6 Oct. 2022, https://variety.com/2022/tv/news/william-shatner-space-boldly-go-excerpt-1235395113/. Accessed 26 Oct. 2022. With reference to F.W.J. Schelling, Moynihan states that “reason reveals blind attachment to our planetary and stellar birthplace—which is a matter of contingency rather than choice—[it is] just as unreasonable as blind allegiance to one’s nation or race” (Moynihan: X-Risk, p. 386).

[52] Chomard and Harth, Tintenauge und Schattenmund. Victor Hugos Zeichnungen, p. 97, 115.

[53] See for example: Diedrich Diederichsen and Anselm Franke, The Whole Earth: California and the Disappearance of the Outside, Sternberg Press, 2014.

[54] Utilitarians were also concerned with future lives early on: in 1874, Henry Sidgwick argues for a consideration of posterity, although the effects one can have on it “must necessarily be more uncertain,” anticipating a central issue of longtermism today (Henry Sidgwick, The Methods of Ethics, Palgrave Macmillan UK, 1962, https://doi.org/10.1007/978-1-349-81786-3, p. 414).

[55] Ott: Moralbegründungen, p. 96; Interestingly, the early utilitarian Jeremy Bentham argues anthropologically (like Bostrom elsewhere above) when he derives this moral philosophy from the fact that humans tend to strive towards pleasure and want to evade pain (Bentham: An Introduction to the Principles of Morals and Legislation, p. 14). This conjunction of a descriptive anthropological and a normative ethical approach is an explanation but not necessarily a justification, argues Konrad Ott (Ott: Moralbegründungen, p. 100f). Again the natural is invoked to derive the normative.

[56] Ott: Moralbegründungen, p. 116.

[57] Rawls: Theory of Justice, p. 23.

[58] Williams: “Critique of Utilitarianism,” pp. 253–261.

[59] Ibid., p. 260.

[60] Ott: Moralbegründungen, p. 106.

[61] Todd: “Why despite Global Progress, Humanity Is Probably Facing Its Most Dangerous Time Ever;” Cremer and Kemp: “Democratising Risk,” p. 14.

[62] As philosophical foundations of longtermism, transhumanism and utilitarianism are also mainly supported and explored by a very narrow demographic, mostly white men between 30 and 33 (Cremer and Kemp: “Democratising Risk,” p. 6f).

[63] Kieran Setiya, “The New Moral Mathematics,” Boston Review, 15 Aug. 2022, https://www.bostonreview.net/articles/the-new-moral-mathematics/. Accessed 26 Oct. 2022.

[64] For the purpose of evaluating action, MacAskill would “ignore all the effects contained in the first 100 (or even 1000) years” (Greaves and MacAskill: “The Case for Strong Longtermism (GPI Working Paper No. 7-2019),” p. 1; Samuel: “Effective Altruism’s Most Controversial Idea”). In Greaves and MacAskill’s more recent version of the paper, this passage was removed.

[65] Cremer and Kemp: “Democratising Risk,” p. 14.

[66] Vaden Masrani, “A Case Against Strong Longtermism,” 2020, https://vmasrani.github.io/blog/2020/against_longtermism/. Accessed 19 Sept. 2022.

[67] Karl R. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge, Routledge, 2002, p. 486.

[68] Masrani: “A Case Against Strong Longtermism.”

[69] Chugg: “Against Strong Longtermism.”

[70] Cremer and Kemp: “Democratising Risk,” p. 1; Cremer and Kemp are affiliated with the Future of Humanity Institute and the Centre for the Study of Existential Risk and still consider themselves longtermists—“(probs just not the techno utopian kind)” (Cremer: “Democratising Risk - or how EA deals with critics”).

[71] Cremer and Kemp: “Democratising Risk,” p. 6f.

[72] For instance, Toby Ord, a researcher at the Future of Humanity Institute, has, together with existential risks, been referenced by former UK prime minister Boris Johnson, and has served as a policy advisor to the World Health Organization and the World Economic Forum (Ord: “Toby Ord”).

[73] Cremer and Kemp: “Democratising Risk,” p. 27f.

[74] Ibid., p. 26.

[75] Ibid., p. 27.

[76] Piper: “Is climate change an ‘existential threat’.”

[77] Cremer and Kemp: “Democratising Risk,” p. 14.

[78] Ibid., p. 15.

[79] Moynihan: X-Risk, p. 41.

[80] Ibid., p. 95.

[81] Ibid., p. 120.

[82] Ibid., p. 199, 284f; Darwin himself took a rather progressivist stance, thinking that extinct species were simply replaced by better-adapted ones, granting humanity a sense of security (ibid., p. 283).

[83] Ibid., p. 372, 386.

[84] Friedrich Wilhelm Nietzsche, The Birth of Tragedy and Other Writings, Cambridge University Press, 1999, p. 9; Omri Boehm notes that Spinoza’s rationalism, already before Darwin, reduced God to sheer nature and thus influenced the Enlightenment, similarly confronting humankind with the contingent processes of nature (Boehm: Radikaler Universalismus, p. 41).

[85] See for example: Fisher: “Eugenics.”

[86] Harper: “Elites Against Extinction.”

[87] Ibid.

[88] Moynihan: X-Risk, p. 355.

[89] Bostrom: “Existential Risks,” p. 11f.

[90] Theodor W. Adorno, “Progress,” in Adorno et al., Critical Models: Interventions and Catchwords, Columbia University Press, 2005, p. 150.

[91] This reading of Adorno is taken, in strongly abridged form, from Ray Brassier and his lectures within the Summer School of BICAR—Beirut Institute for Critical Analysis and Research—in Beirut, 2022.

[92] Bostrom: “Existential Risks,” p. 11.

[93] Alice Crary, “Against ‘Effective Altruism,’” Radical Philosophy, 2(10), 2021, https://www.radicalphilosophy.com/article/against-effective-altruism, p. 39.

[94] Bostrom invokes the technological completion conjecture, which, if true and following his speculations, would also mean humanity is already locked into a scenario of, e.g., rogue super-intelligent AI.

[95] Alexander Zaitchik, “The Heavy Price of Longtermism,” The New Republic, 24 Oct. 2022, https://newrepublic.com/article/168047/longtermism-future-humanity-william-macaskill. Accessed 26 Oct. 2022.

[96] Lee Vinsel, “You’re Doing It Wrong: Notes on Criticism and Technology Hype,” Medium, 1 Feb. 2021, https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5. Accessed 26 Oct. 2022.

[97] Paris Marx, “Elon Musk Is Convinced He’s the Future. We Need to Look Beyond Him,” Time, 8 Aug. 2022, https://time.com/6203815/elon-musk-flaws-billionaire-visions/. Accessed 30 Oct. 2022.

[98] Christine Emba, “Why ‘Longtermism’ Isn’t Ethically Sound,” Washington Post, 5 Sept. 2022, https://www.washingtonpost.com/opinions/2022/09/05/longtermism-philanthropy-altruism-risks/. Accessed 29 Oct. 2022.

[99] Kathryn Yusoff and Jennifer Gabrys, “Climate Change and the Imagination,” WIREs Climate Change, vol. 2, no. 4, July 2011, pp. 516–34, https://doi.org/10.1002/wcc.117, p. 4.

[100] Ord: “Global poverty and the demands of morality,” p. 191.

About the author

Yannick Nepomuk Fritz

Published on 2023-05-11 14:00