Essay

Resilient Models. Training on Volatility

In his writings on the accident, the French philosopher Paul Virilio warns us that “when the unexpected is repeated at more or less constant intervals, you come to expect it, and this ‘expectation horizon’ then becomes an obsession”.1 The perpetual arrival of the unexpected leads to a sort of “ambient anxiety”,2 a culture where turbulence, as Yuk Hui argues in reference to Virilio, is already to be expected as part of “a global technological system that is open to the repetitive arrivals of catastrophe without apocalypse.”3

At the core of this development is a shift in world picture, in which the linear chains of rule-based orders were replaced by networks of spontaneous self-organisation and complex systems in which volatility proliferates. It is perhaps best illustrated by the concept of a VUCA world: a world that is volatile, uncertain, complex and ambiguous. The term was popularised in 1987 at the U.S. Army War College, where it was developed into a central strategic military framework following the end of the Cold War, and it has since become a ubiquitous buzzword in corporate and governmental strategy.4 What it highlights, as a proxy for a larger historical development, is a specific understanding of volatility as the perpetual and contingent arrival of events that emerge out of a complex environment and that carry a degree of violence.

Not only has volatility since become perpetual and part of our expectation horizon, it is also increasingly valorised, be it as a technocratic strategy to bring about “change” in the form of disruptive innovation, or simply as “friction” on which AI (artificial intelligence) models are trained and through which they become resilient. Through exposure to the complexity and volatility of the “real world”, these models become more capable, extending the logic of resilience as a specific mode of governance that is focused on adapting to continuous volatility in order to stabilise the system at large and rationalise, or even capitalise on, disruption.

Wols, Große Tache I (1949). Wols was one of the most influential artists of the Tachisme movement, which formed in postwar France, and which favored intuition, gestural abstraction and the absence of premeditated structure.

A Shifted World Picture

In the postwar period, multiple strands of theory were parting ways with a mechanistic, linear and rule-based world-view, embracing instead notions of contingency, chaos, emergence and an open future. Melanie Mitchell traces the notion of a predictable universe to Newtonian mechanics and the picture of the “clockwork universe”, from which the mathematician Pierre-Simon Laplace in 1814 derived the understanding that it was possible, in principle, to predict everything for all time.5 This belief was gradually shaken by physicist Werner Heisenberg’s discovery of the “uncertainty principle” in 1927, only to be finally laid to rest with the advent of the idea of chaotic systems, in which even small uncertainties about a system’s initial state cause massive errors in its long-term prediction.6 Chaos theory thus demonstrated that perfect prediction is impossible, not only in practice but also in principle, which in turn meant that equilibrium models of the world were no longer simply limited in practice, but also no longer representational.
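
To make this sensitivity to initial conditions concrete, the following is a minimal illustrative sketch (my own, not drawn from the sources cited here) of the system Lorenz studied: two runs of his convection equations whose starting points differ by one part in a million drift apart until they no longer resemble each other at all.

```python
# Illustrative sketch of the "butterfly effect": two runs of the Lorenz
# system whose initial conditions differ by one part in a million diverge
# completely. Parameter values are Lorenz's classic 1963 choices; the crude
# forward-Euler integration is enough to show the qualitative point.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    derivative = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * derivative

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])  # a tiny perturbation in one coordinate

for step in range(3001):
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.6f}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The point is not numerical accuracy but that no achievable precision in the initial measurement keeps the two runs together for long.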

From here—and shared across a wide variety of strands of theory—a shift in world picture (Weltbild) unfolded, which Yuk Hui, with reference to Martin Heidegger, describes as a disruptive process in which the previous world picture is fully rejected and the one emerging in its place “becomes a force which repels the movement of the whole culture”.7 Hui calls this “the computational turn”, which he identifies as the shift from the scientific thought of analogue models to a world picture grounded in networks and patterns. This is not only a world of networks, however, in which the previous linear chains of rule-based orders have been replaced; it is also one that has abandoned notions of equilibrium in favor of spontaneous organisation and proliferating uncertainty, contingency and volatility.8

Experiments from Edward Lorenz, the founder of modern chaos theory, who recorded multiple weather simulation runs that never matched up completely. The experiments, later formulated into the “butterfly effect”, showed that even the smallest variations of the initial state would lead to a different outcome. Still from: Chris Haws, “Equinox: Chaos” [video], YouTube (televised by Channel 4, November 12, 1988, uploaded January 19, 2017), www.youtube.com/watch?v=lnkovGeASzE.

As a response to this reconfiguration of the world beyond equilibrium, new forms of neoliberal management such as resilience emerged, which Ben Anderson has termed “anticipatory logics”, and which break with the concept of risk as “calculable uncertainty” based on induction from past events.9 In contrast to both precaution and pre-emption, resilience prepares for the time after a perturbation has occurred and stops the terminal effects on “valued life” rather than the event itself: “a resilient system is one that can adapt, transform and recover post events.”10 In this sense, referring back to Yuk Hui, resilience could be understood as a mode of managing and normalising the repetitive arrivals of catastrophe without apocalypse.

The concept of resilience originally emerged in the context of ecological systems, where it referred to the capability of living systems to absorb perturbations.11 A central shift leading to this definition took place in 1973, when the Canadian ecologist Crawford Stanley Holling published a paper that reformulated ecological systems: ecosystems were no longer understood in the tradition of the postwar mechanistic belief in equilibrium of first-order cybernetics, but rather oriented “toward the contemporary ‘complexity science’ view of ecosystems.”12 Significantly, Holling took further inspiration from complexity by turning resilience into a universal concept that could be applied not only to ecosystems, but also to markets, human populations, communities and other complex systems.

Alongside this abstraction of resilience onto all systems, including social ones, came the notion of deriving value from volatility and contingent disruptions. The Stockholm Resilience Centre defines resilience as a way of “using shocks and disturbances like a financial crisis or climate change to spur renewal and innovative thinking”.13 Here, resilience connects to the notion of disruption as a form of innovation: on the premise that disruption incites renewal, dis-equilibrium is actively produced in order to keep the system flourishing, thus turning it into an object of speculation.

“Instrumental disruption”, as Nicole Sunday Grove calls it, functions by appropriating this logic of resilience through the “cultivation of flexibility and a neoliberal sense of self for the purpose of maintaining the order of an existing system.”14 Critically, resilience and adaptability are turned into frameworks that invade “the realm of character”: the individual is urged to develop a habit of flexibility and to accept disruption and volatility not only as something normal, but as an opportunity to be seized.15 It is no surprise, then, that when asked what students should learn for the future, OpenAI CEO Sam Altman listed resilience first, alongside adaptability, a “high-rate of learning” and creativity.16

Still from: Will Freudenheim and Christina Lu with Dalena Tran, “Vivarium”, Antikythera (posted 2024).

Still from: Will Freudenheim and Christina Lu with Dalena Tran, “Vivarium”, Antikythera (posted 2024), using footage from the ANYmal Robot by the Robotic Systems Lab, ETH Zurich & NVIDIA.

Screenshot from Nvidia Isaac Gym (2021), a physics simulation environment for reinforcement learning research.

Resilient Models

Resilience, adaptability and a high rate of learning are not only values projected onto individuals; they also act as benchmarks against which the training success of AI models themselves can be measured. Former OpenAI CTO Mira Murati delineated how users can participate in the “collective intelligence” of ChatGPT: by releasing the model to the public at an early stage, there is time for “society to adapt”, and for the model to learn from “the friction” with “reality”.17 Learning from this “friction” suggests that perturbations and volatility function as a necessary source of learning for the model, feeding it with the complexities, non-linearities and ambiguities of human and “world” behavior. Of course, the “reality” that Murati hints at is not limited to the “real world” as such, but takes place in all of the many shades in-between reality and the synthetic, and at differing grades of complexity.

It is not a coincidence that AI companies have focused so adamantly on images, videos, text and other such data: not just because they are easily accessible on the web in large quantities, or because they are central to a culture where knowledge is seen as largely visual and textual, but also because they are centered around objects rather than processes. The generation of images, artworks and text does not require the model to have learned the processes that lead up to their creation, but only to imitate their results. When Murati speaks about the friction with the real world, by contrast, she is speaking about training the model on the complexities that emerge out of interaction, which, one could speculate, is why OpenAI collects data on all “conversations” with the model by default.

The distinction between objects and processes becomes much clearer with the training necessary for embodied and physicalised AI, which requires the ability to “capture the indeterminacies of the real”.18 To train AI for such processes, simulated training environments such as OpenAI Gym or DeepMind Lab were developed and made publicly available. In these simulations, AI agents are trained on simulated scenes and in competition, cooperation or co-existence with other non-human and human agents, in environments where supposedly “complexity emerges from end-to-end training in a rich environment.”19
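
To give a sense of what these environments look like in practice, the following is a minimal sketch of the interaction loop that Gym-style simulators expose, written against Gymnasium, the maintained successor of OpenAI Gym; the “agent” here merely samples random actions, where an actual setup would substitute a learning policy. The interface itself encodes the logic described above: observe, act, receive a reward, adapt, over and over.

```python
# Minimal Gym-style interaction loop (illustrative sketch using Gymnasium,
# the maintained successor of OpenAI Gym). The random agent below stands in
# for a learning policy; "training" means adapting that policy to the
# rewards and perturbations the environment feeds back.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for step in range(1_000):
    action = env.action_space.sample()  # a learned policy would decide here
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:         # episode over: reset the world
        obs, info = env.reset()

env.close()
```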

In their video essay “Vivarium”, researchers from the think-tank Antikythera propose a future virtual training ground and simulation engine for embodied AI, where AI models are trained in “toy worlds” in various human-AI configurations, encouraging “learning from basic movement to interactive negotiation, adversarial feints to stigmergic coordination”.20 The researchers also refer, however, to the phenomenon of the Sim2Real gap, which describes the issues that arise when attempting to transfer capabilities learned in a simulated environment to the “real world”. Complexity in AI goes both ways: volatile and complex data is necessary for training purposes, but it also poses a limit to what models can adapt to, since training embodied AI in the real world is not only extremely expensive, but also “impossible to parallelize, and difficult to control.”21 Nonetheless, the idea of training models directly in the real world is a recurring one: in 2018, an MIT Technology Review column proposed that India would not only need an AI revolution to push forward its economy, but also that “India’s mess of complexity is just what AI needs”.22 India, the author argues, could function as an ideal training ground on which AI could “mature” and which would make it “more resilient”.23
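
A common engineering response to the Sim2Real gap, not discussed in “Vivarium” itself but illustrative of the valorisation of volatility, is domain randomization: the simulator’s physics are deliberately perturbed between training episodes so that the policy never encounters the same world twice. The sketch below is hypothetical (the environment and policy interfaces are stand-ins, not a real library API), but it shows the basic move: volatility is injected by design, as a training signal.

```python
import random

def randomized_physics() -> dict:
    """Draw a fresh set of physics parameters for one training episode.
    Ranges are arbitrary illustrative values."""
    return {
        "friction": random.uniform(0.5, 1.5),
        "mass_scale": random.uniform(0.8, 1.2),
        "sensor_noise": random.uniform(0.0, 0.1),
    }

def train(policy, make_env, episodes=10_000):
    """Domain-randomized training loop: every episode runs in a slightly
    different world, so the policy must learn to absorb perturbation.
    `policy` and `make_env` are hypothetical stand-in interfaces."""
    for _ in range(episodes):
        env = make_env(**randomized_physics())  # a new "world" each episode
        obs, done = env.reset(), False
        while not done:
            action = policy.act(obs)
            obs, reward, done = env.step(action)
            policy.update(obs, reward)
```

The resilience the resulting model exhibits is, in other words, manufactured in advance by exposing it to a controlled stream of perturbations.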

As Luciana Parisi argues with reference to Gilles Deleuze, AI models as a “machine ecology infected with randomness” not only represent “an interactive system of learning and continuous adaptation”, but also bring forth a specific “logic of governance driven by the variable mesh of continuous variability”.24 More recently, Louise Amoore proposed a machine learning political order, a specific style of governance that is grounded in the transformation towards the “productive generation of turbulence and division from which algorithmic systems are derived.”25 Importantly, this machine learning political order is not the result of a “causal relationship where ideas from computer science bleed into the state and sovereign logics”, nor is it the same as considering AI models as political decision-makers or describing the automation of previously human governmental processes.26 Instead, it becomes relevant to investigate the broader “epistemic and political transformations” and the generation of “new norms and thresholds”.27 In this sense, this order is enframed by the much older shift in world picture from rule-based orders to a world that operates at the edge of chaos, and which makes it possible, according to Amoore, to profit from “the volatilities of fractured disorder”.28

Still from: Tesla, “Tesla AI Day 2021” (video), YouTube (uploaded August 20, 2021), https://www.youtube.com/watch?v=j0z4FweCy4M&t=7971s.

Part of this emerging political order is a shifted understanding of failure, which in algorithmic systems, as Hui has argued, no longer represents a mere by-product of technological innovation, but instead functions as something “immanent to its operation and maintenance.”29 The accident, that which according to Virilio arrives ex abrupto, is then no longer an inflection point inviting critical reflection, a “techno-analysis” of that which resides beneath,30 but has become essential to technological systems and their evolution. For example, in order to train its AI models, Tesla uses multicam footage drawn directly from its customers’ cars, or as Tesla put it in one of its presentations: “the Fleet Giveth Back”.31 Here, footage of rare occurrences and near-accidents is processed in order to improve the model’s performance in unknown or ambivalent scenarios.
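
One schematic way to picture this “fleet giveth back” loop, purely speculative here since Tesla’s actual pipeline is not public, is trigger-based data collection: ordinary driving is discarded, while near-accidents and moments of model uncertainty are harvested as training material. The interfaces below are hypothetical stand-ins for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """A short multicam segment tagged with signals from the car.
    Hypothetical structure; Tesla's real pipeline is not public."""
    disengagement: bool       # driver abruptly took over
    hard_braking: bool        # emergency braking event
    model_uncertainty: float  # how unsure the onboard model was

def should_upload(clip: Clip, threshold: float = 0.8) -> bool:
    """Trigger-based collection: only rare or ambiguous moments are
    harvested from the fleet and fed back into training."""
    return (clip.disengagement
            or clip.hard_braking
            or clip.model_uncertainty > threshold)

fleet_clips = [
    Clip(disengagement=False, hard_braking=False, model_uncertainty=0.10),
    Clip(disengagement=True,  hard_braking=False, model_uncertainty=0.40),
    Clip(disengagement=False, hard_braking=False, model_uncertainty=0.95),
]
training_batch = [c for c in fleet_clips if should_upload(c)]
print(f"{len(training_batch)} of {len(fleet_clips)} clips selected for retraining")
```

In this framing, the near-accident is literally the most valuable data point.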

Further, Amoore emphasises that the contingency at play here is not just the contingency in the data set, but the fraying and disruption of social relationships and political order itself.32 The failure of politics, as it reveals itself in disruptions of the social, is turned into a recursive “instructive experience” for the algorithm.33 In this way, failure never leads to a broader, structural change in approach, but merely to the adjustment of a specific parameter of the system, while its broader structure is allowed to persist.34

Wols, Ohne Titel (c. 1942).

Conclusion

Referring back to the editorial of this issue, it could be argued that we are in a situation of “bigger cages, longer chains”,35 even if the world picture has shifted to different forms of organisation altogether. As Hui already argued in 2010, we have long moved from “rhizome against tree”, or “non-linearity over linearity”, to a “new tension: the celebration of networks and a new critique yet to come”.36 While the chain as a figure of linear rule-based orders has been largely dispelled, what has emerged in its place must now be “posed as a limit to be transcended”.37

This world picture of self-organising networks has brought forth resilience and adaptability as new benchmarks for both human and machine “intelligence”, grounded in the valorisation of volatility and complexity as a source essential for “learning”. This notion of complexity in the training context stands, of course, in stark contrast to the technically reductive and stylistically “generic”38 output of these models. Resilience, at once a form of neoliberal governance producing a “resilient subject” that faces inward and adapts rather than resists,39 is also becoming a central paradigm for the training of AI models, affirming the idea that “learning” can be reduced to a capacity for “malleability” in the face of uncertain and volatile data.40 By reducing learning to a process of reaction and adaptation to continuous perturbations, the breadth of human experience is drastically simplified, even while complexity is invoked.

It is therefore important to critique this notion of learning as simply “a mode of behavioral conditioning and training”41 by emphasising the true complexities and differences of “systems” that have been rendered interchangeable. Invoking complexity, volatility or “spontaneity”42 as a means of critique can, however, also become a trap: rather than destabilising the dominant narratives around AI and the forms of resilient governance it inspires, it might instead help to sustain them by providing simply another volatile and deviant dataset on which to train.


The arguments of this text build on my master’s thesis, submitted in April 2024. Thank you to Yannick Nepomuk Fritz for formal editing and valuable feedback.

Footnotes

  1. Paul Virilio, The Original Accident, Cambridge: Polity Press, 2007, p. 64.

  2. Ibid, p. 41.

  3. Yuk Hui, “algorithmic catastrophe – the revenge of contingency”, Parrhesia: A Journal of Critical Philosophy, vol. 23 (2015): p. 124.

  4. US Army Heritage and Education Center, “Who first originated the term VUCA (Volatility, Uncertainty, Complexity and Ambiguity)?” (posted 2022), online usawc.libanswers.com/faq/84869, accessed August 7, 2024.

  5. Melanie Mitchell, Complexity: A Guided Tour, Oxford: Oxford University Press, 2009, p. 22.

  6. Ibid.

  7. Yuk Hui, “The computational turn, or, a new Weltbild” (posted 2010), online https://digitalmilieu.net/70/the-computational-turn-or-a-new-weltbild/, accessed July 2024.

  8. One of the earlier accounts of such spontaneous order can be found in the writings of economist Friedrich A. Hayek, whose thought was entangled with findings of neuro-psychology and early connectionist artificial intelligence and who is understood as an important precursor to the sciences of complexity. See: Jack Birner, “Mind, Market and Society. Network Structures in the Work of F. A. Hayek” (1996).

  9. Ben Anderson, “Preemption, precaution, preparedness: Anticipatory action and future geographies”, Progress in Human Geography, vol. 34, no. 6 (2010): p. 781.

  10. Ibid, p. 791.

  11. Julian Reid, “The Disastrous and Politically Debased Subject of Resilience”, Development Dialogue, vol. 58 (2012): p. 71.

  12. Jeremy Walker and Melinda Cooper, “Genealogies of resilience: From systems ecology to the political economy of crisis adaptation”, Security Dialogue, vol. 42, no. 2 (2011): p. 145.

  13. Stockholm Resilience Centre, “What is resilience?” (posted 2015), online https://www.stockholmresilience.org/research/research-news/2015-02-19-what-is-resilience.html, accessed August 5, 2024.

  14. Nicole Sunday Grove, “Receding resilience: On the planetary moods of disruption”, Review Of International Studies, vol. 49, no. 1 (2023): p. 10.

  15. Silvio Lorusso, Entreprecariat: Everyone Is an Entrepreneur. Nobody Is Safe, Eindhoven: Onomatopee, 2020, p. 29.

  16. Sam Altman quoted from an interview with Emily Chang: “Satya Nadella & Sam Altman: Dawn of the AI Wars” (video), YouTube (uploaded August 18, 2023), www.youtube.com/watch?v=6ydFDwv-n8w.

  17. Mira Murati in: “OpenAI CEO Sam Altman and CTO Mira Murati on the Future of AI and ChatGPT” (video), WSJ Tech Live 2023, YouTube (uploaded October 21, 2023), www.youtube.com/watch?v=byYlC2cagLw.

  18. Will Freudenheim and Christina Lu with Dalena Tran, “Vivarium”, Antikythera (posted 2024), online vivarium.host/.

  19. Google DeepMind, “Emergence of Locomotion Behaviours in Rich Environments” (video), YouTube (July 14, 2017), www.youtube.com/watch?v=hx_bgoTF7bs.

  20. Will Freudenheim et al., “Vivarium”.

  21. Ibid.

  22. Varun Aggarwal, “India’s mess of complexity is just what AI needs”, MIT Technology Review (posted 2018), online www.technologyreview.com/2018/06/27/240474/indias-mess-of-complexity-is-just-what-ai-needs, accessed August 27, 2024.

  23. Ibid.

  24. Luciana Parisi, “Instrumental Reason, Algorithmic Capitalism, and the Incomputable”, in Alleys of Your Mind: Augmented Intelligence and Its Traumas, ed. Matteo Pasquinelli, Lüneburg: meson press, 2015, p. 129.

  25. Louise Amoore, “Machine Learning Political Orders”, Review of International Studies, vol. 49, no. 1 (2023): p. 26.

  26. Ibid, p. 22.

  27. Ibid, p. 21.

  28. Ibid, p. 22.

  29. Hui, “algorithmic catastrophe”, pp. 131–132.

  30. Virilio, The Original Accident, p. 5.

  31. Tesla, “Tesla AI Day 2021” (video), YouTube (uploaded August 20, 2021), https://www.youtube.com/watch?v=j0z4FweCy4M&t=7971s.

  32. Amoore, “Machine Learning Political Orders”, p. 31.

  33. Ibid, p. 31.

  34. Orit Halpern, “Hopeful Resilience”, e-flux Architecture, Accumulation (2017).

  35. Jule Köpke, Charlotte Eifler, Livia Emma Lazzarini and Paolo Caffoni with Yannick Nepomuk Fritz, “Chaining”, UMBAU Journal, vol. 3 (2024), https://umbau.hfg-karlsruhe.de/posts/chaining.

  36. Hui, “The computational turn, or, a new Weltbild”.

  37. Ibid.

  38. John Herrman, “Is That AI? Or Does It Just Suck? AI is becoming synonymous with things that are unbelievable, generic, or just a little bit off.”, New York Magazine Intelligencer (posted 2024), online www.nymag.com/intelligencer/article/is-that-ai-or-does-it-just-suck.html, accessed October 9, 2024.

  39. Reid, “The Disastrous and Politically Debased Subject of Resilience”, p. 74.

  40. Here, I am following a line of thinking on neural networks from Ranjodh Singh Dhaliwal and Théo Lepage-Richer with Lucy Suchman, Neural Networks, Lüneburg: meson press, 2024, p. 4.

  41. Orit Halpern, “Becoming Smart”, LAS Art Foundation (posted 2023), online www.las-art.foundation/explore/becoming-smart, accessed October 3, 2024.

  42. Here, I am referring to William Davies, who argues that “spontaneity” has become appropriated in the “reaction economy”, asking how any of us “can become comfortable with our own freedom, our own spontaneity, against the backdrop of surveillance capitalism”. See: William Davies, “The Reaction Economy”, London Review of Books (posted 2023), online www.lrb.co.uk/the-paper/v45/n05/william-davies/the-reaction-economy, accessed October 3, 2024.

About the author

Lars Pinkwart

Published on 2024-10-11 12:30