Models of Hand and Head
Matteo Pasquinelli in conversation with Paolo Caffoni
Figure. 1: Photograph by Mirja de Vries. Originally published in Das Hexenspiel, Köln: DuMont Buchverlag, 1978.
This interview was conducted on the occasion of the publication of the book The Eye of the Master: A Social History of Artificial Intelligence (Verso, 2023). The research for the book was primarily conducted during Matteo Pasquinelli’s time as a professor at HfG Karlsruhe, and many of the themes explored in the publication had previously been debated in his teaching seminars.
The book title alludes to a view “from above”: the eye of the master is the work of supervision historically carried out in workshops, plantations, and assembly lines, as well as the contemporary data governance exercised by gig-economy platforms and social media. Nevertheless, the originality of Pasquinelli’s argument lies in its critique of the many contemporary studies that merely describe artificial intelligence as a technique of control (indeed, from above) over passive subjects—as in the case of “surveillance capitalism.” Instead, it highlights not only that collective knowledge and labor are the primary source that AI encodes and commodifies, but also that such collective intelligence “shapes the very design of AI algorithms from within.”
Two central theses of the book can be roughly summarized as follows: first, artificial intelligence is to be understood as the automation of the social division of labor, not of human biological intelligence; second, and relatedly, social cooperation and abstraction precede the process of their technological automation.
Figure 2: Terminator 2: Judgment Day, 1991.
Paolo Caffoni: What are the blind spots that “The Eye of the Master” is unable to capture?
Matteo Pasquinelli: The metaphor of the poet, to give a short answer: any act of invention and creation. If you allow me a longer answer: the many things that belong to our everyday life, affects and conflicts as acts of invention and creation. “The Eye of the Master” is a disciplinary image from previous centuries that I use (in English) to signify the perspective of AI on the present, understood as a capacity to discipline and manage labour, but also to render knowledge, learning, and education at large. Today, AI represents a specific technique of the statistical modelling of data, and these techniques obviously have their blind spots (of which, for instance, the debate on bias is only the tip of the iceberg: see the Nooscope, which I designed together with Vladan Joler to attempt a larger cartography of the errors, faults, fallacies, and approximations intrinsic to machine learning…). What The Eye of the Master is unable to capture is what is excluded from any image of DALL-E or Midjourney as examples of a normalised synthesis of our cultural heritage. With AI, mass culture has become statistical culture—we still have to elaborate this.
PC: It is curious that you start a book on artificial intelligence with the figure of a truck driver. The application of AI in self-driving vehicles has transformed the common perception of manual skills, such as driving. Indeed, you describe the truck driver as an intellectual or a cognitive worker. An unexpected outcome of the debate around machine learning is the challenge posed to one of the foundational hierarchies in Western culture: “At least since the Aristotelian opposition of episteme (‘knowledge’) and techne (‘art’ or ‘craft’),” the dichotomy between head and hand, mental and manual labor, is today once again up for discussion. Should hand movements be considered akin to intellectual activities?
MP: I think that the body-mind dualism has always been questionable. Teaching at HfG Karlsruhe, I was too often struck by how many colleagues and students kept entertaining the opposition of practice and theory, art and philosophy. Of course, the concrete and the abstract are always intertwined, always in a dialectical relation. We perceive our body through a body map that is continuously projected by our brain: the sense of touch is an illusion, as Buddhist philosophy once argued and as contemporary neurology demonstrates today. Similarly, our ideas, the way we think in general, are continuously constructed by our bodily movements, by spatial and social interactions. Consequently, the distinction between mental and manual labour is outdated. Manual labour is an intellectual activity—you don’t need a philosopher to see that. And mental labour originally meant hand calculation—calculation using your hand, movements of your hand that your mind then internalizes in mental models. The mental manipulation of symbols is another interesting expression: manipulation derives from the Latin manipulus, literally “a handful”—handling by hand. Expressing it in English, Simon Schaffer and Lissa Roberts came up with the elegant expression of the mindful hand, which I mention on the first page of my book. On this, I refer you to the recent work of Lissa Roberts on the labour theory of science1, which says everything as a research agenda. That truck driver was my father, by the way.
PC: In your book, you mention that in today’s technical language an artificial neural network such as Frank Rosenblatt’s 1958 Perceptron is considered a classifier: “an algorithm for statistically discriminating among images and assigning them a class or category (also known as a ‘label’).” You argue that this classification emerges from associations with external conventions that define the meaning of a symbol in a given culture. On the other hand, recent debates on Large Language Models such as GPT and BERT have questioned definitions such as that of the “author,” the “artist,” “language” and more. In your view, is this a technical paradigm shift or a paradox of automation inherent to the ongoing redistribution of the division of labour?
MP: This is a great point and a great question, thanks. You started from a technical definition—the fact that machine learning algorithms such as the Perceptron were originally meant for classification, and how this technology, in a very controversial way, attempted to automate a crucial individual and collective faculty such as the act of interpreting an image. But then you flip the table and ask: hey, but isn’t it the case that labour and its organisation through society changed, that we have seen the rise of cognitive tasks, and that somehow technology mirrored the complexification of tasks in our societies? You say that “GPT and BERT have questioned definitions such as that of the author, the artist, language”—but is it not the case that we started to question the status of author, artist, and language before these large AI models took over? I answer this question with a question of a higher degree.
Figure 3: Cover, The Eye of the Master, London: Verso 2023.
PC: Among the central references in the book, the work of the historian of science Peter Damerow is of particular interest for his emphasis on the social process of learning and its relation to the tools of science, as tools of learning and speculation. Damerow argued that learning involves constructing “mental models” that internalize external actions on real objects and that, vice versa, tools help construct mental models. Where do we position machine learning algorithms in this chain of relationships?
MP: This is another important question. Peter Damerow (together with his colleague Wolfgang Lefèvre) always stressed, in a typical Hegelian way, that every time we use a tool we learn something more than the knowledge that was incorporated in the tool’s design. The developers of AI do not address this question: they often think that the mission of AI is to mimic human intelligence and that this process of automation will escalate into a sort of autonomous superintelligence. On the other hand, we have already internalised these paradigms of thinking in the way we now see society and cultural heritage more and more from a statistical perspective. See, for instance, how the Digital Humanities engage in so-called “distant reading” (that is, the application of statistical tools to cultural analysis, also known as Cultural Analytics).
PC: During the discussion that followed the book launch of The Eye of the Master at the Pro-qm bookshop in Berlin on December 15, 2023, you expressed concerns about the resurgence of an idea of artificial intelligence modelled around a concept of biological intelligence. In contrast, in the 1970s, especially in French thought, significant efforts were made to question the normative definitions of intelligence and abnormal behaviour, such as schizophrenia and madness. What does the insistence on rationality and neuronormativity found in AI tell us about the time we are living in?
MP: It tells us that the normative power over the collective body has shifted from the knowledge institutions of the nation state (such as hospitals, asylums, universities, schools, etc.) to corporate monopolies with their global platforms, vast data centres, and deep learning algorithms. It tells us, as I mention in the book, that Frank Rosenblatt constructed the first neural network, the Perceptron, by automating the principles of psychometrics, which is the discipline that applies statistics to the measurement of cognitive tasks (as done in the IQ test). An institutional discipline such as psychometrics, with a strong normative dimension regarding the human psyche, happened to become a principle at the core of the most successful project of automation, that is, AI. The history of AI is, in this sense, the history of the measurement of intellectual abilities and, on the other hand, disabilities. This may appear as a reading in the tradition of Foucauldian studies and, more recently, the critical epistemology of science and technology. But we could take a step further and see AI as a new system that not only controls abnormal behaviours and psychopathologies, but also orchestrates them. Rereading today the two volumes of “Capitalism and Schizophrenia” by Gilles Deleuze and Félix Guattari and their idea of the machinic unconscious (indebted also to Lacan), one wonders whether AI is not actually a corporate apparatus of hallucination to colonise our unconscious.
Figure 4: François Pain, Min Tanaka à La Borde, still from film. Félix Guattari invited dancer Min Tanaka to perform at La Borde clinic in 1986.
PC: After teaching for seven years at HfG Karlsruhe, you have recently assumed a new position as associate professor in Philosophy of Science and initiated a five-year ERC project at the University of Venice on the “historical epistemology of artificial intelligence” and the study of the “models of reason and unreason.” Could you give us some insight into the direction your research is expected to take and how it differs from the objectives of the book you have just published?
MP: The Eye of the Master is a pre-history of artificial intelligence, we may say, as it stops its overview at the 1960s. I thought it was necessary to illuminate how these machines and algorithms were invented in order to understand which kind of ‘cognitive fossils’ we have inherited from connectionism in the current form of AI. The book mainly follows the making of visual AI systems, from which, however, current Large Language Models (such as GPT) also directly evolved. Rather than the visual paradigm, the ERC project AIMODELS, which I started in Venice in January 2024, will address the issue of language and its formalisation as a precondition for the rise of information technologies and AI. We will be concerned with how language has been central to the forms of production and labour in post-Fordism before AI came to automate linguistic work. In other words, we will try to frame the making of AI as the automation of linguistics, if you allow me this expression, yet always trying to understand it in a social and economic context, not as a technological evolution abstracted from history.
Footnotes
Alexandra Hui, Lissa Roberts, and Seth Rockman, “Introduction: Launching a Labor History of Science,” Isis 114, no. 4 (2023): 817–826, https://www.journals.uchicago.edu/doi/full/10.1086/727646 ↑