Selected Presentations & Appearances
Many Amerindian myths recount the becoming of all modes of being in precosmological times. In Amerindian thinking, these mythical times posit a common condition shared by all actants before their current metamorphosis. Eduardo Viveiros de Castro’s perspectivism, drawing upon these Amerindian cosmologies, suggests conceptualizing these mythical times as the moment of indifferentiation, or pure potency, in which all ‘actants’ should be considered originally human. In this regard, perspectivism challenges a classical perspective, rejuvenated in the 20th century by certain cybernetic and artificial intelligence theories, which posits that all beings can and should be understood and explained as complex, discrete, rational, mechanical machines.
My primary argument is that, by comparing Amerindian and machine learning mythologies, perspectivism challenges the constrained conceptualization of actants and matter imposed by the latter, which claims the entirety of the world as the fundamental justification for its existence and applicability. This speculative dialogue raises a crucial question about the distinction between what is computable and what is not.
In machine learning classification models, an individual is simply and arbitrarily defined as an array of ones and zeros tethered to a semantic label. This semantic arbitrariness is the mythological consequence of a time before the distinction and differentiation of algorithmic entities: not the Amerindian mythic time of shared humanness, but a mythic time of shared machineness. Just as those mythic times predicate a shared personhood among all actants, machine learning classification models presuppose an indiscernible number of potential entities before the onset of data processing. This myth is indispensable to the planetary applicability of machine learning and is predicated upon another myth about the universal aspirations of computing, namely the idea that everything on this planet is computable. By examining Amerindian creation myths and confronting them with the myth underlying machine learning's ambition to classify the entire world, the question of what is computable helps elucidate certain epistemological biases that reinforce colonialism and contribute to the erasure of non-hegemonic knowledge about the ontology of the world.
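As a concrete illustration of this reduction, the following minimal Python sketch (hypothetical names and random data, standing in for no particular system) shows an ‘individual’ flattened into an array of ones and zeros and tethered to a semantic label:

```python
# A minimal sketch of how a classification dataset reduces an "individual"
# to a binary array tied to a semantic label, before any model has
# differentiated it. All names and data here are invented for illustration.
import numpy as np

# A grayscale portrait, thresholded into ones and zeros: every individual
# shares the same undifferentiated form, an array of bits.
portrait = np.random.rand(28, 28)          # stand-in for any image
binary_individual = (portrait > 0.5).astype(np.uint8).flatten()

# The semantic label is arbitrary with respect to the bits themselves:
# nothing in the array carries "person" as such.
label = "person"
sample = (binary_individual, label)

print(sample[0][:16], "->", sample[1])     # e.g. [0 1 1 0 ...] -> person
```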
Arising at the intersection of interdisciplinary scientific approaches, contemporary artificial intelligence (AI) techniques, such as machine learning and deep learning with neural networks, are often perceived as embodying orderly and impartial rational processes expressed through complex mathematical formulations. Popular claims about the super-intelligence of these models point to their ability to apply rational procedures to amounts of data beyond human capacity. However, taken to its logical extreme, this form of rationality, which I term radical rationalism, can yield machine learning outputs that are incomprehensible to humans, appearing to be generated by an alien form of rationality rather than a human one.
In such cases, machine learning outputs may seem irrational to human understanding, but in reality, they are the unexpected outcomes of a radical rationalism that arises from algorithmic functioning within artificial neural networks. This paper posits two hypotheses: firstly, that human rationality could become an alien form of rationality when radical rationalism is demanded by algorithmic processes within AI neural networks; and secondly, that this alien form of rationality, or radical rationalism, could be utilized as a potential strategy in the context of emergent hacking practices within the field of machine learning. Thus, the notion of radical rationalism in machine learning raises intriguing questions about the potential alien nature of AI-generated outputs and their implications for the evolving field of adversarial machine learning.
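To ground the kind of adversarial practice hypothesized here, the sketch below illustrates an adversarial perturbation in the spirit of the fast gradient sign method (FGSM), using plain numpy and a toy linear classifier as hypothetical stand-ins rather than any deployed model:

```python
# A minimal numpy sketch of an adversarial perturbation in the spirit of
# the fast gradient sign method (FGSM). The linear model and data are
# hypothetical stand-ins, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=64), 0.0            # a toy linear classifier
x, y = rng.normal(size=64), 1.0            # an input and its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the input x.
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM step: nudge each component epsilon in the sign of the gradient,
# a perfectly "rational" move whose result reads as noise to human eyes.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```

The step is maximally rational with respect to the loss gradient, yet the resulting input is imperceptibly different to a human observer, which is precisely the alien quality at stake.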
If we can speak of artificial neural network (ANN) cultures, shaped both by the spread of machine learning (ML) applications and by the cultural processes these learning algorithms undergo, characterized by a privatization of process, then I propose the term "Neural Networks Visual Counter-Culture" (NVC) to describe the emerging set of visual practices that oppose the dominant values of the current ANN culture, which is predominantly controlled by corporations and security forces. These artistic and political expressions are creating a new visual culture of evasion, camouflage, and invisibility, in which visual data becomes an operational image that challenges the military and economic interests embedded in its functioning. If culture is a site of identity formation, then counter-culture becomes a space where people can subvert and escape their algorithmic identities.
This paper presentation focuses on visual anti-hegemonic data practices and proposes an analysis of these counter-cultural manifestations through the lens of the visibility/invisibility dialectic within the field of computer vision. It aims to explore possible strategies for deceiving the classificatory operations that algorithms perform on historically marginalized people. These practices are examined through a theoretical inquiry into the sociogenic replicator code that perpetuates a colonial epistemology rooted in 17th- and 18th-century European classificatory practices, which were imposed as universal truths determining the hierarchical allocation of humans and non-humans in the modern chain of being.
In her search for a definition of humanity that goes beyond the sociogenic replicator code, in which a second set of instructions defines what it means to be human according to Man1's Renaissance homo politicus and Man2's homo oeconomicus, Sylvia Wynter proposes a wide horizon in which a purportedly ecumenical condition of all humans obliges us to return critically to the origin myths that formed these Man1 and Man2 narratives. Through this critical return, different ways in which this replicator code reproduces itself in an autopoietic fashion emerge. The hypothesis proposed for this presentation is that this sociogenic replicator code, drawing on the cybernetic concepts of feedback and autopoiesis, is inherited by machine learning classification models as a foundational myth, leveraging the monohumanist ideology of the colonial project.
This article proposes to analyze the definition of the human being through the ‘eyes’ of machine learning models dedicated to the recognition and classification of people. To this end, it offers the image of a dialectic of surveillance and social control based on the constant visibility of those affected and the invisibility of those who benefit from the system. Algorithmic vision emerges here as a privileged site for exercising this power, but also as a site of vulnerability, from which it is possible to transform this dialectic and the power relations it determines. The article then proposes a reflection on the different counter-surveillance strategies that harness creativity and aesthetic experience to imagine other power relations, that is, other possibilities for thinking about what a human being is today.
Classification is a social and political practice that has historically been tied to statistics and, through it, to systems of surveillance and social control. The functioning of machine learning models based on artificial neural networks depends on the statistical classification of data. A large portion of classification models are intimately tied to corporate and police interests deployed in systems of algorithmic surveillance and control, defining the identity of the normal and the abnormal from a statistical mean, but also from worldviews embodied in the very functioning of these neural networks. In many cases this algorithmic classification ends up harming historically discriminated people or communities, thereby reproducing colonial relations of domination. In the face of these problems, adversarial examples appear within the field of machine learning as an occasion to think about strategies or counter-mechanisms against algorithmic classification, opening the field to different political and aesthetic practices that question this classification and the interests that animate it. Adversarial examples also allow us to think about decolonial practices that, through mestizaje, propose forms of resistance from the Global South against a colonial reordering of data and identities.
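For reference, a canonical technical formulation of an adversarial example is the fast gradient sign method of Goodfellow et al., which perturbs an input x with true label y against a model with parameters θ and loss J:

```latex
x_{\text{adv}} = x + \epsilon \cdot \operatorname{sign}\big(\nabla_{x} J(\theta, x, y)\big)
```

Here ε bounds a perturbation small enough to remain imperceptible to humans yet sufficient to flip the classifier's decision.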
Thinking about the act of categorizing requires asking not only who decides and where categories and their objects are decided, but also what kinds of negotiations mediate these decisions and what technical infrastructures make it possible to implement them. Objectivism holds that the mind apprehends the world by reflection, that is, as a neutral spectator of reality. Categories would thus be defined by the common properties of the objects classified; in this way, the categories of the mind would be a modest mirror of the world. This view is fundamental within the cognitive sciences and lies at the heart of many technical infrastructures, such as AI. According to this perspective, algorithmic vision would have the capacity to objectively identify and categorize people, things, or actions from visual data. Yet algorithmic vision classifies for someone: in the great majority of its functional applications it is traversed by specific corporate, military, and police interests. That is, it enables the wide deployment of systems of surveillance and social control that often harm people or social groups historically discriminated against by class, race, gender, sex, or ethnicity. In the first place, machine learning works through the statistical identification and classification of patterns, which is why it has been described as ‘statistics on steroids’. In turn, statistics has been tied, since its origins with the emergence of nation-states, to police control, and it remains so today through machine learning. This strongly police-oriented inheritance is linked to an objectivist classification that seeks to transmute the visual data identified and classified by algorithmic vision into psychological and behavioral traits, turning perceptible surfaces into a kind of transparent interface through which a historically inaccessible intimacy is accessed. How has the machine been trained to see? What reality does it capture? What conceptual assumptions hide behind these technological practices? Specifically: What is classification for the machine? How has it been taught to identify and classify? What is the relation between these processes of identification and classification and systems of surveillance and control? And who benefits from, and who is harmed by, algorithmic vision?
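As a minimal illustration of classification by statistical means, the hypothetical sketch below implements a nearest-centroid classifier: each category is reduced to the mean of its samples, and every new individual receives the identity of the closest mean (all data and names are invented for illustration):

```python
# A minimal sketch of classification as "statistics on steroids": a
# nearest-centroid classifier reduces each category to a statistical mean
# and assigns every new individual to the closest one. Hypothetical data.
import numpy as np

rng = np.random.default_rng(1)

# Two invented "categories", each summarized by the mean of its samples.
class_a = rng.normal(loc=0.0, size=(100, 8))
class_b = rng.normal(loc=2.0, size=(100, 8))
centroids = {"A": class_a.mean(axis=0), "B": class_b.mean(axis=0)}

def classify(x):
    # The assigned "identity" is whichever statistical mean lies closest.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

new_individual = rng.normal(loc=1.8, size=8)
print(classify(new_individual))            # most likely "B"
```

However crude, this is the statistical gesture that scales up, inside artificial neural networks, into the classificatory apparatus interrogated above.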