Photo by Amy Woodward

WHAT MAKES US HUMAN? • The theme of PhotoVogue Festival 2023

Image in the age of AI

Since the first edition of PhotoVogue Festival, we have dedicated the event to themes we believe are ethically and aesthetically crucial: from the female gaze to issues of representation, from representations of masculinity to inclusivity, and so on.

In the last edition in 2022, we explored how the ubiquity of images influences our ability to understand experience and react to events around us. 

With this in mind, today we cannot avoid addressing the staggering development of artificial intelligence (AI) and the anthropological revolution it is already bringing about, perhaps comparable to the invention of the wheel or the emergence of alphabetic writing. We are convinced that the risk of tackling the subject is worth taking, even though we are well aware that the speed of progress in this field threatens to render many of today’s considerations obsolete tomorrow.

AI will therefore be the theme of PhotoVogue Festival 2023 in Milan from 16 to 19 November. 

Between now and the event, we will track the evolving intellectual debate with a series of articles published on our platform.

In our usual style, the event itself will offer a well-structured programme of talks with leading figures and experts at the forefront of this revolution, with particular attention to the pressing ethical and aesthetic issues raised by the development of AI in relation to the creation of images.

As for the exhibitions, most of the works on show will be created by our dear, flawed but unique human intelligence. Regarding AI-generated content, meanwhile, we intend to show contributions that suggest a virtuous use of this technology. For example, we would like to present a movement of artists who aspire to move beyond questions of representation through images created with AI, or the opportunities AI offers to give visible form to scenes that would otherwise exist only in the imagination, thereby freeing creativity from the constraints of the real. In all these cases, the use of AI is openly declared.

But now let’s take a look at the theme of the 2023 edition: AI.

In the 1950s, the mathematician Alan Turing devised a test to determine whether a machine could exhibit “intelligent” behaviour. It was the beginning of the AI adventure. The pace of events then accelerated exponentially, progressing from neural networks to the first industrial applications. Since then, its use and development have become ever more pervasive, to the point where we have been immersed in AI for some time now without being aware of it. So far, nothing new.

The aim of AI’s industrial use is consistent with the principle of classical advanced capitalism, i.e. to speed up all processes to achieve results in the shortest possible time frame and at the lowest possible costs, according to a logic of time and performance that is increasingly detached from the more human – perhaps too human – rhythms of reflective thought, creativity and personal development. 

Things start to become more interesting and delicate when the development of AI begins to touch on two critical areas of social life: the sphere of health, with AI’s biomedical, diagnostic and prosthetic applications, and its use by government agencies and big companies. In other words, things become interesting when applications of AI concern “life governance” – broadly speaking, the spheres of health and social control.

This is to say that AI affects our lives on multiple levels, and each of these requires the utmost attention.

To start with, we are faced with the big problem of work, which AI renders increasingly obsolete by bringing us closer to the technological unemployment theorised by economist John Maynard Keynes. Or, for those who can afford it, there is the issue of leisure, as Hannah Arendt observed in the mid-20th century. In her book The Human Condition, she described “a society of workers freed from the shackles of labour”, while also highlighting the danger that this society had forgotten the “higher and more meaningful activities for the sake of which this freedom would deserve to be won.” 

Secondly, with AI’s medical and biomedical applications, the boundaries between human and artificial are increasingly eroded. The integration of these applications is becoming ever more structural and indissoluble, and with the advance of AI’s powers of “self-learning”, it almost seems to threaten the assumed primacy of human intelligence. Some are already starting to worry, recalling the famous three laws of robotics defined by Isaac Asimov (“1 – A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2 – a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; 3 – a robot must protect its own existence as long as such protection does not conflict with the First or Second Law”). Others anxiously point back to the science fiction of the 1970s, since everything seems to indicate that the distant future described in those books is already here, or at most in the very near future.

From this point of view, enthusiasm for AI’s progress and unease about the possibilities it puts at our disposal go hand in hand. That is precisely why we need a critical reflection that allows us to frame AI in the right perspective, particularly as regards the relationship between human life and technology and the status of intelligence itself.

In every respect, the anthropological revolution that AI entails is very particular and in some ways paradoxical. Indeed, it does not alter the nature of human life, making it more artificial or less natural. Instead, it leads the most profound and invariant essence of human life to extreme consequences. 

That is to say, it reveals that technology (of which artificial intelligence is the most complex and current expression) and human life are consubstantial. Contrary to much mainstream thinking, technology did not evolve to compensate for human limitations, displacing them into a dimension where they do not belong – that of the machine. On the contrary, as Jean-Luc Nancy wrote, human life is technology by definition. As the philosopher Carlo Sini reminds us, human life is always inscribed and circumscribed by practices of knowledge and of writing – essentially by technology – and those practices, whatever they are – from the stick to the wheel, from runic to alphabetic letters through to cybernetic programming – define human “nature” and expression.

From this perspective, there is no humanity that is not infused with technology, and this is precisely the most unsettling aspect of AI and its revolution. It brings to light the profound nature of human life: the fact that the technological part is inseparable from the natural part. At most, we can think of technology as the internal difference that prevents nature from coinciding with itself, and vice versa. In other words, the revolution brought about by the explosion of AI is a profound experience of truth and the deconstruction of our most deep-rooted preconceptions. 

The problem of intelligence is easily stated. It is very alluring to attribute the sex appeal of intelligence to a machine. But, as of now, the only intelligence in machines is the one that humankind and its writing practices have deposited in them. 

Machines only have what humankind puts inside them, through the process of their implementation and subsequent inputs. And this will also apply when artificial machines produce other artificial machines that can learn by themselves (again according to criteria of self-learning installed in them from the beginning). 

From this point of view, we should be wary of being either seduced or overly alarmed by the progress of discoveries in this field. The substance of our amazement or alarm is always exclusively human, and all too human. God is dead, wrote Nietzsche, but Woody Allen wasn’t far wrong when he added, “And I don’t feel so good either.” It is in this “not so good” that the real game of humanity is played out.

With AI, humans attain or can attain the power to shatter the framework of reality (for example by mixing reality and fiction in deep fakes), or implement their totalitarian visions (as with the extreme use of AI for social control), or definitively dematerialise and de-territorialise the subject (which, embedded in a parallel world of virtual experiences, becomes or can become uncoupled from bodily reality to a previously unthinkable degree).

Thirdly, the other extensive field of reflection raised by developments in AI concerns its political applications in the broad sense. The scandal surrounding Cambridge Analytica – the infamous British consulting firm – has already shown how AI can enable highly effective and targeted interference in a country’s political life by manipulating the voting population.

But what should we make of its large-scale use, for example in biometric recognition and surveillance programmes in public spaces, directed at the control and censorship of political opponents in totalitarian regimes (or in the authoritarian drift of democratic ones)? In other words, to what extent does AI lend itself to a totalitarian exploitation of its capabilities? In this regard, let us remember that every democratic and representative system presupposes that its interlocutors are its citizens – that is, reflective and critical individuals – and that this is only possible when the public space of political life is balanced by the private space of individual reflection. Both of these spaces are threatened by the potential AI offers to control and manipulate our subjectivity.

We had a first glimpse of all this during the last edition of PhotoVogue Festival, in a talk given by Fred Ritchin, an enlightened thinker who has been working on this topic for some time and will soon be publishing a book on the subject.

We would like to continue from those initial insights, approaching the problem from the angle that most directly concerns and qualifies PhotoVogue: images – their reproduction (a problem dear to Walter Benjamin), above all their production and creation, which AI allows us to detach from any real-world reference, and their use. The questions we will touch upon are far-reaching and unsettling.

Here are some examples. 

AI is now able to “learn” with increasing margins of autonomy. Nevertheless, as of now, someone still has to provide AI with materials for the learning processes. But what happens if those materials are images that can be found on the internet and social media? 

Think, for example, of the big players in image creation, DALL-E, Midjourney and Stable Diffusion, which are today democratically available to anyone. By fishing from the immense pool of images on the web, these systems can only reproduce and therefore reinforce the stereotypes and prejudices that afflict the source material and the criteria governing their “self-learning” processes. 

The political and cultural question is therefore both serious and banal: there is no such thing as neutral AI, just as no human mind is free of prejudices, whether cultural or political. How can we hope to tackle these biases, which already today instil in AI models a vision of humanity skewed towards the anthropological and socio-cultural horizon of Big Tech?

The use of AI also has many repercussions for aesthetics and artistic life. Indeed, AI is able to produce images in the style of any photographer, with results indistinguishable from the shots of the photographer being imitated. All you have to do is train it to do so.

But at this point, what about the authenticity, originality and uniqueness of a human work and creation? Could AI ever be more than simply derivative? And with an explosion of the results of artificial “creativity”, what would be the consequences for the archive of human works? 

To give an example, think of the work of Robert Capa. Thanks to DALL-E, we can already produce images à la Capa. We could therefore invent the lost photos of Omaha Beach, starting from the huge archive of images in Hollywood’s war cinematography. Or we could sharpen the few surviving blurred photos, perhaps supplementing them with other elements we might deem useful to the image’s meaning. What would become of the delicate poetic quality of those original shots? After all, their essence also lies in the misadventures of those negatives, and their strength lies not so much in the realism of the image as in the quivering blur that defines it.

More speculatively, what would become of the original archive? How can we preserve its uniqueness? How can we still think in terms of the category of uniqueness at all? More prosaically, how can we ethically, aesthetically and commercially manage AI’s productions (even just considering the question of copyright)? And how can we come to terms with images that, thanks to AI, are increasingly free of any real-world reference?

In a recent article for Vanity Fair, Ritchin recalled an episode from the early days of the digital revolution, when National Geographic “had modified a photograph of the pyramids of Giza so that it would better fit on its cover, using a computer to shift one pyramid closer to the other. The magazine’s editor defended the alteration, viewing it not as a falsification but, as I wrote then, ‘merely the establishment of a new point of view, as if the photographer had been retroactively moved a few feet to one side’.” The most disturbing part of this incident is the editor’s response. Even in its infancy, the transition to digital was already making explicit something that was already present in film photography: the mediated relationship between photography and reality. Digitalisation, however, added an extreme dimension capable of undermining the customary opposition between real and virtual. This forces us to rethink our classical frameworks of categorisation, and it is by no means certain that we possess the conceptual tools to reflect on these problems.

So far we have talked about the production of images starting from an archive that AI draws upon, and of course the relationship between artificial self-learning systems and the human creator who feeds them based on particular choices. But a new threshold is about to be crossed. Some images created with AI are indistinguishable from images produced by people. This in itself is enough to shake the foundations of photojournalism and the documentary importance of photography. 

But to take a further step, AI can already be used to create semblances of reality, as in the case of deep fakes. This prompts a re-examination of the relationship between truth and reality, and of the very idea of reality itself, in a way that would unsettle even the most daring postmodern theorist. Some enlightened photojournalists have been trying to draw attention to this aberration for some time.

Think of Jonas Bendiksen’s The Book of Veles, which takes its title from a text dating from 1919 that is considered a historical forgery. Bendiksen’s book is about the town of Veles in North Macedonia, famed as an epicentre of fake-news production, and it contains photos that are themselves false, having been manipulated with the insertion of 3D characters in post-production. As Martino Pietropoli wrote in an article on Medium, the book is therefore based on three levels of falsification – that is, on three different manifestations of the false: current and informative falsity, historical falsity, and lastly documentary falsity, i.e. the result of Bendiksen’s editorial work.

Meanwhile, @absolutely.ai won a competition with an AI-generated image and then confessed to the ruse in order to publicise the incredible quality that artificial systems are now able to guarantee. This gesture, perhaps unwittingly, exposes the danger we are facing – a danger both ethical and ontological, given that we can now create images from scratch that bear no relation to any “real” object, whether for noble and honourable purposes or for far more dubious motives.

For example, as Ritchin pointed out in his Vanity Fair article, we could create images about climate change to show everyone the seriousness of its potential effects and raise public awareness. But how might a totalitarian regime use this power? The fact is that AI massively increases the potential for political control over freedom of opinion, from press censorship to the creation of harmful virtual worlds on a level that would make Nazi propaganda look like child’s play. 

On the other hand, and more generally, AI carries out a disturbing rewriting of subjectivity – often unwittingly – in terms of its pliability and calculability. This, in turn, is nothing other than an implementation of the anthropological prejudices of its creators, as can be appreciated, for example, by considering the problem of AI self-learning and creativity.

As already stated, AI can produce images in the style of any photographer, with results that could be indistinguishable from the photographer’s original work. AI just has to be trained to do it. But this success has nothing to do with the remarkable uniqueness of artists who have fully explored their medium and revolutionised it. To stay in the field of photography, think of Robert Frank, Irving Penn, Cindy Sherman, Nan Goldin, etc. So could AI ever invent new artistic languages and idioms? 

The question is ill-posed because we first need to understand how AI learns. 

At present, it learns according to strictly informational criteria (both cognitively and emotionally) – that is, by transforming the world it has “experienced” into information. It therefore follows a merely deductive or inductive path (let’s set aside intuition, an eccentric idea we have discarded at least since the time of Charles Sanders Peirce, seeing as intuition only holds up when seen from afar; seen up close, it is always some form of reasoning).

In contrast, human learning and the creativity that can result from it are not just informational. Humans do not transform the world into information – or rather, in some sense they do, but they simultaneously do much else. Humans are in a bodily, unique and erotic relationship with the world. Their feelings are always embodied in the layers of life, of which only a small part ends up becoming “information” in the human mind. Yet it is these feelings that underpin and fuel the revolutionary creativity of art, philosophy and literature.

Creativity and the self-learning of AI only go hand in hand if we reduce creativity to the informational components of worldly experience; only if we reduce the subject to this calculable and manipulable dimension. And this is perhaps the greatest danger that lies before us. 
