Let Me Dream Again (2020)
Generative adversarial network (GAN), film
Let Me Dream Again is a research project realised in part as a GAN-generated film: a model was first trained on early cinema, and its output was then edited by hand to create a new, different narrative. The title references a turn-of-the-century film of the same name, which contains one of the first dream sequences ever portrayed on film, and draws a connection to the concept of ‘dreaming’ often used to describe the output of GANs. In the turn-of-the-century version, the ‘me’ of the title is a clear reference to the protagonist, who longs to return to his fantasy. In a GAN-generated film, however, meaning becomes obscured and identification turns ambiguous: is it a character in the film? Is it the artist? Is it the machine?
There is a spectrum of machine learning art, ranging from works with very little input from the artist to works that are very tightly controlled by them. Ridler has always been interested in working at the fundamental level of the dataset, and a large part of her previous projects has been the creation of her own datasets. In this experiment, she wanted to see how working from a source she did not make would change her practice, and how she could still retain creative control.
The term “latent cinema” has been used to describe the way GAN-generated moving image work appears to place “a camera into the space in the mind of a machine” by showing infinite images. This has led some curators to argue that a major part of moving image AI artworks consists in conveying the entirety of a model’s latent space. To Ridler, however, this ignores an important aspect of film-making – the editing and curating – which is something she wants to explore further in the project. The latent space of a generative model can be seen as the machine parallel of the psychic landscape of our unconscious mind. She wanted to see how she could not just reproduce this landscape but lay a narrative framework on top of it: how can she take what has been generated by a model and reconstruct it into a story?
GANs generate images which, when put together to create a film, move in a strange, unearthly way that defies the usual rules of how objects and people should behave. GANs do not do well with the constraints of time: things morph, change and shift in ways they never would in the real world, making it very challenging to build a structure out of material generated by models such as StyleGAN. Experiments with other models that take in video rather than stills showed that it is possible to get more of a real-world logic, but it was still very difficult to make it work. Instead, structure came from creating a shadow film of stills from the original dataset, which sits behind the GAN-generated film, moving in and out with varying degrees of success and accuracy. Ultimately, hundreds of films were recut and reconfigured, joining things that should not be joined, yet at the same time creating a structure and a coherence.
There are strong parallels between early cinema and imagery generated by machine learning. Early film-makers had to invent a film language, much as artists working with machine-learning-generated imagery today are creating a new form of making work that does not have prescribed rules. In both cases there has been a strong focus on hardware, with early cinema placing emphasis on the machine that created it rather than on the content it created. Both were considered niche technologies in their infancies, and both try to record and reflect the world as seen by those in control, at times exposing implicit biases. Looking through the large amount of early cinema that made up the training set, there are uncomfortable images and phrases that appear as part of the films: references to Jim Crow, racial stereotypes, sexism. Should these be included or erased? This becomes a conversation with the past, a moment of reflection in the present and a potential dream for the future. By combining machine learning with early cinema, many interesting questions arise around permanence. It is unknown how much of early cinema has been lost, though a common statistic suggests that 90 percent of all American silent films no longer exist. Although it might be assumed that celluloid film would be the thing to disappear and decay, it is a physical object that can be preserved in the real world and has been for over a hundred years. No one knows what is going to happen to any of our digital media, and it is, in a different way, just as fragile.
Anna Ridler is known for her generative works which utilise machine learning and data collection as a means of revealing the human aspect of increasingly pervasive deep technology. A core element of her work lies in handmade datasets she creates through a laborious process of selecting and classifying images and text. Through this process, she is able to uncover and expose underlying themes and concepts while also inverting the usual process of constructing large databases.
Born in London in 1985, Ridler received a BA from Oxford University and an MA from the Royal College of Art. She is a former fellow at UAL Creative Computing Institute in London and EMAP. Her work has been exhibited at numerous cultural institutions worldwide, including: the Victoria and Albert Museum, Tate Modern, the Barbican, Centre Pompidou, HeK Basel, the ZKM Karlsruhe, Ars Electronica, the Geffen Contemporary at MOCA and the Design Museum in London.
In 2018, Ridler was named by Artnet as one of the “9 pioneering artists” exploring AI’s creative potential. The following year, she received a nomination for the Design Museum’s 2019 Beazley Designs of the Year award for her work Myriad (Tulips) (2018) along with an honorary mention in the 2019 Prix Ars Electronica awards for her work Mosaic Virus (2018). She was a European Union EMAP fellow and the winner of the 2018-2019 DARE Art Prize. Ridler has received numerous commissions, including Laws of Ordered Form (2020) for the ‘Data / Set / Match’ programme at The Photographers’ Gallery and Mosaic Virus (2019) for the Impakt Festival. Ridler was a 2019–2020 Google Artists + Machine Intelligence (AMI) Grant recipient, where her work will be exhibited on Google Arts & Culture in fall 2020.
Ridler lives and works in London.
By learning the most important common features of complex data such as images, a model can compress and store them with less information, to be expanded later. This makes it possible to construct compressed data points (vectors) and expand them into images that may never have existed. The entire set of possible vectors is called the latent space, and artists explore this space as part of their practice.
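The idea above can be sketched in a few lines of code. This is a minimal illustration, not any model used in the work: PCA (via NumPy's SVD) stands in as a simple, linear version of the learned compression a GAN or autoencoder performs, and the "images" are synthetic vectors invented for the example.

```python
# Sketch of a latent space: compress data to short vectors, expand them
# back, and decode a point between two vectors into data that never
# existed in the original set. PCA is a stand-in for a learned model.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "images": 200 samples of 64-dimensional data that actually
# vary along only 4 underlying directions, plus a little noise.
basis = rng.normal(size=(4, 64))
coeffs = rng.normal(size=(200, 4))
images = coeffs @ basis + 0.01 * rng.normal(size=(200, 64))

# Learn the most important common features (principal components).
mean = images.mean(axis=0)
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
components = vt[:4]  # keep only 4 features

def encode(x):
    """Compress an image into a short latent vector."""
    return (x - mean) @ components.T

def decode(z):
    """Expand a latent vector back into an image."""
    return z @ components + mean

# Reconstruction is close: 4 numbers capture most of each image.
err = np.linalg.norm(decode(encode(images[0])) - images[0])
err /= np.linalg.norm(images[0])

# A point halfway between two latent vectors decodes to an "image"
# that was never in the dataset - the in-between frames GAN films show.
z_new = 0.5 * (encode(images[0]) + encode(images[1]))
new_image = decode(z_new)
print(round(err, 3), new_image.shape)
```

Walking along a path of such latent vectors and decoding each step is, in miniature, what a GAN-generated film does from frame to frame, and why its imagery morphs continuously rather than cutting.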