Two-channel video installation
Elvis is a two-screen video portrait of the artist as Elvis and Elvis as the artist. The work continues Libby’s investigations into the deepfake face-swap AI algorithm as both a tool and a subject. Deepfakes are a technique that allows a person in an existing image to be replaced with someone else’s likeness. While faking content is not new, deepfakes use powerful machine learning and artificial intelligence techniques to manipulate or generate content with a huge potential to deceive. The main methods used to create deepfakes are based on deep learning and involve training generative neural network architectures such as generative adversarial networks (GANs).
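The face-swap idea behind deepfakes can be sketched in heavily simplified form. A common architecture (alongside GANs) is a shared encoder with one decoder per identity: the encoder learns features common to both faces, each decoder learns to reconstruct one person, and a swap is produced by encoding a face from A and decoding it with B’s decoder. The toy data, dimensions, and single linear layers below are illustrative assumptions, not the pipeline used in the artwork:

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea.
# Toy random vectors stand in for face images; nothing here is a real model.
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 4                              # toy "image" size and latent size

faces_A = rng.normal(size=(64, D))        # stand-in dataset for identity A
faces_B = rng.normal(size=(64, D)) + 0.5  # stand-in dataset for identity B

E = rng.normal(scale=0.1, size=(D, H))    # shared encoder weights
Da = rng.normal(scale=0.1, size=(H, D))   # decoder for identity A
Db = rng.normal(scale=0.1, size=(H, D))   # decoder for identity B

def step(X, Dec, lr=0.01):
    """One gradient step on mean-squared reconstruction error."""
    global E
    Z = X @ E                             # encode
    err = Z @ Dec - X                     # decode and compare to input
    Dec -= lr * (Z.T @ err) / len(X)      # gradient w.r.t. this decoder
    E -= lr * (X.T @ (err @ Dec.T)) / len(X)  # gradient w.r.t. shared encoder
    return float((err ** 2).mean())

history = []
for _ in range(500):
    la = step(faces_A, Da)                # encoder + decoder A learn face A
    lb = step(faces_B, Db)                # encoder + decoder B learn face B
    history.append(la + lb)

# "Face swap": encode a face from A, reconstruct it with B's decoder.
swapped = faces_A[:1] @ E @ Db
print(history[0], history[-1], swapped.shape)
```

Because the encoder is shared, the latent code captures pose and expression while each decoder imposes its own identity, which is what produces the uncanny hybrid the work plays with.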
Since Elvis and Libby have different facial structures, there is a subtle blurring of identity – a non-binary Elvis – an uncanny hybrid of them both. Audience members come to the piece assuming that both screens show the original Elvis, but then notice the differences introduced by the deepfake. The piece highlights the constructed nature of gender, particularly in relation to recent digital technologies. The work questions the notion of the male author-genius and speaks to our desire and consumption around the cult of celebrity. Elvis invites the audience into a reimagined history where the King of Rock and Roll was actually a womxn.
Touch is response-ability (2020)
Dual interactive installation on Instagram and at the gallery
This work was commissioned by Hervisions at LUX as part of their OUT OF TOUCH programme during the 2020 lockdown. OUT OF TOUCH sought to understand new vocabularies of touch when all we have is digital space, considering how isolation has accelerated our digital vocabulary and what a meaningful language of touch might be beyond the physical.
touch is response-ability, tuuch os rispunsabilitreaeaeaea is a site-specific interactive animation in which the participants’ touch controls the movement of the frames. Using Instagram stories as a medium, the work existed as two durational performances that invited viewers to activate the animation through the act of touch. Each performance lasted 24 hours on the LUX Instagram account.
The first and last stills in each performance were created by Heaney, based on extensive research into representations of the body in computer vision and artificial intelligence and their parallels in art history, highlighting the biases in which bodies are seen and which are neglected in both. The subsequent frames in the animation were generated by passing the initial frame through a quantum computer, which, through entangled pixels, fragments and inverts the image.
In every frame the body from the initial image always exists, but the quantum computer enables us to see it from alternative, multiple perspectives – boundary-less and form-less. The stills are watched by a computer vision algorithm – OpenPose – which loses track of the body as it is released from its encoded shackles.
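The general technique of processing an image with a quantum computer can be sketched at toy scale: pixel intensities are amplitude-encoded into a quantum statevector, entangling gates mix pixel values non-locally, and the measurement probabilities are read back out as a transformed frame. This is an illustrative assumption about the approach, not Heaney’s actual pipeline, and real frames would need many more qubits:

```python
# Toy amplitude-encoding sketch: a 2x2 "frame" becomes a two-qubit state,
# is entangled, and measurement probabilities become the next frame.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                  # controlled-NOT gate
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# A 2x2 grayscale frame, flattened and normalised into a statevector.
frame = np.array([0.9, 0.1, 0.4, 0.6])
state = frame / np.linalg.norm(frame)

# Hadamard on one qubit creates superposition; CNOT entangles the qubits,
# so every output amplitude depends on several input pixels at once.
state = CNOT @ np.kron(H, I2) @ state

# Measurement probabilities become the pixel intensities of the new frame.
new_frame = (np.abs(state) ** 2).reshape(2, 2)
print(new_frame)
```

Because the gates act on superpositions of all pixels simultaneously, the body in the source frame is never erased, only redistributed – which is one way to read the “alternative, multiple perspectives” the text describes.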
The title of the work comes from Karen Barad’s essay ‘On Touching – the Inhuman That Therefore I Am’.
Libby Heaney is a London-based artist and researcher with a background in quantum physics, whose practice connects quantum theory, machine learning and our environment through performance, virtual reality and participatory experience. She uses new technologies such as artificial intelligence and quantum computing to question the machine’s forms of categorisation and to expand technology beyond its predominant purpose.
Libby has exhibited her artwork widely in galleries and institutions in the UK and internationally, including a solo exhibition as part of the 2017 EU Capital of Culture in Aarhus and group shows at Arebyte Gallery (online 2020), LUX/Hervisions (online 2020), Tate Modern (London 2016, 2019), ICA (London 2019), V&A (London 2018), Barbican (London 2019), Somerset House (London 2019), Sheffield Documentary Festival (2018), Science Gallery Dublin (2017, 2018, 2019), Sonar+D (with the British Council, Barcelona 2017), Ars Electronica (Linz 2017), CogX (London 2018) and Telefonica Fundacion (with the British Council, Lima 2017). Libby has received a number of Arts Council England project grants to support her work and is currently a resident of Somerset House Studios.
Getting machines to “see” like people do is one of the main goals of artificial intelligence, with applications ranging from controlling robots and driving cars to detecting terrorists. Machine vision works by first collecting live images through cameras and then interpreting those images with complex calculations in software. Many of the most successful techniques use trained deep learning models to estimate things such as what content is in an image, what emotion someone is showing, or whether a piece of art is a pastiche of a great master.
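The capture-then-interpret pipeline described above can be sketched in miniature: a frame arrives as an array of pixel values and a trained model maps it to a score per category. The labels, frame size, and single linear layer here are illustrative stand-ins for a real camera feed and a real trained deep network:

```python
# Toy machine-vision pipeline: capture a frame, interpret it with a model.
import numpy as np

rng = np.random.default_rng(1)
LABELS = ["person", "car", "dog"]        # example categories, assumed

def softmax(z):
    """Turn raw scores into probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in for a trained deep learning model: one linear layer + softmax.
W = rng.normal(size=(8 * 8, len(LABELS)))

def interpret(frame):
    """Map a grayscale 8x8 frame to a probability per label."""
    scores = softmax(frame.flatten() @ W)
    return dict(zip(LABELS, scores))

# Stand-in for a camera capture: a random 8x8 grayscale frame.
frame = rng.random((8, 8))
probs = interpret(frame)
print(max(probs, key=probs.get))         # the model's best guess
```

In a real system the linear layer would be replaced by a deep convolutional network trained on labelled images, but the shape of the pipeline – pixels in, interpreted categories out – is the same.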