Machine Vision symposium

July 6, 2021

On Friday 28 February 2020, just before the London lockdown in what now seems like a very different time, a symposium took place at the V&A that brought together scholars, artists and curators to discuss the ways computers ‘see’ the world. The event, entitled ‘Pre-Histories and Futures of Machine Vision’, was an opportunity to discuss the history of computer-generated images and early experiments in AI, as well as the ways that recent developments in machine learning and computer vision are impacting contemporary art and culture. The symposium was the culmination of a year I had spent as a visiting fellow at the V&A Research Institute exploring the V&A’s early computer art collection and thinking about the museum’s projects that have focused on contemporary digital design and artificial intelligence. The V&A seemed to me an ideal place to trace a history of ‘machine vision’ from its origins in the 1960s to the recent explosion in computer vision and machine learning technologies that is significantly altering our contemporary world (from automated vehicles, to industrial robotics, to facial recognition systems). After all, where else but the V&A would it be possible to simultaneously visit an exhibition on contemporary video game design (Videogames: Design/Play/Disrupt), a display of some of the most canonical computer art works of the 1960s and 70s (Chance and Control: Art in the Age of Computers), and a showcase of artists and designers working with and questioning AI technologies (the Artificially Intelligent Display)?

Quadrate (Squares) print by Herbert W. Franke, 1969/70. Museum no. E.113-2008 © The artist / Victoria and Albert Museum, London

Art and design are an important lens through which to examine the history of machine vision, as the early moments of computer art occur in parallel with the first developments in computer vision. At the same time as the calculating machine was first being transformed into an image-making machine, early attempts were being made to allow computers to process images and begin to approximate human vision. Our contemporary visual culture is increasingly filled with the products of this research – images generated by machines and reliant on computer vision technology – from the notorious ‘deep fakes’ that circulate online to the multiple image filters provided by our current photo apps. Artists and designers are engaging with these new computational systems, producing creative and critical projects that are shaping our understanding of these emergent technologies, while highlighting the social and ethical concerns they raise.


The morning session of the symposium focused on the early history of computer images and AI. Our first speaker was V&A Senior Curator Douglas Dodds, who took us on a visual tour of the museum’s Computational Art collection, an expansive group of works that includes key figures such as Herbert Franke, Vera Molnár and A. Michael Noll. The V&A first began collecting computer artworks in 1969 in the wake of the influential Cybernetic Serendipity exhibition at the ICA. If, as Dodds suggests, those early acquisitions may have originally seemed somewhat out of place in a museum of design objects, their important connection to contemporary digital visual culture is now unmistakable. The next speaker, Zabet Patterson, author of Peripheral Vision: Bell Labs, the S-C 4020, and the Origins of Computer Art, brought us even further back in time to the first ‘Computer Art Contest’ showcased in the pages of the publication Computers and Automation in the early 1960s. Her talk focused specifically on the selected winners of the contest in 1963 and 1964, images produced not by an artist, but by the U.S. Army Ballistic Research Laboratories. Patterson suggests that these technical images, produced for military use, are representative of the ‘new regimes of perception’ coming into being at this time, a period when both scientists and artists were ‘working to reconceive vision as an adjunct to the computational machine.’ My own contribution to the morning session was an exploration of early experiments in AI by UK computer artists and researchers, focusing on three significant figures: the British neurophysiologist and computer vision pioneer David Marr; the cybernetic sculptor Edward Ihnatowicz; and Harold Cohen, the creator of the automated image-making system AARON. Although differing greatly in their methods and approaches, the three figures share the common notion that the most crucial insights produced by their early experiments in artificial intelligence are actually revelations about human sensory and cognitive systems – our own apparatuses of vision, perception and meaning.

The afternoon session of the symposium moved our discussion into considerations of the present and future of machine vision and shifted the perspective towards the views of artists and designers employing these new technologies in their work. Our first speaker of the afternoon was the artist and researcher Anna Ridler, who explained her use of machine learning in her creative practice. Her talk focused particularly on the emergence of Generative Adversarial Networks (GANs), a form of deep learning neural network used to generate images. Ridler’s own work highlights the importance and creative potential of the datasets that act as the often unseen training material underpinning the ‘automated dreaming’ of neural networks. Our next speaker, the artist and animator Alan Warburton, introduced us to the concepts motivating his first solo exhibition, RGBFAQ, which took place this year at London’s Arebyte Gallery. Warburton draws from his experience working with computer-generated images and his intimate knowledge of the software used to produce them. His talk culminated in a discussion of ‘synthetic datasets’, computer-generated images used to train AI systems. Synthetic datasets are the place, according to Warburton, where the parallel histories of CGI images, computer vision and machine learning intersect. Our final speakers of the day were the designer Tobias Revell and the V&A Curator of Digital Design Natalie Kane, who work together under the name Haunted Machines. The design duo presented their ongoing investigations of the increasingly automated production and dissemination of images. From photo-realistic architectural visualizations to animated Instagram personalities, Kane and Revell reveal the social and political stakes involved in what feels like a race ‘to render and disseminate the most convincing or powerful imagery the fastest.’ The symposium concluded with a roundtable discussion, chaired by V&A Research Institute Director Joanna Norman, drawing links between these histories and futures of machine vision.

All of the symposium presentations are now available as audio recordings.


