This is a guest blog-post by artist Nye Thompson.
Nye will be presenting work from her latest project, The Seeker, as part of our monthly Digital Design Drop-in on Saturday 17 February. During the drop-in session, we will follow The Seeker for 24 hours as it travels around the world and attempts to understand the things it sees.
We are being watched. The watchers are not human.
More and more, our lives and our environment are being observed by machines. The rise of lens-based surveillance is nothing new, and the security camera in the corner of the room is now so familiar that we barely notice it anymore. But something else is happening. Inconspicuously, covertly, a momentous paradigm shift is occurring.
Something non-human is watching through those lenses. Artificial Intelligences – AIs – are starting to look back at us and at our world. An entirely new way of seeing the world is evolving. Machines are being trained to see, and through their nascent gaze, they are learning to evaluate, judge and make decisions which will affect us all.
This paradigm shift is being driven by rapid advances in AI image recognition and identification technology, and funded by homeland security and software multinationals. But what does this new artificial vision look like? What do the AIs actually see? What is the topography of their new visual landscape? We have already seen the development of ‘machine bias’ in language analysis algorithms – the result of human bias inherent in their training data. What are the machines learning to find worthy (or unworthy) of their visual attention?
As an artist, my practice involves the development of exploratory software systems. I originally became interested in the machine gaze while working on my global surveillance project Backdoored. Here I was collecting screenshots taken through unprotected online security cameras, and examining the privacy and social impact of the growing ‘Internet of Things’. I became increasingly interested in the machinic genesis of the images I was collecting. They are generated in the instant that a search-bot discovers an unprotected security camera. There is no human agency involved, only an emergent system acting on algorithmic ‘instinct’. These images are records of the machine gaze in action, and I started wondering what the machines could see in the images they had created and how they were learning to interpret that vision.
I created my latest project ‘The Seeker’ as a way of investigating this emerging machine gaze. The Seeker is an AI, a machine entity which travels the world virtually and describes to us what it sees.
The Seeker’s ‘eyes’ are security cameras located all around the world, and it uses the latest image recognition algorithms to describe what it sees.
I named the project for Ptah-Seker, the artist/technologist god of the Ancient Egyptians. An Ancient Egyptian creation myth tells of how he created the world by speaking the words to describe it. This project looks at how this act of describing the world might establish a whole new worldview for machines and humans alike. The Seeker outputs machine-generated images and descriptions of vast numbers of surveilled scenes from around the world – big data. And through the act of description its own conceptual landscape and thought processes are revealed. It’s a process which is somehow mundane and epic at the same time.