The Future Starts Here – @futurepolitica1 on Twitter



July 16, 2018

@futurepolitica1 What is life like in North Korea?
@futurepolitica1 Do you trust yourself?
@futurepolitica1 Who should I vote for?

The political bot for The Future Starts Here was based on the findings of two years of research at the Oxford Internet Institute and Etic Lab into how automation on the internet has been exploited for political propaganda purposes. In the intervening time this has become something of a hot-button issue, and Twitter got wise to some of the more obvious ways in which their platform was being abused by bot builders to promote, distort or disrupt political messages.

Their subsequent modifications to the service stripped the bot of the ability to 'flood' hashtags with bot-driven tweets, so it can't properly stand in as an artefact from the various campaigns of 2015–18, but I'm still proud of its capabilities and it serves well as a demonstration of the potential for social media manipulation.

One of its functions, which is in fact something of a diversion from the 'traditional' activities of the political bots that have been found in the wild, was the chat feature. It was based on a 50-year-old programme designed to keep a conversation rolling without providing definitive statements that would take the conversation beyond the limitations of its code. I initially thought it would be a cute feature that elicited one or two tweets before being ignored. That has indeed happened, but what I hadn't anticipated, and what has struck me in going over these conversations, is the wide, wide range of expectations people bring with them when they start talking to the bot. It has been wonderful for me, knowing what's 'under the hood' and how I expect it to work, to see it used and exposed in ways I hadn't anticipated, and to watch it meet (or fail to meet) people's criteria for being called a bot.
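That keep-the-conversation-rolling style can be sketched as a small pattern-and-reply loop in the spirit of the 50-year-old programme mentioned above. The patterns and stock replies below are illustrative assumptions for the sketch, not the actual rules used by @futurepolitica1:

```python
import random
import re

# Ordered list of (pattern, candidate replies). Captured groups can be
# reflected back into the reply via str.format, which is what gives the
# bot its deflecting, question-for-a-question feel.
RULES = [
    (r"\bwho should i vote for\b", [
        "Why do you ask me who you should vote for?",
        "What would it mean to you if I told you who to vote for?",
    ]),
    (r"\bdo you (\w+)\b", [
        "We were discussing you, not me.",
        "Why does it matter whether I {0}?",
    ]),
    (r"\bi (?:think|feel|believe) (.+)", [
        "How long have you believed {0}?",
        "What makes you say {0}?",
    ]),
]

# Stock non-committal lines for anything the rules don't cover.
FALLBACKS = [
    "Please go on.",
    "That is very interesting.",
    "Can you elaborate on that?",
]

def respond(message: str) -> str:
    """Return a deflecting reply that keeps the conversation rolling."""
    text = message.lower()
    for pattern, replies in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("Who should I vote for?"))
```

The crucial design choice, as with the original programme, is that nothing here produces a definitive statement: every rule either turns the question back on the speaker or falls through to a vague prompt, so the conversation can run indefinitely without the code ever exceeding its limitations.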

As it is advertised as a political bot, it's unsurprising that we have seen a lot of political commentary. Questions about who to vote for, and about what it thinks of Donald Trump, seem pretty popular, and interestingly could easily be anticipated and thus programmed for, if not by me then by some other political bot designer somewhere in the world. It has made me curious to explore the extent to which a bot might be trusted with a direct answer to such questions, especially given that research has suggested that a direct bot, up front about its intentions, can be more successful in recruiting to an activist cause than something trying to be more 'human'.

Trust is a particularly salient point to raise. Along with political issues, @futurepolitica1 has been asked why people are too fat and what they should have for lunch.

Sometimes the questions assume an omniscience thoroughly undeserved by its simple programming, asking obscure questions that would require a lot of prior knowledge of a subject, or even the kind of introspection that would be difficult for a human (or at least for me!). Sometimes people can't let it go when the bot doesn't live up to their expectations and the conversation descends into argument – or complaints that its English is poor, or people start to get bored because it refuses to share an opinion. My favourite put-down so far is definitely: 'You're just a silly little robot at the V and A and there will be a new exhibition to replace you'.

Then there have been the people who try to test the bot. At least one person has issued it with a Turing Test (it failed), while others have asked it to demonstrate the abilities they know bots have. Maybe I’m reading too much into these conversations, but I could swear sometimes that there is a kind of intellectual jousting going on – or at the very least a desire to prove that the bot isn’t as sophisticated as they know it should be.

I love these two images. In them, the bot has started the conversation, rather than waiting to be spoken to. As it was designed to be asked questions, it actually has a very limited ability to respond to people answering with their own. Despite that, it has managed to generate responses that neatly tie up two short conversations: one is a friendly exchange that could easily be a clumsy chat between strangers in a gallery, while in the other the bot is apparently bragging about its supposed power in a way that, on reflection, would kind of freak me out if I'd been on the other end.

In the wild the builders and users of political bots are constantly revising their creations, not least in response to any new restrictions implemented by the platforms. It might be nice to go back to this one, particularly with fresh eyes as to what people 'want' from their political propaganda bots. There is also a lot of technology out there that would be able to inject some of the more outlandish talents ascribed to the bot into its programming, and presumably it won't be too long before something like this project is in a museum of the past as its descendants rule over our high-tech future.

For now though it is delightful to see the bot being asked ‘Do you trust algorithms to mediate the formation of political consensus?’ and to wait in anticipation for the answer…


About the author




I am a graduate of Imperial College London and currently Managing Partner of Etic Lab, a UK based research group with interests in Artificial Intelligence, Propaganda and Online Communities. I...

More from Alex Hogan