Building on our previous posts on artificial intelligence, this week’s post takes a look at “social bots”, their role in opinion-building and decision-making processes – and how thinking like a lawyer can be both a disadvantage and an advantage in this context.
With discussions centered around the recent US presidential election and the UK’s Brexit referendum, and with several national elections coming up in Europe (the Netherlands and France in spring, followed by Germany later this year), the question of how social bots are used to influence traditional media, politicians and voters has steadily gained attention.
Social bots are essentially “fake accounts” in social networks which – based on underlying programs managed in the background by bot software architects – act like human users, post comments and communicate opinions. Many of these bots are not even programmed for political purposes; they simply collect data, chase clicks and push advertisements into virtual space. At the same time, however, social bots attempt to influence political debate by systematically creating dedicated trends and key discussion topics through nothing more than likes, reposts or retweets of specific content on a massive scale. This can create false impressions and distorted perceptions of what an objective, authentic, relevant or popular (public) opinion is at a given point in time. Add the abundance of digital information and fake news, combine it with a pinch of confirmation bias and a bit of anchoring to the manufactured trends – and sheer quantity starts to pass for quality information, especially when there is little time to review the “evidence”.
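How little coordinated activity it takes to manufacture a “trend” can be shown with a toy simulation. The numbers and the trending metric below are illustrative assumptions, not data about any real platform: 1,000 human accounts spread their posts evenly across ten topics, while 50 bots repeatedly push a single topic.

```python
import random
from collections import Counter

random.seed(42)

# Illustrative toy model: 1,000 human accounts each post about one of
# ten topics chosen at random; 50 bot accounts each repost "topic_0"
# five times. All figures are assumptions for demonstration only.
topics = [f"topic_{i}" for i in range(10)]
human_posts = [random.choice(topics) for _ in range(1000)]
bot_posts = ["topic_0"] * (50 * 5)

# A naive "trending" metric that only counts raw volume cannot tell
# authentic interest apart from coordinated amplification.
trend = Counter(human_posts + bot_posts)
print(trend.most_common(3))
```

With roughly 100 genuine posts per topic, the 250 bot reposts are enough to put their topic firmly at the top of the ranking – the “quantity becomes quality” effect described above.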
Legislators are already under pressure to act, even though only a few empirical studies or research initiatives exist to date that analyse whether social bots can significantly influence opinion-building and, consequently, our decision-making processes. [Recent scientific research includes publications by Yazan Boshmaf (Qatar Computing Research Institute), Emilio Ferrara (University of Southern California) and Simon Hegelich (Technical University of Munich), to name just a few. You might also want to check out the following ongoing projects in Germany: “PropStop” and “Social Media Forensics”.]
According to Prof. Simon Hegelich, political data scientist at the Technical University of Munich, lawmakers have two ways to react. One option is to focus on structures that ensure an open, pluralistic dialogue, supported by effective communication platform design and by eliminating threats, insults and harm on the net through consistent enforcement against violations. The second option, visible in the current political debate and dangerous in Hegelich’s view, is to regulate the actual content – which leads to difficult questions such as who should decide what is undoubtedly true and what is not.
A very practical way to escape the influence of social bots in the digital data jungle is to start by checking doubtful Twitter accounts against “BotOrNot”, a tool that helps identify bot profiles. Or, by playing our own devil’s advocate, we can outsmart potential traps and biases: consider opposite views with an open mind, seek balanced information, look for things we might have overlooked or reasons why we could be wrong, and widen our own frame of reference.
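The intuition behind such bot-scoring tools can be sketched with a few simple heuristics. To be clear, the features, thresholds and weights below are invented for illustration – they are not the actual method used by BotOrNot:

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical account features; real detectors use far richer signals.
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int

def bot_score(acc: Account) -> float:
    """Return a rough 0..1 score; higher means more bot-like.
    Thresholds and weights are illustrative assumptions only."""
    score = 0.0
    if acc.posts_per_day > 50:            # inhumanly high posting frequency
        score += 0.4
    if acc.account_age_days < 30:         # very young account
        score += 0.3
    if acc.following > 0 and acc.followers / acc.following < 0.1:
        score += 0.3                      # follows many, followed by few
    return score

suspicious = Account(posts_per_day=120, account_age_days=10,
                     followers=15, following=2000)
print(bot_score(suspicious))  # prints 1.0 for this example
```

A score near 1.0 would flag the account for closer human review – which is also how such tools are best used: as a prompt for scepticism, not as a verdict.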
While this should be a common way of working for all of us, the legal sector will need to play a key role as well – in taking into account the potential influence of social bots on free speech and in developing an appropriate legal framework that ensures safe communication on social networks in the future.
Food for (rational) thought.
What are your experiences with social bots? What is your opinion? Feel free to start a discussion on our Facebook Page!