Could bots sway popular opinion via social media?
Chatbots designed to scam users into visiting phishing sites or pages pushing paid services have been employed in online chat rooms and instant messaging services for many years, and now we’re seeing their more advanced offspring pop up across social media platforms.
After all, social media presents massive opportunities not only for profit, but also for control of public opinion, something of ever-increasing importance in today’s world.
In fact, our colleague Andy Russell recently linked us to a BBC article by Chris Baraniuk that examined the experiments of researcher Fabricio Benevenuto and his team, who sought to test just how easy it is to convince Twitter users that completely automated bots were real people:
Benevenuto and his colleagues created 120 bot accounts, making sure each one had a convincing profile complete with picture and attributes such as gender. After a month, they found that almost 70% of the bots were left untouched by Twitter’s bot detection mechanisms. What’s more, the bots were pre-programmed to interact with other users and quickly attracted a healthy band of followers, 4,999 in total.
The implications of this are not trivial. “If socialbots could be created in large numbers, they can potentially be used to bias public opinion, for example, by writing large amounts of fake messages and dishonestly improve or damage the public perception about a topic,” the paper notes.
It’s a problem known as ‘astroturfing’, in which a seemingly authentic swell of grass-root opinion is in fact manufactured by a battalion of opinionated bots. The potential for astroturfing to influence elections has already raised concerns, with a Reuters op-ed in January calling for a ban on candidates’ use of bots in the run-up to polls.
The follower numbers gained by Benevenuto’s bots were not particularly impressive. At approximately 42 followers each, you could gain many more simply by using the tried-and-true “follow a bunch of people and hope they reciprocate” method. The fact that most of the bots completely evaded detection, however, hints at what could be lurking on the horizon.
In our opinion, a team of social media experts using a mix of automation and their own savvy to seed sentiment could sway, and likely already has swayed, public opinion on matters ranging from politics to corporate blunders.
What does this mean for crisis management? Well, just as you wouldn’t (at least we hope!) base your decisions on information from a random person on the street, you can’t trust everyone you hear from on social media. Fact-check before sharing things you read online, be wary of any and all private messages from users you don’t personally know, and remember that, while there ARE bots out there trying to trick you into believing something you shouldn’t, the repercussions of believing the wrong thing will fall squarely on your shoulders.
We’re sure some of you are wondering how you can compete with these botnets, and the answer is really quite simple: use both social media and real-life efforts to recognize and connect with your advocates, and create methods to mobilize them to share when the time comes. People are surprisingly happy to devote some of their own time to organizations or individuals they support if you give them the chance. Just don’t forget to say thank you when they do!
——————————-
For more resources, see the Free Management Library topic: Crisis Management
——————————-
[Jonathan Bernstein is president of Bernstein Crisis Management, Inc., an international crisis management consultancy, author of Manager’s Guide to Crisis Management and Keeping the Wolves at Bay – Media Training. Erik Bernstein is Social Media Manager for the firm, and also editor of its newsletter, Crisis Manager]