The social media platform Twitter has a bot problem. Twitter has been under increased scrutiny lately for hosting hundreds of thousands of accounts that appear to belong to real people but are in fact “bots” – automated accounts created en masse to flood the platform, usually to push political messaging.
There are plenty of allegations that the English-speaking Twitter world was flooded with bots from nation-states like Russia to support Brexit in the UK and to promote or denigrate presidential candidates in the United States. Just this week, many high-profile, right-wing Twitter users have noted that their accounts have been frozen and their follower counts have plummeted in what they have termed #TwitterLockOut, though other Twitter users have argued that this was a long-overdue purge of fake accounts.
These bots are easy to deploy and effective at plastering propaganda to influence discussion and divide populations, and they’re not a small presence: One study last year estimated that bots make up nearly 15% of Twitter users in total – about 30 million – double Twitter’s own estimate of bots on their platform.
Bots are by no means limited to supporting American right-wingers on Twitter; they are becoming an issue on all major social media platforms, especially Facebook and Instagram, across almost all countries and languages. If you’re on social media at all, it’s worth asking yourself: Can you tell when you’re talking to a bot?
Even if you’re smarter than the average bear, it’s not always easy to tell the bot accounts from real ones. (Bot creators are getting better by the day.)
- If the account claims to be representing a major politician or celebrity, check to verify that this account isn’t an impersonator. There’s a blue “verified” checkmark that Twitter bestows on accounts that have been proven to be owned by who they claim to be. That said, the check doesn’t exist for all official accounts, so this method isn’t fool-proof. Still, when possible, look for the blue check.
- Any account that has a generic blank user profile photo (previously it was the Twitter “egg”) and a username that is a noun followed by a bunch of random numbers is very likely a bot.
- Even a genuine-looking profile photo can be deceptive. Many bots pull photos from public social media profiles or even stock imagery to give their profiles an authentic veneer. Try doing a reverse Google Image search on a profile photo for an account you suspect might not be real – chances are it belongs to someone with a completely different name.
- One of the latest ploys Twitter bots use is generating biographies (the descriptive text underneath your name) with random nouns and descriptors to make the profile look somewhat genuine. If the biography looks disjointed and doesn’t make much sense – e.g. the profile photo is of a young girl in a bikini, and the profile says “grandmother of 5, devoted husband,” that’s a big red flag.
- Does this user engage with people in conversation in a meaningful way, or does it just spit out statements, hashtags and links without any real interaction with other users? Yes, more sophisticated bots can manage something resembling a back-and-forth conversation, but most of the basic ones flooding Twitter are rather spammy and one-note – don’t expect a meaningful response if you ever tweet at them.
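For the technically inclined, the rules of thumb above can be combined into a rough screening heuristic. The sketch below is purely illustrative – the function name, scoring scheme, and thresholds are assumptions of this article, not anything Twitter or any bot-checking tool actually uses – but it shows how a few of the signals (default photo, noun-plus-random-numbers username, broadcast-only behavior) might be checked programmatically.

```python
import re

def bot_suspicion_score(username: str, has_default_photo: bool,
                        replies: int, total_tweets: int) -> int:
    """Score an account from 0 (nothing suspicious) to 3 (very bot-like)
    using the rules of thumb above. Thresholds are illustrative only."""
    score = 0
    # Rule of thumb: a word followed by a long run of digits,
    # e.g. "patriot83749102", is a common auto-generated username shape.
    if re.fullmatch(r"[A-Za-z]+\d{4,}", username):
        score += 1
    # Rule of thumb: a blank/default profile photo (the old Twitter "egg").
    if has_default_photo:
        score += 1
    # Rule of thumb: broadcasts statements and links but almost never
    # replies to anyone (here, replies make up under 5% of all tweets).
    if total_tweets > 0 and replies / total_tweets < 0.05:
        score += 1
    return score

# A hypothetical broadcast-only account with a default photo:
print(bot_suspicion_score("patriot83749102", True, 1, 500))  # → 3
```

A high score here isn’t proof of anything – plenty of real people have default photos and lurk without replying – which is exactly why the article’s advice is to treat these signals as red flags to investigate, not verdicts.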
There are also tools and websites that claim to track bot activity on Twitter and say they can even check if an account is a bot for you. These tools can be handy to confirm suspicions, but keep in mind that any tool is ultimately an extension of its creator – a bot checker tool could be completely reputable and trustworthy, or it may have its own political agenda.
In the end, trust your gut if something feels off with the account you’re talking to, and if you feel so inclined, report any suspicious accounts or bots to the social media platform to help keep interactions online genuine and as bot-free as possible.
Source: Naked Security