Bots are regularly in the news, and nobody has a soft spot for them, least of all social media companies: bots make social networks less attractive to real users.
That is why Twitter, for example, regularly identifies and bans large numbers of bots (which is usually followed by complaints from users who have lost followers). So social networks clearly have their own ways of detecting bots. However, their efforts are often not enough to eliminate them completely.
Social network algorithms
Of course, social networks do not readily reveal their algorithms, but it is safe to say that their efforts center on detecting abnormal behavior. Here is one of the most obvious examples: if an account tries to post a hundred messages a minute, it is almost certainly a bot. And if an account only ever retweets other users' posts and never publishes anything of its own, a bot is very likely behind it too.
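The rate-based heuristics above can be sketched as a simple scoring function. This is only an illustration of the idea: the thresholds and input signals are assumptions, not the actual rules used by Twitter or any other platform.

```python
# A minimal sketch of rule-based bot heuristics as described above.
# The thresholds and signals are illustrative assumptions, not the
# actual rules of any social network.

def bot_suspicion_score(posts_per_minute: float, retweet_ratio: float) -> int:
    """Return a crude suspicion score based on two behavioral signals."""
    score = 0
    if posts_per_minute >= 100:   # posting ~100 messages a minute
        score += 2                # is almost certainly automated
    if retweet_ratio >= 0.99:     # account only ever retweets,
        score += 1                # never posts original content
    return score

# A human-paced account vs. a retweet-only flooder:
print(bot_suspicion_score(0.2, 0.3))    # low score
print(bot_suspicion_score(120.0, 1.0))  # high score
```

In practice a real detector would weigh many more signals, but the principle is the same: flag behavior no human user would exhibit.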
However, bot developers constantly adapt their creations to circumvent the detection techniques of social media services. For their part, the platforms cannot afford too many false positives when identifying fake news flashes: blocking too many genuine accounts would cause an outcry among users. This also means that some fake news flashes will always remain undetected.
To learn more about how bots behave, the researchers chose a feature that singles out a specific group of them: in this case, tweets about terrorist attacks that were published exclusively on Twitter. The accounts posting them are usually bots (or retweets of a bot). The next step was to examine what these accounts have in common.
How do fake news bots behave?
First of all, it was necessary to verify that the terrorist attacks mentioned in these accounts' tweets really happened, and that the articles about them were hosted on more or less reputable websites (sites that clearly do not distribute fake news).
A small but important detail: all of these incidents happened several years ago, which the tweets in question did not mention. Linking to reputable media outlets also helps lull Twitter's fake news detection algorithms, which is one of the main reasons the masterminds behind these fake news flashes chose this very strategy.
Furthermore, in the case of this particular fake news network, the account holders claimed to be based in the United States, while their tweets focused primarily on European countries. Using this information, the researchers identified more than 200 accounts with a similar approach and went on to find and study other similarities and relationships between them.
For example, the researchers built an activity pattern that showed many of these bots were only active during specific time periods. Some accounts were suspended as early as May of this year, after which new accounts with the same patterns were created; those are still active today.
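One simple way to build such an activity pattern is to bucket each account's tweets by hour of day and compare the resulting histograms: accounts that are only active in the same narrow windows stand out. A minimal sketch, in which the example timestamps and the similarity measure are assumptions, not the researchers' actual method:

```python
# Sketch: compare accounts by hour-of-day activity histograms.
# The example data and similarity threshold are illustrative assumptions.
from collections import Counter
from math import sqrt

def hourly_profile(tweet_hours):
    """Normalized 24-bin histogram of tweet activity by hour of day."""
    counts = Counter(tweet_hours)
    total = sum(counts.values()) or 1
    return [counts.get(h, 0) / total for h in range(24)]

def cosine_similarity(a, b):
    """Cosine similarity between two activity profiles (0..1)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Two accounts active only in the same narrow window look near-identical,
# while a round-the-clock account does not match either of them well.
bot_a = hourly_profile([9, 9, 10, 10, 11] * 20)
bot_b = hourly_profile([9, 10, 10, 11, 11] * 20)
human = hourly_profile(list(range(24)) * 5)
print(cosine_similarity(bot_a, bot_b))  # high
print(cosine_similarity(bot_a, human))  # much lower
```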
In addition, all the accounts used a number of URL shortening services to publish their fake news, which provided their operators with analytics data (information about how often a specific link was clicked, and so on). These were not common shorteners such as t.co or goo.gl, but services created solely to collect analytics data. Incidentally, all of these shortening services share a surprisingly similar minimalist orange and white design.
Who are they?
The registration data for these URL shortening services shows that all of them are registered anonymously and hosted on the Microsoft Azure cloud platform. Coincidence? Probably not. Although their campaigns differ, many similarities can be found between the various accounts. In general, finding the idiosyncrasies and peculiarities of one account, which can then be applied to other accounts, is an effective way to expose botnets.
How to find fake news flashes
The researchers compiled a small list of typical bot characteristics; accounts used in the same network or campaign usually share several of them. The mere fact that several accounts have one similarity in common does not mean they should immediately be identified as bots. However, if they share more than one characteristic (or, to avoid false positives, more than four or five), you have most likely just encountered one of the many social media botnets.
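The "more than four or five shared characteristics" rule can be sketched as a simple trait count between account profiles. The trait names and values below are hypothetical examples, not the researchers' actual feature list:

```python
# Sketch of the shared-trait rule described above. The trait names and
# values are hypothetical examples, not the researchers' actual list.

def shared_traits(account_a: dict, account_b: dict) -> set:
    """Traits on which both accounts have the same value."""
    return {k for k in account_a.keys() & account_b.keys()
            if account_a[k] == account_b[k]}

def likely_same_botnet(a: dict, b: dict, threshold: int = 5) -> bool:
    # One or two coincidences are not enough; to avoid false positives,
    # require more than four or five shared traits.
    return len(shared_traits(a, b)) > threshold

a = {"claimed_location": "US", "shortener": "example-short.net",
     "created_month": "2019-05", "topic": "old terror attacks",
     "retweet_only": True, "active_hours": "09-12 UTC"}
b = dict(a)                      # identical profile: flagged
c = {**a, "claimed_location": "FR", "shortener": "t.co"}  # only 4 traits match
print(likely_same_botnet(a, b))  # True
print(likely_same_botnet(a, c))  # False
```

The design choice here is deliberate conservatism: a high threshold trades some missed bots for far fewer genuine accounts wrongly flagged, which matches the false-positive concern discussed earlier.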
Research shows that behavioral analysis is still a good way to identify bots: researchers identify new bots with specific behavioral patterns and then look for other accounts exhibiting similar patterns.
Of course, this is an ongoing (and probably never-ending) task: new bots with their own behavioral patterns appear every day, and no single set of behavioral rules can identify every fake news flash. But with behavioral analysis, at least entire botnets can be uncovered piece by piece. It also allows social networks to block bots and make their platforms a more pleasant place for their users.
Everyone who uses social networks should be aware that these bots exist and should not trust them under any circumstances.