When Bots Control Content on Social Networking Sites

Scammers increasingly rely on the richness and popularity of on-line social networks to conduct fraudulent activities. They often employ malware, designed to thrive in the social networking ecosystem, in support of these efforts. Malicious software might spread autonomously, like a worm, and might receive instructions from its operator, like a bot. The scammer’s objective may be to share links to malicious websites, distribute messages aimed at defrauding their recipients, create postings to drive up the popularity of an advertised website, and so on.

Bots Spread Content on Social Networks by Fitting In

It makes sense that attackers want to automate their actions on social networking sites. For instance, Irene Michlin provided an insightful look into automated bot activities on LiveJournal, a blogging site with social networking features that is especially popular in the Russian-speaking community. Irene describes the complicated logic built into LiveJournal bots for spreading spam content on the site:

  • The bot tricks the person into befriending the bot’s LiveJournal account. This allows the bot to obtain personal details about the user for future fraudulent activities, improves the bot’s reputation, and makes it easy for the bot to leave spam comments for the user.
  • The bot attempts to build up its LiveJournal profile to make it hard for comment moderators to distinguish the bot from a human user. This involves building fake friendships. According to Irene, “bot control programs recognise which accounts are more promising in improving their reputation.”
  • The bot mimics other users’ interests by copying features of their profiles into its own profile.

These bots strive to appear to be full participants in the social network, building up friendships and reputation so that the spam comments and blog postings they create reach a wide audience.
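To make this concrete, the behaviors above translate into measurable signals that a defender could track. Here is a minimal sketch of a bot-likeness heuristic in Python; the profile fields, weights and thresholds are assumptions I made for illustration, not any site’s actual detection logic:

    # Hypothetical bot-likeness heuristic based on the behaviors described above.
    # Field names, weights and thresholds are illustrative, not any site's schema.
    from dataclasses import dataclass

    @dataclass
    class Profile:
        interests: set[str]        # declared interests on the profile
        friends: list["Profile"]   # accepted connections
        account_age_days: int

    def bot_likeness(profile: Profile) -> float:
        """Return a 0..1 score; higher means more bot-like."""
        if not profile.friends:
            return 0.0

        # Signal 1: interests copied wholesale from a friend's profile.
        copied = max(
            len(profile.interests & friend.interests) / max(len(profile.interests), 1)
            for friend in profile.friends
        )

        # Signal 2: friends accumulated unusually fast for the account's age.
        friends_per_day = len(profile.friends) / max(profile.account_age_days, 1)
        churn = min(friends_per_day / 5.0, 1.0)  # 5+ new friends per day saturates

        return 0.6 * copied + 0.4 * churn

A real fraud-detection system would combine many more signals, but even these two capture the “copy interests, accumulate friends quickly” pattern Irene describes.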

Bots Attempt to Interfere with Content Shared by Humans

Irene Michlin continued her discussion of malicious activities on LiveJournal by describing how bots drown out political discussion with spam and porn. In this example, bots appeared to seek out political content related to the controversial trial of Mikhail Khodorkovsky in Russia.

Malware was programmed to overwhelm such “undesirable” posts with numerous spam comments, making it harder for humans to participate in discussions related to the attacked post. Irene noted that “people might be deterred from opening such posts, fearful that they will be caught accessing pornographic content.” The bots also replicated some aspects of the attacked post in an attempt to pollute search results.
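The flooding pattern lends itself to automated detection: a burst of comments on a single post, mostly from newly created accounts, is a strong anomaly signal. Here is a minimal sketch of that idea; the fields and thresholds are assumptions for illustration, not any platform’s real data model:

    from dataclasses import dataclass

    @dataclass
    class Comment:
        author_account_age_days: int
        timestamp: float  # seconds since epoch

    def is_comment_flood(comments: list[Comment], now: float,
                         window: float = 300.0, burst: int = 20,
                         young_days: int = 7) -> bool:
        """Flag a post when many comments from young accounts land within `window` seconds."""
        in_window = [c for c in comments if now - c.timestamp <= window]
        young = [c for c in in_window if c.author_account_age_days <= young_days]
        return len(in_window) >= burst and len(young) > 0.8 * len(in_window)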

Adapting to Bots’ Content-Focused Activities on Social Networks

Scammers will continue incorporating social networks into their content-distribution activities. This means that the users of social networks will need to adapt by:

  • Becoming more careful about whom they “friend” on a social networking site
  • Being more critical of the content they read on social networks
  • Imposing tighter controls on who can attach comments to existing legitimate content

Companies that run social networking sites will need to provide more powerful controls that help users distinguish between real humans and bots and oversee the content shared on the site. Improvements I’d like to see include:

  • Provide detailed social networking reputation data about the user sending a message, a comment or a friend request.
  • Validate content shown to the user, automatically flagging spam content, similar to how existing tools do this for email spam (see the sketch after this list).
  • Automatically remove malicious, suspicious or otherwise anomalous comments, messages and posts, relying less on users’ actions to flag malicious activities.
  • Do a better job automatically flagging suspicious social networking accounts. In this case, “better” means incorporating more automation, faster response time and more sophisticated fraud-detection algorithms.
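On the spam-flagging point, the classic approach from the email world is a word-frequency (naive Bayes) classifier, and the same idea applies to social networking comments and messages. The sketch below is a toy illustration of that technique, not a production filter:

    import math
    from collections import Counter

    class NaiveBayesSpamFilter:
        """Tiny word-frequency classifier in the spirit of classic email spam filters."""

        def __init__(self):
            self.spam_words, self.ham_words = Counter(), Counter()
            self.spam_total = self.ham_total = 0

        def train(self, text: str, is_spam: bool):
            words = text.lower().split()
            if is_spam:
                self.spam_words.update(words)
                self.spam_total += len(words)
            else:
                self.ham_words.update(words)
                self.ham_total += len(words)

        def is_spam(self, text: str, threshold: float = 0.0) -> bool:
            score = 0.0
            for word in text.lower().split():
                # Smoothed log-likelihood ratio of spam vs. legitimate text.
                p_spam = (self.spam_words[word] + 1) / (self.spam_total + 2)
                p_ham = (self.ham_words[word] + 1) / (self.ham_total + 2)
                score += math.log(p_spam / p_ham)
            return score > threshold

    # Toy usage:
    f = NaiveBayesSpamFilter()
    f.train("cheap pills click here", is_spam=True)
    f.train("great post thanks for sharing your thoughts", is_spam=False)
    print(f.is_spam("click here for cheap pills"))  # True

A real deployment would train on large labeled corpora and combine the text score with sender-reputation signals like those in the first bullet above.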

Twitter, Facebook, LinkedIn and the other social networks I looked at do a poor job of helping users decide whether to accept a friend request or respond to a message. They don’t seem to pay much attention to examining content for spam characteristics. Perhaps this leaves room for innovation by startups and by the makers of anti-malware tools that oversee the user’s interactions with social networking sites.

This note is part of a three-post series that reflects upon malware-related activities on on-line social networks and considers their implications.

Lenny Zeltser

About the Author

I transform ideas into successful outcomes, building on my 25 years of experience in cybersecurity. As the CISO at Axonius, I lead the security program to earn customers' trust. I'm also a Faculty Fellow at SANS Institute, where I author and deliver training for incident responders. The diversity of cybersecurity roles I've held over the years and the expertise I've accumulated allow me to create practical solutions that drive business growth.
