
When Bots Control Content on Social Networking Sites

Bots on social networks build fake friendships to improve reputation, mimic human profiles to evade detection, and flood discussions with spam to drown out legitimate content. Social networking sites do a poor job helping users distinguish bots from humans or examining content for spam characteristics.

Scammers increasingly rely on the richness and popularity of on-line social networks to conduct fraudulent activities. They often employ malware, designed to thrive in the social networking ecosystem, in support of these efforts. Malicious software might spread autonomously, like a worm, and might receive instructions from its operator, like a bot. The scammer's objective may be to share links to malicious websites, distribute messages aimed at defrauding their recipients, create postings to drive up the popularity of an advertised website, and so on.

Bots Spread Content on Social Networks by Fitting In

It makes sense that attackers will want to automate their actions on social networking sites. For instance, Irene Michlin provided an insightful look into automated bot activities on LiveJournal—a site especially popular in the Russian-speaking community—which includes blogging and social networking features. Irene describes the complicated logic built into LiveJournal bots for spreading spam content on the site:

These bots strive to appear to be full participants of the social network, building up friendships and reputation so that the spam comments and blog postings they create are seen by a wide audience.

Bots Attempt to Interfere with Content Shared by Humans

Irene Michlin continued her discussion of malicious activities on LiveJournal by describing how bots drown out political discussion with spam and porn. In this example, bots appeared to seek out political content related to the controversial trial of Mikhail Khodorkovsky in Russia.

Malware was programmed to overwhelm such “undesirable” posts with numerous spam comments, making it harder for humans to participate in discussions related to the attacked post. Irene noted that “people might be deterred from opening such posts, fearful that they will be caught accessing pornographic content.” The bots also replicated some aspects of the attacked post in an attempt to pollute search results.
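One way a site or a monitoring tool could surface this kind of comment flood is by measuring how similar the comments in a thread are to one another: bots posting near-duplicate spam produce comment sets with unusually high pairwise overlap, while human discussion varies. The sketch below is a hypothetical illustration of that idea, not any platform's actual detection logic; the function names, thresholds, and word-level Jaccard similarity metric are my own assumptions.

```python
# Hypothetical sketch: flag a "comment flood" by checking whether most
# pairs of comments in a thread are near-duplicates of each other.
# Thresholds and the similarity metric are illustrative assumptions.
from itertools import combinations


def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two comments."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)


def looks_like_flood(comments: list[str], threshold: float = 0.6) -> bool:
    """Flag the thread when over half of the comment pairs are near-duplicates."""
    pairs = list(combinations(comments, 2))
    if not pairs:
        return False
    similar = sum(1 for a, b in pairs if jaccard(a, b) >= threshold)
    return similar / len(pairs) > 0.5
```

A real system would need to be far more robust—bots can trivially randomize their wording—but even a crude similarity check would catch the bluntest floods described above.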

Adapting to Bots’ Content-Focused Activities on Social Networks

Scammers will continue incorporating social networks into their content-distribution activities, and the users of social networks will need to adapt.

Companies that run social networking sites will need to provide more powerful controls that make it easier for users to distinguish real humans from bots and to oversee the content that users share. There are several improvements I'd like to see in this area.
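One such improvement could be automated screening of incoming messages and friend requests for spam characteristics before the user ever sees them. The following is a minimal sketch of that kind of heuristic check; the specific signals, regular expressions, and weights are my own illustrative assumptions, not drawn from any real platform's detection logic.

```python
# Hypothetical sketch: score a message against a few spam-characteristic
# signals. All patterns and weights here are illustrative assumptions.
import re

SPAM_SIGNALS = {
    r"https?://\S+": 1.0,                       # embedded links
    r"(?i)\b(free|winner|click here)\b": 1.5,   # classic spam phrases
    r"[A-Z]{5,}": 0.5,                          # shouting in all caps
}


def spam_score(message: str) -> float:
    """Sum the weights of every spam signal that matches the message."""
    return sum(weight
               for pattern, weight in SPAM_SIGNALS.items()
               for _ in re.findall(pattern, message))


def is_suspicious(message: str, threshold: float = 2.0) -> bool:
    """Decide whether a message deserves a warning before the user opens it."""
    return spam_score(message) >= threshold
```

A production system would combine many more signals (sender reputation, link destinations, account age), but even this kind of lightweight scoring could power the "is this a bot?" hints that the sites currently lack.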

Twitter, Facebook, LinkedIn, and the other social networks I looked at do a poor job of helping the user decide whether to accept a friend request or respond to a message, and they don't seem to pay much attention to examining content for spam characteristics. Perhaps this leaves room for innovation by startups and by the makers of anti-malware tools that oversee the user's interactions with social networking sites. This note is part of a 4-post series that reflects upon malware-related activities on on-line social networks and considers their implications.

About the Author

Lenny Zeltser is a cybersecurity leader with deep technical roots and product management experience. He created REMnux, an open-source malware analysis toolkit, and the reverse-engineering course at SANS Institute. As CISO at Axonius, he leads the security and IT program, focusing on trust and growth. He writes this blog to think out loud and share resources with the community.
