Researching Malicious Websites: A Few Tips

Malicious websites often aim to attack only the end users of computer systems, while concealing their inner workings from security researchers. Mike Wood, Threat Researcher at Sophos, described the defensive practices used by websites that distribute fake anti-virus tools.

Mike Wood pointed out that malicious sites often perform the following checks before deciding to attack the visitor:

  • Review the User-Agent header of the browser, only attacking certain browsers.
  • Review the Referer header, only attacking victims who come from certain websites, most notably from Google.
  • Use JavaScript to compute the destination of the redirection, hoping to fool some of the simpler crawlers or website-mirroring tools.
  • Use a “nonce to only return the attack payload if the link is fetched immediately after being generated.”
  • Track the visitor’s IP address, not attacking if the IP is on a “blacklist” or if it has already been attacked recently.
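To make the first two checks concrete, here is a hypothetical sketch of the server-side gating logic such a site might use. This is not Mike Wood's code or any real malware; the header names are genuine HTTP fields, but the function and the target lists are invented for illustration:

```python
# Hypothetical sketch of how a malicious site might gate its payload.
# The header names are real; the logic and lists are illustrative only.

ATTACKED_BROWSERS = ("MSIE", "Firefox")  # assumed browser targets
EXPECTED_REFERERS = ("google.",)         # victims arriving from Google

def should_attack(headers):
    """Return True only for visitors who look like ordinary browser users."""
    user_agent = headers.get("User-Agent", "")
    referer = headers.get("Referer", "")

    # Check 1: only attack certain browsers.
    if not any(browser in user_agent for browser in ATTACKED_BROWSERS):
        return False

    # Check 2: only attack visitors who came from a search engine.
    if not any(marker in referer for marker in EXPECTED_REFERERS):
        return False

    return True
```

A crawler that fetches the page with a default wget User-Agent and no Referer header would fail both checks and see only benign content, which is exactly why the countermeasures below matter.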

There are other self-defensive measures as well. I recommend reading Mike Wood’s article for additional details regarding these tactics and for his recommendations on how web surfers can turn them to their advantage. (If the article reappears on the Sophos website, that is.)

If you are a security researcher, here are some of the techniques that can help you bypass the self-defensive measures outlined above:

  • Fake your browser’s headers to match the likely values that the malicious website expects. I showed the importance of this in an earlier article and also demonstrated how to do this with the wget and curl tools.
  • Consider using a full browser, rather than a command-line tool, to let your laboratory system be infected. I like capturing the infection into a PCAP file using a network sniffer and then examining the file with Jsunpack-n.
  • When navigating a malicious website using a browser, send your traffic through a local proxy, such as Paros Proxy or Fiddler, so that you have full visibility into the traffic exchanged between the browser and the website.
  • Consider using a honey client that can execute JavaScript, rather than merely running wget or curl commands. Jsunpack-n can do this. The recently released PhoneyC seems to have this ability too, though I haven’t tried it yet.
  • Proxy your traffic to conceal its origin. Tor is a common option for this; however, Mike Wood pointed out that attackers sometimes cloak their sites from such traffic. Having your own network of proxy servers that keep changing is hard, but may be useful for a large security research operation.
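To illustrate the first and third points above, here is a minimal Python equivalent of the wget/curl approach I mentioned (the header values and the proxy address are my own placeholder assumptions, not values from the article): it prepares a request with spoofed headers and routes it through a local interception proxy.

```python
import urllib.request

# Hypothetical browser-like header values; adjust to what the site expects.
SPOOFED_HEADERS = {
    "User-Agent": "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "Referer": "http://www.google.com/search?q=antivirus",
}

# Build the request with spoofed headers (equivalent to
# wget --user-agent=... --referer=...  or  curl -A ... -e ...).
request = urllib.request.Request("http://example.com/",
                                 headers=SPOOFED_HEADERS)

# Route traffic through a local proxy such as Paros Proxy or Fiddler;
# 127.0.0.1:8080 is a common default listener, adjust as needed.
proxy = urllib.request.ProxyHandler({"http": "http://127.0.0.1:8080"})
opener = urllib.request.build_opener(proxy)

# opener.open(request) would perform the fetch; it is omitted here
# because it requires the proxy to be running.
```

With the proxy in the middle, you get full visibility into every request and response, while the spoofed headers make the fetch look like an ordinary browser visit arriving from a Google search.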

If you’re just starting to learn how to research malicious websites, you might like my list of free online tools for looking up potentially malicious websites. Just keep in mind that these tools might be affected by the self-defensive properties of the sites they investigate.

Lenny Zeltser

About the Author

Lenny Zeltser is a seasoned business and technology leader with extensive information security experience. He builds creative anti-malware solutions as VP of Products at Minerva. He also trains incident response and digital forensics professionals at SANS Institute. Lenny frequently speaks at industry events, writes articles and has co-authored books. He has earned the prestigious GIAC Security Expert designation, has an MBA from MIT Sloan and a Computer Science degree from the University of Pennsylvania.