Publication · Preprint · 2015

Hiding in Plain Sight: The Anatomy of Malicious Facebook Pages

Dewan, Prateek; Kumaraguru, Ponnurangam
Open Access English
  • Published: 20 Oct 2015
Facebook is the world's largest Online Social Network, having more than 1 billion users. Like most other social networks, Facebook is home to various categories of hostile entities who abuse the platform by posting malicious content. In this paper, we identify and characterize Facebook pages that engage in spreading URLs pointing to malicious domains. We used the Web of Trust API to determine domain reputations of URLs published by pages, and identified 627 pages publishing untrustworthy information, misleading content, adult and child unsafe content, scams, etc. Such content is deemed "Page Spam" by Facebook and does not comply with Facebook's community standards. ...
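The domain-reputation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the response shape and the component key `"0"` for trustworthiness are assumptions modeled on the Web of Trust public JSON API, and the cutoff of 60 is an assumed threshold, not one taken from the paper.

```python
def is_untrustworthy(wot_response: dict, host: str, threshold: int = 60) -> bool:
    """Flag a host whose trustworthiness reputation falls below `threshold`.

    WoT reputations range 0-100; the response maps each host to components,
    where component "0" (trustworthiness, an assumption here) is a
    [reputation, confidence] pair.
    """
    components = wot_response.get(host, {})
    # Default to [100, 0] (fully reputable, zero confidence) if the
    # trustworthiness component is absent, so unknown hosts are not flagged.
    reputation, _confidence = components.get("0", [100, 0])
    return reputation < threshold

# Example with a mocked WoT-style response for a hypothetical host:
mock = {"badsite.example": {"0": [23, 41], "4": [18, 30]}}
print(is_untrustworthy(mock, "badsite.example"))  # True
```

In the paper's setting, one such lookup per distinct domain in the collected page posts would separate pages publishing low-reputation URLs from the rest.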
Free text keywords: Computer Science - Social and Information Networks
34 references

[1] A. Aggarwal, A. Rajadesingan, and P. Kumaraguru. Phishari: Automatic realtime phishing detection on twitter. In eCRS, pages 1-12. IEEE, 2012.

[2] F. Ahmed and M. Abulaish. An MCL-based approach for spam profile detection in online social networks. In IEEE TrustCom, pages 602-608. IEEE, 2012.

[3] M. Bastian, S. Heymann, M. Jacomy, et al. Gephi: an open source software for exploring and manipulating networks. ICWSM, 8:361-362, 2009.

[4] F. Benevenuto, G. Magno, T. Rodrigues, and V. Almeida. Detecting spammers on twitter. In CEAS, volume 6, page 12, 2010.

[5] C. Castillo, M. Mendoza, and B. Poblete. Information credibility on twitter. In WWW, pages 675-684. ACM, 2011.

[6] M. Cha, H. Haddadi, F. Benevenuto, and P. K. Gummadi. Measuring user influence in twitter: The million follower fallacy. ICWSM, 10:10-17, 2010.

[7] P. Dewan and P. Kumaraguru. Towards automatic real time identification of malicious posts on facebook. In Privacy, Security and Trust (PST), pages 85-92. IEEE, 2015.

[8] J. R. Douceur. The sybil attack. In Peer-to-peer Systems, pages 251-260. Springer, 2002.

[9] Facebook community standards, 2015.

[10] H. Gao, Y. Chen, K. Lee, D. Palsetia, and A. N. Choudhary. Towards online spam filtering in social networks. In NDSS, 2012.

[11] H. Gao, J. Hu, C. Wilson, Z. Li, Y. Chen, and B. Y. Zhao. Detecting and characterizing social spam campaigns. In IMC, pages 35-47. ACM, 2010.

[12] C. Grier, K. Thomas, V. Paxson, and M. Zhang. @spam: the underground on 140 characters or less. In CCS, pages 27-37. ACM, 2010.

[13] A. Gupta and P. Kumaraguru. Credibility ranking of tweets during high impact events. In PSOSM. ACM, 2012.

[14] A. Gupta, P. Kumaraguru, C. Castillo, and P. Meier. Tweetcred: Real-time credibility assessment of content on twitter. In Social Informatics, pages 228-243. Springer, 2014.

[15] A. Gupta, H. Lamba, and P. Kumaraguru. $1.00 per rt #bostonmarathon #prayforboston: Analyzing fake content on twitter. In eCRS, page 12. IEEE, 2013.
