This invention relates generally to social networking systems, and in particular to determining whether social networking system users are malicious based on interactions of the users with objects maintained by the social networking system.
Social networking systems allow users to connect to and communicate with other users of the social networking system. Users create profiles on the social networking system that are tied to their identities and include information about the users, such as interests and demographic information. Because of the increasing popularity of social networking systems and the significant amount of user-specific information maintained by social networking systems, a social networking system presents an ideal forum for users to share their interests and experiences with other users. For example, a social networking system allows its users to upload content, exchange information, organize events, post content for presentation to other users, and communicate messages to each other. However, while performing interactions via a social networking system, users may engage in malicious activities that may cause social harm. For example, users may post racist comments, violent videos, or child pornography.
Conventional social networking systems protect users by providing mechanisms for users to report malicious activity or malicious content. For example, users flag content they find offensive or inappropriate, or report bullying or harassment by communicating a message to the social networking system. However, a reporting user's biases or personal preferences may make reports of malicious activity or content inaccurate. Additionally, users may not always take the time to report activity or content they believe is malicious. Furthermore, the amount of content available on a social networking system may prevent malicious activity or content from being accessed by users who would report the malicious activity or content. Hence, social networking systems may be unable to take action against malicious activity or content that is unreported by users, and may allocate resources to reviewing false or unreliable reports of malicious activity and content.