US20080177834A1 - Content monitoring in a high volume on-line community application - Google Patents

Content monitoring in a high volume on-line community application

Info

Publication number: US20080177834A1
Authority: United States
Prior art keywords: postings, users, given user, content, risk value
Legal status: Abandoned
Application number: US 12/055,618
Inventors: Daniel F. Gruhl, Kevin Haas
Assignee: International Business Machines Corp.
Application filed by International Business Machines Corp.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management

Definitions

  • The metric(s) that are used to determine the risk value of a posting are not content-based, but rather are based on the individual poster's profile and/or the social network profile of the individual poster. Specifically, the following are examples of metrics that may be used to determine a risk value of a given posting by a given user: (1) a metric based on parameters associated with the given user (205); (2) a metric based on a record of objectionable content postings by the given user (206); (3) a metric based on average predetermined parameters associated with other users in a social network of the given user (207); and (4) a metric based on a compiled record of objectionable content postings by other users in a social network of the given user (208).
  • A given user's social network can be limited to other users within the online community with which the given user has a direct relationship (e.g., a first layer of relationships: other users on the given user's contacts or friends list, other users with which the given user has exchanged instant messages, other users with which the given user has communicated on a forum, etc.).
  • The user's social network may also be expanded to include other users with which the given user has an indirect relationship (e.g., a second layer of relationships: contacts of contacts, friends of friends, etc.).
  • If the risk value is below a predetermined low risk threshold value, a given posting can be immediately displayed to the on-line community (e.g., on the community website) (212).
  • If the risk value is above a predetermined high risk threshold value, then to minimize the risk of exposure of online community members to objectionable content, a given posting can be automatically removed from the website without further review (214).
  • If the risk value falls between the low and high risk threshold values, additional method steps can be performed (215-219).
  • For example, a posting confirmation can be requested from the given user (215).
  • This request can include a notice setting out the ramifications for violations of the community standards. That is, if a posting appears suspicious to the initial set of filters (i.e., has a relatively high risk value), the poster can be asked “are you sure?” along with some kind of a notation that offenders will be dealt with harshly.
  • This confirmation is roughly analogous to the observation that people are less likely to vandalize a subway stop with a closed circuit TV visible.
  • Additionally, the order in which each of the postings is to be analyzed manually (e.g., by a web administrator) and/or automatically (e.g., by a content filter) can be dynamically determined based on the risk value (216-217).
  • Such dynamic ordering may allow the human or automated analyzer to focus on the highest-risk content first, while the low-risk content, which has already been displayed, is reviewed as time permits. This ordering process can allow a fixed quantity of human analyzers to be maximally effective at reducing BIoS.
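  • As a rough illustration of such risk-ordered review (a sketch, not from the patent; the posting fields and 0-100 risk scale are assumptions), a priority queue can surface the highest-risk postings first:

    import heapq

    class ReviewQueue:
        def __init__(self):
            self._heap = []    # max-heap via negated risk values
            self._count = 0    # tie-breaker so heapq never compares postings

        def enqueue(self, posting):
            heapq.heappush(self._heap, (-posting["risk"], self._count, posting))
            self._count += 1

        def next_for_review(self):
            # Highest-risk posting is reviewed first
            return heapq.heappop(self._heap)[2] if self._heap else None

    queue = ReviewQueue()
    queue.enqueue({"id": 1, "risk": 35})
    queue.enqueue({"id": 2, "risk": 80})
    print(queue.next_for_review()["id"])  # -> 2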
  • Then, the content of each of the postings can be analyzed to determine an objectionable content score.
  • For example, a conventional training-based classification technique, such as a naïve Bayes classification technique or a similarly-based (SB) classification technique, can be used to determine a score that indicates the probability that the content of the posting is objectionable, based on information contained in the posting (e.g., based on an analysis of the text, images, videos, and/or voices contained in the posting). This score can optionally also be weighted based on the previously determined risk value (219).
  • For example, POC = 0.5*I0 + 0.33*<I1> + 0.17*<I2>, where the probability of objectionable content (POC) is weighted by 50% of the user's behavior score (I0), 33% of the average score (<I1>) of the user's first layer social network (e.g., contact list), and 17% of the average score (<I2>) of the user's second layer social network (e.g., contacts of contacts).
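  • A direct transcription of this example weighting into code (the function name and argument layout are illustrative assumptions; only the 0.5/0.33/0.17 weights come from the example above):

    def probability_of_objectionable_content(i0, layer1_scores, layer2_scores):
        # i0: score based on the user's own behavior
        # layer1_scores: scores of users in the first layer network (contacts)
        # layer2_scores: scores of users in the second layer (contacts of contacts)
        avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
        return 0.5 * i0 + 0.33 * avg(layer1_scores) + 0.17 * avg(layer2_scores)

    # Example: a user with a modest history but a risky contact list
    print(round(probability_of_objectionable_content(0.2, [0.8, 0.6], [0.4]), 3))  # 0.399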
  • Thus, scoring of uploaded content can be accomplished using a weighted fusion of a risk value (e.g., a value based on a user's individual behavior or social network) combined with a score based on content analysis (e.g., text, image, video, and/or voice analysis) to determine the probability that the posting contains objectionable material. Based on this weighted objectionable content score, a final decision can be made regarding displaying the posting or removing it from the website (220).
  • The embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment including both hardware and software elements.
  • The invention can be implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • The embodiments of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • A computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk.
  • Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), and DVD.
  • A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.
  • The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input/output (I/O) devices can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
  • A representative hardware environment for practicing the embodiments of the invention is depicted in FIG. 3.
  • The system comprises at least one processor or central processing unit (CPU) 10.
  • The CPUs 10 are interconnected via system bus 12 to various devices such as a random access memory (RAM) 14, a read-only memory (ROM) 16, and an input/output (I/O) adapter 18.
  • The I/O adapter 18 can connect to peripheral devices, such as disk units 11 and tape drives 13, or other program storage devices that are readable by the system.
  • The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments of the invention.
  • The system further includes a user interface adapter 19 that connects a keyboard 15, mouse 17, speaker 24, microphone 22, and/or other user interface devices such as a touch screen device (not shown) to the bus 12 to gather user input.
  • Additionally, a communication adapter 20 connects the bus 12 to a data processing network 25, and a display adapter 21 connects the bus 12 to a display device 23, which may be embodied as an output device such as a monitor, printer, or transmitter, for example.

Abstract

Disclosed are embodiments of a system and method for managing an on-line community. Electronic postings are pre-screened based on one or more metrics to determine a risk value indicative of the likelihood that an individual posting contains objectionable content. These metrics are based on the profile of a poster, including various parameters of the poster and/or the poster's record of objectionable content postings. These metrics can also be based on the social network profile of a poster, including the average of various parameters of other users in the poster's social network and/or a compiled record of objectionable content postings of other users in the poster's social network. If the risk value is relatively low, the posting can be displayed to the on-line community immediately. If the risk value is relatively high, display of the posting can be delayed until further content analysis is completed. Finally, if the risk value is above a predetermined high risk threshold value, the posting can be removed automatically.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 11/622,112 filed Jan. 11, 2007, the complete disclosure of which, in its entirety, is herein incorporated by reference.
  • BACKGROUND
  • 1. Field of the Invention
  • The embodiments of the invention generally relate to on-line communities and, more particularly, to a system and method for filtering the content of postings to on-line communities.
  • 2. Description of the Related Art
  • Online communities allow groups of people to communicate and interact via various online media, such as blogs, wikis, internet forums, chat rooms, instant messaging, electronic mail lists, etc. Each of these online communities may have its own community standards related to the content of online media postings. For example, as with real world media, such online media may have standards to prevent the dissemination of objectionable content as defined by each community's own preset or pre-established standards. These standards are typically enforced by way of manual and/or automated content review systems which are used to filter out and remove such objectionable postings. However, as these online communities continue to increase in size, limitations in the ability of current manual and/or automated content review systems to remove objectionable material from community websites in a timely manner have become increasingly apparent. Thus, additional workflows and methods are needed to ensure that community standards are maintained.
  • SUMMARY
  • In view of the foregoing, disclosed herein are embodiments of a system and an associated method for managing an on-line community and, particularly, for monitoring and filtering the content of electronic postings to an on-line community (e.g., to a community website). The system and method incorporate a pre-screening process in which individual postings are pre-screened based on one or more predetermined metrics to determine a risk value indicative of the likelihood that a posting contains objectionable content. These metrics are not content based. Rather, they can be based on the profile of an individual poster, including various parameters of the poster and/or the poster's record of objectionable content postings. These metrics can also be based on the social network profile of an individual poster, including the average of various parameters of other users in the poster's social network and/or a compiled record of objectionable content postings of other users in the poster's social network. If the risk value is relatively low (e.g., below a low risk threshold value), the posting can be displayed on the website immediately, thereby keeping delay low. If the risk value is relatively high (e.g., above the low threshold value), display of the posting can be delayed until further automated and/or manual content analysis is completed. Finally, if the risk value is above a high risk threshold value, removal of the posting can be made automatic without requiring additional analysis.
  • More particularly, disclosed herein are embodiments of a system for managing an on-line community and, particularly, for monitoring and filtering the content of postings to an on-line community. The system of the invention can comprise a database and a content management system (e.g., a computer or computer program-based web server or wireless application protocol (WAP) server) in communication with the database.
  • The database can be adapted to compile and store a plurality of metrics based on collected information related to users of a website (e.g., information related to members of the on-line community). This information can comprise various parameters associated with each of the users (i.e., posters), such as age, gender, educational background, location, length of time as a member of the online community, etc. This information can also include records of objectionable content postings for each of the users onto the website and, if practicable, onto other websites. The information can also include the social networks of each of the users.
  • The content management system (e.g., web server, WAP server, etc.) can be adapted to receive postings from users, to monitor those postings for objectionable content, as defined by preset standards (e.g., community standards), and to determine whether or not to display the postings (e.g., on a community website). In order to accomplish this, the system of the invention also comprises both a pre-screener and a content filter, which can be integral components of the content management system or can be separate components in communication with the content management system.
  • The pre-screener is adapted to determine, based on at least one predetermined metric, a risk value that indicates the likelihood that a given posting by a given user contains objectionable content. The metric(s) that are used to determine this risk value are not content-based, but rather are based on the individual poster's profile and/or the social network profile of the individual poster. Specifically, the following are examples of metrics that may be used to determine a risk value of a given posting by a given user: (1) a metric based on parameters associated with the given user; (2) a metric based on a record of objectionable content postings by the given user; (3) a metric based on average predetermined parameters associated with other users in a social network of the given user; and (4) a metric based on a compiled record of objectionable content postings by other users in a social network of the given user. A given user's social network can be limited to other users within the online community with which the given user has a direct relationship (e.g., other users on the given user's contacts or friends list, other users with which the given user has exchanged instant messages, other users with which the given user has communicated on a forum, etc.). The user's social network may also be expanded to include other users with which the given user has an indirect relationship (e.g., contacts of contacts, friends of friends, etc.).
  • The content management system (CMS) (e.g., web server, WAP server, etc.) can further be adapted to perform different processes, depending upon whether or not the risk value of a given posting, as determined by the pre-screener, is below a predetermined low risk threshold value, above a predetermined high risk threshold value or somewhere in between. For example, to minimize delay for low risk postings, the CMS can be adapted to allow the given posting to be immediately displayed on the website, if the risk value is below the predetermined low risk threshold value. To minimize the risk of exposure of online community members to objectionable content, the CMS can be adapted to automatically remove a given posting from the website without further review, if the risk value is above a predetermined high risk threshold value. However, if the risk value is above the low risk threshold value (e.g., between the low risk threshold value and the high risk threshold value), the CMS can be adapted to request a posting confirmation and/or to analyze the posting itself for objectionable content.
  • Specifically, as mentioned above, the system of the invention can comprise a content filter. This content filter can be adapted to analyze the content of each of the postings to determine an objectionable content score, which can optionally be weighted based on the risk value. The content management system can further be adapted to display or remove a posting from the website, based on this weighted objectionable content score. The order in which each of the postings is analyzed automatically by the content filter or, for that matter, manually by a website administrator can be dynamically determined by the content management system based on the risk value.
  • Also disclosed are embodiments of a method for managing an on-line community and, particularly, for monitoring and filtering the content of postings to an on-line community.
  • The method can comprise receiving from users (e.g., from members of an online community) postings to the on-line community (e.g., to a community website). Then, prior to analyzing each of the postings for objectionable content, as determined by preset community standards, a risk value is determined for each given posting from each given user. This risk value indicates a likelihood that the given posting by the given user contains objectionable content and is determined based on at least one predetermined metric.
  • The metric(s) that are used to determine this risk value are not content-based, but rather are based on the individual poster's profile and/or the social network profile of the individual poster. Specifically, the following are examples of metrics that may be used to determine a risk value of a given posting by a given user: (1) a metric based on parameters associated with the given user; (2) a metric based on a record of objectionable content postings by the given user; (3) a metric based on average predetermined parameters associated with other users in a social network of the given user; and (4) a metric based on a compiled record of objectionable content postings by other users in a social network of the given user. A given user's social network can be limited to other users within the online community with which the given user has a direct relationship (e.g., other users on the given user's contacts or friends list, other users with which the given user has exchanged instant messages, other users with which the given user has communicated on a forum, etc.). The user's social network may also be expanded to include other users with which the given user has an indirect relationship (e.g., contacts of contacts, friends of friends, etc.).
  • Once the risk value is determined, then different method steps are performed depending upon whether or not the risk value of a given posting is below a predetermined low risk threshold value, above a predetermined high risk threshold value or somewhere in between. For example, if the risk value is below the predetermined low risk threshold value, then to minimize delay for low risk postings, a given posting can be immediately displayed on the website. Whereas, if the risk value is above a predetermined high risk threshold value, then to minimize the risk of exposure of online community members to objectionable content, a given posting can be automatically removed from the site without further review. However, if the risk value is above the low risk threshold value (e.g., between the low risk threshold value and the high risk threshold value), additional method steps can be performed.
  • For example, a posting confirmation can be requested from the given user. This request can include a notice setting out the ramifications for violations of the community standards. Additionally, the order in which each of the postings is to be analyzed manually (e.g., by a web administrator) and/or automatically (e.g., by a content filter) can be dynamically determined based on the risk value. Then, the content of each of the postings can be analyzed to determine an objectionable content score, which can optionally be weighted based on the risk value. Based on this weighted objectionable content score, a final decision can be made regarding displaying the posting or removing it from the website.
  • These and other aspects of the embodiments of the invention will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments of the invention and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments of the invention without departing from the spirit thereof, and the embodiments of the invention include all such modifications.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The embodiments of the invention will be better understood from the following detailed description with reference to the drawings, in which:
  • FIG. 1 is a schematic box diagram illustrating an embodiment of a system of the invention;
  • FIG. 2 is a flow diagram illustrating an embodiment of a method of the invention; and
  • FIG. 3 is a schematic representation of a computer system suitable for implementing the method of the invention as described herein.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • The embodiments of the invention and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. It should be noted that the features illustrated in the drawings are not necessarily drawn to scale. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments of the invention. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments of the invention may be practiced and to further enable those of skill in the art to practice the embodiments of the invention. Accordingly, the examples should not be construed as limiting the scope of the embodiments of the invention.
  • As mentioned above, online communities allow groups of people to communicate and interact via various online media, such as blogs, wikis, internet forums, chat rooms, instant messaging, electronic mail lists, etc. Each of these online communities may have its own community standards related to the content of online media postings. For example, as with real world media, such online media may have standards to prevent the dissemination of objectionable content as defined (preset, predetermined) by each community's standards. However, as these online communities continue to increase in size, workflows and methods are needed to ensure that community standards are maintained. For the most part, the filtering methods used are limited to content-based applications, such as image or text analysis of each posting's content. However, the challenge of identifying and dealing with objectionable information postings to online communities differs from traditional content identification problems in that the figures of merit are based on “Delay” and “Bad Information On Site” (BIoS) time, rather than traditional precision and recall. More particularly, content of one of two types (i.e., good content (Cg) and bad content (Cb)) can be posted to a web site at time Tp and can be displayed at time Td. If the content is objectionable, it can be removed from the site at time Tr. The cost of the solution can be defined as follows. The delay cost is
  • Delay_g = Σ_{C_g} (T_d − T_p)
  • without consideration of bad content. The cost of Bad Information (i.e., objectionable content) on the web site (BIoS) is
  • BIoS = Σ_{C_b} Badness × (T_r − T_d)
  • where Badness is an indication of how much of a problem the posted objectionable content will cause among the members. For simplicity, Badness can be based on lack of unity among the members of the online community. The goal of a web site managing system and method should be to reduce both Delay_g and BIoS as much as possible. The relative importance of these two will vary based on the application.
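  • To make these cost measures concrete, the following sketch (not from the patent; the timestamps and Badness values are invented for illustration) computes Delay_g and BIoS for a few hypothetical postings:

    # Hypothetical postings: (type, T_p posted, T_d displayed, T_r removed, badness)
    postings = [
        ("good", 0, 1, None, 0.0),
        ("good", 0, 5, None, 0.0),  # slow review of good content adds delay cost
        ("bad",  0, 1, 4, 2.0),     # three time units of bad information on site
    ]

    delay_g = sum(t_d - t_p for kind, t_p, t_d, t_r, bad in postings if kind == "good")
    bios = sum(bad * (t_r - t_d) for kind, t_p, t_d, t_r, bad in postings if kind == "bad")

    print(delay_g)  # 6   -> total display delay over good content
    print(bios)     # 6.0 -> total Badness-weighted time bad content was visible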
  • In attempting to meet this goal, it has been determined that the probability of objectionable content being uploaded onto a web site (e.g., into an online community forum) can be estimated without deep analysis of the information, based on the prior behavior of the user and/or of the user's social network. That is, historically well-behaved users (i.e., users that historically post unobjectionable content) and historically misbehaved users (i.e., users that historically post objectionable content) tend to maintain their pattern of behavior. Additionally, both well-behaved users and misbehaved users tend to belong to independent social networks, and the individuals within any given social network tend to have similar behavioral patterns. As a result, the likelihood that an individual will be a well-behaved or a misbehaved user can be inferred from the historic behavior of the user as well as the behavior of others in that user's social network. Additionally, some classes of users (e.g., based on age, gender, length of time as a member of the online community, etc.) are more or less likely to offend. For example, those who have been members of an online community for a relatively long period of time are less likely to post objectionable content than those that have been members for a relatively short period of time.
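  • One simple way to turn a user's posting history into such an estimate is a smoothed offense rate, sketched below; the prior and strength values are invented, and the patent does not prescribe this particular formula:

    def history_risk(num_objectionable, num_total, prior=0.05, strength=10):
        # Blend the observed offense rate with a community-wide prior so that
        # new members with few postings are not judged on a tiny sample.
        return (num_objectionable + prior * strength) / (num_total + strength)

    print(history_risk(0, 200))  # long-standing well-behaved member -> ~0.002
    print(history_risk(5, 8))    # frequent offender -> ~0.31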
  • In view of the foregoing, disclosed herein are embodiments a system and an associated method for managing an on-line community and, particularly, for monitoring and filtering the content of electronic postings to an on-line community (e.g., to a community website). The system and method incorporate a pre-screening process in which individual postings are pre-screened based on one or more predetermined metrics (i.e., scores) to determine a risk value indicative of the likelihood that a posting contains objectionable content. These metrics or scores are not content based. Rather, they can be based on the profile of an individual poster, including various parameters of the poster and/or the poster's record of objectionable content postings. These metrics can also be based on the social network profile of an individual poster, including the average of various parameters of other users in the poster's social network and/or a compiled record of objectionable content postings of other users in the poster's social network. If the risk value is relatively low (e.g., below a low risk threshold value), the posting can be displayed on the website immediately, thereby keeping delay low. If the risk value is relatively high (e.g., above the low threshold value), display of the posting can be delayed until further automated and/or manual content analysis is completed. Finally, if the risk value is above a high risk threshold value, removal of the posting can be made automatic without requiring additional analysis.
  • More particularly, referring to FIG. 1, disclosed herein are embodiments of a system 100 for managing an on-line community 110 (such as a website containing blogs, wikis, forums, chat rooms, etc.) and, particularly, for monitoring and filtering the content of electronic postings to an on-line community. The system 100 can comprise a database 130 and a content management system 120 (e.g., a computer or computer program based web server or wireless application protocol (WAP) server).
  • The database 130 can be adapted to compile and store a plurality of metrics based on collected information related to users of the on-line community (e.g., information related to members of the on-line community). This information can comprise various parameters associated with each of the users (i.e., posters), such as age, gender, educational background, location, etc. This information can also include records of objectionable content postings for each of the users onto the website and, if practicable, onto other websites. The information can also include the social networks of each of the users.
  • The information that is stored in the database 130 can be collected using known techniques. For example, user parameters and social networks (e.g., friend lists, contact lists, etc.) can be input and periodically updated by users via remote computers 102a-b. Alternatively, user parameters can be determined by conventional mining applications that are adapted to scan the contents of the website for such information. Similarly, social network information can be determined by applications adapted to maintain records of online communications between users (e.g., records of instant messaging, records of forum discussions, etc.) and to determine direct and indirect relationships based on those records.
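  • As an illustration of deriving direct and indirect relationships from such communication records, the sketch below builds first and second layer networks from an invented message log (the data and function names are assumptions, not part of the patent):

    from collections import defaultdict

    # Hypothetical log of which users have exchanged messages
    messages = [("alice", "bob"), ("bob", "carol"), ("alice", "dave")]

    direct = defaultdict(set)  # first layer: direct relationships
    for a, b in messages:
        direct[a].add(b)
        direct[b].add(a)

    def second_layer(user):
        # Contacts of contacts, excluding the user and the direct contacts
        indirect = set()
        for contact in direct[user]:
            indirect |= direct[contact]
        return indirect - direct[user] - {user}

    print(sorted(direct["alice"]))        # ['bob', 'dave']
    print(sorted(second_layer("alice")))  # ['carol']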
  • The content management system 120 can be in communication with the database 130 and can be adapted to receive electronic postings (e.g., text, images, video, audio, etc. postings) from users, to monitor those postings for objectionable content, as defined by preset standards (e.g., community standards), and to determine whether or not to display the postings to the on-line community 110 (e.g., on the community website). In order to accomplish this, the system 100 of the invention can comprise both a pre-screener 121 and a content filter 122. Those skilled in the art will recognize that the pre-screener 121 and content filter 122 can be integral components of the content management system 120, as shown, or can comprise separate components in communication with the content management system 120.
  • The pre-screener 121 is adapted to determine, based on at least one predetermined metric, a risk value that indicates the likelihood that a given posting by a given user contains objectionable content. The metric(s) are scores that are used to determine the risk value. These metrics are not content-based, but rather are based on the individual poster's profile and/or the social network profile of the individual poster. That is, a points system can be predetermined, wherein more points will be assigned (e.g., on a scale of 1-100, or any other scale) to information about a given user (or about that given user's social network) if the information predicts objectionable content postings by the user. Specifically, the following are examples of metrics or scores that may be used to determine a risk value of a given posting by a given user: (1) a metric based on parameters associated with the given user (e.g., a male user age 16-24 may be more likely to post objectionable content than a female user age 75-90, and thus such a young male user would receive a relatively higher score based on user parameters than an older female user); (2) a metric based on a record of objectionable content postings by the given user (e.g., a user that has posted a number of objectionable postings in the past is more likely to post objectionable postings in the future and thus would receive a relatively higher score); (3) a metric based on average predetermined parameters associated with other users in a social network of the given user (e.g., a male in a social network with mostly other males having an average age between 16 and 24 may be more likely to post objectionable content than a female in a social network with mostly other females having an average age between 75 and 90, and thus such a male user would receive a relatively higher score); and (4) a metric based on a compiled record of objectionable content postings by other users in a social network of the given user (e.g., a user that is in a social network with other users that regularly post objectionable postings is more likely to post objectionable postings than a user that associates with other users that do not post objectionable postings, and thus such a user would receive a relatively higher score). It should be noted that a given user's social network can be limited to other users within the online community with which the given user has a direct relationship (e.g., other users on the given user's contacts or friends list, other users with which the given user has exchanged instant messages, other users with which the given user has communicated on a forum, etc.). The user's social network may also be expanded to include other users with which the given user has an indirect relationship (e.g., contacts of contacts, friends of friends, etc.).
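  • A minimal sketch of such a points system follows; all weights, age brackets, and field names are invented for illustration, since the patent does not fix specific values:

    def risk_value(user, network_users):
        # user and network_users are dicts with hypothetical 'age',
        # 'past_offenses', and 'past_postings' fields.
        def profile_points(u):
            # Metrics (1)/(3): demographic parameters; younger users score higher here
            return 30 if u.get("age", 99) < 25 else 10

        def record_points(u):
            # Metrics (2)/(4): record of objectionable content postings
            if not u.get("past_postings"):
                return 20  # unknown history treated as moderate risk
            return min(40, 40 * u["past_offenses"] / u["past_postings"])

        own = profile_points(user) + record_points(user)
        net = (sum(profile_points(u) + record_points(u) for u in network_users)
               / len(network_users)) if network_users else 0
        # Weight the user's own profile more heavily than the network average
        return min(100, 0.7 * own + 0.3 * net)

    print(risk_value({"age": 19, "past_offenses": 3, "past_postings": 10},
                     [{"age": 20, "past_offenses": 5, "past_postings": 10}]))  # 44.4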
  • The content management system (CMS) 120 can further be adapted to perform different processes, depending upon whether the risk value of a given posting, as determined by the pre-screener 121, is below a predetermined low risk threshold value, above a predetermined high risk threshold value, or somewhere in between. For example, to minimize delay for low risk postings, the CMS 120 can be adapted to allow the given posting to be immediately displayed to the on-line community 110 (e.g., displayed on a website) if the risk value is below the predetermined low risk threshold value. To minimize the risk of exposing online community members to objectionable content, the CMS 120 can be adapted to automatically remove a given posting from the on-line community 110 (e.g., from the website) without further review if the risk value is above the predetermined high risk threshold value. However, if the risk value is above the low risk threshold value but not above the high risk threshold value (i.e., between the two thresholds), the CMS 120 can be adapted to request a posting confirmation and/or to analyze the posting itself for objectionable content.
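A minimal sketch of this three-way routing on the risk value follows; the threshold constants are illustrative assumptions.

```python
# Three-way routing on the pre-screened risk value (thresholds assumed).
LOW_RISK_THRESHOLD = 25
HIGH_RISK_THRESHOLD = 75

def route_posting(risk_value):
    if risk_value < LOW_RISK_THRESHOLD:
        return "display"    # show immediately; keeps delay low
    if risk_value > HIGH_RISK_THRESHOLD:
        return "remove"     # drop automatically without further review
    return "review"         # request confirmation and/or run content filter

assert route_posting(10) == "display"
assert route_posting(50) == "review"
assert route_posting(90) == "remove"
```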
  • Specifically, as mentioned above, the system 100 of the invention can comprise a content filter 122. This content filter 122 can be adapted to analyze the content of each of the postings to determine an objectionable content score. For example, the content filter 122 can be implemented using any conventional training-based classifier, such as a naïve Bayes classifier or a similarity-based (SB) classifier. The score can optionally also be weighted based on the risk value. Thus, scoring of uploaded content can be accomplished using a weighted fusion of risk value scores (e.g., based on a user's individual behavior or social network) combined with techniques using analytics based on content analysis (e.g., text, image, video, and/or voice analysis) to determine the probability that the posting contains objectionable material. The CMS 120 can further be adapted to display or remove a posting from the on-line community 110 (e.g., a community website) based on this weighted objectionable content score. Additionally, the order in which each of the postings is analyzed by the content filter or, for that matter, manually by a website administrator can be dynamically determined by the CMS 120 based on the risk value.
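The following sketch shows one plausible realization of such a weighted fusion, pairing a naïve Bayes text classifier (here via scikit-learn, with a toy training set) with a risk-value term. The 70/30 fusion weights and all training data are assumptions for illustration only.

```python
# Weighted fusion of a content-based classifier score with a risk value.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["buy cheap pills now", "great photo from our hike",
               "click this spam link", "meeting notes attached"]
train_labels = [1, 0, 1, 0]            # 1 = objectionable, 0 = acceptable

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_texts, train_labels)

def weighted_objectionable_score(posting_text, risk_value):
    # Probability that the text belongs to the objectionable class (1),
    # fused with the pre-screened risk value (assumed 0-100 scale).
    content_prob = classifier.predict_proba([posting_text])[0][1]
    return 0.7 * content_prob + 0.3 * (risk_value / 100.0)

print(weighted_objectionable_score("cheap spam pills", risk_value=80))
```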
  • Thus, in operation, posts to on-line communities 110 enter the system 100 from a number of sources 102 a-c. Some of these posts may contain unobjectionable material and some may contain objectionable material, as defined by preset community standards. Each post is pre-screened by the pre-screener 121, and a decision is made as to the degree of risk (i.e., the risk value) of each particular post. If the risk is low, the content can be displayed to the on-line community 110 (e.g., placed on the community website) immediately, keeping Delay low. If the content needs further evaluation and review (i.e., the risk is higher than a threshold value), display of the content can be delayed and the posting can be subjected to more rigorous computational review (e.g., by the content filter 122) and/or human review prior to displaying it, keeping BIoS low.
  • Referring to FIG. 2, also disclosed are embodiments of a method for managing an on-line community and, particularly, for monitoring and filtering the content of electronic postings in on-line communities.
  • The method can comprise receiving from users (e.g., from members of an online community) electronic postings (e.g., text, video, images, audio, etc.) (202). These postings can be, for example, to a community website containing blogs, wikis, forums, chat rooms, etc. Then, prior to analyzing each of the postings for objectionable content, as determined by preset community standards, a risk value is determined for each given posting from each given user (204). This risk value indicates the likelihood that the given posting by the given user contains objectionable content and is determined based on at least one predetermined metric.
  • The metric(s) that are used to determine this risk value are not content-based, but rather are based on the individual poster's profile and/or the social network profile of the individual poster. Specifically, the following are examples of metrics that may be used to determine a risk value of a given posting by a given user: (1) a metric based on parameters associated with the given user (205); (2) a metric based on a record of objectionable content postings by the given user (206); (3) a metric based on average predetermined parameters associated with other users in a social network of the given user (207); and (4) a metric based on a compiled record of objectionable content postings by other users in a social network of the given user (208). A given user's social network can be limited to other users within the online community with which the given user has a direct relationship (e.g., a first layer of relationships: other users on the given user's contacts or friends list, other users with which the given user has exchanged instant messages, other users with which the given user has communicated on a forum, etc.). The user's social network may also be expanded to include other users with which the given user has an indirect relationship (e.g., a second layer of relationships: contacts of contacts, friends of friends, etc.).
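The sketch below illustrates deriving the first layer (direct relationships) and the second layer (contacts of contacts) from a simple adjacency map; the graph data is hypothetical.

```python
# First- and second-layer social network from an adjacency mapping.
contacts = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice", "erin"},
    "dave":  {"bob"},
    "erin":  {"carol"},
}

def network_layers(user):
    first = set(contacts.get(user, set()))       # direct relationships
    second = set()
    for friend in first:                          # contacts of contacts
        second |= contacts.get(friend, set())
    second -= first | {user}                      # exclude user and layer one
    return first, second

print(network_layers("alice"))   # ({'bob', 'carol'}, {'dave', 'erin'})
```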
  • Once the risk value is determined, different method steps are performed depending upon whether the risk value of a given posting is below a predetermined low risk threshold value, above a predetermined high risk threshold value, or somewhere in between (210). For example, if the risk value is below the predetermined low risk threshold value, then to minimize delay for low risk postings, a given posting can be immediately displayed to the on-line community (e.g., on the community website) (212). If, instead, the risk value is above the predetermined high risk threshold value, then to minimize the risk of exposing online community members to objectionable content, a given posting can be automatically removed from the website without further review (214). However, if the risk value is above the low risk threshold value but below the high risk threshold value, additional method steps can be performed (215-219).
  • For example, a posting confirmation can be requested from the given user (215). This request can include a notice setting out the ramifications of violating the community standards. That is, if a posting appears suspicious to the initial set of filters (i.e., has a relatively high risk value), the poster can be asked "are you sure?" along with some kind of notation that offenders will be dealt with harshly. This confirmation step is roughly analogous to the theory that people are less likely to vandalize a subway stop with a closed-circuit TV camera visible.
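A small sketch of this confirmation step might look like the following; the notice wording and prompt mechanics are assumptions for illustration.

```python
# Hypothetical confirmation prompt for a suspicious posting.
NOTICE = ("This posting may violate community standards. "
          "Violations can lead to removal and account suspension.")

def request_confirmation(prompt=input):
    print(NOTICE)
    answer = prompt("Are you sure you want to post this? (yes/no): ")
    return answer.strip().lower() == "yes"

# Non-interactive usage example: a stubbed prompt that always answers yes.
print(request_confirmation(lambda _: "yes"))   # True
```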
  • Additionally, the order in which each of the postings is to be analyzed manually (e.g., by a web administrator) and/or automatically (e.g., by a content filter) can be dynamically determined based on the risk value (216-217). Such dynamic ordering allows the human or automated analyzer to focus on the highest-risk content first, while low-risk content, which has already been displayed, is reviewed as time permits. This ordering process can allow a fixed number of human analyzers to be maximally effective at reducing BIoS.
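One natural way to realize such risk-ordered review is a priority queue, as in the hypothetical sketch below (Python's heapq is a min-heap, so the risk value is negated to serve the highest-risk posting first).

```python
# Review queue that always yields the highest-risk posting first.
import heapq

class ReviewQueue:
    def __init__(self):
        self._heap = []

    def add(self, risk_value, posting_id):
        heapq.heappush(self._heap, (-risk_value, posting_id))

    def next_to_review(self):
        neg_risk, posting_id = heapq.heappop(self._heap)
        return -neg_risk, posting_id

queue = ReviewQueue()
for risk, pid in [(40, "p1"), (90, "p2"), (60, "p3")]:
    queue.add(risk, pid)
print(queue.next_to_review())   # (90, 'p2') -- highest risk reviewed first
```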
  • During the analysis process (218), the content of each of the postings can be analyzed to determine an objectionable content score. For example, a conventional training-based classification technique, such as a naïve Bayes classification technique or a similarity-based (SB) classification technique, can be used to determine a score that indicates the probability that the content of the posting is objectionable, based on information contained in the posting (e.g., based on an analysis of the text, images, videos, and/or voices contained in the posting). This score can optionally also be weighted based on the previously determined risk value (219). For example, the following exemplary formula can be applied: POC = 0.5*I0 + 0.33*<I1> + 0.17*<I2>, where the probability of objectionable content (POC) is weighted by 50% of the user's own behavior score (I0), 33% of the average score (<I1>) of the user's first-layer social network (e.g., contact list), and 17% of the average score (<I2>) of the user's second-layer social network (e.g., contacts of contacts). Thus, scoring of uploaded content can be accomplished using a weighted fusion of a risk value (e.g., a value based on a user's individual behavior or social network) combined with a score based on content analysis (e.g., text, image, video, and/or voice analysis) to determine the probability that the posting contains objectionable material. Based on this weighted objectionable content score, a final decision can be made regarding displaying the posting or removing it from the website (220).
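The exemplary formula translates directly into code; the sketch below implements it verbatim, with the sample scores invented for illustration.

```python
# POC = 0.5*I0 + 0.33*<I1> + 0.17*<I2>, where <I1> and <I2> are averages
# over the first- and second-layer social networks.

def probability_of_objectionable_content(i0, first_layer, second_layer):
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return 0.5 * i0 + 0.33 * avg(first_layer) + 0.17 * avg(second_layer)

# Example: a user with a clean record whose contacts score poorly.
print(probability_of_objectionable_content(
    i0=0.1, first_layer=[0.8, 0.6], second_layer=[0.4]))   # ~0.349
```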
  • The embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment including both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • Furthermore, the embodiments of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
  • A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
  • A representative hardware environment for practicing the embodiments of the invention is depicted in FIG. 3. This schematic drawing illustrates a hardware configuration of an information handling/computer system in accordance with the embodiments of the invention. The system comprises at least one processor or central processing unit (CPU) 10. The CPUs 10 are interconnected via system bus 12 to various devices such as a random access memory (RAM) 14, read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 11 and tape drives 13, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments of the invention. The system further includes a user interface adapter 19 that connects a keyboard 15, mouse 17, speaker 24, microphone 22, and/or other user interface devices such as a touch screen device (not shown) to the bus 12 to gather user input. Additionally, a communication adapter 20 connects the bus 12 to a data processing network 25, and a display adapter 21 connects the bus 12 to a display device 23 which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
  • In view of the foregoing, disclosed herein are embodiments of a system and an associated method for managing an on-line community and, particularly, for monitoring and filtering the content of electronic postings to on-line communities. The system and method incorporate a pre-screening process in which individual postings are pre-screened based on one or more predetermined metrics (i.e., scores) to determine a risk value indicative of the likelihood that a posting contains objectionable content. These metrics or scores are not based on content analytics. Rather, they are based on the profile of an individual poster, including various parameters of the poster and/or the poster's record of objectionable content postings. These metrics can also be based on the social network profile of an individual poster, including the average of various parameters of other users in the poster's social network and/or a compiled record of objectionable content postings of other users in the poster's social network. If the risk value is relatively low (e.g., below a low risk threshold value), the posting can be displayed to the on-line community (e.g., on the community website) immediately, thereby keeping delay low. If the risk value is relatively high (e.g., above the low risk threshold value), display of the posting can be delayed until further automated and/or manual content analysis is completed. Finally, if the risk value is above a high risk threshold value, the posting can be removed automatically without additional analysis.
  • The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, those skilled in the art will recognize that the embodiments of the invention can be practiced with modification within the spirit and scope of the appended claims.

Claims (20)

1. A system for managing an on-line community, said system comprising:
a database adapted to store a plurality of metrics based on information related to users of said on-line community, wherein said information comprises parameters associated with each of said users, records of objectionable content postings for each of said users, and social networks of each of said users, wherein said objectionable content is defined by preset standards; and
a content management system in communication with said database and adapted to monitor postings from users to said on-line community for said objectionable content,
wherein said content management system comprises a pre-screener adapted to determine, based on at least one predetermined metric, a risk value that indicates a likelihood that a given posting by a given user contains said objectionable content, and
wherein said content management system is further adapted to allow said given posting to be displayed to said on-line community, if said risk value is below a threshold value.
2. The system of claim 1, all the limitations of which are incorporated herein by reference, wherein said at least one predetermined metric comprises at least one of the following:
a metric based on parameters associated with said given user;
a metric based on a record of objectionable content postings by said given user;
a metric based on average predetermined parameters associated with other users in a social network of said given user;
a metric based on a compiled record of objectionable content postings by other users in a social network of said given user, wherein said other users in said social network have a direct relationship with said given user; and
a metric based on a compiled record of objectionable content postings by other users in a social network of said given user, wherein at least some of said other users in said social network have an indirect relationship with said given user.
3. The system of claim 1, all the limitations of which are incorporated herein by reference, wherein said content management system is further adapted to automatically remove said given posting of said given user, if said risk value is above a second threshold value.
4. The system of claim 1, all the limitations of which are incorporated herein by reference, wherein said content management system further comprises a content filter adapted to analyze content of each of said postings to determine an objectionable content score, wherein an order in which each of said postings is analyzed is based on said risk value.
5. The system of claim 1, all the limitations of which are incorporated herein by reference, wherein said content management system further comprises a content filter adapted to analyze content of each of said postings to determine an objectionable content score, wherein said objectionable content score is weighted based on said risk value.
6. A method for managing an on-line community, said method comprising:
receiving postings to said on-line community from users;
prior to analyzing each of said postings for objectionable content, as determined by preset standards, determining for each given posting from each given user a risk value based on at least one predetermined metric, wherein said risk value indicates a likelihood that said given posting by said given user contains said objectionable content; and,
if said risk value is below a threshold value, displaying said given posting to said on-line community.
7. The method of claim 6, all the limitations of which are incorporated herein by reference, wherein said at least one predetermined metric comprises a metric based on parameters associated with said given user.
8. The method of claim 6, all the limitations of which are incorporated herein by reference, wherein said at least one predetermined metric comprises a metric based on a record of objectionable content postings by said given user.
9. The method of claim 6, all the limitations of which are incorporated herein by reference, wherein said at least one predetermined metric comprises a metric based on average parameters associated with other users in a social network of said given user.
10. The method of claim 6, all the limitations of which are incorporated herein by reference, wherein said at least one predetermined metric comprises a metric based on a compiled record of objectionable content postings by other users in a social network of said given user, wherein said other users in said social network have a direct relationship with said given user.
11. The method of claim 6, all the limitations of which are incorporated herein by reference, wherein said at least one predetermined metric comprises a metric based on a compiled record of objectionable content postings by other users in a social network of said given user, wherein at least some of said other users in said social network have an indirect relationship with said given user.
12. The method of claim 6, all the limitations of which are incorporated herein by reference, further comprising, if said risk value is above said threshold value, automatically removing said posting.
13. The method of claim 6, all the limitations of which are incorporated herein by reference, further comprising, if said risk value is above said threshold value, requesting posting confirmation from said given user and notifying said given user of ramifications for violations of said standards.
14. The method of claim 6, all the limitations of which are incorporated herein by reference, further comprising, dynamically determining an order for analyzing said postings for said objectionable content, wherein said order is based on said risk value of each of said postings.
15. The method of claim 6, all the limitations of which are incorporated herein by reference, further comprising, analyzing each of said postings to determine an objectionable content score, wherein said objectionable content score is weighted based on said risk value.
16. A computer program product comprising a computer useable medium having a computer readable program, wherein said computer readable program when executed causes said computer to perform a method for managing an on-line community, said method comprising:
prior to analyzing postings to said on-line community from users for objectionable content, as determined by preset standards, determining for each given posting from each given user a risk value based on at least one predetermined metric, wherein said risk value indicates a likelihood that said given posting by said given user contains said objectionable content; and
if said risk value is below a threshold value, displaying said given posting to said on-line community.
17. The computer program product of claim 16, all the limitations of which are incorporated herein by reference, wherein said at least one predetermined metric comprises at least one of the following metrics:
a metric based on parameters associated with said given user;
a metric based on a record of objectionable content postings by said given user;
a metric based on average parameters associated with other users in a social network of said given user; and
a metric based on a compiled record of objectionable content postings by other users in a social network of said given user, wherein at least some of said other users in said social network have an indirect relationship with said given user.
18. The computer program product of claim 16, all the limitations of which are incorporated herein by reference, wherein said method further comprises, if said risk value is above said threshold value, automatically removing said posting.
19. The computer program product of claim 16, all the limitations of which are incorporated herein by reference, wherein said method further comprises, dynamically determining an order for analyzing said postings for said objectionable content, wherein said order is based on said risk value of each of said postings.
20. The computer program product of claim 16, all the limitations of which are incorporated herein by reference, wherein said method further comprises, analyzing each of said postings to determine an objectionable content score, wherein said objectionable content score is weighted based on said risk value.
US12/055,618 2007-01-11 2008-03-26 Content monitoring in a high volume on-line community application Abandoned US20080177834A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/055,618 US20080177834A1 (en) 2007-01-11 2008-03-26 Content monitoring in a high volume on-line community application

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/622,112 US7523138B2 (en) 2007-01-11 2007-01-11 Content monitoring in a high volume on-line community application
US12/055,618 US20080177834A1 (en) 2007-01-11 2008-03-26 Content monitoring in a high volume on-line community application

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/622,112 Continuation US7523138B2 (en) 2007-01-11 2007-01-11 Content monitoring in a high volume on-line community application

Publications (1)

Publication Number Publication Date
US20080177834A1 true US20080177834A1 (en) 2008-07-24

Family

ID=39618572

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/622,112 Expired - Fee Related US7523138B2 (en) 2007-01-11 2007-01-11 Content monitoring in a high volume on-line community application
US12/055,618 Abandoned US20080177834A1 (en) 2007-01-11 2008-03-26 Content monitoring in a high volume on-line community application

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/622,112 Expired - Fee Related US7523138B2 (en) 2007-01-11 2007-01-11 Content monitoring in a high volume on-line community application

Country Status (1)

Country Link
US (2) US7523138B2 (en)

Families Citing this family (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7730144B2 (en) 2005-12-09 2010-06-01 Ebuddy Holding B.V. High level network layer system and method
US7523138B2 (en) * 2007-01-11 2009-04-21 International Business Machines Corporation Content monitoring in a high volume on-line community application
US20090156955A1 (en) * 2007-12-13 2009-06-18 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems for comparing media content
US8615479B2 (en) * 2007-12-13 2013-12-24 The Invention Science Fund I, Llc Methods and systems for indicating behavior in a population cohort
US8195593B2 (en) * 2007-12-20 2012-06-05 The Invention Science Fund I Methods and systems for indicating behavior in a population cohort
US20090157751A1 (en) * 2007-12-13 2009-06-18 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems for specifying an avatar
US20090164302A1 (en) * 2007-12-20 2009-06-25 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems for specifying a cohort-linked avatar attribute
US20090157481A1 (en) * 2007-12-13 2009-06-18 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems for specifying a cohort-linked avatar attribute
US9211077B2 (en) * 2007-12-13 2015-12-15 The Invention Science Fund I, Llc Methods and systems for specifying an avatar
US20090157660A1 (en) * 2007-12-13 2009-06-18 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems employing a cohort-linked avatar
US20090171164A1 (en) * 2007-12-17 2009-07-02 Jung Edward K Y Methods and systems for identifying an avatar-linked population cohort
US20090157813A1 (en) * 2007-12-17 2009-06-18 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems for identifying an avatar-linked population cohort
US8356004B2 (en) * 2007-12-13 2013-01-15 Searete Llc Methods and systems for comparing media content
US20090157625A1 (en) * 2007-12-13 2009-06-18 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems for identifying an avatar-linked population cohort
US20090164458A1 (en) * 2007-12-20 2009-06-25 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems employing a cohort-linked avatar
US8069125B2 (en) * 2007-12-13 2011-11-29 The Invention Science Fund I Methods and systems for comparing media content
US9418368B2 (en) * 2007-12-20 2016-08-16 Invention Science Fund I, Llc Methods and systems for determining interest in a cohort-linked avatar
US20090164503A1 (en) * 2007-12-20 2009-06-25 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems for specifying a media content-linked population cohort
US20090164131A1 (en) * 2007-12-20 2009-06-25 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Methods and systems for specifying a media content-linked population cohort
US8150796B2 (en) * 2007-12-20 2012-04-03 The Invention Science Fund I Methods and systems for inducing behavior in a population cohort
US9775554B2 (en) * 2007-12-31 2017-10-03 Invention Science Fund I, Llc Population cohort-linked avatar
US9105045B1 (en) * 2008-02-22 2015-08-11 Amdocs Software Systems Limited System, method, and computer program product for altering an experience of a user, based on information associated with a party to a communication associated with the user
US9071650B1 (en) 2008-09-17 2015-06-30 Socialware, Inc. Method, system and computer program product for enforcing access controls to features and subfeatures on uncontrolled web application
US8504681B1 (en) * 2008-09-17 2013-08-06 Socialware, Inc. Method, system, and storage medium for adaptive monitoring and filtering traffic to and from social networking sites
US8171458B2 (en) * 2008-10-10 2012-05-01 International Business Machines Corporation Method for source-related risk detection and alert generation
US8655792B1 (en) * 2009-03-27 2014-02-18 Symantec Corporation Deriving the content of a social network private site based on friend analysis
US8589516B2 (en) * 2009-09-10 2013-11-19 Motorola Mobility Llc Method and system for intermediating content provider website and mobile device
US8990338B2 (en) * 2009-09-10 2015-03-24 Google Technology Holdings LLC Method of exchanging photos with interface content provider website
US8332412B2 (en) 2009-10-21 2012-12-11 At&T Intellectual Property I, Lp Method and apparatus for staged content analysis
US8832734B2 (en) * 2009-11-17 2014-09-09 At&T Intellectual Property I, Lp Apparatus and method for providing distributed media consumption
US8812342B2 (en) * 2010-06-15 2014-08-19 International Business Machines Corporation Managing and monitoring continuous improvement in detection of compliance violations
US10248960B2 (en) * 2010-11-16 2019-04-02 Disney Enterprises, Inc. Data mining to determine online user responses to broadcast messages
US9037656B2 (en) 2010-12-20 2015-05-19 Google Technology Holdings LLC Method and system for facilitating interaction with multiple content provider websites
JP2013074498A (en) * 2011-09-28 2013-04-22 Sanyo Electric Co Ltd Television receiver, portable information terminal, and information exchange system including them
US8706648B2 (en) 2011-10-03 2014-04-22 International Business Machines Corporation Assessing social risk due to exposure from linked contacts
US8712921B2 (en) 2011-10-03 2014-04-29 International Business Machines Corporation Receiving security risk feedback from linked contacts due to a user's system actions and behaviors
US9046993B2 (en) * 2012-04-10 2015-06-02 Torrential Data Solutions, Inc. System and method for content management
GB2506381B (en) * 2012-09-27 2016-06-08 F Secure Corp Automated detection of harmful content
WO2014081430A2 (en) 2012-11-21 2014-05-30 Empire Technology Development Conditional disclosure of a response to content posted in a social network
US11003711B2 (en) * 2013-01-04 2021-05-11 Dropbox, Inc. Accessing audio files from an online content management system
KR101985283B1 (en) * 2013-01-28 2019-06-03 샌더링 매니지먼트 리미티드 Dynamic promotional layout management and distribution rules
US9282076B2 (en) 2013-05-30 2016-03-08 International Business Machines Corporation Aligning content and social network audience using analytics and/or visualization
US20150067046A1 (en) * 2013-09-03 2015-03-05 International Business Machines Corporation Social networking information consumption gap resolution
US10013701B2 (en) * 2013-10-09 2018-07-03 Selligent, Inc. System and method for managing message campaign data
US10387972B2 (en) 2014-02-10 2019-08-20 International Business Machines Corporation Impact assessment for shared media submission
US10049138B1 (en) 2014-03-05 2018-08-14 Google Llc Reputation and engagement system for online community management
US9576030B1 (en) 2014-05-07 2017-02-21 Consumerinfo.Com, Inc. Keeping up with the joneses
AU2016202659A1 (en) * 2015-04-28 2016-11-17 Red Marker Pty Ltd Device, process and system for risk mitigation
US9824313B2 (en) * 2015-05-01 2017-11-21 Flipboard, Inc. Filtering content in an online system based on text and image signals extracted from the content
US10229219B2 (en) * 2015-05-01 2019-03-12 Facebook, Inc. Systems and methods for demotion of content items in a feed
US9967266B2 (en) 2015-11-09 2018-05-08 Flipboard, Inc. Pre-filtering digital content in a digital content system
US10679264B1 (en) 2015-11-18 2020-06-09 Dev Anand Shah Review data entry, scoring, and sharing
US10320938B2 (en) 2016-02-02 2019-06-11 International Business Machines Corporation Monitoring and maintaining social group cohesiveness
US9949103B2 (en) 2016-08-09 2018-04-17 International Business Machines Corporation Notification of potentially problematic textual messages
US10313348B2 (en) * 2016-09-19 2019-06-04 Fortinet, Inc. Document classification by a hybrid classifier
US10440134B1 (en) * 2016-12-07 2019-10-08 Microsoft Technology Licensing, Llc Systems and methods for compliance enforcement in internet-based social networks
US10395693B2 (en) 2017-04-10 2019-08-27 International Business Machines Corporation Look-ahead for video segments
US10104403B1 (en) 2017-04-17 2018-10-16 International Business Machines Corporation Snippet augmentation for videos
US20180349796A1 (en) * 2017-06-02 2018-12-06 Facebook, Inc. Classification and quarantine of data through machine learning
US10860858B2 (en) * 2018-06-15 2020-12-08 Adobe Inc. Utilizing a trained multi-modal combination model for content and text-based evaluation and distribution of digital video content to client devices
US20200110895A1 (en) * 2018-10-03 2020-04-09 International Business Machines Corporation Social post management based on security considerations
CN113723300A (en) * 2021-08-31 2021-11-30 平安国际智慧城市科技股份有限公司 Artificial intelligence-based fire monitoring method and device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915284B2 (en) 2002-04-18 2005-07-05 Hewlett-Packard Development Company, Lp. System and method for automated message response, within a system for harvesting community knowledge

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5615284A (en) * 1993-11-29 1997-03-25 International Business Machines Corporation Stylus-input recognition correction manager computer program product
US6859807B1 (en) * 1999-05-11 2005-02-22 Maquis Techtrix, Llc Online content tabulating system and method
US6652287B1 (en) * 2000-12-21 2003-11-25 Unext.Com Administrator and instructor course management application for an online education course
US7082458B1 (en) * 2001-08-01 2006-07-25 Luigi Guadagno Dialog facilitation system for generating contextual order-preserving dialog postings and posting summaries from electronic messages
US20080219422A1 (en) * 2002-04-29 2008-09-11 Evercom Systems, Inc. System and method for independently authorizing auxiliary communication services
US20040015376A1 (en) * 2002-07-03 2004-01-22 Conoco Inc. Method and system to value projects taking into account political risks
US20040177271A1 (en) * 2003-02-25 2004-09-09 Susquehanna International Group, Llp Electronic message filter
US7822631B1 (en) * 2003-08-22 2010-10-26 Amazon Technologies, Inc. Assessing content based on assessed trust in users
US20060042483A1 (en) * 2004-09-02 2006-03-02 Work James D Method and system for reputation evaluation of online users in a social networking scheme
US20060229896A1 (en) * 2005-04-11 2006-10-12 Howard Rosen Match-based employment system and method
US20070118460A1 (en) * 2005-11-18 2007-05-24 Bauerschmidt Paul A Detection of intra-firm matching and response thereto
US20070118454A1 (en) * 2005-11-18 2007-05-24 Bauerschmidt Paul A Cross-currency implied spreads
US20070129123A1 (en) * 2005-12-02 2007-06-07 Robert Eryou System and method for game creation
US7761436B2 (en) * 2006-01-03 2010-07-20 Yahoo! Inc. Apparatus and method for controlling content access based on shared annotations for annotated users in a folksonomy scheme
US7711684B2 (en) * 2006-12-28 2010-05-04 Ebay Inc. Collaborative content evaluation
US7523138B2 (en) * 2007-01-11 2009-04-21 International Business Machines Corporation Content monitoring in a high volume on-line community application

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090241198A1 (en) * 2008-03-18 2009-09-24 Fujitsu Shikoku Systems Limited Inappropriate content determination apparatus, content provision system, inappropriate content determination method, and computer program
US8752170B1 (en) * 2008-08-20 2014-06-10 Symantec Corporation Verification and validation of externally maintained profile attributes
US8214446B1 (en) * 2009-06-04 2012-07-03 Imdb.Com, Inc. Segmenting access to electronic message boards
US8312097B1 (en) 2009-06-04 2012-11-13 Imdb.Com, Inc. Segmenting access to electronic message boards
US20110016107A1 (en) * 2009-07-19 2011-01-20 Harumi Kuno Execution of query plans for database query within environments of databases
US20110225239A1 (en) * 2010-03-12 2011-09-15 Associated Content, Inc. Generation of content creation requests for a content distribution system
US8965965B2 (en) * 2010-03-12 2015-02-24 Yahoo! Inc. Generation of content creation requests for a content distribution system
US20110302144A1 (en) * 2010-06-03 2011-12-08 International Business Machines Corporation Dynamic Real-Time Reports Based on Social Networks
US8661009B2 (en) * 2010-06-03 2014-02-25 International Business Machines Corporation Dynamic real-time reports based on social networks
US11086942B2 (en) 2010-11-23 2021-08-10 Microsoft Technology Licensing, Llc Segmentation of professional network update data
US9460299B2 (en) 2010-12-09 2016-10-04 Location Labs, Inc. System and method for monitoring and reporting peer communications
US9571590B2 (en) * 2010-12-09 2017-02-14 Location Labs, Inc. System and method for improved detection and monitoring of online accounts
US20120180135A1 (en) * 2010-12-09 2012-07-12 Wavemarket, Inc. System and method for improved detection and monitoring of online accounts
US10970680B1 (en) * 2010-12-30 2021-04-06 United Services Automobile Association (Usaa) Systems and methods for monitored social media participation
US9721229B1 (en) * 2010-12-30 2017-08-01 United Services Automobile Association (Usaa) Systems and methods for monitored social media participation
US8949239B2 (en) * 2011-01-20 2015-02-03 Linkedin Corporation Methods and systems for utilizing activity data with clustered events
US11290412B2 (en) 2011-01-20 2022-03-29 Microsoft Technology Licensing, Llc Techniques for ascribing social attributes to content
US10311365B2 (en) 2011-01-20 2019-06-04 Microsoft Technology Licensing, Llc Methods and systems for recommending a context based on content interaction
US9247015B2 (en) 2011-01-20 2016-01-26 Linkedin Corporation Methods and systems for recommending a context based on content interaction
US9805127B2 (en) 2011-01-20 2017-10-31 Linkedin Corporation Methods and systems for utilizing activity data with clustered events
US8825777B2 (en) 2011-10-05 2014-09-02 Blackberry Limited Selective delivery of social network messages within a social network
US8972511B2 (en) * 2012-06-18 2015-03-03 OpenQ, Inc. Methods and apparatus for analyzing social media for enterprise compliance issues
US9554190B2 (en) 2012-12-20 2017-01-24 Location Labs, Inc. System and method for controlling communication device use
US10993187B2 (en) 2012-12-20 2021-04-27 Location Labs, Inc. System and method for controlling communication device use
US10412681B2 (en) 2012-12-20 2019-09-10 Location Labs, Inc. System and method for controlling communication device use
US20140325662A1 (en) * 2013-03-15 2014-10-30 ZeroFOX Inc Protecting against suspect social entities
US9674212B2 (en) 2013-03-15 2017-06-06 Zerofox, Inc. Social network data removal
US9674214B2 (en) 2013-03-15 2017-06-06 Zerofox, Inc. Social network profile data removal
US9027134B2 (en) 2013-03-15 2015-05-05 Zerofox, Inc. Social threat scoring
US9055097B1 (en) * 2013-03-15 2015-06-09 Zerofox, Inc. Social network scanning
US9191411B2 (en) * 2013-03-15 2015-11-17 Zerofox, Inc. Protecting against suspect social entities
US9282087B1 (en) * 2013-07-15 2016-03-08 Google Inc. System and methods for reviewing user generated content and providing authorization
US10853572B2 (en) 2013-07-30 2020-12-01 Oracle International Corporation System and method for detecting the occureances of irrelevant and/or low-score strings in community based or user generated content
US10447838B2 (en) 2014-04-03 2019-10-15 Location Labs, Inc. Telephone fraud management system and method
US9544325B2 (en) 2014-12-11 2017-01-10 Zerofox, Inc. Social network security monitoring
US10491623B2 (en) 2014-12-11 2019-11-26 Zerofox, Inc. Social network security monitoring
US20160350675A1 (en) * 2015-06-01 2016-12-01 Facebook, Inc. Systems and methods to identify objectionable content
US10999130B2 (en) 2015-07-10 2021-05-04 Zerofox, Inc. Identification of vulnerability to social phishing
US10516567B2 (en) 2015-07-10 2019-12-24 Zerofox, Inc. Identification of vulnerability to social phishing
US11256812B2 (en) 2017-01-31 2022-02-22 Zerofox, Inc. End user social network protection portal
US11394722B2 (en) 2017-04-04 2022-07-19 Zerofox, Inc. Social media rule engine
US11108885B2 (en) 2017-07-27 2021-08-31 Global Tel*Link Corporation Systems and methods for providing a visual content gallery within a controlled environment
US11115716B2 (en) * 2017-07-27 2021-09-07 Global Tel*Link Corporation System and method for audio visual content creation and publishing within a controlled environment
US11750723B2 (en) 2017-07-27 2023-09-05 Global Tel*Link Corporation Systems and methods for providing a visual content gallery within a controlled environment
US10868824B2 (en) 2017-07-31 2020-12-15 Zerofox, Inc. Organizational social threat reporting
US11165801B2 (en) 2017-08-15 2021-11-02 Zerofox, Inc. Social threat correlation
US20190068632A1 (en) * 2017-08-22 2019-02-28 ZeroFOX, Inc Malicious social media account identification
US11418527B2 (en) * 2017-08-22 2022-08-16 ZeroFOX, Inc Malicious social media account identification
US11403400B2 (en) * 2017-08-31 2022-08-02 Zerofox, Inc. Troll account detection
US20190065748A1 (en) * 2017-08-31 2019-02-28 Zerofox, Inc. Troll account detection
US11134097B2 (en) 2017-10-23 2021-09-28 Zerofox, Inc. Automated social account removal
WO2019187920A1 (en) * 2018-03-27 2019-10-03 日本電信電話株式会社 Illegal content search device, illegal content search method, and program
JP2019174926A (en) * 2018-03-27 2019-10-10 日本電信電話株式会社 Illegal content search device, and illegal content search method and program
WO2019187919A1 (en) * 2018-03-27 2019-10-03 日本電信電話株式会社 Illegal content search device, illegal content search method, and program
US11947635B2 (en) 2018-03-27 2024-04-02 Nippon Telegraph And Telephone Corporation Illegal content search device, illegal content search method, and program

Also Published As

Publication number Publication date
US20080172412A1 (en) 2008-07-17
US7523138B2 (en) 2009-04-21

Similar Documents

Publication Publication Date Title
US7523138B2 (en) Content monitoring in a high volume on-line community application
Trifiro et al. Social media usage patterns: Research note regarding the lack of universal validated measures for active and passive use
McClellan et al. Using social media to monitor mental health discussions - evidence from Twitter
McComas Defining moments in risk communication research: 1996–2005
US8972894B2 (en) Targeting questions to users of a social networking system
US9576045B2 (en) Tagging questions from users on a social networking system
CN108874832B (en) Target comment determination method and device
US20160055541A1 (en) Personalized recommendation system and methods using automatic identification of user preferences
US8359225B1 (en) Trust-based video content evaluation
US8122371B1 (en) Criteria-based structured ratings
Ahmed et al. Measuring the effect of public health campaigns on Twitter: the case of World Autism Awareness Day
US20170351961A1 (en) Information appropriateness assessment tool
CN110457566B (en) Information screening method and device, electronic equipment and storage medium
Maier et al. Communicating scientific evidence: Scientists’, journalists’ and audiences’ expectations and evaluations regarding the representation of scientific uncertainty
US20140122504A1 (en) Systems and Methods for Collection and Automatic Analysis of Opinions on Various Types of Media
Lin Communicating haze crisis online: Comparing traditional media news and new media perspectives in Singapore
Qian et al. Fighting cheapfakes: using a digital media literacy intervention to motivate reverse search of out-of-context visual misinformation
CN116206318A (en) Hotel evaluation feedback method, system, equipment and storage medium
CN114065051A (en) Private domain platform video recommendation method and device, electronic equipment and medium
CN113362095A (en) Information delivery method and device
Wilson Youth Justice Interventions-Findings from the Juvenile Cohort Study (JCS)
US20170180290A1 (en) Selective content dissemination based on social media content analysis
Goodwin et al. Text mining and the examination of language used to report child maltreatment: How language impacts child welfare intake reports
JP2021149682A (en) Learning device, learning method, and learning program
JP2021149681A (en) Determination device, determination method, and determination program

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION