US20100191689A1 - Video content analysis for automatic demographics recognition of users and videos - Google Patents


Info

Publication number
US20100191689A1
Authority
US
United States
Prior art keywords
video
videos
demographic
feature vectors
prediction model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/392,987
Inventor
Corinna Cortes
Sanjiv Kumar
Ameesh Makadia
Gideon Mann
Jay Yagnik
Ming Zhao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to US12/392,987 priority Critical patent/US20100191689A1/en
Assigned to GOOGLE INC. reassignment GOOGLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CORTES, CORINNA, KUMAR, SANJIV, MAKADIA, AMEESH, MANN, GIDEON, YAGNIK, JAY, ZHAO, MING
Priority to EP09839466.1A priority patent/EP2382782A4/en
Priority to EP18154276.2A priority patent/EP3367676A1/en
Priority to PCT/US2009/068108 priority patent/WO2010087909A1/en
Publication of US20100191689A1 publication Critical patent/US20100191689A1/en
Priority to US13/488,126 priority patent/US8301498B1/en
Priority to US13/632,591 priority patent/US20160014440A1/en
Assigned to GOOGLE LLC reassignment GOOGLE LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GOOGLE INC.

Classifications

    • H04N21/2407 Monitoring of transmitted content, e.g. distribution time, number of downloads
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • G06F16/735 Querying video data; filtering based on additional data, e.g. user or group profiles
    • G06F16/78 Retrieval of video data characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval using metadata automatically derived from the content
    • G06F16/7867 Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G06F16/787 Retrieval using geographical or spatial information, e.g. location
    • G06V20/40 Scenes; scene-specific elements in video content
    • H04N21/25883 Management of end-user data being end-user demographical data, e.g. age, family status or address
    • H04N21/4668 Learning process for intelligent management, e.g. learning user preferences for recommending content such as movies
    • H04N21/4826 End-user interface for program selection using recommendation lists, e.g. of programs or channels sorted according to their score

Definitions

  • the present invention generally relates to the field of digital video, and more specifically, to methods of correlating demographic data with characteristics of video content.
  • Video hosting sites such as YouTube or Google Video currently have millions of users and tens of millions of videos. Users may sometimes have difficulty in determining which videos would be of interest to them, and may be daunted by the sheer volume of videos available for viewing. Thus, the ability to suggest which videos would be of interest to a given user is highly valuable.
  • conventional systems typically rely merely on external metadata associated with a video, such as keywords or textual video descriptions, to predict the demographics that would be interested in the video. For example, conventional systems might recommend videos having keywords matching those specified in a viewer profile as being of interest to that viewer. However, if the video is new and has not yet been viewed and rated, and if the associated title is “spam” that misrepresents the true content of the video, then the conventional approach produces spurious predictions. Thus, one shortcoming of conventional approaches is that they rely on external metadata that may be false when assessing the pertinence of a given video to a particular viewer, rather than examining the actual video content itself.
  • a video demographics analysis system creates demographic prediction models that predict the demographic characteristics of viewers of a video, based on quantitative video content data extracted from the videos.
  • the system selects a training set of videos to use to correlate viewer demographic attributes—such as age and gender—and video content data.
  • the video demographics analysis system determines which viewers have viewed videos in the training set, and extracts demographic data from the viewer profiles of these viewers.
  • the demographic data can include any information describing demographic attributes of the viewers, including but not limited to age, gender, occupation, household income, location, interests, and the like.
  • the system creates a set of demographic distributions for each video in the training set.
  • the video demographics analysis system also extracts video data from videos in the training set, the video data comprising quantitative information on visual and/or audio features of the videos. Then, a machine learning process is applied to correlate the viewer demographics for the training set videos with the video data of those videos, thereby creating a prediction model for the training set videos.
  • the system uses a prediction model produced by the machine learning process to predict, for a video about which there is little or no prior information about the demographics of viewers, a demographic distribution specifying probabilities of the video appealing to viewers in various different demographic categories, such as viewers of different ages, genders, and so forth.
  • the ability to obtain predicted demographic distributions for a video has a number of useful applications, such as determining a group to which to recommend a new video, estimating the demographics of a viewer lacking a reliable user profile, and recommending videos to a viewer based on the viewer's demographic attributes.
  • a computer-implemented method of generating a prediction model for videos receives a plurality of videos from a video repository, each video having an associated list of viewers. For each video, the method creates a demographic distribution for a specified demographic based at least in part on user profile data associated with viewers of the video, and generates feature vectors based on the content of the video. The method further generates a prediction model that correlates the feature vectors for the videos and the demographic distributions, and stores the generated prediction model.
  • a computer-implemented method for determining demographics of a video stores a prediction model that correlates viewer demographic attributes with feature vectors extracted from videos viewed by viewers, wherein the viewer demographic attributes include age and gender. The method further generates a set of feature vectors from the content of the video, and uses the trained prediction model to determine likely demographic attributes of video viewers given those feature vectors.
  • FIG. 1 illustrates the architecture of a video demographics analysis system, according to one embodiment.
  • FIG. 2 illustrates the components of a video analysis server, according to one embodiment.
  • FIG. 3 is a flowchart illustrating a high-level view of a process of performing the correlation, according to one embodiment.
  • FIG. 1 illustrates the architecture of a system for performing video demographics analysis of viewer profile information and digital video content and correlating demographic and video feature data, according to one embodiment.
  • a video hosting website 100 comprises a front end server 140, a video serving module 110, an ingest module 115, a video analysis server 130, a video search server 145, a video access log 160, a user database 150, and a video database 155.
  • Many conventional features, such as firewalls, load balancers, application servers, failover servers, site management tools and so forth are not shown so as not to obscure the features of the system.
  • the video hosting website 100 represents any system that allows users (equivalently “viewers”) to access video content via searching and/or browsing interfaces.
  • the sources of videos can be from user uploads of videos, searches or crawls of other websites or databases of videos, or the like, or any combination thereof.
  • a video hosting site 100 can be configured to allow for user uploads of content; in another embodiment a video hosting website 100 can be configured to only obtain videos from other sources by crawling such sources or searching such sources in real time.
  • a suitable website 100 for implementation of the system is the YOUTUBE™ website, found at www.youtube.com; other video hosting sites are known as well, and can be adapted to operate according to the teaching disclosed herein.
  • the term “website” represents any computer system adapted to serve content using any internetworking protocols, and is not intended to be limited to content uploaded or downloaded via the Internet or the HTTP protocol.
  • functions described in one embodiment as being performed on the server side can also be performed on the client side in other embodiments if appropriate.
  • functionality attributed to a particular component can be performed by different or multiple components operating together.
  • Each of the various servers and modules is implemented as a server program executing on a server-class computer comprising a CPU, memory, network interface, peripheral interfaces, and other well-known components.
  • the computers themselves preferably run an open-source operating system such as LINUX, have generally high-performance CPUs, 1 GB or more of memory, and 100 GB or more of disk storage.
  • other types of computers can be used, and it is expected that as more powerful computers are developed in the future, they can be configured in accordance with the teachings here.
  • the functionality implemented by any of the elements can be provided from computer program products that are stored in tangible computer accessible storage mediums (e.g., RAM, hard disk, or optical/magnetic media).
  • a client 170 executes a browser 171 and can connect to the front end server 140 via a network 180, which is typically the Internet, but can also be any network, including but not limited to any combination of a LAN, a MAN, a WAN, a mobile network, a wired or wireless network, a private network, or a virtual private network. While only a single client 170 and browser 171 are shown, it is understood that very large numbers (e.g., millions) of clients are supported and can be in communication with the video hosting website 100 at any time.
  • the client 170 may include a variety of different computing devices. Examples of client devices 170 are personal computers, digital assistants, personal digital assistants, cellular phones, mobile phones, smart phones or laptop computers. As will be obvious to one of ordinary skill in the art, the present invention is not limited to the devices listed above.
  • the browser 171 can include a video player (e.g., Flash™ from Adobe Systems, Inc.), or any other player adapted for the video file formats used in the video hosting website 100.
  • videos can be accessed by a standalone program separate from the browser 171 .
  • a user can access a video from the video hosting website 100 by browsing a catalog of videos, conducting searches on keywords, reviewing play lists from other users or the system administrator (e.g., collections of videos forming channels), or viewing videos associated with particular user groups (e.g., communities).
  • Users of clients 170 can also search for videos based on keywords, tags or other metadata. These requests are received as queries by the front end server 140 and provided to the video search server 145 , which is responsible for searching the video database 155 for videos that satisfy the user queries.
  • the video search server 145 supports searching on any fielded data for a video, including its title, description, tags, author, category and so forth.
  • the uploaded content can include, for example, video, audio or a combination of video and audio.
  • the uploaded content is processed by an ingest module 115 , which processes the video for storage in the video database 155 . This processing can include format conversion (transcoding), compression, metadata tagging, and other data processing.
  • An uploaded content file is associated with the uploading user, and so the user's account record is updated in the user database 150 as needed.
  • the uploaded content will be referred to as “videos,” “video files,” or “video items,” but no limitation on the types of content that can be uploaded is intended by this terminology.
  • the operations described herein for identifying related items can be applied to any type of content, not only videos; other suitable type of content items include audio files (e.g. music, podcasts, audio books, and the like), documents, multimedia presentations, and so forth.
  • audio files e.g. music, podcasts, audio books, and the like
  • documents e.g. music, podcasts, audio books, and the like
  • related items need not be of the same type.
  • the related items may include one or more audio files, documents, and so forth in addition to other videos.
  • the video database 155 is used to store the ingested videos.
  • the video database 155 stores video content and associated metadata provided by their respective content owners.
  • Each uploaded video is assigned a video identifier (id) when it is processed by the ingest module 115 .
  • the video files have metadata associated with each file such as a video ID, artist, video title, label, genre, time length, and optionally geo-restrictions that can be used for data collection or content blocking on a geographic basis.
  • the video files can be encoded as H.263, H.264, WMV, VC-1 or the like; audio can be encoded as MP3, AAC, or the like.
  • the files can be stored in any suitable container format, such as Flash, AVI, MP4, MPEG-2, RealMedia, DivX and the like.
  • the video hosting website 100 further comprises viewer profile repository 105 .
  • the viewer profile repository 105 comprises a plurality of profiles of users/viewers of digital videos, such as the users of video hosting systems such as YouTube™ and Google Video™.
  • a viewer profile stores demographic information on various attributes of an associated viewer, such as the viewer's gender, age, location, income, occupation, level of education, stated preferences, and the like. The information may be provided by viewers themselves, when they create a profile, and can be further supplemented with information extracted automatically from other sources. For example, one profile entry could specify that the viewer was a 24-year-old male, with a college education, living in Salt Lake City, and with specified interests in archaeology and tennis. The exact demographic categories stored in the viewer profile can vary in different embodiments, depending on how the profiles are defined by the system administrator.
  • the video hosting website 100 further comprises a video access log 160 , which stores information describing each access to any video by any viewer.
  • each video effectively has an associated list of viewers.
  • Each individual viewer is assigned an ID, for example, based on his or her IP address to differentiate the individual viewers.
  • this viewer ID is an anonymized viewer ID that is assigned to each individual viewer to keep viewer identities private, such as an opaque identifier such as a unique random number or a hash value.
  • the system then can access each viewer's demographic information without obtaining his or her identity.
  • the actual identity of the viewers may be known or determinable.
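The anonymized viewer ID described above can be sketched as a salted one-way hash of the IP address. The function name, salt handling, and digest truncation below are illustrative assumptions; the patent only specifies an opaque identifier such as a unique random number or a hash value.

```python
import hashlib

def anonymize_viewer_id(ip_address: str, salt: str = "site-secret") -> str:
    """Derive an opaque viewer ID from an IP address via a salted one-way
    hash, so per-viewer demographics can be joined without exposing
    identity. (Illustrative sketch, not the patent's implementation.)"""
    digest = hashlib.sha256((salt + ip_address).encode("utf-8")).hexdigest()
    return digest[:16]  # a truncated hex digest serves as the opaque ID
```

The same IP always maps to the same opaque ID, so repeat views can be grouped per viewer, while the ID itself reveals nothing about who the viewer is.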
  • the video access log 160 tracks the viewer's interactions with videos.
  • each entry in the video access log 160 identifies a video being accessed, a time of access, an IP address of the viewer, a viewer ID if available, cookies, the viewer's search query that led to the current access, and data identifying the type of interaction with the video.
  • Interaction types can include any viewer interactions in the viewer interface of the website, such as playing, pausing, rewinding and forwarding a video.
  • the various viewer interaction types are considered viewer events that are associated with a given video. For example, one entry might store that a viewer at a given IP address started viewing a particular video at time 0:00:00 and stopped viewing at time 0:34:00.
  • the video hosting website 100 further comprises a video analysis server 130 , which correlates demographic information about videos with the content of the videos themselves. This involves generating demographic distributions from demographic data, analyzing video content, and generating a prediction model relating the demographic distributions and the video content.
  • the video analysis server 130 also can predict a demographic distribution for a video and serve demographic queries (e.g., provide information about demographics across videos).
  • the analysis module 130 comprises a demographics database 210 and a feature vector repository 215 , a demographics module 250 , a video content analysis module 255 , and a correlation module 260 , and additionally comprises a prediction model 220 .
  • the demographics database 210 stores data regarding distributions of demographic data with respect to videos. For example, certain videos can have an associated demographic distribution for various demographic attributes of interest, such as age and gender. In some embodiments, distributions are created for combined attributes, such as gender-age, e.g. recording for a given video that 4% of viewers are females aged 13 to 17. For instance, a given video may have an age-gender distribution stating that 5.6% of its viewers are males of ages 13 to 17, 4% are females of ages 13 to 17, 12.3% are males of ages 18 to 21, and the like.
  • the values in the example distribution represent percentages of the viewers having the corresponding demographic characteristics, but they could also be normalized with respect to the general population, e.g. a value of 1.3 for males aged 13-17 indicating that 30% more of the viewings were by males aged 13-17 than their respective share of the population.
  • any demographic attribute stored in a viewer profile may have corresponding distributions.
  • a given demographic attribute may be represented at various different levels of granularity, such as 1-year, 3-year, or 5-year bins for ages, for example.
  • a given video can have a gender distribution in which 54% of its viewers are female, 38% of its viewers are male, and 8% are unknown, where the unknown values represent viewers lacking profiles or viewers with profiles lacking a value for the gender attribute.
  • profiles lacking a value for an attribute of interest could be excluded during training.
  • the distributions are represented as vectors, e.g. an array of integers <0, 6, 11, ...> where each component represents a previously assigned age bin, representing that 0% of viewers are from ages 13 to 17, 6% are 18 to 21, and 11% are 22 to 25.
  • Other storage implementations would be equally possible to one of skill in the art.
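A combined gender-age distribution of the kind described above might be computed as follows; the bin boundaries and the (age, gender) tuple format for viewer data are illustrative assumptions, not details from the patent.

```python
from collections import Counter

AGE_BINS = [(13, 17), (18, 21), (22, 25), (26, 120)]  # illustrative bins

def age_gender_distribution(profiles):
    """profiles: iterable of (age, gender) tuples for one video's viewers.
    Returns a dict mapping (gender, age_bin) to the percentage of viewers
    falling in that combined demographic category."""
    counts, total = Counter(), 0
    for age, gender in profiles:
        for lo, hi in AGE_BINS:
            if lo <= age <= hi:
                counts[(gender, (lo, hi))] += 1
                total += 1
                break
    return {key: 100.0 * n / total for key, n in counts.items()}
```

The resulting percentages can then be flattened into a fixed-order vector, one component per (gender, age-bin) pair, matching the array representation the text describes.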
  • the demographics module 250 takes as input the data in the viewer profile repository 105 and creates the data on distributions stored in the demographic database 210 .
  • the feature extraction module 255 takes as input the video data in the video repository 110 and the video access log data 160 and extracts feature vectors representing characteristics of the videos, such as visual and/or audio characteristics, and stores them in the feature vector repository 215 .
  • the correlation module 260 performs operations such as regression analysis on the data in the demographic database 210 and the feature vector repository 215, generating a prediction model 220 that can be used, for example, to predict the particular viewer demographics to which a video represented by given feature vectors would be of interest. The operations of the modules 250-260 are described in more detail below with respect to FIG. 3.
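As one concrete instance of such a regression, a closed-form ridge regression from feature vectors to demographic distribution vectors could look like the sketch below. The patent does not specify which learning algorithm the correlation module uses, so this particular choice (and the clip-and-renormalize step) is purely illustrative.

```python
import numpy as np

def train_prediction_model(X, Y, lam=1e-3):
    """Fit ridge regression from video feature vectors X (n x d) to
    demographic distribution vectors Y (n x k); returns weights W (d x k)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict_demographics(W, x):
    """Predict a demographic distribution for one feature vector x,
    clipping negatives and renormalizing so the output sums to 1."""
    y = np.clip(x @ W, 0.0, None)
    return y / y.sum() if y.sum() > 0 else y
```

Given the trained weights, a new video's feature vector yields a vector of probabilities over demographic categories, which is exactly the query the prediction model 220 serves.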
  • although the various data 210-220 and the modules 250-260 are depicted as all being located on a single server 130, they could be partitioned across multiple machines, databases or other storage units, and the like.
  • the data 210 - 220 could be stored in a variety of manners as known to one of skill in the art. For example, they could be implemented as tables of a relational database management system, as individual binary or text files, etc.
  • FIG. 3 is a flowchart illustrating a high-level view of a process for performing the correlation of the correlation module 260 , according to one embodiment.
  • a training set of videos is selected 305 from the video database 155 .
  • the training set is a subset of the videos of the video database 155 , given that analyzing only a representative training set of videos is more computationally efficient than analyzing the entire set, though in other embodiments it is also possible to analyze all videos.
  • the training set can be selected based on various filtering criteria. These filtering criteria include a number of views, number of viewers, number of unique views, date of views, date of upload and so forth. The filtering criteria can be used in any combination.
  • the threshold values used by these criteria (denoted, e.g., K, M, N, and T) are design decisions selected by the system administrator.
  • the most recently viewed videos, or the videos viewed over a certain time period can be determined by examining the start and stop dates and times of the video access log 160 , for example.
  • a video can be deemed to be “viewed” if it is watched for a minimum length of time, or a minimum percentage of its total time.
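The filtering criteria above can be sketched as two small predicates; every threshold value here (30 seconds, 25%, 100 viewers) is an illustrative assumption, since the patent leaves the actual thresholds to the system administrator.

```python
def counted_as_view(watch_seconds, duration_seconds,
                    min_seconds=30, min_fraction=0.25):
    """A playback counts as a 'view' if it lasts a minimum absolute time
    or a minimum fraction of the video's length (thresholds illustrative)."""
    return (watch_seconds >= min_seconds
            or watch_seconds >= min_fraction * duration_seconds)

def select_training_set(videos, min_unique_viewers=100):
    """videos: iterable of (video_id, unique_viewer_count) pairs.
    Keep only videos with enough unique viewers to yield stable
    demographic distributions (the cutoff is illustrative)."""
    return [vid for vid, n in videos if n >= min_unique_viewers]
```

In practice the viewer counts would be derived from the start/stop times in the video access log 160, with each playback first passing the `counted_as_view` test.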
  • the process of correlating video data (e.g. feature vectors representing the images of the video) with demographic data performs two independent operations, which may be performed in parallel: creation of demographic database 310 and extraction of video data 320. Based on the results of these operations, correlation of the demographic and video data can be performed. These processes are repeated for each video in the training set.
  • the demographics are first extracted 311 from the viewer profiles associated with a given video. This entails identifying the viewers specified in the video access log 160 as having watched the given video within the relevant time period or number of viewings, retrieving their associated viewer profiles from the viewer profile repository 105, and retrieving the demographic attributes of interest from the identified viewer profiles. Viewer profiles lacking the demographic attributes of interest may be excluded from distribution creation, or they may be counted as “unspecified” entries with respect to those attributes. For example, if age and gender are the attributes of interest, then only viewer profiles specifying these attributes are examined. Attribute values may also be filtered to discard those that appear to be inaccurate. For example, age values below or above a certain threshold, e.g. under the age of 3 or over the age of 110, could be discarded on the assumption that it is unlikely that a person of that age would genuinely be a viewer.
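The attribute extraction and plausibility filtering just described can be sketched as follows; the dict-based profile format is an assumption, while the age cutoffs of 3 and 110 follow the example in the text.

```python
def extract_ages(profiles, min_age=3, max_age=110):
    """Pull the age attribute from viewer profiles, skipping profiles that
    lack it and discarding implausible values (the 3/110 cutoffs follow the
    text's example; the dict-based profile format is an assumption)."""
    ages = []
    for profile in profiles:
        age = profile.get("age")
        if age is not None and min_age <= age <= max_age:
            ages.append(age)
    return ages
```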
  • Demographic distributions are then created 312 based on the extracted attributes.
  • data representing continuous values such as age or income can be segregated into bins.
  • the range for each bin for a given attribute can be varied as desired for the degree of granularity of interest.
  • the distribution data may be stored in different types of data structures, such as an array, with the value of an array element being derivable from the array index.
  • Values representing discrete, unrelated categories, such as location or level of education, can be stored in an arbitrary order, with one value per element.
  • Each attribute bin stores a count for the number of values in the bin from the viewer profiles.
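The binning scheme above, with the bin's value range derivable from the array index, might look like this; the bin width and age range are illustrative parameters.

```python
def bin_ages(ages, bin_width=5, min_age=13, max_age=62):
    """Count ages into fixed-width bins stored in a plain array; the age
    range of bin i is derivable from the index alone (it covers
    min_age + i*bin_width through min_age + (i+1)*bin_width - 1).
    Bin width and range are illustrative, per-attribute design choices."""
    n_bins = (max_age - min_age + bin_width) // bin_width
    counts = [0] * n_bins
    for age in ages:
        if min_age <= age <= max_age:
            counts[(age - min_age) // bin_width] += 1
    return counts
```

Varying `bin_width` between 1, 3, or 5 years changes only the granularity of the distribution, as the text notes.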
  • these distributions include age distribution, gender distribution, income distribution, education distribution, location distribution, and the like. Any of these can be combined into multi-attribute distributions, e.g., age-gender, or age-income, or gender-location.
  • the video content analysis module 255 extracts 320 video data from each video in the training set of videos, representing the data as a set of “feature vectors.”
  • a feature vector quantitatively describes a visual (or auditory) aspect of the video. Different embodiments analyze either or both of these categories of aspects.
  • feature vectors are associated with frames of the video.
  • the feature vectors are associated not merely with a certain frame, but with particular visual objects within that frame.
  • the video content analysis module 255 when extracting data relating to visual aspects, the video content analysis module 255 performs 321 object segmentation on a video, resulting in a set of visually distinct objects for the video.
  • Object segmentation preferably identifies objects that would be considered foreground objects, rather than background objects. For example, for a video about life in the Antarctic, the objects picked out as part of the segmentation process could include regions corresponding to penguins, polar bears, boats, and the like, though the objects need not actually be identified as such by name.
  • a mean shift algorithm is used, which employs clustering within a single image frame of a video.
  • in segmentation based on the mean shift algorithm, an image is converted into tokens, e.g. by converting each pixel of the image into a corresponding value, such as a color value, gradient value, texture measurement value, etc.
  • windows are positioned uniformly around the data, and for each window the centroid—the mean location of the data values in the window—is computed, and each window re-centered around that point. This is repeated until the windows converge, i.e. a local center is found.
  • the data traversed by windows that converged to the same point are then clustered together, producing a set of separate image regions.
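A toy illustration of the window/centroid iteration on one-dimensional tokens (e.g. grayscale intensities) follows; a production system would operate on multi-dimensional color or texture tokens, and the bandwidth value here is an arbitrary assumption:

```python
import numpy as np

def mean_shift_1d(data, bandwidth=2.0, iters=50):
    """Toy mean shift: for each point, repeatedly re-center a window on the
    centroid (mean) of the data falling inside it until it converges; points
    whose windows converge to the same local center are clustered together."""
    modes = data.astype(float)
    for _ in range(iters):
        for i, m in enumerate(modes):
            window = data[np.abs(data - m) <= bandwidth]
            modes[i] = window.mean()  # re-center the window on the centroid
    # Group points whose windows converged to (nearly) the same mode.
    labels = np.zeros(len(data), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if abs(m - c) < 1e-3:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, centers

# Pixel intensities forming two clear clusters (illustrative tokens).
pixels = np.array([10.0, 11.0, 12.0, 90.0, 91.0, 92.0])
labels, centers = mean_shift_1d(pixels)
```

Here the six tokens collapse into two modes, corresponding to two separate image regions.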
  • the same or similar image regions typically exist across video frames, e.g. a region representing the same face at the same location across a number of frames, or at slightly offset locations.
  • one of the set of similar regions can be chosen as representative and the rest discarded, or the data associated with the images may be averaged.
  • the result of application of a segmentation algorithm to a video is a set of distinct objects, each occupying one of the regions found by the segmentation algorithm. Since different segmentation algorithms—or differently parameterized versions of the same algorithm—tend to produce non-identical results, in one embodiment multiple segmentation algorithms are used, and objects that are sufficiently common across all the segmentation algorithm results sets are retained as representing valid objects.
  • An object segmented by one algorithm could be considered the same as one segmented by another algorithm if it occupies substantially the same region of the image as the other segmented object, e.g. having N % of its pixels in common, where N can be, for example, 90 or more; a higher value of N gives greater assurance that the different algorithms identified the same object.
  • the object could be considered sufficiently common if it is the same as objects in the result sets of all the other segmentation algorithms, or a majority or a set number or percentage thereof.
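The cross-algorithm agreement test might look like the following sketch, where segmented objects are represented as sets of pixel coordinates; the overlap rule and voting threshold are illustrative assumptions, not the patented method:

```python
def same_object(mask_a, mask_b, n_percent=90):
    """Treat two segmented objects (sets of (row, col) pixel coordinates) as
    the same object if at least n_percent of the smaller object's pixels
    are in common between the two."""
    overlap = len(mask_a & mask_b)
    smaller = min(len(mask_a), len(mask_b))
    return smaller > 0 and 100.0 * overlap / smaller >= n_percent

def retained(obj, other_results, n_percent=90, min_fraction=0.5):
    """Retain an object as valid if a sufficient fraction of the other
    segmentation algorithms' result sets contain a matching object."""
    votes = sum(any(same_object(obj, o, n_percent) for o in result)
                for result in other_results)
    return votes / len(other_results) >= min_fraction

# Two hypothetical segmentations of the same region, offset by one pixel row:
obj1 = {(r, c) for r in range(10) for c in range(10)}     # 100 pixels
obj2 = {(r, c) for r in range(1, 11) for c in range(10)}  # shifted down by 1
```

With N = 90 the two offset masks (90 shared pixels) count as the same object; raising N to 95 would reject the match.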
  • Characteristics are extracted 322 from content of the video.
  • the characteristics are represented as feature vectors, lists of data pertaining to various attributes, such as color (e.g. RGB, HSV, and LAB color spaces), texture (as represented by Gabor and Haar wavelets), edge direction, motion, optical flow, luminosity, transform data, and the like.
  • the feature vectors extracted for a given frame, or for an object within a frame, are then stored within the feature vector repository 215 in association with the video to which they correspond.
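As one hypothetical example of such a feature vector, a normalized per-channel color histogram for a frame region can be computed as follows; a real system would append texture (Gabor/Haar), edge-direction, and motion features in the same fashion:

```python
import numpy as np

def color_histogram(region, bins=4):
    """Build a simple color feature vector for an image region: a per-channel
    intensity histogram, flattened and normalized so that regions of
    different sizes remain comparable."""
    region = np.asarray(region, dtype=float)
    features = []
    for ch in range(region.shape[-1]):  # e.g. R, G, B channels
        hist, _ = np.histogram(region[..., ch], bins=bins, range=(0, 256))
        features.append(hist)
    vec = np.concatenate(features).astype(float)
    return vec / vec.sum()

# A hypothetical 2x2 region of RGB pixels.
region = [[[255, 0, 0], [250, 5, 0]],
          [[0, 255, 0], [0, 250, 10]]]
vec = color_histogram(region)  # 3 channels x 4 bins = 12-dimensional vector
```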
  • Some embodiments create feature vectors for audio features, instead of or in addition to video features.
  • audio samples can be taken periodically over a chosen interval.
  • the mel-frequency cepstrum coefficients (MFCCs) can be computed at 10 millisecond intervals over a duration of 30 seconds, starting after a suitable delay from the beginning of the video, e.g. 5 seconds.
  • the resulting MFCCs may then be averaged or aggregated across the 30 second sampling period, and are stored in the feature vector repository 215 .
  • Feature vectors can also be derived based on beat, pitch, or discrete wavelet outputs, or from speech recognition output or music/speaker identification systems.
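The sampling-and-averaging scheme above can be sketched as follows. For simplicity the per-frame feature is a single log-energy coefficient rather than true MFCCs (which would come from a signal-processing library); the sample rate is an assumption, while the delay, interval, and window durations follow the example in the text:

```python
import numpy as np

SAMPLE_RATE = 16000   # Hz, assumed audio sample rate
FRAME_MS = 10         # one feature frame per 10 millisecond interval
START_DELAY_S = 5     # skip the first 5 seconds of the video
DURATION_S = 30       # sample over a 30 second window

def frame_features(signal, feature_fn):
    """Slice the audio into 10 ms frames across the 30 s window, compute one
    feature vector per frame, and average them across the sampling period.
    Assumes the window length divides evenly into whole frames."""
    frame_len = SAMPLE_RATE * FRAME_MS // 1000
    start = SAMPLE_RATE * START_DELAY_S
    end = start + SAMPLE_RATE * DURATION_S
    frames = signal[start:end].reshape(-1, frame_len)  # (n_frames, samples)
    per_frame = np.array([feature_fn(f) for f in frames])
    return per_frame.mean(axis=0)  # aggregate across the 30 s period

def log_energy(frame):
    """Stand-in per-frame feature: log of the frame's energy."""
    return np.array([np.log(np.sum(frame ** 2) + 1e-12)])

signal = np.random.default_rng(0).standard_normal(SAMPLE_RATE * 40)
avg = frame_features(signal, log_energy)  # one averaged coefficient
```

Swapping `log_energy` for a real MFCC routine would yield the averaged cepstral coefficients stored in the feature vector repository 215.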
  • Some embodiments create feature vectors based on metadata associated with the video.
  • metadata can include, for example, video title, video description, date of video uploading, the user who uploaded, text of a video comment, a number of comments, a rating or the number of ratings, a number of views by users, user co-views of the video, user keywords or tags for the video, and the like.
  • the feature vector data, when extracted, are frequently not in an ideal state, comprising a large number of feature vectors, some of which are irrelevant and add no additional information.
  • the potentially large number and low quality of the feature vectors increases the computational cost and reduces the accuracy of later techniques that analyze the feature vectors.
  • the video content analysis module 255 therefore performs 323 dimensionality reduction.
  • Different embodiments may employ different algorithms for this purpose, including principal component analysis (PCA), linear discriminant analysis (LDA), multi-dimensional scaling (MDS), Isomap, locally linear embedding (LLE), and other similar algorithms known to those of skill in the art.
  • the result of application of a dimensionality reduction algorithm to a first set of feature vectors is a second, smaller set of vectors representative of the first set, which can replace their prior, non-reduced versions in the feature vector repository 215 .
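As one concrete instance of such dimensionality reduction, PCA via the singular value decomposition can be sketched as follows (the feature dimensionality and data are synthetic):

```python
import numpy as np

def pca_reduce(vectors, k):
    """Project feature vectors onto their top-k principal components,
    yielding a smaller representative set of dimensions that preserves
    most of the variance in the original vectors."""
    X = np.asarray(vectors, dtype=float)
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:k].T

# Hypothetical 5-dimensional feature vectors that really vary along one axis.
rng = np.random.default_rng(1)
base = rng.standard_normal((50, 1))
vectors = base @ np.array([[1.0, 2.0, 0.5, 0.0, 1.5]]) \
          + 0.01 * rng.standard_normal((50, 5))

reduced = pca_reduce(vectors, k=2)  # 5 dimensions -> 2
```

Here nearly all the variance lands in the first reduced dimension, which is the property that lets the reduced vectors replace their non-reduced versions.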
  • the correlation module 260 correlates 330 (i.e., forms some association between) the demographics and the video content as represented by the feature vectors, creating as output a prediction model 220 that represents all videos in the training set.
  • the correlation is performed using machine learning techniques: supervised algorithms such as support vector machines (SVMs), boosting, nearest neighbor, or decision trees; semi-supervised algorithms such as transductive learning; or unsupervised learning, such as clustering.
  • SVM or kernel logistic regression techniques may be employed.
  • the output is a predicted distribution for the demographic categories in question, and is stored as a prediction model 220 .
  • the distribution can be stored as a set of discrete values, e.g. a probability for each year in an age distribution, thus creating a discrete approximation of a continuous distribution.
  • coefficients of an equation generating a function representing the distribution can be stored.
  • a set of probabilities may be provided, one per value, for example.
  • the prediction model 220 will have a set of corresponding predicted distributions for various demographic attributes.
  • one prediction model storing data for the age demographic attribute could be as in the below table, where each of the three rows represents a set of feature vectors and their corresponding age distribution for ages 13-17, 18-21, etc. It is appreciated that such a table is merely for purposes of example, and that a typical implementation would have much additional data for more sets of feature vectors, a greater number and granularity of ages, more demographic attributes or combinations thereof, and the like.
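Such a model might be represented as rows pairing a set of feature vectors with its corresponding predicted age distribution, along the lines of the following sketch (all vectors and probabilities invented for illustration):

```python
# Hypothetical prediction model for the age demographic attribute: each row
# pairs a (reduced) feature vector with a discrete age distribution, as in
# the table described above.
AGE_RANGES = ["13-17", "18-21", "22-25", "26-30"]

prediction_model = [
    # (feature vector,         age distribution over AGE_RANGES)
    ((0.10, 0.80, 0.30), (0.60, 0.25, 0.10, 0.05)),
    ((0.70, 0.20, 0.90), (0.05, 0.15, 0.40, 0.40)),
    ((0.40, 0.50, 0.50), (0.25, 0.25, 0.25, 0.25)),
]

# Each stored distribution is a discrete approximation summing to 1.
for _, dist in prediction_model:
    assert abs(sum(dist) - 1.0) < 1e-9
```

A real model would hold many more rows, finer-grained age bins, and distributions for additional attributes and attribute combinations.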
  • the video hosting website 100 provides a number of different usage scenarios.
  • One usage scenario is prediction of demographic attribute values for a video, such as a newly submitted video.
  • a video that has not been previously classified for its demographic attributes is received.
  • This can be a video that has been previously uploaded to the video hosting website 100 , or a video that is currently in the process of being uploaded.
  • This video's visual and/or audio feature vectors are extracted by the video content analysis module 255 . Then, the extracted feature vectors are matched against those of the prediction model 220 , and a set of feature vectors is identified that provides the closest match, each feature vector having a match strength.
  • the match strength is determined by use of a similarity measure.
  • the prediction model uses a predefined similarity measure, e.g. a Gaussian kernel between pairs of feature vectors.
  • only one closest feature vector is identified—i.e. the set contains only one feature vector—and the corresponding demographic distributions for the demographic attributes in question are retrieved from the prediction model 220 .
  • the set may contain multiple feature vectors, in which case the demographic distributions may be linearly combined, with the respective match strengths providing the combination weightings.
  • the set of feature vectors as a whole is used to look up corresponding demographic distributions in the prediction model 220 .
  • predicted distributions could be produced comprising the probabilities that viewers of the video fall within the various possible age ranges and are of the male or female gender.
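Combining the matched rows' distributions, weighted by their Gaussian-kernel match strengths, can be sketched as follows (toy model rows, hypothetical kernel bandwidth):

```python
import math

def gaussian_kernel(u, v, sigma=1.0):
    """Match strength between two feature vectors under a Gaussian kernel."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / (2 * sigma ** 2))

def predict_distribution(query_vec, model):
    """Linearly combine the model rows' demographic distributions, using
    each row's match strength to the query video as its combination weight."""
    weights = [gaussian_kernel(query_vec, fv) for fv, _ in model]
    total = sum(weights)
    n = len(model[0][1])
    combined = [sum(w * dist[i] for w, (_, dist) in zip(weights, model))
                for i in range(n)]
    return [c / total for c in combined]

# Hypothetical model rows: (feature vector, two-bin demographic distribution).
model = [
    ((0.0, 0.0), (0.9, 0.1)),
    ((5.0, 5.0), (0.1, 0.9)),
]
pred = predict_distribution((0.1, 0.1), model)  # query is near the first row
```

Because the query vector sits close to the first row, the predicted distribution is dominated by that row's demographics.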
  • the ability to obtain predicted demographic distributions with respect to a given video has various useful applications.
  • a second usage scenario is to identify the top demographic values of an attribute of interest for which a new video would likely be relevant. For example, when a video is analyzed, the probabilities that a viewer would be of the various ages within the age demographic category could be computed as in the first scenario, the probabilities sorted, and a determination made that the video appeals most strongly to people of the age range(s) with the top probability, e.g. 13-15 year olds.
  • a third usage scenario is to determine likely demographic values associated with a viewer who either lacks a viewer profile, or whose viewer profile is untrustworthy (e.g., indicates an improbable attribute, such as being above age 110).
  • the viewer's previously-watched videos are identified by examining the video access log 160 for the videos retrieved by the same IP address as the viewer. From this list of videos one or more videos are selected, and their feature vectors retrieved from the feature vector repository 215 (if present) or their feature vectors are extracted by the video content analysis module 255 . The resulting feature vectors are then input into the prediction model 220 to obtain the predicted demographics for each video.
  • the demographic strengths for each video watched by that viewer can be combined, such as by averaging the demographics across the videos, or by a weighted average in which each video's demographics are weighted according to how frequently that viewer watched the respective video, and the like.
  • combined probabilities can be computed for each demographic category, and a top value or values chosen in each, e.g. 21 as the age value, and male as the gender value, representing that the viewer is believed to most probably be a 21 year old male.
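The frequency-weighted averaging of the per-video predictions might look like this sketch (the watch counts and distributions are invented for illustration):

```python
def viewer_demographics(video_distributions, watch_counts):
    """Estimate a viewer's demographics by averaging the predicted
    distributions of the videos they watched, weighting each video by
    how frequently the viewer watched it."""
    total = sum(watch_counts)
    n = len(video_distributions[0])
    return [sum(c * d[i] for c, d in zip(watch_counts, video_distributions))
            / total for i in range(n)]

# Predicted age distributions for three videos the viewer watched
# (hypothetical; bins might be e.g. ages 18-21, 22-25, 26-30).
dists = [(0.2, 0.6, 0.2), (0.1, 0.7, 0.2), (0.5, 0.3, 0.2)]
counts = [3, 1, 1]  # the viewer watched the first video three times

est = viewer_demographics(dists, counts)
top_bin = max(range(len(est)), key=est.__getitem__)  # most probable age bin
```

The top value in each combined category (here the middle age bin) becomes the viewer's estimated demographic attribute.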
  • Another usage scenario is to predict, for a given set of demographic attribute values, what videos would be of interest to viewers with such demographics. This is useful, for example, to create a list of recommended videos for such a viewer.
  • This scenario involves further processing of the demographic probability data to identify the top-scoring videos for a given demographic value, and the processed data can then be used as one factor for identifying what videos may be of interest to a given viewer. For example, when a new video is submitted, the video demographics analysis server 130 computes a set of demographic values having the highest match probabilities for the video for categories of interest.
  • the highest value for the gender category might be female with match strength 0.7
  • the highest attribute values for the age category might be 60, 62, 63, 55, and 65, with respective match strengths 0.8, 0.7, 0.75, 0.85, and 0.8
  • the highest attribute values for the gender-age combination category might be female/60 and female/62, with respective match probabilities 0.95 and 0.9.
  • These computed demographic probabilities can be stored for each video, e.g. as part of the video database 155 , and a list of the videos with the top scores for each demographic category attribute stored.
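Maintaining the list of top-scoring videos per demographic value reduces to a sort over the stored per-video match strengths, as in this sketch with hypothetical video IDs and scores:

```python
# Hypothetical per-video match strengths for one demographic attribute value,
# e.g. the strength with which each video matches "male, college degree".
video_scores = {
    "golf_instruction": 0.91,
    "mortgage_foreclosures": 0.88,
    "cat_compilation": 0.35,
    "landscaping": 0.52,
}

def top_videos(scores, k=2):
    """Return the k top-scoring videos for a demographic attribute value,
    as would be stored alongside the video database for recommendations."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

top = top_videos(video_scores)
```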
  • the top-scoring videos for people of age 41 might be a video trailer for the film “Pride & Prejudice” and a video on landscaping
  • the top-scoring videos for males with college degrees might be a video about mortgage foreclosures and an instructional video on golf.
  • when a viewer requests recommendations, the video demographics analysis server 130 can refer to the viewer's profile, determine that he is, for example, a male college graduate, and potentially recommend the videos on mortgage foreclosures and golf instruction, based upon the videos associated with these demographics via the prediction model.
  • These recommendations can be made in addition to those recommended based on other data, such as the keyword “penguins,” keywords specified in the viewer's profile as being of interest to that viewer, and the like.
  • the demographics-derived recommendations can be displayed unconditionally, in addition to the other recommendations, or conditionally, based on comparisons of computed relevance values, for example.
  • the various recommendations may be ordered according to computed relevance values, with each recommendation source—e.g. derived from demographics, or from keyword matches—possibly having its own particular formula for computing a relevance value.
  • Still another usage scenario is serving demographic queries, i.e. providing demographic information across videos.
  • a user could submit a query requesting the average age of the viewers across all the videos in the video database 155 , or some subset of these videos, the answer factoring in estimated ages of users who otherwise lack profiles.
  • a user could submit a query requesting the top 10 videos for women aged 55 or older.
  • Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
  • the present invention also relates to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of computer-readable storage medium suitable for storing electronic instructions, each coupled to a computer system bus.
  • the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • the present invention is well suited to a wide variety of computer network systems over numerous topologies.
  • in the configuration and management of large networks, storage devices and computers are communicatively coupled to dissimilar computers and storage devices over a network, such as the Internet.

Abstract

A video demographics analysis system selects a training set of videos to use to correlate viewer demographics and video content data. The video demographics analysis system extracts demographic data from viewer profiles related to videos in the training set and creates a set of demographic distributions, and also extracts video data from videos in the training set. A machine learning process then correlates the viewer demographics with the video data of the viewed videos, producing a prediction model. Using the prediction model, a new video about which there is no a priori knowledge can be associated with a predicted demographic distribution specifying probabilities of the video appealing to different types of people within a given demographic category, such as people of different ages within an age demographic category.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of Provisional Application No. 61/147,736, filed on Jan. 27, 2009, which is hereby incorporated herein by reference.
  • BACKGROUND
  • 1. Field of Art
  • The present invention generally relates to the field of digital video, and more specifically, to methods of correlating demographic data with characteristics of video content.
  • 2. Background of the Invention
  • Video hosting sites, such as YouTube or Google Video, currently have millions of users and tens of millions of videos. Users may sometimes have difficulty in determining which videos would be of interest to them, and may be daunted by the sheer volume of videos available for viewing. Thus, the ability to suggest which videos would be of interest to a given user is highly valuable.
  • However, conventional systems typically merely rely on external metadata associated with the video, such as keywords or textual video descriptions, to predict demographics that would be interested in the video. For example, conventional systems might recommend videos having keywords matching those specified in a viewer profile as being of interest to that viewer. However, if the video is new and has not yet been viewed and rated, and if the associated title is “spam” that misrepresents the true content of the video, then the conventional approach produces spurious predictions. Thus, one shortcoming of conventional approaches is that they rely on external metadata that may be false when assessing the pertinence of a given video to a particular viewer, rather than examining the actual video content itself.
  • SUMMARY
  • A video demographics analysis system creates demographic prediction models that predict the demographic characteristics of viewers of a video, based on quantitative video content data extracted from the videos.
  • In one aspect, the system selects a training set of videos to use to correlate viewer demographic attributes—such as age and gender—and video content data. The video demographics analysis system determines which viewers have viewed videos in the training set, and extracts demographic data from the viewer profiles of these viewers. The demographic data can include any information describing demographic attributes of the viewers, including but not limited to age, gender, occupation, household income, location, interests, and the like. From the extracted demographic data, the system creates a set of demographic distributions for each video in the training set. The video demographics analysis system also extracts video data from videos in the training set, the video data comprising quantitative information on visual and/or audio features of the videos. Then, a machine learning process is applied to correlate the viewer demographics for the training set videos with the video data of the training set videos, thereby creating a prediction model for the training set videos.
  • In another aspect, the system uses a prediction model produced by the machine learning process to predict, for a video about which there is little or no prior information about the demographics of viewers, a demographic distribution specifying probabilities of the video appealing to viewers in various different demographic categories, such as viewers of different ages, genders, and so forth. The ability to obtain predicted demographic distributions for a video has a number of useful applications, such as determining a group to which to recommend a new video, estimating the demographics of a viewer lacking a reliable user profile, and recommending videos to a viewer based on the viewer's demographic attributes.
  • In one embodiment, a computer-implemented method of generating a prediction model for videos receives a plurality of videos from a video repository, each video having an associated list of viewers. For each video, the method creates a demographic distribution for a specified demographic based at least in part on user profile data associated with viewers of the video, and generates feature vectors based on the content of the video. The method further generates a prediction model that correlates the feature vectors for the videos and the demographic distributions, and stores the generated prediction model.
  • In one embodiment, a computer-implemented method for determining demographics of a video stores a prediction model that correlates viewer demographic attributes with feature vectors extracted from videos viewed by viewers, wherein the viewer demographic attributes include age and gender. The method further generates from content of the video a set of feature vectors, and uses the trained prediction model to determine likely demographic attributes of video viewers given that feature vector.
  • The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 illustrates the architecture of a video demographics analysis system, according to one embodiment.
  • FIG. 2 illustrates the components of a video analysis server, according to one embodiment.
  • FIG. 3 is a flowchart illustrating a high-level view of a process of performing the correlation, according to one embodiment.
  • The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
  • DETAILED DESCRIPTION System Architecture
  • FIG. 1 illustrates the architecture of a system for performing video demographics analysis of viewer profile information and digital video content and correlating demographic and video feature data, according to one embodiment.
  • As shown in FIG. 1, a video hosting website 100 comprises a front end server 140, a video serving module 110, an ingest module 115, a video analysis server 130, a video search server 145, a video access log 160, a user database 150, and a video database 155. Many conventional features, such as firewalls, load balancers, application servers, failover servers, site management tools and so forth are not shown so as not to obscure the features of the system.
  • Most generally, the video hosting website 100 represents any system that allows users (equivalently “viewers”) to access video content via searching and/or browsing interfaces. The sources of videos can be from user uploads of videos, searches or crawls of other websites or databases of videos, or the like, or any combination thereof. For example, in one embodiment a video hosting site 100 can be configured to allow for user uploads of content; in another embodiment a video hosting website 100 can be configured to only obtain videos from other sources by crawling such sources or searching such sources in real time. A suitable website 100 for implementation of the system is the YOUTUBE™ website, found at www.youtube.com; other video hosting sites are known as well, and can be adapted to operate according to the teaching disclosed herein. It will be understood that the term “web site” represents any computer system adapted to serve content using any internetworking protocols, and is not intended to be limited to content uploaded or downloaded via the Internet or the HTTP protocol. In general, functions described in one embodiment as being performed on the server side can also be performed on the client side in other embodiments if appropriate. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together.
  • Each of the various servers and modules is implemented as a server program executing on a server-class computer comprising a CPU, memory, network interface, peripheral interfaces, and other well known components. The computers themselves preferably run an open-source operating system such as LINUX, have generally high performance CPUs, 1 G or more of memory, and 100 G or more of disk storage. Of course, other types of computers can be used, and it is expected that as more powerful computers are developed in the future, they can be configured in accordance with the teachings here. The functionality implemented by any of the elements can be provided from computer program products that are stored in tangible computer accessible storage mediums (e.g., RAM, hard disk, or optical/magnetic media).
  • A client 170 executes a browser 171 and can connect to the front end server 140 via a network 180, which is typically the internet, but can also be any network, including but not limited to any combination of a LAN, a MAN, a WAN, a mobile, wired or wireless network, a private network, or a virtual private network. While only a single client 170 and browser 171 are shown, it is understood that very large numbers (e.g., millions) of clients are supported and can be in communication with the video hosting website 100 at any time. The client 170 may include a variety of different computing devices. Examples of client devices 170 are personal computers, digital assistants, personal digital assistants, cellular phones, mobile phones, smart phones or laptop computers. As will be obvious to one of ordinary skill in the art, the present invention is not limited to the devices listed above.
  • The browser 171 can include a video player (e.g., Flash™ from Adobe Systems, Inc.), or any other player adapted for the video file formats used in the video hosting website 100. Alternatively, videos can be accessed by a standalone program separate from the browser 171. A user can access a video from the video hosting website 100 by browsing a catalog of videos, conducting searches on keywords, reviewing play lists from other users or the system administrator (e.g., collections of videos forming channels), or viewing videos associated with particular user groups (e.g., communities).
  • Users of clients 170 can also search for videos based on keywords, tags or other metadata. These requests are received as queries by the front end server 140 and provided to the video search server 145, which is responsible for searching the video database 155 for videos that satisfy the user queries. The video search server 145 supports searching on any fielded data for a video, including its title, description, tags, author, category and so forth.
  • Users of the clients 170 and browser 171 can upload content to the video hosting website 100 via network 180. The uploaded content can include, for example, video, audio or a combination of video and audio. The uploaded content is processed by an ingest module 115, which processes the video for storage in the video database 155. This processing can include format conversion (transcoding), compression, metadata tagging, and other data processing. An uploaded content file is associated with the uploading user, and so the user's account record is updated in the user database 150 as needed. For purposes of convenience and the description of one embodiment, the uploaded content will be referred to as “videos,” “video files,” or “video items,” but no limitation on the types of content that can be uploaded is intended by this terminology. Thus, the operations described herein for identifying related items can be applied to any type of content, not only videos; other suitable type of content items include audio files (e.g. music, podcasts, audio books, and the like), documents, multimedia presentations, and so forth. In addition, related items need not be of the same type. Thus, given a video, the related items may include one or more audio files, documents, and so forth in addition to other videos.
  • The video database 155 is used to store the ingested videos. The video database 155 stores video content and associated metadata provided by their respective content owners. Each uploaded video is assigned a video identifier (id) when it is processed by the ingest module 115. The video files have metadata associated with each file such as a video ID, artist, video title, label, genre, time length, and optionally geo-restrictions that can be used for data collection or content blocking on a geographic basis. The video files can be encoded as H.263, H.264, WMV, VC-1 or the like; audio can be encoded as MP3, AAC, or the like. The files can be stored in any suitable container format, such as Flash, AVI, MP4, MPEG-2, RealMedia, DivX and the like.
  • The video hosting website 100 further comprises viewer profile repository 105. The viewer profile repository 105 comprises a plurality of profiles of users/viewers of digital videos, such as the users of video hosting systems such as YouTube™ and Google Video™. A viewer profile stores demographic information on various attributes of an associated viewer, such as the viewer's gender, age, location, income, occupation, level of education, stated preferences, and the like. The information may be provided by viewers themselves, when they create a profile, and can be further supplemented with information extracted automatically from other sources. For example, one profile entry could specify that the viewer was a 24-year-old male, with a college education, living in Salt Lake City, and with specified interests in archaeology and tennis. The exact demographic categories stored in the viewer profile can vary in different embodiments, depending on how the profiles are defined by the system administrator.
  • The video hosting website 100 further comprises a video access log 160, which stores information describing each access to any video by any viewer. Thus, each video effectively has an associated list of viewers. Each individual viewer is assigned an ID, for example, based on his or her IP address to differentiate the individual viewers. In one embodiment, this viewer ID is an anonymized viewer ID that is assigned to each individual viewer to keep viewer identities private, such as an opaque identifier such as a unique random number or a hash value. The system then can access each viewer's demographic information without obtaining his or her identity. In an alternative embodiment, the actual identity of the viewers may be known or determinable. In any case, for each viewer, the video access log 160 tracks the viewer's interactions with videos. In one embodiment, each entry in the video access log 160 identifies a video being accessed, a time of access, an IP address of the viewer, a viewer ID if available, cookies, the viewer's search query that led to the current access, and data identifying the type of interaction with the video. Interaction types can include any viewer interactions in the viewer interface of the website, such as playing, pausing, rewinding and forwarding a video. The various viewer interaction types are considered viewer events that are associated with a given video. For example, one entry might store that a viewer at a given IP address started viewing a particular video at time 0:00:00 and stopped viewing at time 0:34:00.
  • The video hosting website 100 further comprises a video analysis server 130, which correlates demographic information about videos with the content of the videos themselves. This involves generating demographic distributions from demographic data, analyzing video content, and generating a prediction model relating the demographic distributions to the video content. The video analysis server 130 also can predict a demographic distribution for a video and serve demographic queries (e.g., provide demographic information across videos).
  • Referring now to FIG. 2, there are shown the modules in one embodiment of the video analysis server 130. The analysis server 130 comprises a demographics database 210, a feature vector repository 215, a demographics module 250, a video content analysis module 255, and a correlation module 260, and additionally comprises a prediction model 220.
  • The demographics database 210 stores data regarding distributions of demographic data with respect to videos. For example, certain videos can have an associated demographic distribution for various demographic attributes of interest, such as age and gender. In some embodiments, distributions are created for combined attributes, such as gender-age, e.g. indicating for a given video that 4% of viewers are females aged 13 to 17. For instance, a given video may have an age-gender distribution such as the following:
  •            13-17   18-21   22-25   26-30   . . .
      Male       5.6    12.3    13.8     8.5   . . .
      Female     4.0     8.6    10.2     9.6   . . .
  • This distribution states that 5.6% of the video's viewers are males aged 13 to 17, 4% are females aged 13 to 17, 12.3% are males aged 18 to 21, and the like. The values in the example distribution represent percentages of the viewers having the corresponding demographic characteristics, but they could also be normalized with respect to the general population, e.g. a value of 1.3 for males aged 13-17 indicating that 30% more of the viewings were by males aged 13-17 than their respective share of the population.
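Building such an age-gender table from viewer profiles reduces to counting profiles per (gender, age-bin) cell and converting the counts to percentages. A minimal sketch, with illustrative bin boundaries and a hypothetical profile representation of (gender, age) pairs:

```python
from collections import Counter

# Illustrative age bins, matching the example table's granularity.
AGE_BINS = [(13, 17), (18, 21), (22, 25), (26, 30)]

def bin_for_age(age):
    for lo, hi in AGE_BINS:
        if lo <= age <= hi:
            return (lo, hi)
    return None  # ages outside every bin are ignored

def age_gender_distribution(profiles):
    """profiles: iterable of (gender, age) pairs extracted from viewer
    profiles. Returns the percentage of viewers in each (gender, age-bin)
    cell, as in the example distribution above."""
    counts = Counter()
    total = 0
    for gender, age in profiles:
        b = bin_for_age(age)
        if b is None:
            continue
        counts[(gender, b)] += 1
        total += 1
    return {cell: 100.0 * n / total for cell, n in counts.items()}
```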
  • Generally, any demographic attribute stored in a viewer profile may have corresponding distributions. A given demographic attribute may be represented at various different levels of granularity, such as 1-year, 3-year, or 5-year bins for ages, for example. Similarly, a given video can have a gender distribution in which 54% of its viewers are female, 38% of its viewers are male, and 8% are unknown, where the unknown values represent viewers lacking profiles or viewers with profiles lacking a value for the gender attribute. As an alternative to storing “unknown” values in the distributions, profiles lacking a value for an attribute of interest could be excluded during training.
  • In one embodiment, the distributions are represented as vectors, e.g. an array of integers <0, 6, 11, . . . > where each component represents a previously assigned age-bin, representing that 0% of viewers are from ages 13 to 17, 6% are 18 to 21, and 11% are 22 to 25. Other storage implementations would be equally possible to one of skill in the art.
  • The demographics module 250 takes as input the data in the viewer profile repository 105 and creates the data on distributions stored in the demographic database 210. The video content analysis module 255 takes as input the video data in the video repository 110 and the data in the video access log 160, extracts feature vectors representing characteristics of the videos, such as visual and/or audio characteristics, and stores them in the feature vector repository 215. The correlation module 260 performs operations such as regression analysis on the data in the demographic database 210 and the feature vector repository 215, generating a prediction model 220 that can be used, for example, to predict the particular viewer demographics to which a video represented by given feature vectors would be of interest. The operations of the modules 250-260 are described in more detail below with respect to FIG. 3.
  • Note that although the various data 210-220 and the modules 250-260 are depicted as all being located on a single server 130, they could be partitioned across multiple machines, databases or other storage units, and the like. The data 210-220 could be stored in a variety of manners as known to one of skill in the art. For example, they could be implemented as tables of a relational database management system, as individual binary or text files, etc.
  • Process of Demographic Correlation
  • FIG. 3 is a flowchart illustrating a high-level view of a process for performing the correlation of the correlation module 260, according to one embodiment. First, a training set of videos is selected 305 from the video database 155. In some embodiments, the training set is a subset of the videos of the video database 155, given that analyzing only a representative training set of videos is more computationally efficient than analyzing the entire set, though in other embodiments it is also possible to analyze all videos. The training set can be selected based on various filtering criteria. These filtering criteria include a number of views, number of viewers, number of unique views, date of views, date of upload and so forth. The filtering criteria can be used in any combination. For example, the training set can be established as the N videos (e.g., N=1000) which have been viewed at least K times (e.g., K=1,000,000) in the previous M (e.g., M=15) days, and which are at least T seconds (e.g., 30 seconds) in length. Here, K, M, N, and T are design decisions selected by the system administrator. The most recently viewed videos, or the videos viewed over a certain time period, can be determined by examining the start and stop dates and times of the video access log 160, for example. A video can be deemed to be “viewed” if it is watched for a minimum length of time, or a minimum percentage of its total time.
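The training-set selection described above, with its administrator-chosen parameters N, K, M, and T, can be sketched as a simple filter-and-rank step. The dictionary keys below ("recent_views", "length_s") are hypothetical; the actual video database fields are not specified:

```python
def select_training_set(videos, n=1000, min_views=1_000_000, min_length_s=30):
    """Illustrative sketch of training-set selection: keep videos viewed at
    least `min_views` times within the relevant window and at least
    `min_length_s` seconds long, then take the top `n` by view count.
    `videos` is an iterable of dicts with hypothetical keys
    "recent_views" (views in the last M days) and "length_s"."""
    eligible = [v for v in videos
                if v["recent_views"] >= min_views
                and v["length_s"] >= min_length_s]
    # Rank the eligible videos by recent view count and keep the top n.
    eligible.sort(key=lambda v: v["recent_views"], reverse=True)
    return eligible[:n]
```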
  • With the training set of videos identified, the process of correlating video data (e.g. feature vectors representing the images of the video) with demographic data performs two independent operations, which may be performed in parallel: creation of demographic database 310 and extraction of video data 320. Based on the results of these operations, correlation of the demographic and video data can be performed. These processes are repeated for each video in the training set.
  • During distribution creation 310, the demographics are first extracted 311 from the viewer profiles associated with a given video. This entails identifying the viewers specified in the video access log 160 as having watched the given video within the relevant time period or number of viewings, retrieving their associated viewer profiles from the viewer profile repository 105, and retrieving the demographic attributes of interest from the identified viewer profiles. Viewer profiles lacking the demographic attributes of interest may be excluded from distribution creation, or they may be counted as "unspecified" entries with respect to those attributes. For example, if age and gender are the attributes of interest, then only viewer profiles specifying those attributes are examined. Attributes may also be filtered to discard those that appear to be inaccurate. For example, age attributes below or above a certain threshold age, e.g. under the age of 3 or over the age of 110, could be discarded on the assumption that it is unlikely that a person of that age would genuinely be a viewer.
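The plausibility filtering of extracted attributes mentioned above is simple in practice; a one-line sketch with the example thresholds (the cutoffs are illustrative, not mandated):

```python
def plausible_ages(ages, lo=3, hi=110):
    """Discard age values assumed unlikely to be genuine, e.g. under 3 or
    over 110 (thresholds are illustrative design choices)."""
    return [a for a in ages if lo <= a <= hi]
```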
  • Demographic distributions are then created 312 based on the extracted attributes. As previously noted, data representing continuous values such as age or income can be segregated into bins. The range for each bin for a given attribute can be varied as desired for the degree of granularity of interest. The distribution data may be stored in different types of data structures, such as an array, with the value of an array element being derivable from the array index. Values representing discrete unrelated values, such as location or level of education, can be stored in an arbitrary order, with one value per element. Each attribute bin stores a count for the number of values in the bin from the viewer profiles. Once all the relevant attributes have been factored into their corresponding distributions, the result is a set of distributions, one per video, for every relevant attribute and/or combinations thereof. As mentioned above, these distributions include age distribution, gender distribution, income distribution, education distribution, location distribution, and the like. Any of these can be combined into multi-attribute distributions, e.g., age-gender, or age-income, or gender-location.
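The binning of continuous attribute values into an array of counts, with the value of an element derivable from its index, might look like the following sketch (the bin edges are an assumed parameter):

```python
import bisect

def bin_counts(values, edges):
    """Count continuous attribute values (e.g. ages or incomes) into bins
    defined by `edges`; e.g. edges [13, 18, 22, 26] give the bins
    [13,18), [18,22), [22,26). Values outside every bin are dropped."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        i = bisect.bisect_right(edges, v) - 1  # index of the bin containing v
        if 0 <= i < len(counts):
            counts[i] += 1
    return counts
```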
  • Independently of the distribution creation 310, the video content analysis module 255 extracts 320 video data from each video in the training set of videos, representing the data as a set of “feature vectors.” A feature vector quantitatively describes a visual (or auditory) aspect of the video. Different embodiments analyze either or both of these categories of aspects.
  • In general, feature vectors are associated with frames of the video. In one embodiment, the feature vectors are associated not merely with a certain frame, but with particular visual objects within that frame. In such an embodiment, when extracting data relating to visual aspects, the video content analysis module 255 performs 321 object segmentation on a video, resulting in a set of visually distinct objects for the video. Object segmentation preferably identifies objects that would be considered foreground objects, rather than background objects. For example, for a video about life in the Antarctic, the objects picked out as part of the segmentation process could include regions corresponding to penguins, polar bears, boats, and the like, though the objects need not actually be identified as such by name.
  • Different object segmentation algorithms may be employed in different embodiments, such as adaptive background subtraction, spatial and temporal segmentation with clustering algorithms, and other algorithms known to those of skill in the art. In one embodiment, a mean shift algorithm is used, which employs clustering within a single image frame of a video. In segmentation based on the mean shift algorithm, an image is converted into tokens, e.g. by converting each pixel of the image into a corresponding value, such as color value, gradient value, texture measurement value, etc. Then windows are positioned uniformly around the data, and for each window the centroid—the mean location of the data values in the window—is computed, and each window re-centered around that point. This is repeated until the windows converge, i.e. a local center is found. The data traversed by windows that converged to the same point are then clustered together, producing a set of separate image regions. In the case of a video, the same or similar image regions typically exist across video frames, e.g. a region representing the same face at the same location across a number of frames, or at slightly offset locations. In this case, one of the set of similar regions can be chosen as representative and the rest discarded, or the data associated with the images may be averaged.
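The window-centroid iteration described above can be illustrated with a deliberately simplified one-dimensional mean shift on scalar tokens (e.g. per-pixel gray values); real implementations operate on multi-dimensional color, gradient, or texture tokens, so this is a sketch of the mechanism rather than a production segmenter:

```python
import numpy as np

def mean_shift_1d(tokens, bandwidth, iters=50):
    """Toy mean-shift sketch: one window per token; each window repeatedly
    re-centers on the mean of the tokens within `bandwidth` of its current
    center, and tokens whose windows converge to the same mode are
    clustered together."""
    tokens = np.asarray(tokens, dtype=float)
    modes = tokens.copy()                       # initial window centers
    for _ in range(iters):
        # For each window center, average the tokens inside the window.
        dist = np.abs(modes[:, None] - tokens[None, :])
        weights = (dist <= bandwidth).astype(float)
        modes = (weights @ tokens) / weights.sum(axis=1)
    # Windows that converged to (nearly) the same point form one cluster.
    labels = np.unique(np.round(modes, 3), return_inverse=True)[1]
    return labels
```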
  • The result of applying a segmentation algorithm to a video is a set of distinct objects, each occupying one of the regions found by the segmentation algorithm. Since different segmentation algorithms, or differently parameterized versions of the same algorithm, tend to produce non-identical results, in one embodiment multiple segmentation algorithms are used, and objects that are sufficiently common across all the segmentation algorithm result sets are retained as representing valid objects. An object segmented by one algorithm could be considered the same as one segmented by another algorithm if it occupies substantially the same region of the image as the other segmented object, e.g. having N % of its pixels in common, where N can be, for example, 90% or more; a higher value of N gives greater assurance that the same object was identified by the different algorithms. The object could be considered sufficiently common if it is the same as objects in the result sets of all the other segmentation algorithms, or of a majority, or of a set number or percentage thereof.
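The N % pixel-overlap test for deciding whether two segmentation results found the same object can be sketched with boolean pixel masks:

```python
import numpy as np

def same_object(mask_a, mask_b, n_percent=90):
    """Treat two segmented objects (boolean pixel masks of equal shape) as
    the same object if at least n_percent of the first object's pixels
    also belong to the second (90 is the example threshold from the text)."""
    common = np.logical_and(mask_a, mask_b).sum()
    return 100.0 * common / mask_a.sum() >= n_percent
```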
  • Characteristics are extracted 322 from content of the video. In one embodiment, the characteristics are represented as feature vectors, lists of data pertaining to various attributes, such as color (e.g. RGB, HSV, and LAB color spaces), texture (as represented by Gabor and Haar wavelets), edge direction, motion, optical flow, luminosity, transform data, and the like. In different embodiments, a given frame (or object of a frame) may be represented by one feature vector, or by a number of feature vectors corresponding to different portions of the frame/object, e.g. to points at which there is a sharp change between color values, or different attributes. In any case, the extracted feature vectors are then stored within the feature vector repository 215 in association with the video to which they correspond.
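As one concrete instance of the color attributes listed above, a per-channel color histogram is a common simple feature vector; the bin count here is an assumed parameter:

```python
import numpy as np

def color_histogram(frame, bins=8):
    """One simple example of a color feature vector: the concatenated
    per-channel histograms of an RGB frame (H x W x 3, values 0-255),
    each channel normalized to sum to 1."""
    feats = []
    for ch in range(3):
        hist, _ = np.histogram(frame[..., ch], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)
```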
  • Some embodiments create feature vectors for audio features, instead of or in addition to video features. For example, audio samples can be taken periodically over a chosen interval. As a more specific example, the mel-frequency cepstrum coefficients (MFCCs) can be computed at 10 millisecond intervals over a duration of 30 seconds, starting after a suitable delay from the beginning of the video, e.g. 5 seconds. The resulting MFCCs may then be averaged or aggregated across the 30 second sampling period, and are stored in the feature vector repository 215. Feature vectors can also be derived based on beat, pitch, or discrete wavelet outputs, or from speech recognition output or music/speaker identification systems.
  • Some embodiments create feature vectors based on metadata associated with the video. Such metadata can include, for example, video title, video description, date of video uploading, the user who uploaded, text of a video comment, a number of comments, a rating or the number of ratings, a number of views by users, user co-views of the video, user keywords or tags for the video, and the like.
  • The extracted feature vector data are frequently not in an ideal state: they contain a large number of feature vectors, some of which are irrelevant and add no additional information. The potentially large number and low quality of the feature vectors increase the computational cost and reduce the accuracy of later techniques that analyze them. In order to reduce the size and improve the quality of the feature vector data, the video content analysis module 255 therefore performs 323 dimensionality reduction. Different embodiments may employ different algorithms for this purpose, including principal component analysis (PCA), linear discriminant analysis (LDA), multi-dimensional scaling (MDS), Isomap, locally linear embedding (LLE), and other similar algorithms known to those of skill in the art. The result of applying a dimensionality reduction algorithm to a first set of feature vectors is a second, smaller set of vectors representative of the first set, which can replace the prior, non-reduced versions in the feature vector repository 215.
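Of the algorithms listed, PCA is the simplest to sketch: center the feature vectors and project them onto their top-k principal components obtained from a singular value decomposition (a minimal sketch, not a tuned implementation):

```python
import numpy as np

def pca_reduce(X, k):
    """Minimal PCA sketch: project feature vectors (rows of X) onto their
    top-k principal components computed via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T   # shape (n_samples, k)
```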
  • With the demographic database 210 and feature vector repository 215 populated with data as a result of steps 310 and 320, respectively, the correlation module 260 correlates 330 (i.e., forms some association between) the demographics and the video content as represented by the feature vectors, creating as output a prediction model 220 that represents all videos in the training set. The correlation is performed based on machine learning techniques, such as supervised algorithms such as support vector machines (SVM), boosting, nearest neighbor, or decision tree, semi-supervised algorithms such as transductive learning, or unsupervised learning, such as clustering. In one embodiment, SVM kernel logistic regression techniques are employed.
  • Regardless of the particular algorithm employed, the output is a predicted distribution for the demographic categories in question, and is stored as a prediction model 220. In the case of a demographic category such as age that can be represented with a continuous distribution function, the distribution can be stored as a set of discrete values, e.g. a probability for each year in an age distribution, thus creating a discrete approximation of a continuous distribution. Alternately, coefficients of an equation generating a function representing the distribution can be stored. For demographic categories inherently having discrete values, such as gender or location, a set of probabilities may be provided, one per value, for example. Thus, given a set of feature vectors representing a video, the prediction model 220 will have a set of corresponding predicted distributions for various demographic attributes.
  • For example, one prediction model storing data for the age demographic attribute could be as in the below table, where each of the three rows represents a set of feature vectors and their corresponding age distribution for ages 13-17, 18-21, etc. It is appreciated that such a table is merely for purposes of example, and that a typical implementation would have much additional data for more sets of feature vectors, a greater number and granularity of ages, more demographic attributes or combinations thereof, and the like.
  • Feature vectors   13-17   18-21   22-25   26-30   . . .
      F1, F2, F3        10%     18%     32%     19%   . . .
      F4, F5            15%     22%     38%     16%   . . .
      F6, F7            30%     20%     10%      5%   . . .
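The lookup such a table supports, matching a new video's feature vector against the model's stored vectors and blending the associated distributions by match strength, can be sketched as follows. A Gaussian kernel is used here as an illustrative similarity measure, and the model layout is a hypothetical simplification of the table above:

```python
import numpy as np

def predict_distribution(model, query, sigma=1.0):
    """model: list of (feature_vector, distribution) pairs, as in the table
    above; query: the feature vector of a new video. Returns a blend of
    the stored distributions, weighted by Gaussian-kernel similarity
    (an illustrative sketch of the matching step, not the exact procedure)."""
    feats = np.array([f for f, _ in model], dtype=float)
    dists = np.array([d for _, d in model], dtype=float)
    d2 = ((feats - np.asarray(query, dtype=float)) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))   # match strength per stored vector
    w /= w.sum()                         # weights for the linear combination
    return w @ dists
```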
  • Applications of the Prediction Model
  • The video hosting website 100 provides a number of different usage scenarios. One usage scenario is prediction of demographic attribute values for a video, such as a newly submitted video. In this scenario, a video that has not been previously classified for its demographic attributes is received. This can be a video that has been previously uploaded to the video hosting website 100, or a video that is currently in the process of being uploaded. The video's visual and/or audio feature vectors are extracted by the video content analysis module 255. Then, the extracted feature vectors are matched against those of the prediction model 220, and a set of feature vectors is identified that provides the closest match, each feature vector having a match strength. In one embodiment, the match strength is determined using a predefined similarity measure, e.g. a Gaussian kernel between pairs of feature vectors. In one embodiment, only one closest feature vector is identified (i.e. the set contains only one feature vector), and the corresponding demographic distributions for the demographic attributes in question are retrieved from the prediction model 220. In another embodiment, the set may contain multiple feature vectors, in which case the demographic distributions may be linearly combined, with the respective match strengths providing the combination weightings. In yet another embodiment, the set of feature vectors as a whole is used to look up corresponding demographic distributions in the prediction model 220. For example, if the age and gender demographic categories are of interest, then for a given video, predicted distributions could be produced that comprise the probabilities that viewers of the video would be of the various possible ages and of the male and female genders. The ability to obtain predicted demographic distributions for a given video has various useful applications.
  • A second usage scenario, related to the first, is to identify the top demographic values of an attribute of interest for which a new video would likely be relevant. For example, when a video is analyzed, the probabilities that a viewer would be of the various ages within the age demographic category could be computed as in the first scenario, the probabilities sorted, and a determination made that the video appeals most strongly to people of the age range(s) with the top probability, e.g. 13-15 year olds.
  • A third usage scenario is to determine likely demographic values associated with a viewer who either lacks a viewer profile, or whose viewer profile is untrustworthy (e.g., indicates an improbable attribute, such as being above age 110). In this application, the viewer's previously watched videos are identified by examining the video access log 160 for the videos retrieved by the same IP address as the viewer. From this list of videos, one or more videos are selected, and their feature vectors are retrieved from the feature vector repository 215 (if present) or extracted by the video content analysis module 255. The resulting feature vectors are then input into the prediction model 220 to obtain the predicted demographics for each video. To estimate the viewer's demographics, the demographic strengths for the videos watched by that viewer can be combined, such as by averaging the demographics for the videos, or by a weighted average in which the demographics for each video are weighted according to how frequently the respective video was watched by that viewer, and the like. As a result, combined probabilities can be computed for each demographic category, and a top value or values chosen in each, e.g. 21 as the age value and male as the gender value, representing that the viewer is believed most probably to be a 21-year-old male.
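The combination step of this third scenario, averaging per-video predicted distributions with watch-frequency weights and then taking the top value, might be sketched as follows (the distributions and labels are illustrative inputs):

```python
import numpy as np

def estimate_viewer_distribution(video_dists, watch_counts):
    """Combine the per-video predicted distributions for a viewer into one
    estimate, weighting each video by how often the viewer watched it
    (one of the combination schemes described in the text)."""
    dists = np.asarray(video_dists, dtype=float)
    w = np.asarray(watch_counts, dtype=float)
    w = w / w.sum()
    return w @ dists

def top_value(distribution, labels):
    """Pick the most probable demographic value, e.g. an age or gender."""
    return labels[int(np.argmax(distribution))]
```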
  • Another usage scenario is to predict, for a given set of demographic attribute values, what videos would be of interest to viewers with such demographics. This is useful, for example, to create a list of recommended videos for such a viewer. This scenario involves further processing of the demographic probability data to identify the top-scoring videos for a given demographic value, and the processed data can then be used as one factor for identifying what videos may be of interest to a given viewer. For example, when a new video is submitted, the video demographics analysis server 130 computes a set of demographic values having the highest match probabilities for the video for categories of interest. For instance, for a video containing content related to social security benefits, the highest value for the gender category might be female with match strength 0.7, the highest attribute values for the age category might be 60, 62, 63, 55, and 65, with respective match strengths 0.8, 0.7, 0.75, 0.85, and 0.8, and the highest attribute values for the gender-age combination category might be female/60 and female/62, with respective match probabilities 0.95 and 0.9. These computed demographic probabilities can be stored for each video, e.g. as part of the video database 155, and a list of the videos with the top scores for each demographic category attribute stored. For example, the top-scoring videos for people of age 41 might be a video trailer for the film “Pride & Prejudice” and a video on landscaping, and the top-scoring videos for males with college degrees might be a video about mortgage foreclosures and an instructional video on golf.
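Maintaining the per-demographic top-video lists described above reduces to sorting the stored match strengths; a sketch with a hypothetical data layout mirroring the per-video scores in the example:

```python
def top_videos_for_demographic(scores, demographic, k=2):
    """scores: {video_id: {demographic_value: match_strength}} (an assumed
    layout for the stored per-video demographic probabilities). Returns
    the k top-scoring video ids for one demographic value."""
    ranked = sorted(scores,
                    key=lambda vid: scores[vid].get(demographic, 0.0),
                    reverse=True)
    return ranked[:k]
```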
  • These lists of top videos for different demographics can then be applied to identify recommendations for related videos. For example, if a viewer is viewing a video about the Antarctic with submitter-supplied description “Look at the cute penguins,” the video demographics analysis server 130 can refer to his profile, determine that he is a male college graduate, and potentially recommend the videos on mortgage foreclosures and golf instruction, based upon the videos associated with these demographics via the prediction model. These recommendations can be made in addition to those recommended based on other data, such as the keyword “penguins,” keywords specified in the viewer's profile as being of interest to that viewer, and the like. The demographics-derived recommendations can be displayed unconditionally, in addition to the other recommendations, or conditionally, based on comparisons of computed relevance values, for example. Similarly, the various recommendations may be ordered according to computed relevance values, with each recommendation source—e.g. derived from demographics, or from keyword matches—possibly having its own particular formula for computing a relevance value.
  • Still another usage scenario is serving demographic queries, i.e. providing demographic information across videos. For example, a user (either a human or a program) could submit a query requesting the average age of the viewers across all the videos in the video database 155, or some subset of these videos, the answer factoring in estimated ages of users who otherwise lack profiles. As another example, a user could submit a query requesting the top 10 videos for women aged 55 or older.
  • The present invention has been described in particular detail with respect to one possible embodiment. Those of skill in the art will appreciate that the invention may be practiced in other embodiments. First, the particular naming of the components and variables, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component.
  • Some portions of the above description present the features of the present invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules or by functional names, without loss of generality.
  • Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
  • The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of computer-readable storage medium suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • The algorithms and operations presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will be apparent to those of skill in the art, along with equivalent variations. In addition, the present invention is not described with reference to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references to specific languages are provided for purposes of enablement and disclosure of the best mode of the present invention.
  • The present invention is well suited to a wide variety of computer network systems over numerous topologies. Within this field, the configuration and management of large networks comprise storage devices and computers that are communicatively coupled to dissimilar computers and storage devices over a network, such as the Internet.
  • Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims (20)

1. A computer-implemented method of generating a prediction model for videos, comprising:
receiving a plurality of videos from a video repository, each video having an associated list of viewers;
for each video, creating a demographic distribution for at least one demographic attribute based at least in part on viewer demographic data associated with viewers of the video;
for each video, generating feature vectors based at least in part on the content of the video;
generating a prediction model that correlates the feature vectors for the videos and the demographic distributions; and
storing the prediction model.
2. The computer-implemented method of claim 1, wherein the demographic attribute is one of age and gender.
3. The computer-implemented method of claim 1, wherein the demographic attribute is one of occupation, household income, and location.
4. The computer-implemented method of claim 1, wherein the prediction model is generated using support vector machines.
5. The computer-implemented method of claim 1, further comprising altering the generated feature vectors using a dimensionality reduction algorithm.
6. The computer-implemented method of claim 1, wherein the generated feature vectors include feature vectors generated based on audio content of the video and feature vectors generated based on visual content of the video.
7. The computer-implemented method of claim 1, wherein the feature vectors are generated based at least in part on metadata associated with the video.
8. The computer-implemented method of claim 1, further comprising:
performing object segmentation on a frame of the video, thereby identifying a visual object of the frame;
wherein generating feature vectors based at least in part on the content of the video comprises generating feature vectors for the identified visual object.
9. A computer-implemented method for determining demographics of a video, comprising:
storing a prediction model that correlates viewer demographic attributes with feature vectors extracted from videos viewed by viewers, wherein the viewer demographic attributes include age and gender;
receiving a video;
generating from content of the video a set of feature vectors; and
identifying demographic attribute values by applying the prediction model to the generated set of feature vectors.
10. The computer-implemented method of claim 9, wherein identifying demographic attribute values comprises:
identifying a set of feature vectors of the prediction model that is most similar to the generated set of feature vectors; and
identifying, in the prediction model, demographic attribute values most strongly correlated with the identified feature vectors.
11. A computer-implemented method for identifying demographics associated with a viewer, comprising:
storing a prediction model that correlates viewer demographic attributes with feature vectors generated from videos viewed by viewers;
identifying a set of videos viewed by a given viewer;
generating, from content of the set of videos, feature vectors;
applying the feature vectors to the prediction model to identify viewer demographic attribute values most strongly correlated with the feature vectors of the prediction model; and
identifying viewer demographic attribute values most strongly correlated with the given viewer based at least in part on the identified viewer demographic attribute values.
12. A computer-implemented method for identifying videos associated with given demographic attribute values, comprising:
storing a prediction model that correlates viewer demographic attributes with feature vectors generated from videos viewed by viewers;
receiving a plurality of videos;
for each video of the plurality of videos:
generating feature vectors from the video;
applying the feature vectors generated from the video to the prediction model to identify viewer demographic attribute values most strongly correlated with the feature vectors of the prediction model;
storing the identified viewer demographic attribute values in association with the video;
selecting videos having highest values for the given demographic attribute values; and
displaying identifiers of the selected videos.
13. A computer readable storage medium storing a computer program executable by a processor for generating a prediction model for videos, the actions of the computer program comprising:
receiving a plurality of videos from a video repository, each video having an associated list of viewers;
for each video, creating a demographic distribution for at least one demographic attribute based at least in part on viewer demographic data associated with viewers of the video;
for each video, generating feature vectors based at least in part on the content of the video;
generating a prediction model that correlates the feature vectors for the videos and the demographic distributions; and
storing the prediction model.
14. The computer readable storage medium of claim 13, wherein the generated feature vectors include feature vectors generated based on audio content of the video and feature vectors generated based on visual content of the video.
15. The computer readable storage medium of claim 13, wherein the prediction model is generated using support vector machines.
16. A computer system for generating a prediction model for videos, comprising:
a video repository storing a plurality of videos, each video having an associated list of viewers;
a video analysis server adapted to:
receive a plurality of videos from the video repository;
for each video, create a demographic distribution for at least one demographic attribute based at least in part on viewer demographic data associated with viewers of the video;
for each video, generate feature vectors based at least in part on the content of the video;
generate a prediction model that correlates the feature vectors for the videos and the demographic distributions; and
store the prediction model.
17. The computer system of claim 16, wherein the demographic attribute is one of age and gender.
18. The computer system of claim 16, wherein the prediction model is generated using support vector machines.
19. The computer system of claim 16, wherein the generated feature vectors include feature vectors generated based on audio content of the video and feature vectors generated based on visual content of the video.
20. The computer system of claim 16, wherein the feature vectors are generated based at least in part on metadata associated with the video.
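The pipeline recited in claims 1 and 9-10 can be illustrated with a minimal sketch: build a per-video demographic distribution from viewer data, pair it with a feature vector derived from the video, and predict attribute values for a new video by identifying the stored feature vector most similar to its own (the similarity-based variant recited in claim 10). All names below are hypothetical, and the hard-coded feature vectors stand in for the content-derived audio/visual/metadata features and support-vector-machine models the patent actually contemplates.

```python
import math
from collections import Counter

def demographic_distribution(viewers, attribute):
    """Claim 1: normalize counts of one demographic attribute over a video's viewers."""
    counts = Counter(v[attribute] for v in viewers)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

def cosine_similarity(a, b):
    """Similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def build_model(videos, attribute):
    """A toy 'prediction model': (feature vector, demographic distribution) pairs."""
    return [(v["features"], demographic_distribution(v["viewers"], attribute))
            for v in videos]

def predict(model, features):
    """Claim 10: return the distribution paired with the most similar stored vector."""
    return max(model, key=lambda entry: cosine_similarity(entry[0], features))[1]

# Toy video repository; each video carries a feature vector and a viewer list.
videos = [
    {"features": [0.9, 0.1, 0.0],
     "viewers": [{"gender": "f"}, {"gender": "f"}, {"gender": "m"}]},
    {"features": [0.1, 0.8, 0.3],
     "viewers": [{"gender": "m"}, {"gender": "m"}]},
]
model = build_model(videos, "gender")
print(predict(model, [0.85, 0.2, 0.05]))  # query vector closest to the first video
```

A production system would replace the nearest-neighbor lookup with a trained regressor or classifier (claims 4, 15, and 18 name support vector machines) and would reduce the feature dimensionality first (claim 5), but the train-then-apply structure is the same.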
US12/392,987 2009-01-27 2009-02-25 Video content analysis for automatic demographics recognition of users and videos Abandoned US20100191689A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US12/392,987 US20100191689A1 (en) 2009-01-27 2009-02-25 Video content analysis for automatic demographics recognition of users and videos
EP09839466.1A EP2382782A4 (en) 2009-01-27 2009-12-15 Video content analysis for automatic demographics recognition of users and videos
EP18154276.2A EP3367676A1 (en) 2009-01-27 2009-12-15 Video content analysis for automatic demographics recognition of users and videos
PCT/US2009/068108 WO2010087909A1 (en) 2009-01-27 2009-12-15 Video content analysis for automatic demographics recognition of users and videos
US13/488,126 US8301498B1 (en) 2009-01-27 2012-06-04 Video content analysis for automatic demographics recognition of users and videos
US13/632,591 US20160014440A1 (en) 2009-01-27 2012-10-01 Video content analysis for automatic demographics recognition of users and videos

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14773609P 2009-01-27 2009-01-27
US12/392,987 US20100191689A1 (en) 2009-01-27 2009-02-25 Video content analysis for automatic demographics recognition of users and videos

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/488,126 Continuation US8301498B1 (en) 2009-01-27 2012-06-04 Video content analysis for automatic demographics recognition of users and videos

Publications (1)

Publication Number Publication Date
US20100191689A1 true US20100191689A1 (en) 2010-07-29

Family

ID=42354955

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/392,987 Abandoned US20100191689A1 (en) 2009-01-27 2009-02-25 Video content analysis for automatic demographics recognition of users and videos
US13/488,126 Active US8301498B1 (en) 2009-01-27 2012-06-04 Video content analysis for automatic demographics recognition of users and videos
US13/632,591 Abandoned US20160014440A1 (en) 2009-01-27 2012-10-01 Video content analysis for automatic demographics recognition of users and videos

Family Applications After (2)

Application Number Title Priority Date Filing Date
US13/488,126 Active US8301498B1 (en) 2009-01-27 2012-06-04 Video content analysis for automatic demographics recognition of users and videos
US13/632,591 Abandoned US20160014440A1 (en) 2009-01-27 2012-10-01 Video content analysis for automatic demographics recognition of users and videos

Country Status (3)

Country Link
US (3) US20100191689A1 (en)
EP (2) EP3367676A1 (en)
WO (1) WO2010087909A1 (en)

Cited By (119)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100332590A1 (en) * 2009-06-26 2010-12-30 Fujitsu Limited Inheritance communication administrating apparatus
US20110270848A1 (en) * 2002-10-03 2011-11-03 Polyphonic Human Media Interface S.L. Method and System for Video and Film Recommendation
US20120072937A1 (en) * 2010-09-21 2012-03-22 Kddi Corporation Context-based automatic selection of factor for use in estimating characteristics of viewers viewing same content
US20130110854A1 (en) * 2011-10-26 2013-05-02 Kimber Lockhart Preview pre-generation based on heuristics and algorithmic prediction/assessment of predicted user behavior for enhancement of user experience
US20130262469A1 (en) * 2012-03-29 2013-10-03 The Echo Nest Corporation Demographic and media preference prediction using media content data analysis
WO2013149077A1 (en) * 2012-03-29 2013-10-03 Yahoo! Inc. Finding engaging media with initialized explore-exploit
US8565539B2 (en) 2011-05-31 2013-10-22 Hewlett-Packard Development Company, L.P. System and method for determining estimated age using an image collection
WO2013184667A1 (en) * 2012-06-05 2013-12-12 Rank Miner, Inc. System, method and apparatus for voice analytics of recorded audio
US8719445B2 (en) 2012-07-03 2014-05-06 Box, Inc. System and method for load balancing multiple file transfer protocol (FTP) servers to service FTP connections for a cloud-based service
US8745267B2 (en) 2012-08-19 2014-06-03 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US8750682B1 (en) * 2011-07-06 2014-06-10 Google Inc. Video interface
US20140195544A1 (en) * 2012-03-29 2014-07-10 The Echo Nest Corporation Demographic and media preference prediction using media content data analysis
US8868574B2 (en) 2012-07-30 2014-10-21 Box, Inc. System and method for advanced search and filtering mechanisms for enterprise administrators in a cloud-based environment
US20140324885A1 (en) * 2013-04-25 2014-10-30 Trent R. McKenzie Color-based rating system
US8892679B1 (en) 2013-09-13 2014-11-18 Box, Inc. Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform
US8914900B2 (en) 2012-05-23 2014-12-16 Box, Inc. Methods, architectures and security mechanisms for a third-party application to access content in a cloud-based platform
US20150018990A1 (en) * 2012-02-23 2015-01-15 Playsight Interactive Ltd. Smart-court system and method for providing real-time debriefing and training services of sport games
US8990151B2 (en) 2011-10-14 2015-03-24 Box, Inc. Automatic and semi-automatic tagging features of work items in a shared workspace for metadata tracking in a cloud-based content management system with selective or optional user contribution
US8990307B2 (en) 2011-11-16 2015-03-24 Box, Inc. Resource effective incremental updating of a remote client with events which occurred via a cloud-enabled platform
US8996607B1 (en) * 2010-06-04 2015-03-31 Amazon Technologies, Inc. Identity-based casting of network addresses
US9009083B1 (en) * 2012-02-15 2015-04-14 Google Inc. Mechanism for automatic quantification of multimedia production quality
US9015601B2 (en) 2011-06-21 2015-04-21 Box, Inc. Batch uploading of content to a web-based collaboration environment
US9019123B2 (en) 2011-12-22 2015-04-28 Box, Inc. Health check services for web-based collaboration environments
US9027108B2 (en) 2012-05-23 2015-05-05 Box, Inc. Systems and methods for secure file portability between mobile applications on a mobile device
US9054919B2 (en) 2012-04-05 2015-06-09 Box, Inc. Device pinning capability for enterprise cloud service and storage accounts
US9063912B2 (en) 2011-06-22 2015-06-23 Box, Inc. Multimedia content preview rendering in a cloud content management system
US9117087B2 (en) 2012-09-06 2015-08-25 Box, Inc. System and method for creating a secure channel for inter-application communication based on intents
US9135462B2 (en) 2012-08-29 2015-09-15 Box, Inc. Upload and download streaming encryption to/from a cloud-based platform
US20150310307A1 (en) * 2014-04-29 2015-10-29 At&T Intellectual Property I, Lp Method and apparatus for analyzing media content
US9195519B2 (en) 2012-09-06 2015-11-24 Box, Inc. Disabling the self-referential appearance of a mobile application in an intent via a background registration
US9197718B2 (en) 2011-09-23 2015-11-24 Box, Inc. Central management and control of user-contributed content in a web-based collaboration environment and management console thereof
US9195636B2 (en) 2012-03-07 2015-11-24 Box, Inc. Universal file type preview for mobile devices
US9207964B1 (en) 2012-11-15 2015-12-08 Google Inc. Distributed batch matching of videos with dynamic resource allocation based on global score and prioritized scheduling score in a heterogeneous computing environment
US9213684B2 (en) 2013-09-13 2015-12-15 Box, Inc. System and method for rendering document in web browser or mobile device regardless of third-party plug-in software
US9237170B2 (en) 2012-07-19 2016-01-12 Box, Inc. Data loss prevention (DLP) methods and architectures by a cloud service
US9292833B2 (en) 2012-09-14 2016-03-22 Box, Inc. Batching notifications of activities that occur in a web-based collaboration environment
US9311071B2 (en) 2012-09-06 2016-04-12 Box, Inc. Force upgrade of a mobile application via a server side configuration file
US9369520B2 (en) 2012-08-19 2016-06-14 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US9396216B2 (en) 2012-05-04 2016-07-19 Box, Inc. Repository redundancy implementation of a system which incrementally updates clients with events that occurred via a cloud-enabled platform
US9396245B2 (en) 2013-01-02 2016-07-19 Box, Inc. Race condition handling in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9413587B2 (en) 2012-05-02 2016-08-09 Box, Inc. System and method for a third-party application to access content within a cloud-based platform
CN105843953A (en) * 2016-04-12 2016-08-10 乐视控股(北京)有限公司 Multimedia recommendation method and device
US9483473B2 (en) 2013-09-13 2016-11-01 Box, Inc. High availability architecture for a cloud-based concurrent-access collaboration platform
US9495364B2 (en) 2012-10-04 2016-11-15 Box, Inc. Enhanced quick search features, low-barrier commenting/interactive features in a collaboration platform
US9507795B2 (en) 2013-01-11 2016-11-29 Box, Inc. Functionalities, features, and user interface of a synchronization client to a cloud-based environment
US9519886B2 (en) 2013-09-13 2016-12-13 Box, Inc. Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform
US9519526B2 (en) 2007-12-05 2016-12-13 Box, Inc. File management system and collaboration service and integration capabilities with third party applications
US9535909B2 (en) 2013-09-13 2017-01-03 Box, Inc. Configurable event-based automation architecture for cloud-based collaboration platforms
US9535924B2 (en) 2013-07-30 2017-01-03 Box, Inc. Scalability improvement in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US20170006342A1 (en) * 2015-07-02 2017-01-05 The Nielsen Company (Us), Llc Methods and apparatus to correct errors in audience measurements for media accessed using over-the-top devices
US9553758B2 (en) 2012-09-18 2017-01-24 Box, Inc. Sandboxing individual applications to specific user folders in a cloud-based service
US9558202B2 (en) 2012-08-27 2017-01-31 Box, Inc. Server side techniques for reducing database workload in implementing selective subfolder synchronization in a cloud-based environment
US9575981B2 (en) 2012-04-11 2017-02-21 Box, Inc. Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system
CN106454529A (en) * 2016-10-21 2017-02-22 乐视控股(北京)有限公司 Family member analyzing method and device based on television
US9602514B2 (en) 2014-06-16 2017-03-21 Box, Inc. Enterprise mobility management and verification of a managed application by a content provider
CN106529384A (en) * 2015-09-11 2017-03-22 英特尔公司 Technologies for object recognition for internet-of-things edge devices
US9628268B2 (en) 2012-10-17 2017-04-18 Box, Inc. Remote key management in a cloud-based environment
US9633037B2 (en) 2013-06-13 2017-04-25 Box, Inc Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
CN106649282A (en) * 2015-10-30 2017-05-10 阿里巴巴集团控股有限公司 Machine translation method and device based on statistics, and electronic equipment
US9652741B2 (en) 2011-07-08 2017-05-16 Box, Inc. Desktop application for access and interaction with workspaces in a cloud-based content management system and synchronization mechanisms thereof
US9665349B2 (en) 2012-10-05 2017-05-30 Box, Inc. System and method for generating embeddable widgets which enable access to a cloud-based collaboration platform
US9681158B1 (en) * 2009-05-27 2017-06-13 Google Inc. Predicting engagement in video content
US9691051B2 (en) 2012-05-21 2017-06-27 Box, Inc. Security enhancement through application access control
US9705967B2 (en) 2012-10-04 2017-07-11 Box, Inc. Corporate user discovery and identification of recommended collaborators in a cloud platform
US9712510B2 (en) 2012-07-06 2017-07-18 Box, Inc. Systems and methods for securely submitting comments among users via external messaging applications in a cloud-based platform
US9756022B2 (en) 2014-08-29 2017-09-05 Box, Inc. Enhanced remote key management for an enterprise in a cloud-based environment
US9773051B2 (en) 2011-11-29 2017-09-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US9792320B2 (en) 2012-07-06 2017-10-17 Box, Inc. System and method for performing shard migration to support functions of a cloud-based service
US9794256B2 (en) 2012-07-30 2017-10-17 Box, Inc. System and method for advanced control tools for administrators in a cloud-based service
US9798823B2 (en) 2015-11-17 2017-10-24 Spotify Ab System, methods and computer products for determining affinity to a content creator
US9805050B2 (en) 2013-06-21 2017-10-31 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US9894119B2 (en) 2014-08-29 2018-02-13 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US9953036B2 (en) 2013-01-09 2018-04-24 Box, Inc. File system monitoring in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9959420B2 (en) 2012-10-02 2018-05-01 Box, Inc. System and method for enhanced security and management mechanisms for enterprise administrators in a cloud-based environment
US9965745B2 (en) 2012-02-24 2018-05-08 Box, Inc. System and method for promoting enterprise adoption of a web-based collaboration environment
US9978040B2 (en) 2011-07-08 2018-05-22 Box, Inc. Collaboration sessions in a workspace on a cloud-based content management system
US10038731B2 (en) 2014-08-29 2018-07-31 Box, Inc. Managing flow-based interactions with cloud-based shared content
US10079911B2 (en) 2015-12-04 2018-09-18 International Business Machines Corporation Content analysis based selection of user communities or groups of users
US10110656B2 (en) 2013-06-25 2018-10-23 Box, Inc. Systems and methods for providing shell communication in a cloud-based platform
US20180349496A1 (en) * 2017-06-01 2018-12-06 LLC "Synesis" Method for indexing of videodata for faceted classification
US20190005513A1 (en) * 2017-06-30 2019-01-03 Rovi Guides, Inc. Systems and methods for generating consumption probability metrics
US10200256B2 (en) 2012-09-17 2019-02-05 Box, Inc. System and method of a manipulative handle in an interactive mobile user interface
US10229134B2 (en) 2013-06-25 2019-03-12 Box, Inc. Systems and methods for managing upgrades, migration of user data and improving performance of a cloud-based platform
US10235383B2 (en) 2012-12-19 2019-03-19 Box, Inc. Method and apparatus for synchronization of items with read-only permissions in a cloud-based environment
CN109729395A (en) * 2018-12-14 2019-05-07 广州市百果园信息技术有限公司 Video quality evaluation method, device, storage medium and computer equipment
WO2019094398A1 (en) * 2017-11-08 2019-05-16 ViralGains Inc. Machine learning-based media content sequencing and placement
US10380633B2 (en) 2015-07-02 2019-08-13 The Nielsen Company (Us), Llc Methods and apparatus to generate corrected online audience measurement data
CN110363046A (en) * 2018-02-08 2019-10-22 西南石油大学 Passenger flow analysis system and dispositions method based on recognition of face
US10452667B2 (en) 2012-07-06 2019-10-22 Box Inc. Identification of people as search results from key-word based searches of content in a cloud-based environment
US10509527B2 (en) 2013-09-13 2019-12-17 Box, Inc. Systems and methods for configuring event-based automation in cloud-based collaboration platforms
US10530854B2 (en) 2014-05-30 2020-01-07 Box, Inc. Synchronization of permissioned content in cloud-based environments
JP2020017126A (en) * 2018-07-26 2020-01-30 Zホールディングス株式会社 Learning device, learning method, and learning program
US10554426B2 (en) 2011-01-20 2020-02-04 Box, Inc. Real time notification of activities that occur in a web-based collaboration environment
US10574442B2 (en) 2014-08-29 2020-02-25 Box, Inc. Enhanced remote key management for an enterprise in a cloud-based environment
US10599671B2 (en) 2013-01-17 2020-03-24 Box, Inc. Conflict resolution, retry condition management, and handling of problem files for the synchronization client to a cloud-based platform
US20200159744A1 (en) * 2013-03-18 2020-05-21 Spotify Ab Cross media recommendation
CN111241981A (en) * 2020-01-07 2020-06-05 武汉旷视金智科技有限公司 Video structuring system
US10725968B2 (en) 2013-05-10 2020-07-28 Box, Inc. Top down delete or unsynchronization on delete of and depiction of item synchronization with a synchronization client to a cloud-based platform
US10803475B2 (en) 2014-03-13 2020-10-13 The Nielsen Company (Us), Llc Methods and apparatus to compensate for server-generated errors in database proprietor impression data due to misattribution and/or non-coverage
CN111782878A (en) * 2020-07-06 2020-10-16 聚好看科技股份有限公司 Server, display equipment and video searching and sorting method thereof
US10834449B2 (en) 2016-12-31 2020-11-10 The Nielsen Company (Us), Llc Methods and apparatus to associate audience members with over-the-top device media impressions
US10846074B2 (en) 2013-05-10 2020-11-24 Box, Inc. Identification and handling of items to be ignored for synchronization with a cloud-based platform by a synchronization client
US10866931B2 (en) 2013-10-22 2020-12-15 Box, Inc. Desktop application for accessing a cloud collaboration platform
US10915492B2 (en) 2012-09-19 2021-02-09 Box, Inc. Cloud-based platform enabled with media content indexed for text-based searches and/or metadata extraction
US11003708B2 (en) 2013-04-25 2021-05-11 Trent R. McKenzie Interactive music feedback system
US11017025B2 (en) * 2009-08-24 2021-05-25 Google Llc Relevance-based image selection
CN113312515A (en) * 2021-04-30 2021-08-27 北京奇艺世纪科技有限公司 Play data prediction method, system, electronic equipment and medium
US11113318B2 (en) * 2013-03-15 2021-09-07 The Nielsen Company (Us), Llc Character based media analytics
WO2021171099A3 (en) * 2020-02-28 2021-10-07 Lomotif Private Limited Method for atomically tracking and storing video segments in multi-segment audiovisual compositions
US11210610B2 (en) 2011-10-26 2021-12-28 Box, Inc. Enhanced multimedia content preview rendering in a cloud content management system
US20220007068A1 (en) * 2014-08-04 2022-01-06 Adap.Tv, Inc. Systems and methods for addressable targeting of electronic content
US11227195B2 (en) * 2019-10-02 2022-01-18 King Fahd University Of Petroleum And Minerals Multi-modal detection engine of sentiment and demographic characteristics for social media videos
US11232481B2 (en) 2012-01-30 2022-01-25 Box, Inc. Extended applications of multimedia content previews in the cloud-based content management system
WO2022150405A1 (en) * 2021-01-07 2022-07-14 Dish Network L.L.C. Searching for and prioritizing audiovisual content using the viewer's age
US11423077B2 (en) 2013-04-25 2022-08-23 Trent R. McKenzie Interactive music feedback system
US11425181B1 (en) * 2021-05-11 2022-08-23 CLIPr Co. System and method to ingest one or more video streams across a web platform
US11451875B2 (en) * 2018-06-04 2022-09-20 Samsung Electronics Co., Ltd. Machine learning-based approach to demographic attribute inference using time-sensitive features
US20230011422A1 (en) * 2017-04-27 2023-01-12 Korrus, Inc. Methods and Systems for an Automated Design, Fulfillment, Deployment and Operation Platform for Lighting Installations
US11743544B2 (en) 2013-04-25 2023-08-29 Trent R McKenzie Interactive content feedback system

Families Citing this family (39)

Publication number Priority date Publication date Assignee Title
US8769584B2 (en) 2009-05-29 2014-07-01 TVI Interactive Systems, Inc. Methods for displaying contextually targeted content on a connected television
US10949458B2 (en) 2009-05-29 2021-03-16 Inscape Data, Inc. System and method for improving work load management in ACR television monitoring system
US9055335B2 (en) 2009-05-29 2015-06-09 Cognitive Networks, Inc. Systems and methods for addressing a media database using distance associative hashing
US10375451B2 (en) 2009-05-29 2019-08-06 Inscape Data, Inc. Detection of common media segments
US10116972B2 (en) 2009-05-29 2018-10-30 Inscape Data, Inc. Methods for identifying video segments and displaying option to view from an alternative source and/or on an alternative device
US9449090B2 (en) 2009-05-29 2016-09-20 Vizio Inscape Technologies, Llc Systems and methods for addressing a media database using distance associative hashing
US8438122B1 (en) * 2010-05-14 2013-05-07 Google Inc. Predictive analytic modeling platform
US8473431B1 (en) 2010-05-14 2013-06-25 Google Inc. Predictive analytic modeling platform
US10192138B2 (en) 2010-05-27 2019-01-29 Inscape Data, Inc. Systems and methods for reducing data density in large datasets
US9838753B2 (en) 2013-12-23 2017-12-05 Inscape Data, Inc. Monitoring individual viewing of television events using tracking pixels and cookies
US8533224B2 (en) 2011-05-04 2013-09-10 Google Inc. Assessing accuracy of trained predictive models
US8209274B1 (en) * 2011-05-09 2012-06-26 Google Inc. Predictive model importation
US8868472B1 (en) 2011-06-15 2014-10-21 Google Inc. Confidence scoring in predictive modeling
US8762299B1 (en) 2011-06-27 2014-06-24 Google Inc. Customized predictive analytical model training
US8745084B2 (en) * 2011-07-20 2014-06-03 Docscorp Australia Repository content analysis and management
US8442265B1 (en) * 2011-10-19 2013-05-14 Facebook Inc. Image selection from captured video sequence based on social components
WO2013148853A1 (en) 2012-03-29 2013-10-03 The Echo Nest Corporation Real time mapping of user models to an inverted data index for retrieval, filtering and recommendation
US9158754B2 (en) 2012-03-29 2015-10-13 The Echo Nest Corporation Named entity extraction from a block of text
US20130263181A1 (en) * 2012-03-30 2013-10-03 Set Media, Inc. Systems and methods for defining video advertising channels
US8843951B1 (en) * 2012-08-27 2014-09-23 Google Inc. User behavior indicator
WO2014151351A1 (en) * 2013-03-15 2014-09-25 The Echo Nest Corporation Demographic and media preference prediction using media content data analysis
US20160110730A1 (en) * 2013-05-02 2016-04-21 New York University System, method and computer-accessible medium for predicting user demographics of online items
US9955192B2 (en) 2013-12-23 2018-04-24 Inscape Data, Inc. Monitoring individual viewing of television events using tracking pixels and cookies
KR102222318B1 (en) * 2014-03-18 2021-03-03 삼성전자주식회사 User recognition method and apparatus
US9430477B2 (en) 2014-05-12 2016-08-30 International Business Machines Corporation Predicting knowledge gaps of media consumers
US9277276B1 (en) * 2014-08-18 2016-03-01 Google Inc. Systems and methods for active training of broadcast personalization and audience measurement systems using a presence band
CN107534800B (en) 2014-12-01 2020-07-03 构造数据有限责任公司 System and method for continuous media segment identification
AU2016211254B2 (en) 2015-01-30 2019-09-19 Inscape Data, Inc. Methods for identifying video segments and displaying option to view from an alternative source and/or on an alternative device
WO2016168556A1 (en) 2015-04-17 2016-10-20 Vizio Inscape Technologies, Llc Systems and methods for reducing data density in large datasets
BR112018000820A2 (en) 2015-07-16 2018-09-04 Inscape Data Inc computerized method, system, and product of computer program
JP6903653B2 (en) 2015-07-16 2021-07-14 インスケイプ データ インコーポレイテッド Common media segment detection
BR112018000801A2 (en) 2015-07-16 2018-09-04 Inscape Data Inc system, and method
US10080062B2 (en) 2015-07-16 2018-09-18 Inscape Data, Inc. Optimizing media fingerprint retention to improve system resource utilization
US11343587B2 (en) * 2017-02-23 2022-05-24 Disney Enterprises, Inc. Techniques for estimating person-level viewing behavior
WO2018187592A1 (en) 2017-04-06 2018-10-11 Inscape Data, Inc. Systems and methods for improving accuracy of device maps using media viewing data
CN107609570B (en) * 2017-08-01 2020-09-22 天津大学 Micro video popularity prediction method based on attribute classification and multi-view feature fusion
US20220114205A1 (en) * 2019-02-05 2022-04-14 Google Llc Comprehensibility-based identification of educational content of multiple content types
US20210133769A1 (en) * 2019-10-30 2021-05-06 Veda Data Solutions, Inc. Efficient data processing to identify information and reformant data files, and applications thereof
WO2021231299A1 (en) 2020-05-13 2021-11-18 The Nielsen Company (Us), Llc Methods and apparatus to generate computer-trained machine learning models to correct computer-generated errors in audience data

Citations (24)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11512903A (en) * 1995-09-29 1999-11-02 ボストン テクノロジー インク Multimedia architecture for interactive advertising
CA2947649C (en) * 2006-03-27 2020-04-14 The Nielsen Company (Us), Llc Methods and systems to meter media content presented on a wireless communication device
WO2007140609A1 (en) * 2006-06-06 2007-12-13 Moreideas Inc. Method and system for image and video analysis, enhancement and display for communication
US9177209B2 (en) * 2007-12-17 2015-11-03 Sinoeast Concept Limited Temporal segment based extraction and robust matching of video fingerprints
WO2010119996A1 (en) * 2009-04-13 2010-10-21 (주)엔써즈 Method and apparatus for providing moving image advertisements

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4646145A (en) * 1980-04-07 1987-02-24 R. D. Percy & Company Television viewer reaction determining systems
US20030154486A1 (en) * 1995-05-05 2003-08-14 Dunn Matthew W. Interactive entertainment network system and method for customizing operation thereof according to viewer preferences
US5790935A (en) * 1996-01-30 1998-08-04 Hughes Aircraft Company Virtual on-demand digital information delivery system and method
US6381746B1 (en) * 1999-05-26 2002-04-30 Unisys Corporation Scaleable video system having shared control circuits for sending multiple video streams to respective sets of viewers
US7813822B1 (en) * 2000-10-05 2010-10-12 Hoffberg Steven M Intelligent electronic appliance system and method
US6993535B2 (en) * 2001-06-18 2006-01-31 International Business Machines Corporation Business method and apparatus for employing induced multimedia classifiers based on unified representation of features reflecting disparate modalities
US20040215663A1 (en) * 2001-11-30 2004-10-28 Microsoft Corporation Media agent
US20030187730A1 (en) * 2002-03-27 2003-10-02 Jai Natarajan System and method of measuring exposure of assets on the client side
US20040019900A1 (en) * 2002-07-23 2004-01-29 Philip Knightbridge Integration platform for interactive communications and management of video on demand services
US20050027766A1 (en) * 2003-07-29 2005-02-03 Ben Jan I. Content identification system
US20080193016A1 (en) * 2004-02-06 2008-08-14 Agency For Science, Technology And Research Automatic Video Event Detection and Indexing
US7631336B2 (en) * 2004-07-30 2009-12-08 Broadband Itv, Inc. Method for converting, navigating and displaying video content uploaded from the internet to a digital TV video-on-demand platform
US20080144943A1 (en) * 2005-05-09 2008-06-19 Salih Burak Gokturk System and method for enabling image searching using manual enrichment, classification, and/or segmentation
US20080152231A1 (en) * 2005-05-09 2008-06-26 Salih Burak Gokturk System and method for enabling image recognition and searching of images
US20070086741A1 (en) * 2005-08-30 2007-04-19 Hideo Ando Information playback system using information storage medium
US20080036917A1 (en) * 2006-04-07 2008-02-14 Mark Pascarella Methods and systems for generating and delivering navigatable composite videos
US7620551B2 (en) * 2006-07-20 2009-11-17 Mspot, Inc. Method and apparatus for providing search capability and targeted advertising for audio, image, and video content over the internet
US20080097821A1 (en) * 2006-10-24 2008-04-24 Microsoft Corporation Recommendations utilizing meta-data based pair-wise lift predictions
US20090055862A1 (en) * 2007-08-20 2009-02-26 Ads-Vantage, Ltd. System and method for providing real time targeted rating to enable content placement for video audiences
US20090150947A1 (en) * 2007-10-05 2009-06-11 Soderstrom Robert W Online search, storage, manipulation, and delivery of video content
US20090132355A1 (en) * 2007-11-19 2009-05-21 AT&T Knowledge Ventures L.P. System and method for automatically selecting advertising for video data
US20090190473A1 (en) * 2008-01-30 2009-07-30 Alcatel Lucent Method and apparatus for targeted content delivery based on internet video traffic analysis
US20110184807A1 (en) * 2010-01-28 2011-07-28 Futurewei Technologies, Inc. System and Method for Filtering Targeted Advertisements for Video Content Delivery
US20110288939A1 (en) * 2010-05-24 2011-11-24 Jon Elvekrog Targeting users based on persona data

Cited By (184)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8338685B2 (en) * 2002-10-03 2012-12-25 Polyphonic Human Media Interface, S.L. Method and system for video and film recommendation
US20110270848A1 (en) * 2002-10-03 2011-11-03 Polyphonic Human Media Interface S.L. Method and System for Video and Film Recommendation
US9519526B2 (en) 2007-12-05 2016-12-13 Box, Inc. File management system and collaboration service and integration capabilities with third party applications
US9681158B1 (en) * 2009-05-27 2017-06-13 Google Inc. Predicting engagement in video content
US10080042B1 (en) * 2009-05-27 2018-09-18 Google Llc Predicting engagement in video content
KR101351715B1 (en) 2009-06-26 2014-01-14 후지쯔 가부시끼가이샤 Inheritance communication administrating apparatus
US20100332590A1 (en) * 2009-06-26 2010-12-30 Fujitsu Limited Inheritance communication administrating apparatus
US20210349944A1 (en) * 2009-08-24 2021-11-11 Google Llc Relevance-Based Image Selection
US11017025B2 (en) * 2009-08-24 2021-05-25 Google Llc Relevance-based image selection
US11693902B2 (en) * 2009-08-24 2023-07-04 Google Llc Relevance-based image selection
US8996607B1 (en) * 2010-06-04 2015-03-31 Amazon Technologies, Inc. Identity-based casting of network addresses
US20120072937A1 (en) * 2010-09-21 2012-03-22 Kddi Corporation Context-based automatic selection of factor for use in estimating characteristics of viewers viewing same content
US8930976B2 (en) * 2010-09-21 2015-01-06 Kddi Corporation Context-based automatic selection of factor for use in estimating characteristics of viewers viewing same content
US10554426B2 (en) 2011-01-20 2020-02-04 Box, Inc. Real time notification of activities that occur in a web-based collaboration environment
US8565539B2 (en) 2011-05-31 2013-10-22 Hewlett-Packard Development Company, L.P. System and method for determining estimated age using an image collection
US9015601B2 (en) 2011-06-21 2015-04-21 Box, Inc. Batch uploading of content to a web-based collaboration environment
US9063912B2 (en) 2011-06-22 2015-06-23 Box, Inc. Multimedia content preview rendering in a cloud content management system
US8750682B1 (en) * 2011-07-06 2014-06-10 Google Inc. Video interface
US9049485B1 (en) * 2011-07-06 2015-06-02 Google Inc. Video interface
US9652741B2 (en) 2011-07-08 2017-05-16 Box, Inc. Desktop application for access and interaction with workspaces in a cloud-based content management system and synchronization mechanisms thereof
US9978040B2 (en) 2011-07-08 2018-05-22 Box, Inc. Collaboration sessions in a workspace on a cloud-based content management system
US9197718B2 (en) 2011-09-23 2015-11-24 Box, Inc. Central management and control of user-contributed content in a web-based collaboration environment and management console thereof
US8990151B2 (en) 2011-10-14 2015-03-24 Box, Inc. Automatic and semi-automatic tagging features of work items in a shared workspace for metadata tracking in a cloud-based content management system with selective or optional user contribution
US9098474B2 (en) * 2011-10-26 2015-08-04 Box, Inc. Preview pre-generation based on heuristics and algorithmic prediction/assessment of predicted user behavior for enhancement of user experience
US20130110854A1 (en) * 2011-10-26 2013-05-02 Kimber Lockhart Preview pre-generation based on heuristics and algorithmic prediction/assessment of predicted user behavior for enhancement of user experience
US11210610B2 (en) 2011-10-26 2021-12-28 Box, Inc. Enhanced multimedia content preview rendering in a cloud content management system
US8990307B2 (en) 2011-11-16 2015-03-24 Box, Inc. Resource effective incremental updating of a remote client with events which occurred via a cloud-enabled platform
US9015248B2 (en) 2011-11-16 2015-04-21 Box, Inc. Managing updates at clients used by a user to access a cloud-based collaboration service
US9773051B2 (en) 2011-11-29 2017-09-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US10909141B2 (en) 2011-11-29 2021-02-02 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US11537630B2 (en) 2011-11-29 2022-12-27 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US11853320B2 (en) 2011-11-29 2023-12-26 Box, Inc. Mobile platform file and folder selection functionalities for offline access and synchronization
US9019123B2 (en) 2011-12-22 2015-04-28 Box, Inc. Health check services for web-based collaboration environments
US11232481B2 (en) 2012-01-30 2022-01-25 Box, Inc. Extended applications of multimedia content previews in the cloud-based content management system
US9009083B1 (en) * 2012-02-15 2015-04-14 Google Inc. Mechanism for automatic quantification of multimedia production quality
US9999825B2 (en) * 2012-02-23 2018-06-19 Playsight Interactive Ltd. Smart-court system and method for providing real-time debriefing and training services of sport games
US20150018990A1 (en) * 2012-02-23 2015-01-15 Playsight Interactive Ltd. Smart-court system and method for providing real-time debriefing and training services of sport games
US10758807B2 (en) 2012-02-23 2020-09-01 Playsight Interactive Ltd. Smart court system
US20180264342A1 (en) * 2012-02-23 2018-09-20 Playsight Interactive Ltd. Smart-court system and method for providing real-time debriefing and training services of sport games
US10391378B2 (en) * 2012-02-23 2019-08-27 Playsight Interactive Ltd. Smart-court system and method for providing real-time debriefing and training services of sport games
US20190351306A1 (en) * 2012-02-23 2019-11-21 Playsight Interactive Ltd. Smart-court system and method for providing real-time debriefing and training services of sport games
US10713624B2 (en) 2012-02-24 2020-07-14 Box, Inc. System and method for promoting enterprise adoption of a web-based collaboration environment
US9965745B2 (en) 2012-02-24 2018-05-08 Box, Inc. System and method for promoting enterprise adoption of a web-based collaboration environment
US9195636B2 (en) 2012-03-07 2015-11-24 Box, Inc. Universal file type preview for mobile devices
US8923621B2 (en) 2012-03-29 2014-12-30 Yahoo! Inc. Finding engaging media with initialized explore-exploit
US9547679B2 (en) * 2012-03-29 2017-01-17 Spotify Ab Demographic and media preference prediction using media content data analysis
US20140195544A1 (en) * 2012-03-29 2014-07-10 The Echo Nest Corporation Demographic and media preference prediction using media content data analysis
WO2013149077A1 (en) * 2012-03-29 2013-10-03 Yahoo! Inc. Finding engaging media with initialized explore-exploit
US9406072B2 (en) * 2012-03-29 2016-08-02 Spotify Ab Demographic and media preference prediction using media content data analysis
US20130262469A1 (en) * 2012-03-29 2013-10-03 The Echo Nest Corporation Demographic and media preference prediction using media content data analysis
US9054919B2 (en) 2012-04-05 2015-06-09 Box, Inc. Device pinning capability for enterprise cloud service and storage accounts
US9575981B2 (en) 2012-04-11 2017-02-21 Box, Inc. Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system
US9413587B2 (en) 2012-05-02 2016-08-09 Box, Inc. System and method for a third-party application to access content within a cloud-based platform
US9396216B2 (en) 2012-05-04 2016-07-19 Box, Inc. Repository redundancy implementation of a system which incrementally updates clients with events that occurred via a cloud-enabled platform
US9691051B2 (en) 2012-05-21 2017-06-27 Box, Inc. Security enhancement through application access control
US9280613B2 (en) 2012-05-23 2016-03-08 Box, Inc. Metadata enabled third-party application access of content at a cloud-based platform via a native client to the cloud-based platform
US8914900B2 (en) 2012-05-23 2014-12-16 Box, Inc. Methods, architectures and security mechanisms for a third-party application to access content in a cloud-based platform
US9027108B2 (en) 2012-05-23 2015-05-05 Box, Inc. Systems and methods for secure file portability between mobile applications on a mobile device
US9552444B2 (en) 2012-05-23 2017-01-24 Box, Inc. Identification verification mechanisms for a third-party application to access content in a cloud-based platform
US8781880B2 (en) 2012-06-05 2014-07-15 Rank Miner, Inc. System, method and apparatus for voice analytics of recorded audio
WO2013184667A1 (en) * 2012-06-05 2013-12-12 Rank Miner, Inc. System, method and apparatus for voice analytics of recorded audio
US9021099B2 (en) 2012-07-03 2015-04-28 Box, Inc. Load balancing secure FTP connections among multiple FTP servers
US8719445B2 (en) 2012-07-03 2014-05-06 Box, Inc. System and method for load balancing multiple file transfer protocol (FTP) servers to service FTP connections for a cloud-based service
US10452667B2 (en) 2012-07-06 2019-10-22 Box Inc. Identification of people as search results from key-word based searches of content in a cloud-based environment
US9712510B2 (en) 2012-07-06 2017-07-18 Box, Inc. Systems and methods for securely submitting comments among users via external messaging applications in a cloud-based platform
US9792320B2 (en) 2012-07-06 2017-10-17 Box, Inc. System and method for performing shard migration to support functions of a cloud-based service
US9237170B2 (en) 2012-07-19 2016-01-12 Box, Inc. Data loss prevention (DLP) methods and architectures by a cloud service
US9794256B2 (en) 2012-07-30 2017-10-17 Box, Inc. System and method for advanced control tools for administrators in a cloud-based service
US8868574B2 (en) 2012-07-30 2014-10-21 Box, Inc. System and method for advanced search and filtering mechanisms for enterprise administrators in a cloud-based environment
US8745267B2 (en) 2012-08-19 2014-06-03 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US9369520B2 (en) 2012-08-19 2016-06-14 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US9729675B2 (en) 2012-08-19 2017-08-08 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US9558202B2 (en) 2012-08-27 2017-01-31 Box, Inc. Server side techniques for reducing database workload in implementing selective subfolder synchronization in a cloud-based environment
US9450926B2 (en) 2012-08-29 2016-09-20 Box, Inc. Upload and download streaming encryption to/from a cloud-based platform
US9135462B2 (en) 2012-08-29 2015-09-15 Box, Inc. Upload and download streaming encryption to/from a cloud-based platform
US9117087B2 (en) 2012-09-06 2015-08-25 Box, Inc. System and method for creating a secure channel for inter-application communication based on intents
US9195519B2 (en) 2012-09-06 2015-11-24 Box, Inc. Disabling the self-referential appearance of a mobile application in an intent via a background registration
US9311071B2 (en) 2012-09-06 2016-04-12 Box, Inc. Force upgrade of a mobile application via a server side configuration file
US9292833B2 (en) 2012-09-14 2016-03-22 Box, Inc. Batching notifications of activities that occur in a web-based collaboration environment
US10200256B2 (en) 2012-09-17 2019-02-05 Box, Inc. System and method of a manipulative handle in an interactive mobile user interface
US9553758B2 (en) 2012-09-18 2017-01-24 Box, Inc. Sandboxing individual applications to specific user folders in a cloud-based service
US10915492B2 (en) 2012-09-19 2021-02-09 Box, Inc. Cloud-based platform enabled with media content indexed for text-based searches and/or metadata extraction
US9959420B2 (en) 2012-10-02 2018-05-01 Box, Inc. System and method for enhanced security and management mechanisms for enterprise administrators in a cloud-based environment
US9495364B2 (en) 2012-10-04 2016-11-15 Box, Inc. Enhanced quick search features, low-barrier commenting/interactive features in a collaboration platform
US9705967B2 (en) 2012-10-04 2017-07-11 Box, Inc. Corporate user discovery and identification of recommended collaborators in a cloud platform
US9665349B2 (en) 2012-10-05 2017-05-30 Box, Inc. System and method for generating embeddable widgets which enable access to a cloud-based collaboration platform
US9628268B2 (en) 2012-10-17 2017-04-18 Box, Inc. Remote key management in a cloud-based environment
US9690629B1 (en) 2012-11-15 2017-06-27 Google Inc. Distributed batch matching of videos based on recency of occurrence of events associated with the videos
US9207964B1 (en) 2012-11-15 2015-12-08 Google Inc. Distributed batch matching of videos with dynamic resource allocation based on global score and prioritized scheduling score in a heterogeneous computing environment
US10235383B2 (en) 2012-12-19 2019-03-19 Box, Inc. Method and apparatus for synchronization of items with read-only permissions in a cloud-based environment
US9396245B2 (en) 2013-01-02 2016-07-19 Box, Inc. Race condition handling in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9953036B2 (en) 2013-01-09 2018-04-24 Box, Inc. File system monitoring in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9507795B2 (en) 2013-01-11 2016-11-29 Box, Inc. Functionalities, features, and user interface of a synchronization client to a cloud-based environment
US10599671B2 (en) 2013-01-17 2020-03-24 Box, Inc. Conflict resolution, retry condition management, and handling of problem files for the synchronization client to a cloud-based platform
US11604815B2 (en) 2013-03-15 2023-03-14 The Nielsen Company (Us), Llc Character based media analytics
US11188573B2 (en) * 2013-03-15 2021-11-30 The Nielsen Company (Us), Llc Character based media analytics
US11113318B2 (en) * 2013-03-15 2021-09-07 The Nielsen Company (Us), Llc Character based media analytics
US11645301B2 (en) * 2013-03-18 2023-05-09 Spotify Ab Cross media recommendation
US20200159744A1 (en) * 2013-03-18 2020-05-21 Spotify Ab Cross media recommendation
US10795929B2 (en) 2013-04-25 2020-10-06 Trent R. McKenzie Interactive music feedback system
US11423077B2 (en) 2013-04-25 2022-08-23 Trent R. McKenzie Interactive music feedback system
US11003708B2 (en) 2013-04-25 2021-05-11 Trent R. McKenzie Interactive music feedback system
US20140324885A1 (en) * 2013-04-25 2014-10-30 Trent R. McKenzie Color-based rating system
US10102224B2 (en) * 2013-04-25 2018-10-16 Trent R. McKenzie Interactive music feedback system
US11743544B2 (en) 2013-04-25 2023-08-29 Trent R McKenzie Interactive content feedback system
US10725968B2 (en) 2013-05-10 2020-07-28 Box, Inc. Top down delete or unsynchronization on delete of and depiction of item synchronization with a synchronization client to a cloud-based platform
US10846074B2 (en) 2013-05-10 2020-11-24 Box, Inc. Identification and handling of items to be ignored for synchronization with a cloud-based platform by a synchronization client
US10877937B2 (en) 2013-06-13 2020-12-29 Box, Inc. Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
US9633037B2 (en) 2013-06-13 2017-04-25 Box, Inc. Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform
US9805050B2 (en) 2013-06-21 2017-10-31 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US11531648B2 (en) 2013-06-21 2022-12-20 Box, Inc. Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform
US10110656B2 (en) 2013-06-25 2018-10-23 Box, Inc. Systems and methods for providing shell communication in a cloud-based platform
US10229134B2 (en) 2013-06-25 2019-03-12 Box, Inc. Systems and methods for managing upgrades, migration of user data and improving performance of a cloud-based platform
US9535924B2 (en) 2013-07-30 2017-01-03 Box, Inc. Scalability improvement in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform
US9483473B2 (en) 2013-09-13 2016-11-01 Box, Inc. High availability architecture for a cloud-based concurrent-access collaboration platform
US9519886B2 (en) 2013-09-13 2016-12-13 Box, Inc. Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform
US9535909B2 (en) 2013-09-13 2017-01-03 Box, Inc. Configurable event-based automation architecture for cloud-based collaboration platforms
US11822759B2 (en) 2013-09-13 2023-11-21 Box, Inc. System and methods for configuring event-based automation in cloud-based collaboration platforms
US9704137B2 (en) 2013-09-13 2017-07-11 Box, Inc. Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform
US11435865B2 (en) 2013-09-13 2022-09-06 Box, Inc. System and methods for configuring event-based automation in cloud-based collaboration platforms
US10509527B2 (en) 2013-09-13 2019-12-17 Box, Inc. Systems and methods for configuring event-based automation in cloud-based collaboration platforms
US8892679B1 (en) 2013-09-13 2014-11-18 Box, Inc. Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform
US10044773B2 (en) 2013-09-13 2018-08-07 Box, Inc. System and method of a multi-functional managing user interface for accessing a cloud-based platform via mobile devices
US9213684B2 (en) 2013-09-13 2015-12-15 Box, Inc. System and method for rendering document in web browser or mobile device regardless of third-party plug-in software
US10866931B2 (en) 2013-10-22 2020-12-15 Box, Inc. Desktop application for accessing a cloud collaboration platform
US10803475B2 (en) 2014-03-13 2020-10-13 The Nielsen Company (Us), Llc Methods and apparatus to compensate for server-generated errors in database proprietor impression data due to misattribution and/or non-coverage
US11568431B2 (en) 2014-03-13 2023-01-31 The Nielsen Company (Us), Llc Methods and apparatus to compensate for server-generated errors in database proprietor impression data due to misattribution and/or non-coverage
WO2015167885A1 (en) * 2014-04-29 2015-11-05 At&T Intellectual Property I, Lp Method and apparatus for analyzing media content
US10133961B2 (en) * 2014-04-29 2018-11-20 At&T Intellectual Property I, L.P. Method and apparatus for analyzing media content
US10713529B2 (en) * 2014-04-29 2020-07-14 At&T Intellectual Property I, L.P. Method and apparatus for analyzing media content
US20150310307A1 (en) * 2014-04-29 2015-10-29 At&T Intellectual Property I, Lp Method and apparatus for analyzing media content
US9898685B2 (en) * 2014-04-29 2018-02-20 At&T Intellectual Property I, L.P. Method and apparatus for analyzing media content
US20190057283A1 (en) * 2014-04-29 2019-02-21 At&T Intellectual Property I, L.P. Method and apparatus for analyzing media content
US10530854B2 (en) 2014-05-30 2020-01-07 Box, Inc. Synchronization of permissioned content in cloud-based environments
US9602514B2 (en) 2014-06-16 2017-03-21 Box, Inc. Enterprise mobility management and verification of a managed application by a content provider
US11949936B2 (en) * 2014-08-04 2024-04-02 Adap.Tv, Inc. Systems and methods for addressable targeting of electronic content
US20220007068A1 (en) * 2014-08-04 2022-01-06 Adap.Tv, Inc. Systems and methods for addressable targeting of electronic content
US9894119B2 (en) 2014-08-29 2018-02-13 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US10574442B2 (en) 2014-08-29 2020-02-25 Box, Inc. Enhanced remote key management for an enterprise in a cloud-based environment
US11146600B2 (en) 2014-08-29 2021-10-12 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US10708323B2 (en) 2014-08-29 2020-07-07 Box, Inc. Managing flow-based interactions with cloud-based shared content
US10708321B2 (en) 2014-08-29 2020-07-07 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US10038731B2 (en) 2014-08-29 2018-07-31 Box, Inc. Managing flow-based interactions with cloud-based shared content
US11876845B2 (en) 2014-08-29 2024-01-16 Box, Inc. Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms
US9756022B2 (en) 2014-08-29 2017-09-05 Box, Inc. Enhanced remote key management for an enterprise in a cloud-based environment
US10045082B2 (en) * 2015-07-02 2018-08-07 The Nielsen Company (Us), Llc Methods and apparatus to correct errors in audience measurements for media accessed using over-the-top devices
US11645673B2 (en) 2015-07-02 2023-05-09 The Nielsen Company (Us), Llc Methods and apparatus to generate corrected online audience measurement data
US10785537B2 (en) 2015-07-02 2020-09-22 The Nielsen Company (Us), Llc Methods and apparatus to correct errors in audience measurements for media accessed using over the top devices
US11706490B2 (en) 2015-07-02 2023-07-18 The Nielsen Company (Us), Llc Methods and apparatus to correct errors in audience measurements for media accessed using over-the-top devices
US10380633B2 (en) 2015-07-02 2019-08-13 The Nielsen Company (Us), Llc Methods and apparatus to generate corrected online audience measurement data
US11259086B2 (en) 2015-07-02 2022-02-22 The Nielsen Company (Us), Llc Methods and apparatus to correct errors in audience measurements for media accessed using over the top devices
US20170006342A1 (en) * 2015-07-02 2017-01-05 The Nielsen Company (Us), Llc Methods and apparatus to correct errors in audience measurements for media accessed using over-the-top devices
US10368130B2 (en) 2015-07-02 2019-07-30 The Nielsen Company (Us), Llc Methods and apparatus to correct errors in audience measurements for media accessed using over the top devices
CN106529384A (en) * 2015-09-11 2017-03-22 英特尔公司 Technologies for object recognition for internet-of-things edge devices
CN106649282A (en) * 2015-10-30 2017-05-10 阿里巴巴集团控股有限公司 Machine translation method and device based on statistics, and electronic equipment
US11210355B2 (en) 2015-11-17 2021-12-28 Spotify Ab System, methods and computer products for determining affinity to a content creator
US9798823B2 (en) 2015-11-17 2017-10-24 Spotify Ab System, methods and computer products for determining affinity to a content creator
US10079911B2 (en) 2015-12-04 2018-09-18 International Business Machines Corporation Content analysis based selection of user communities or groups of users
CN105843953A (en) * 2016-04-12 2016-08-10 乐视控股(北京)有限公司 Multimedia recommendation method and device
WO2017177643A1 (en) * 2016-04-12 2017-10-19 乐视控股(北京)有限公司 Multimedia recommendation method and device
CN106454529A (en) * 2016-10-21 2017-02-22 乐视控股(北京)有限公司 Family member analyzing method and device based on television
US10834449B2 (en) 2016-12-31 2020-11-10 The Nielsen Company (Us), Llc Methods and apparatus to associate audience members with over-the-top device media impressions
US20230011422A1 (en) * 2017-04-27 2023-01-12 Korrus, Inc. Methods and Systems for an Automated Design, Fulfillment, Deployment and Operation Platform for Lighting Installations
US20180349496A1 (en) * 2017-06-01 2018-12-06 LLC "Synesis" Method for indexing of videodata for faceted classification
US10810602B2 (en) * 2017-06-30 2020-10-20 Rovi Guides, Inc. Systems and methods for generating consumption probability metrics
US11763324B2 (en) 2017-06-30 2023-09-19 Rovi Product Corporation Systems and methods for generating consumption probability metrics
US20190005513A1 (en) * 2017-06-30 2019-01-03 Rovi Guides, Inc. Systems and methods for generating consumption probability metrics
US11321720B2 (en) * 2017-06-30 2022-05-03 Rovi Guides, Inc. Systems and methods for generating consumption probability metrics
WO2019094398A1 (en) * 2017-11-08 2019-05-16 ViralGains Inc. Machine learning-based media content sequencing and placement
US11270337B2 (en) 2017-11-08 2022-03-08 ViralGains Inc. Machine learning-based media content sequencing and placement
CN110363046A (en) * 2018-02-08 2019-10-22 西南石油大学 Passenger flow analysis system and dispositions method based on recognition of face
US11451875B2 (en) * 2018-06-04 2022-09-20 Samsung Electronics Co., Ltd. Machine learning-based approach to demographic attribute inference using time-sensitive features
JP2020017126A (en) * 2018-07-26 2020-01-30 Z Holdings Corp. Learning device, learning method, and learning program
JP7096093B2 (en) 2018-07-26 2022-07-05 ヤフー株式会社 Learning equipment, learning methods and learning programs
CN109729395A (en) * 2018-12-14 2019-05-07 广州市百果园信息技术有限公司 Video quality evaluation method, device, storage medium and computer equipment
US11227195B2 (en) * 2019-10-02 2022-01-18 King Fahd University Of Petroleum And Minerals Multi-modal detection engine of sentiment and demographic characteristics for social media videos
CN111241981A (en) * 2020-01-07 2020-06-05 武汉旷视金智科技有限公司 Video structuring system
WO2021171099A3 (en) * 2020-02-28 2021-10-07 Lomotif Private Limited Method for atomically tracking and storing video segments in multi-segment audiovisual compositions
US11243995B2 (en) 2020-02-28 2022-02-08 Lomotif Private Limited Method for atomically tracking and storing video segments in multi-segment audio-video compositions
CN111782878A (en) * 2020-07-06 2020-10-16 聚好看科技股份有限公司 Server, display equipment and video searching and sorting method thereof
US11785309B2 (en) 2021-01-07 2023-10-10 Dish Network L.L.C. Searching for and prioritizing audiovisual content using the viewer's age
WO2022150405A1 (en) * 2021-01-07 2022-07-14 Dish Network L.L.C. Searching for and prioritizing audiovisual content using the viewer's age
CN113312515A (en) * 2021-04-30 2021-08-27 北京奇艺世纪科技有限公司 Playback data prediction method, system, electronic device and medium
US11425181B1 (en) * 2021-05-11 2022-08-23 CLIPr Co. System and method to ingest one or more video streams across a web platform

Also Published As

Publication number Publication date
US8301498B1 (en) 2012-10-30
EP3367676A1 (en) 2018-08-29
EP2382782A1 (en) 2011-11-02
WO2010087909A1 (en) 2010-08-05
US20160014440A1 (en) 2016-01-14
US20120272259A1 (en) 2012-10-25
EP2382782A4 (en) 2013-05-01

Similar Documents

Publication Publication Date Title
US8301498B1 (en) Video content analysis for automatic demographics recognition of users and videos
US10210462B2 (en) Video content analysis for automatic demographics recognition of users and videos
US11693902B2 (en) Relevance-based image selection
US20210397651A1 (en) Estimating social interest in time-based media
US9471936B2 (en) Web identity to social media identity correlation
US8473981B1 (en) Augmenting metadata of digital media objects using per object classifiers
US11574321B2 (en) Generating audience response metrics and ratings from social interest in time-based media
US8510252B1 (en) Classification of inappropriate video content using multi-scale features
US20190258671A1 (en) Video Tagging System and Method
US9087297B1 (en) Accurate video concept recognition via classifier combination
CA2817103C (en) Learning tags for video annotation using latent subtags
US9098807B1 (en) Video content claiming classifier
US10223438B1 (en) System and method for digital-content-grouping, playlist-creation, and collaborator-recommendation
WO2017070656A1 (en) Video content retrieval system
CN109871464B (en) Video recommendation method and device based on UCL semantic indexing
Li et al. A study on content-based video recommendation
Zhu et al. Videotopic: Modeling user interests for content-based video recommendation
US8880534B1 (en) Video classification boosting
CN114356979A (en) Query method and related equipment thereof
Redaelli et al. Automated Intro Detection For TV Series
Tacchini et al. Do You Have a Pop Face? Here is a Pop Song. Using Profile Pictures to Mitigate the Cold-start Problem in Music Recommender Systems.

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CORTES, CORINNA;KUMAR, SANJIV;MAKADIA, AMEESH;AND OTHERS;REEL/FRAME:022325/0095

Effective date: 20090224

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING PUBLICATION PROCESS

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044142/0357

Effective date: 20170929