US20150249865A1 - Context-based content recommendations - Google Patents
- Publication number
- US20150249865A1 (application US14/431,481)
- Authority
- US
- United States
- Prior art keywords
- context
- options
- content
- ordered
- categories
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4668—Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2457—Query processing with adaptation to user needs
- G06F16/24575—Query processing with adaptation to user needs using context
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3322—Query formulation using system suggestions
- G06F16/3323—Query formulation using system suggestions using document space presentation or visualization, e.g. category, hierarchy or range presentation and selection
-
- G06F17/30528—
-
- G06F17/30643—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44222—Analytics of user selections, e.g. selection of programs or purchase activity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4622—Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4667—Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4755—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for defining user preferences, e.g. favourite actors or genre
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4756—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4758—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for providing answers, e.g. voting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4826—End-user interface for program selection using recommendation lists, e.g. of programs or channels sorted out according to their score
Definitions
- Implementations are described that relate to providing recommendations. Various particular implementations relate to providing context-based recommendations for various forms of content to be consumed by a user.
- Home entertainment systems, including televisions and media centers, are converging with the Internet and providing access to a large number of available sources of content, such as video, movies, TV programs, music, etc. This expansion in the number of available sources necessitates a new strategy for navigating a media interface associated with such systems and for making content recommendations and selections.
- Another drawback is that even if a large rating database is created by a user, there still may be inaccurate or non-relevant recommendations, since the rating information may have been inaccurately collected from the user. For example, if a user rates the first five horror movies presented for rating as one-star movies, the conventional recommendation engine may stop recommending horror movies to the user. However, the user may simply not have liked the first five horror movies presented and may actually desire to have other horror movies brought to his or her attention.
- an ordered set of options is provided for a context category related to content selection.
- the ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories.
- An ordered set of options is provided for one or more additional context categories related to content selection.
- the ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category.
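The re-ordering described in the bullets above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the affinity table, its values, and the function name are assumptions standing in for relationships learned from usage data.

```python
# Hypothetical affinity scores between a previously identified option
# (e.g. a "Time" value) and candidate options of another context
# category (e.g. "Companions"). Illustrative values only.
AFFINITY = {
    ("Friday Night", "Friends"): 0.9,
    ("Friday Night", "Partner"): 0.7,
    ("Friday Night", "Alone"): 0.4,
    ("Friday Night", "Family&Kids"): 0.3,
}

def order_options(identified_option, candidates):
    """Sort candidate options of an additional context category by their
    affinity to the option already identified for another category,
    highest affinity first."""
    return sorted(candidates,
                  key=lambda c: AFFINITY.get((identified_option, c), 0.0),
                  reverse=True)
```

For example, once "Friday Night" has been identified for the "Time" category, the companion options would be presented with "Friends" first under the assumed scores above.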
- implementations may be configured or embodied in various manners.
- an implementation may be performed as a method, or embodied as an apparatus, such as, for example, an apparatus configured to perform a set of operations or an apparatus storing instructions for performing a set of operations, or embodied in a signal.
- FIG. 1 provides a block diagram depicting an implementation of a system for delivering video content.
- FIG. 2 provides a block diagram depicting an implementation of a set-top box/digital video recorder (DVR).
- FIG. 3 provides a pictorial representation of a perspective view of an implementation of a remote controller, tablet, and/or second screen device.
- FIGS. 4-11 provide screen shots of an implementation for recommending content based on context.
- FIGS. 12-15 provide flow diagrams of various process implementations for recommending content based on context.
- FIG. 16 provides a flow diagram of an implementation of a system or apparatus for recommending content based on context.
- the inventor has determined various manners in which, for example, a user interface to a content recommendation system can be more helpful.
- One implementation provides a movie recommendation and discovery engine that takes the user's context into account by getting information about the current day of the week, time of the day, audience or companion(s), and desired content (for example, movie) genre. Based on this information, the system recommends a set of movies that suits the given context.
- One or more implementations provide a way to take the user's context into account when recommending movies to watch.
- context can vary depending on the content that is to be consumed. “Consuming” content has the well-known meaning of experiencing the content by, for example, watching or listening to the content. For different content, certain aspects of the context are more, or less, important. For example, activity and location are typically not as relevant when considering a movie to recommend as when considering music to recommend.
- a context category refers generally to a set of values that can be selected as context. More particularly, a context category often represents a common variable, and includes a set of alternative values for that variable. For example, “Time” is a common variable, and “Friday Night”, and “Saturday Morning” belong to the “Time” category, and may be chosen as values for that variable. As another example, “Genre” is another common variable, and “Action” and “Drama” are possible alternative values for the “Genre” variable.
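The notion of a context category as a common variable with rank-ordered alternative values can be captured in a minimal data structure. The class and field names below are assumptions for illustration, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class ContextCategory:
    """A context category: a common variable plus its alternative values."""
    name: str      # the common variable, e.g. "Time" or "Genre"
    options: list  # alternative values for that variable, in rank order

# The examples from the text, expressed as category instances.
time_category = ContextCategory("Time", ["Friday Night", "Saturday Morning"])
genre_category = ContextCategory("Genre", ["Action", "Drama"])
```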
- Dynamically building categories means that the categories are built based on the user input. Because the user input is dynamic, the building of the context categories is dynamic. For example, if a user selects "Friday Night with Friends", the category "Genre" will be built algorithmically at runtime, based on that selection. "Building" the context category "Genre" refers to determining which values (elements) to include in the category "Genre", and how to rank-order those values.
- the context categories are built automatically. This means that, aside from the user providing input such as day, time, and companions, there is essentially no user intervention in the creation of categories. Rather, in a purely automatic system, all of the decisions are made by algorithms, not by people. For example, recommending "Action" and "Drama" movies (this is building the context category of "Genre") on "Friday Night with Friends" was not a decision made directly by humans. The decision is based on data (for example, from previous user studies) and algorithms.
- the terms “audience” and “companions” are generally used interchangeably in this application to refer to the set of people consuming the content. However, in other implementations, the terms “audience” and “companions” can refer to distinct context categories. Some values for this context category in various implementations discussed in this application may, strictly speaking, refer to companions and not include the user (for example, “Partner”). Other values may refer to the entire audience including the user or might not include the user (for example, “Family&Kids” or “Friends” may or may not include the user). However, in typical implementations, the content (for example, movie) recommendations are indeed based on the entire audience including the user, whether or not the value for the “audience” or “companions” context category specifically includes the user.
- One or more implementations include two main parts: a movie selection system and a user interface. For one or more particular implementations, each will be described below, with reference to the figures. Variations of the movie selection and the user interface are contemplated.
- In step 2, from the data acquired in step 1, we are provided data describing which specific movies people watch in a specific context (for example, a day/time/companions context). In order to make this information useful, and to help users navigate it, we build upon this information. In various implementations, we do so by aggregating the selected movies by their genres, and by performing statistical tests to identify which genres have the best average rating for a certain context. For example, if a large part of the users said that they would watch "Inception" and "Signs" on a "Friday Night with their Partner", then "Science Fiction" would be selected as a recommended genre for that context. Other implementations use a variety of tools and techniques to build upon the information from step 1.
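The aggregation in step 2 can be sketched roughly as below. The movie metadata, the study selections, and the simple averaging are all illustrative assumptions; the patent only specifies that genres are scored by aggregating selected movies and running statistical tests.

```python
from collections import defaultdict

# Hypothetical inputs: genre metadata per movie, and study selections
# of (context, movie, rating). Illustrative values only.
MOVIE_GENRES = {
    "Inception": ["Science Fiction"],
    "Signs": ["Science Fiction"],
    "The Notebook": ["Romance"],
}
SELECTIONS = [
    ("Friday Night with Partner", "Inception", 5),
    ("Friday Night with Partner", "Signs", 4),
    ("Friday Night with Partner", "The Notebook", 3),
]

def genre_scores(context):
    """Aggregate the movies selected for a context by genre, and score
    each genre by the average rating of its movies."""
    ratings_by_genre = defaultdict(list)
    for ctx, movie, rating in SELECTIONS:
        if ctx == context:
            for genre in MOVIE_GENRES[movie]:
                ratings_by_genre[genre].append(rating)
    return {g: sum(r) / len(r) for g, r in ratings_by_genre.items()}
```

Under these assumed inputs, "Science Fiction" scores highest for "Friday Night with Partner", matching the example in the text.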
- By the end of step 2, we have identified which genres are recommended for each context.
- In step 3, we identify which movies we should recommend to the user (and the rank-order of those movies) for each combination of context and genre.
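Step 3's per-(context, genre) ranking might be sketched as follows. The pick-count table and ranking criterion are hypothetical stand-ins; the patent does not specify the scoring method.

```python
# Hypothetical counts of how often study participants picked each movie
# for a given (context, genre) pair. Illustrative values only.
PICK_COUNTS = {
    (("Friday Night with Partner", "Science Fiction"), "Inception"): 12,
    (("Friday Night with Partner", "Science Fiction"), "Signs"): 7,
    (("Friday Night with Partner", "Science Fiction"), "Moon"): 3,
}

def rank_movies(context, genre):
    """Return the movies for a (context, genre) pair, rank-ordered by
    how often they were picked for that pair (most picks first)."""
    key = (context, genre)
    movies = [(m, n) for (k, m), n in PICK_COUNTS.items() if k == key]
    return [m for m, _ in sorted(movies, key=lambda x: x[1], reverse=True)]
```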
- tools and techniques are available for use in performing the gathering (bootstrapping) of additional movie titles for a given context and genre. Such tools and techniques include, for example, categorizations, ratings, and reviews of movies.
- the user interface of various implementations allows users to intuitively navigate through the results from the previous phase and to find movies that are recommended for their current context.
- One implementation of the process is as follows:
- the system automatically recognizes the current day of the week and time of the day and presents a list of companions (Alone, Friends, Family&Kids, Partner) ordered by their expected frequency (most expected companions first) (see, for example, FIGS. 4 and 6 ). Users are able to change the day and/or time information (see, for example, FIG. 5 ).
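The first screen's behavior described above can be sketched as below. The expected-frequency table and the part-of-day boundaries are assumptions for illustration; in the implementation these would come from learned usage statistics.

```python
import datetime

# Hypothetical expected-frequency ordering of companions per
# (day, part of day). Illustrative values only.
EXPECTED_FREQ = {
    ("Friday", "Evening"): ["Friends", "Partner", "Family&Kids", "Alone"],
    ("Saturday", "Morning"): ["Family&Kids", "Alone", "Partner", "Friends"],
}
DEFAULT_ORDER = ["Alone", "Friends", "Family&Kids", "Partner"]

def part_of_day(hour):
    """Map an hour to a coarse portion of the day (assumed boundaries)."""
    if hour < 12:
        return "Morning"
    if hour < 18:
        return "Afternoon"
    return "Evening"

def companion_list(now=None):
    """Detect the current day and part of day, then return the companion
    options ordered by expected frequency (most expected first)."""
    now = now or datetime.datetime.now()
    day = now.strftime("%A")
    return EXPECTED_FREQ.get((day, part_of_day(now.hour)), DEFAULT_ORDER)
```

Passing a fixed `datetime` makes the behavior testable; calling `companion_list()` with no argument uses the current clock, as the interface does.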
- Genres are ordered by their recommendation score (calculated, for example, in step 2 of the Movie Selection phase). Recall that in step 2 of the Movie Selection phase we identified the top genres, and because we average the ratings of the movies belonging to each genre, we have a score that we can use to sort the list.
- a list of recommended movies is presented (determined, for example, in step 3 of the Movie Selection phase) (see, for example, FIGS. 9 and 11 ). Users can get more information about the movie, such as title, poster, description, genres, and links to external sources (see, for example, FIG. 10 ).
- external sources include, for example, IMDB (http://www.imdb.com/), Amazon (http://www.amazon.com/), Netflix (https://www.netflix.com/), and AllMovie (http://www.allmovie.com/).
- FIGS. 1-3 provide an implementation of a system and environment in which movie recommendations can be provided. Other systems and environments are envisioned, and the examples associated with FIGS. 1-3 are not intended to be exhaustive or restrictive.
- the content originates from a content source 102 , such as a movie studio or production house.
- the content may be supplied in at least one of two forms.
- One form may be a broadcast form of content.
- the broadcast content is provided to the broadcast affiliate manager 104 , which is typically a national broadcast service, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), etc.
- the broadcast affiliate manager may collect and store the content, and may schedule delivery of the content over a delivery network, shown as delivery network 1 ( 106 ).
- Delivery network 1 ( 106 ) may include satellite link transmission from a national center to one or more regional or local centers. Delivery network 1 ( 106 ) may also include local content delivery using local delivery systems such as over-the-air broadcast, satellite broadcast, or cable broadcast. The locally delivered content is provided to a receiving device 108 in a user's home, where the content will subsequently be searched by the user. It is to be appreciated that the receiving device 108 can take many forms and may be embodied as a set-top box/digital video recorder (DVR), a gateway, a modem, etc. Further, the receiving device 108 may act as an entry point, or gateway, for a home network system that includes additional devices configured as either client or peer devices in the home network.
- Special content may include content delivered as premium viewing, pay-per-view, or other content otherwise not provided to the broadcast affiliate manager, for example, movies, video games, or other video elements.
- the special content can originate from the same, or from a different, content source (for example, content source 102 ) as the broadcast content provided to the broadcast affiliate manager 104 .
- the special content may be content requested by the user.
- the special content may be delivered to a content manager 110 .
- the content manager 110 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service.
- the content manager 110 may also incorporate Internet content into the delivery system.
- the content manager 110 may deliver the content to the user's receiving device 108 over a separate delivery network, delivery network 2 ( 112 ).
- Delivery network 2 ( 112 ) may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of delivery network 2 ( 112 ) and content from the content manager 110 may be delivered using all or parts of delivery network 1 ( 106 ). In addition, the user may also obtain content directly from the Internet via delivery network 2 ( 112 ) without necessarily having the content managed by the content manager 110 .
- the special content is provided as an augmentation to the broadcast content, providing alternative displays, purchase and merchandising options, enhancement material, etc.
- the special content may completely replace some programming content provided as broadcast content.
- the special content may be completely separate from the broadcast content, and may simply be a media alternative that the user may choose to utilize.
- the special content may be a library of movies that are not yet available as broadcast content.
- the receiving device 108 may receive different types of content from one or both of delivery network 1 and delivery network 2.
- the receiving device 108 processes the content, and provides a separation of the content based on user preferences and commands.
- the receiving device 108 may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to FIG. 2 .
- the processed content (at least for video content) is provided to a display device 114 .
- the display device 114 may be a conventional 2-D type display or may alternatively be an advanced 3-D display.
- the receiving device 108 may also be interfaced to a second screen such as a touch screen control device 116 .
- the touch screen control device 116 may be adapted to provide user control for the receiving device 108 and/or the display device 114 .
- the touch screen control device 116 may also be capable of displaying video content.
- the video content may be graphics entries, such as user interface entries (as discussed below), or may be a portion of the video content that is delivered to the display device 114 .
- the touch screen control device 116 may interface to receiving device 108 using any well-known signal transmission system, such as infra-red (IR) or radio frequency (RF) communications, and may include standard protocols such as the Infrared Data Association (IrDA) standard, Wi-Fi, Bluetooth, and the like, or any proprietary protocol. Operations of touch screen control device 116 will be described in further detail below.
- the system 100 also includes a back end server 118 and a usage database 120 .
- the back end server 118 includes a personalization engine that analyzes the usage habits of a user and makes recommendations based on those usage habits.
- the usage database 120 is where the usage habits for a user are stored. In some cases, the usage database 120 may be part of the back end server 118 .
- the back end server 118 (as well as the usage database 120 ) is connected to the system 100 and accessed through the delivery network 2 ( 112 ).
- Receiving device 200 may operate similar to the receiving device described in FIG. 1 and may be included, for example, as part of a gateway device, modem, set-top box, or other similar communications device.
- the device 200 shown may also be incorporated into other systems including an audio device or a display device. In either case, several components necessary for complete operation of the system are not shown in the interest of conciseness, as they are well known to those skilled in the art.
- the input signal receiver 202 may be one of several known receiver circuits used for receiving, demodulation, and decoding signals provided over one of the several possible networks including over the air, cable, satellite, Ethernet, fiber, and phone line networks.
- the desired input signal may be selected and retrieved by the input signal receiver 202 based on user input provided through a control interface or touch panel interface 222 .
- Touch panel interface 222 may include an interface for a touch screen device. Touch panel interface 222 may also be adapted to interface to a cellular phone, a tablet, a mouse, a high end remote or the like.
- the decoded output signal is provided to an input stream processor 204 .
- the input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream.
- the audio content is provided to an audio processor 206 for conversion from the received format, such as a compressed digital signal, to an analog waveform signal.
- the analog waveform signal is provided to an audio interface 208 and further to the display device or audio amplifier.
- the audio interface 208 may provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or alternate audio interface such as via a Sony/Philips Digital Interconnect Format (SPDIF).
- the audio interface may also include amplifiers for driving one or more sets of speakers.
- the audio processor 206 also performs any necessary conversion for the storage of the audio signals.
- the video output from the input stream processor 204 is provided to a video processor 210 .
- the video signal may be one of several formats.
- the video processor 210 provides, as necessary, a conversion of the video content, based on the input signal format.
- the video processor 210 also performs any necessary conversion for the storage of the video signals.
- a storage device 212 stores audio and video content received at the input.
- the storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, for example, navigation instructions such as fast-forward (FF) and rewind (Rew), received from a user interface 216 and/or touch panel interface 222 .
- the storage device 212 may be a hard disk drive, one or more large capacity integrated electronic memories, such as static RAM (SRAM), or dynamic RAM (DRAM), or may be an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive.
- the converted video signal from the video processor 210 , either originating from the input or from the storage device 212 , is provided to the display interface 218 .
- the display interface 218 further provides the display signal to a display device of the type described above.
- the display interface 218 may be an analog signal interface such as red-green-blue (RGB) or may be a digital interface such as HDMI. It is to be appreciated that the display interface 218 will generate the various screens for presenting the search results (for example, as described in more detail below with respect to FIGS. 4-11 ).
- the controller 214 is interconnected via a bus to several of the components of the device 200 , including the input stream processor 204 , audio processor 206 , video processor 210 , storage device 212 , the touch panel interface 222 , and the user interface 216 .
- the controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display.
- the controller 214 also manages the retrieval and playback of stored content. Furthermore, as will be described below, the controller 214 performs searching of content and the creation and adjusting of the displays representing the context and/or the content, for example, as described below with respect to FIGS. 4-11 .
- the controller 214 is further coupled to control memory 220 (for example, volatile or non-volatile memory, including RAM, SRAM, DRAM, ROM, programmable ROM (PROM), flash memory, electronically programmable ROM (EPROM), electronically erasable programmable ROM (EEPROM), etc.) for storing information and instruction code for controller 214 .
- Control memory 220 may store instructions for controller 214 .
- Control memory may also store a database of elements, such as graphic elements representing context values or content. The database may be stored as a pattern of graphic elements, such as graphic elements containing content, various graphic elements used for generating a displayable user interface for display interface 218 , and the like.
- the memory may store the graphic elements in identified or grouped memory locations and use an access or location table to identify the memory locations for the various portions of information related to the graphic elements. Additional details related to the storage of the graphic elements will be described below.
- the implementation of the control memory 220 may include several possible embodiments, such as a single memory device or, alternatively, more than one memory circuit communicatively connected or coupled together to form a shared or common memory. Still further, the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
- the user interface process of various implementations employs an input device that can be used to provide input, including, for example, selection of day, time, audience, and/or genre.
- a tablet or touch panel device 300 (which is, for example, the same as the touch screen control device 116 shown in FIG. 1 and/or is an integrated example of receiving device 108 and touch screen control device 116 ) may be interfaced via the user interface 216 and/or touch panel interface 222 of the receiving device 200 .
- the touch panel device 300 allows operation of the receiving device or set top box based on hand movements, or gestures, and actions translated through the panel into commands for the set top box or other control device.
- the touch panel device 300 may simply serve as a navigational tool to navigate the display (for example, a navigational tool to navigate a display of context options and movie recommendations that is displayed on a TV). In other embodiments, the touch panel device 300 will additionally serve as the display device allowing the user to more directly interact with the navigation through the display of content.
- the touch panel device 300 may be included as part of a remote control device containing more conventional control functions such as activator and/or actuator buttons.
- the touch panel device 300 can also include at least one camera element. Note that various implementations employ a large screen TV for the display of, for example, context options and movie recommendations, and employ a user input device similar to a remote control to allow a user to navigate through the display.
- Referring to FIG. 4 , a screen shot 400 is shown.
- the system has automatically detected a current day of the week 402 and a current time of day (or at least a current portion of the day, such as, for example, morning, afternoon, or evening) 404 .
- the implementation then provides a rank-ordered list 410 of possible companions (that is, an audience for the content).
- the ordered list 410 of the screen shot 400 includes an “Alone” option 412 (a “face” icon), a “Family&Kids” option 414 (an icon with two people on the left and one person on the right), a “Partner” option 416 (an icon of two interlocked rings), and a “Friends” option 418 (an icon of two glasses toasting each other, with a star at the point of contact between the glasses).
- the companion list 410 is ordered (also referred to as sorted) by the expected frequencies of the possible companions 412 - 418 on the indicated day (Wednesday) 402 and at the indicated time 404 (afternoon).
- the companion list 410 is ordered from left to right in decreasing order of expected frequency.
- the system believes that it is most likely that the user will be watching the movie alone 412 .
- the next most likely companions, in order from greatest likelihood to least likelihood, are “Family&Kids” 414 , “Partner” 416 , and “Friends” 418 .
- the frequency or likelihood can be based on, for example, observed habits of the user that have been tracked, on a profile provided by the user, and/or on objective information provided for other users or groups of people.
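- The rank-ordering by expected frequency described above can be sketched in code. The following is a minimal, hypothetical Python sketch (the class name, option names, and counts are illustrative assumptions, not part of any described implementation) that tracks observed viewing habits and rank-orders audience options for a given day and time:

```python
# Hypothetical sketch: rank-order audience options by observed frequency for
# a given (day, time) context. Class and option names are illustrative.
from collections import defaultdict

class CompanionRanker:
    def __init__(self):
        # (day, time_of_day) -> {audience option: observed viewing count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, day, time_of_day, audience):
        """Track one viewing event (the user's observed habit)."""
        self.counts[(day, time_of_day)][audience] += 1

    def ranked_options(self, day, time_of_day, options):
        """Return the options sorted by decreasing expected frequency."""
        ctx = self.counts[(day, time_of_day)]
        return sorted(options, key=lambda a: ctx[a], reverse=True)

ranker = CompanionRanker()
for audience, times in [("Alone", 5), ("Family&Kids", 3),
                        ("Partner", 2), ("Friends", 1)]:
    for _ in range(times):
        ranker.observe("Wednesday", "Afternoon", audience)

print(ranker.ranked_options("Wednesday", "Afternoon",
                            ["Friends", "Partner", "Alone", "Family&Kids"]))
# ['Alone', 'Family&Kids', 'Partner', 'Friends']
```

- In this sketch, ties fall back to the options' given order; a real implementation could also blend in profile data or aggregate statistics from other users, as described above.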
- Referring to FIG. 5 , a screen shot is shown in which the user can change the indicated day and/or time.
- the prompt can be provided automatically or in response to a number of actions, including, for example, (i) the user hovering over the day and/or time fields with an input device such as, for example, a mouse or a finger, and/or (ii) the user selecting the day and/or time fields with an input device.
- the screen shot of FIG. 5 includes a window 510 overlaying the screen shot 400 .
- the overlaid screen shot 400 that is layered under the window 510 is shaded.
- the window 510 includes an indicator of the selected day 512 (shown as “Friday”), an indicator of the selected time 514 (shown as “Night”), and various controls. Two controls are provided for setting the day 512 , including a “+” icon 520 for incrementing the day (for example, from Friday to Saturday) and a “−” icon 521 for decrementing the day (for example, from Friday to Thursday).
- Two analogous controls are also provided for setting the time 514 , including a “+” icon 530 for incrementing the time (for example, from Night to Morning to Afternoon to Evening) and a “−” icon 531 for decrementing the time (for example, from Night to Evening).
- the window 510 also includes two operational buttons.
- a “Close” button 540 closes the window 510 , which is analogous to exiting the window without changing anything, and a “Set” button 545 sets the system to the selected day 512 and the selected time 514 .
- Referring to FIG. 6 , a screen shot 600 is shown that provides an ordered list 610 of audiences that is now based on the new selection of day 512 and time 514 from FIG. 5 .
- the list 610 is ordered from left to right in decreasing order of expected frequency.
- FIG. 6 provides a different ordering in the list 610 of audience than does the list 410 in FIG. 4 . This is because the user does not have the same likelihood of watching movies with various companions on Wednesday Afternoon as on Friday Night. Specifically, on Friday night, the system believes that it is most likely that the user will be watching a movie with Friends 418 .
- the remaining displayed options for audiences, in order from most likely to least likely, are “Partner” 416 , “Alone” 412 , and “Family&Kids” 414 .
- audience options include, in various implementations, “Movie Club”, “Church Group”, and “Work Friends”.
- Referring to FIG. 7 , a screen shot 700 is shown that presents an ordered list 710 of movie genres.
- the ordered list 710 is based on the selected context elements from, for example, FIGS. 4-6 .
- the ordered list 710 includes a context set 720 displaying the selected context elements.
- the context set 720 includes a day/time element 722 and an audience element 724 .
- the day/time element 722 is a generic element (a “clock” icon) that indicates that the day and time have been selected, but the day/time element 722 does not indicate what the selected day and the selected time are.
- the audience element 724 indicates that the selected audience is Friends.
- the audience element 724 provides the indication of the audience by using a smaller version of the toasting glasses icon that is used for the Friends option 418 from FIGS. 4 and 6 .
- the elements of the context set 720 present the name of the selection when a user “hovers” over the icon using, for example, a mouse or other pointing device.
- When a user hovers over the clock icon, such implementations provide a small text box that displays the selected day/time, such as, for example, “Friday Night”. Similarly, hovering over the audience element 724 provides a small text box that displays the selected audience, which is “Friends” in this example.
- the list 710 of movie genres includes four options for movie genres, listed in order (from left to right) of most likely to least likely. Those options are (i) a Thriller genre 732 (shown by an icon of a ticking bomb), (ii) a Crime genre 734 (shown by an icon of a rifle scope), (iii) a Science Fiction genre 736 (shown by an icon of an atom), and (iv) an Action genre 738 (shown by an icon of a curving highway). That is, the system believes that on Friday night, if the user is watching a movie with friends, then the most likely movie genres to be watched are, in decreasing order of likelihood, thriller, crime, science fiction, and action.
- Referring to FIG. 8 , a screen shot 800 is shown that presents a variation of FIG. 7 .
- a different audience has been selected.
- “Partner” has been selected to replace “Friends”.
- the new audience selection is shown in a context set 820 that includes the generic day/time element 722 and an audience element 824 .
- the audience element 824 indicates that the selected audience is Partner because the audience element 824 uses a smaller version of the interlocking rings icon used in the Partner option 416 from FIGS. 4 and 6 .
- the screen shot 800 includes a new ordered list 810 of movie genres that is based on the new audience that has been selected.
- the list 810 provides the following genres, in order from most likely to least likely: (i) the Science Fiction genre, (ii) a Fantasy genre 842 (shown by an icon of a magic wand with a star on top), (iii) a Comedy genre 844 (shown by an icon of a smiley face), and (iv) a Drama genre 846 (shown by an icon of a heartbeat as typically shown on a heart rate monitor used with an electrocardiogram).
- comparing the list 810 with the list 710 , it is clear that the system believes different movie genres are more, or less, likely to be selected by the different audiences. Indeed, the list 710 and the list 810 have different genres, and not just a different ordering of the same set of genres.
- Referring to FIG. 9 , a screen shot 900 is shown that presents movie recommendations for the selected context.
- the selected context is shown with a context set 920 that includes the generic day/time element 722 , the audience element 824 , and a genre element 926 which is a smaller version of the atom icon used to represent the Science Fiction genre 736 .
- various implementations display a text box with the name of a selected context element when a user hovers over that element in the context set 920 . For example, when a user hovers over the genre element 926 , such implementations provide a small text box that displays the selected genre, such as, for example, “Science Fiction”.
- the screen shot 900 includes an ordered set 910 of eight movie recommendations, with the highest recommendation at the top-left, and the lowest recommendation at the bottom-right.
- the set 910 includes, from highest recommendation to lowest recommendation: (i) a first recommendation 931 , which is “Inception”, (ii) a second recommendation 932 , which is “Children of Men”, (iii) a third recommendation 933 , which is “Signs”, (iv) a fourth recommendation 934 , which is “Super 8 ”, (v) a fifth recommendation 935 , which is “Déjà vu”, (vi) a sixth recommendation 936 , which is “Moon”, (vii) a seventh recommendation 937 , which is “Knowing”, and (viii) an eighth recommendation 938 , which is “Happening”.
- the eight recommendations are the movies that the system has selected as being the most likely to be selected for viewing by the user in the selected context.
- the movies can be presented in various orders, including for example, (i) ordered from highest to lowest recommendation from top to bottom and left to right, such that the highest recommendation is top-left (reference element 931 ) and the second highest recommendation is bottom-left (reference element 935 ), etc., (ii) ordered with the highest recommendations near the middle, (iii) ordered alphabetically, or (iv) randomly arranged.
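- The alternative grid orderings listed above can be sketched as follows. This is a hypothetical Python sketch (the function name and the use of a four-column grid are illustrative assumptions) showing row-major placement, with the highest recommendation at the top-left, versus the column-major variation in which the second-highest recommendation lands at the bottom-left:

```python
# Hypothetical sketch: arrange eight ranked recommendations in a 4-column grid.
# Row-major matches the default layout of screen shot 900; column-major matches
# the variation in which the second-highest recommendation is bottom-left.
def grid(items, cols, column_major=False):
    rows = len(items) // cols
    if column_major:
        # Fill down each column first: position (r, c) gets rank c * rows + r.
        return [[items[c * rows + r] for c in range(cols)] for r in range(rows)]
    # Fill across each row first: position (r, c) gets rank r * cols + c.
    return [[items[r * cols + c] for c in range(cols)] for r in range(rows)]

ranked = ["Inception", "Children of Men", "Signs", "Super 8",
          "Déjà vu", "Moon", "Knowing", "Happening"]
for row in grid(ranked, cols=4):
    print(row)
# Row-major: the top row holds the four highest recommendations.
```

- The middle-out, alphabetical, and random arrangements mentioned above would simply permute `ranked` before placement.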
- the set 910 of the screen shot 900 shows movie posters; however, other implementations merely list the titles.
- when the user selects a movie from the set 910 , one or more of a variety of operations may occur, including, for example, playing the movie, receiving information about the movie, receiving a payment screen for paying for the movie, etc.
- the user has other options in various implementations, besides selecting a displayed movie poster. For example, certain implementations allow a user to remove movies from the list of recommendations using, for example, a close button associated with the movie's poster. In various of such implementations, another movie is recommended and inserted as a replacement for the removed movie poster. Some implementations remember the user's selections and base future recommendations, in part, on these selections. Other implementations allow more, or fewer, than eight movie posters to be displayed at a given time.
- Referring to FIG. 10 , a window 1000 is displayed after a user has selected the sixth movie recommendation 936 (the movie “Moon”) from the set 910 .
- the window 1000 is overlaying the screen shot 900 .
- the overlaid screen shot 900 that is layered underneath the window 1000 is shaded.
- the window 1000 includes: (i) the movie title and year of release 1010 , (ii) the movie poster 1020 , (iii) a summary 1030 of the movie, and (iv) a set 1040 of options for viewing the movie.
- the set 1040 includes, in this implementation, four links to external sources of the selected movie “Moon”.
- the set 1040 includes (i) an AllMovie button 1042 to select AllMovie (http://www.allmovie.com/) as the external source, (ii) an IMDB button 1044 to select IMDB (http://www.imdb.com/) as the external source, (iii) an Amazon button 1046 to select Amazon (http://www.amazon.com/) as the external source, and (iv) a Netflix button 1048 to select Netflix (https://www.netflix.com/) as the external source.
- a user is also able to navigate back to the selection screen of the screen shot 900 .
- the user is able to navigate back to the previous screen of FIG. 9 .
- the screen shot 900 is shown again as a result of the user selecting (for example, clicking within) the overlaid screen shot 900 in FIG. 10 .
- the context set 920 serves, in part, as a history of the user's selections.
- Each of the icons 722 , 824 , and 926 in the context set 920 of the top-left area of the screen shot 900 can be selected by the user to go back to a particular previous screen.
- This feature provides a jump-back feature that can span several screens. For example, selecting the audience element 824 in the context set 920 of FIG. 11 navigates back, for example, to the screen shot 600 , which provides the audience recommendations.
- FIG. 11 also includes a “Partner” word icon 1110 (also referred to as a text box) that is displayed, for example, when the user hovers over the audience element 824 of the context set 920 .
- the audience element 824 is the “Partner” option, so the system provides a viewable name with the word icon 1110 as a guide to the user.
- Referring to FIG. 12 , a one-block flow diagram is provided that describes a process for recommending content according to one or more implementations.
- FIG. 12 provides a process that includes providing a content recommendation based on context.
- the content is, in various implementations, one or more of movies, music, sitcoms, serial shows, sports games, documentaries, advertisements, and entertainment.
- various of these categories can overlap and/or be hierarchically structured in different ways.
- for example, documentaries can be one genre of movies, and movies and sports games can be two genres of entertainment.
- alternatively, documentaries, movies, and sports games can be three separate genres of entertainment.
- Referring to FIG. 13 , a one-block flow diagram is provided that describes a process for recommending content according to one or more implementations.
- FIG. 13 provides a process that includes providing a content recommendation based on one or more of the following context categories: the user, the day, the time, the audience (also referred to as companions), and/or the genre. Note that the genre is often dependent on the type of content (for example, movies) that is being recommended.
- Referring to FIG. 14 , a one-block flow diagram is provided that describes a process for providing selections for a context category based on other context categories. For example, selections for the context categories of audience and/or genre can be provided. Further, the selections can be determined and rank-ordered based on one or more of the user, the day, and/or the time. It should be clear that the process of FIG. 14 is integrated, in various implementations, into the processes of FIGS. 12 and 13 .
- the process 1500 includes providing a set of options for a given context category, ordered based on a value for one or more other context categories ( 1510 ). In one particular implementation, this includes providing a user an ordered set of options for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories.
- the operation 1510 is performed in various implementations using, for example, one of the screen shots from any of FIGS. 4 and 6 - 8 . For example, FIG. 4 provides a list 410 based on the context for the day and time.
- the process 1500 further includes providing a set of options for one or more additional context categories, ordered based on an option for the given context category ( 1520 ).
- the operation 1520 includes providing an ordered set of options for one or more additional context categories related to content selection.
- the ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category.
- the operation 1520 is performed in various implementations using, for example, one of the screen shots from any of FIGS. 7-8 .
- Variations of the process 1500 further include receiving user input identifying one of the provided options for (i) the one or more other context categories, and/or (ii) the one or more additional context categories.
- This user input operation is performed in various implementations, for example, as discussed above in moving from FIG. 6 to FIG. 7 or 8 .
- the process 1500 can be performed using, for example, the structure provided in FIGS. 1-3 .
- the operations 1510 and 1520 can be performed using the receiving device 108 or the STB/DVR 200 to provide the sets of options on the display device 114 , the touch screen control device 116 , or the device of FIG. 3 .
- a user input operation can be performed using the touch screen control device 116 or the device of FIG. 3 to receive the user input.
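- The two operations 1510 and 1520 of the process 1500 can be sketched, for example, as follows. This is a hypothetical Python sketch; the likelihood tables and their values are illustrative assumptions standing in for whatever habit tracking, profiles, or objective data an implementation actually uses:

```python
# Hypothetical sketch of operations 1510 and 1520 of the process 1500.
# The likelihood tables below are illustrative assumptions.
AUDIENCE_GIVEN_DAY_TIME = {
    ("Friday", "Night"): {"Friends": 0.50, "Partner": 0.30,
                          "Alone": 0.15, "Family&Kids": 0.05},
}
GENRE_GIVEN_AUDIENCE = {
    "Friends": {"Thriller": 0.4, "Crime": 0.3,
                "Science Fiction": 0.2, "Action": 0.1},
}

def ordered_options(table, context_value):
    """Order one category's options by decreasing likelihood."""
    dist = table[context_value]
    return sorted(dist, key=dist.get, reverse=True)

# Operation 1510: options for one context category (audience), ordered by
# previously determined options for other categories (day and time).
audiences = ordered_options(AUDIENCE_GIVEN_DAY_TIME, ("Friday", "Night"))

# User input identifies one of the provided options.
selected = audiences[0]  # "Friends"

# Operation 1520: options for an additional category (genre), ordered by
# the identified option for the first category.
genres = ordered_options(GENRE_GIVEN_AUDIENCE, selected)
print(audiences, genres)
```

- The same ordering function serves both operations; only the context value that conditions the ordering changes, from a (day, time) pair to the identified audience.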
- Referring to FIG. 16 , a system or apparatus 1600 is shown that includes three components, one of which is optional.
- FIG. 16 includes an optional user input device 1610 , a presentation device 1620 , and a processor 1630 .
- these three components 1610 - 1630 are integrated into a single device, such as, for example, the device of FIG. 3 .
- Particular implementations integrate these three components 1610 - 1630 in a tablet used as a second screen while watching television.
- in such implementations, the user input device 1610 includes at least a touch-sensitive portion of a screen, the presentation device 1620 includes at least a presentation portion of the same screen, and the processor 1630 is housed within the tablet to receive and interpret the user input and to control the presentation device 1620 .
- FIG. 16 also encompasses distributed systems in which the processor 1630 is distinct from, and remotely located with respect to, one or more of the user input device 1610 and the presentation device 1620 .
- for example, in one such distributed implementation, the user input device 1610 is a remote control that communicates with a set-top box, the presentation device 1620 is a TV controlled by the set-top box, and the processor 1630 is located in the set-top box.
- in other implementations, the presentation device 1620 and the user input device 1610 are integrated into a second screen such as, for example, a tablet, while the processor 1630 is in a STB that controls both the tablet and a primary screen TV.
- the tablet receives and displays screen shots from the STB, providing movie recommendations.
- the tablet accepts and transmits input from the user to the STB, in which the user interacts with the content on the screen shots.
- the STB does the processing for the movie recommendation system, although various implementations do have a processor in the tablet.
- the processor 1630 of FIG. 16 is, for example, any of the options for a processor described throughout this application.
- the processor 1630 can also be, or include, for example, the processing components inherent in the devices shown or described with respect to FIGS. 1-3 .
- the presentation device 1620 is, for example, any device suitable for providing any of the sensory indications described throughout this application. Such devices include, for example, all user interface devices described throughout this application. Such devices also include, for example, the display components shown or described with respect to FIGS. 1-3 .
- the system/apparatus 1600 is used, in various implementations, to perform one or more of the processes shown in FIGS. 12-15 .
- the processor 1630 provides a content recommendation, based on context, on the presentation device 1620 .
- the processor 1630 provides a recommendation based on one or more of the user, the day, the time, the audience/companions, or the genre.
- the processor 1630 provides selections for audience/companions and/or genre that are ordered based on user, day, and/or time.
- Other implementations also combine one or more of the processes of FIGS. 12-14 using the system/apparatus 1600 .
- the processor 1630 provides the two sets of options in the operations 1510 and 1520 , and the user input device 1610 can receive the user input in those implementations that receive user input.
- the system/apparatus 1600 is also used, in various implementations, to provide one or more of the screen shots of FIGS. 4-11 .
- the processor 1630 provides the screen shots of FIGS. 4-11 on the presentation device 1620 , and receives user input from the user input device 1610 .
- the presentation device 1620 and the user input device 1610 are included in an integrated touch screen device, such as, for example, a tablet.
- some implementations of the system/apparatus 1600 include only the presentation device 1620 and the processor 1630 , and do not include the user input device 1610 .
- Such systems are able to make content recommendations on the presentation device 1620 .
- such implementations are able to access selections for context categories using one or more of, for example, (i) default values, (ii) values from profiles, and/or (iii) values accessed over a network.
- Additional implementations provide a user with options for selecting values for multiple context categories at the same time. For example, upon receiving user selection of time and day in FIG. 5 , an implementation provides a user with rank-ordered options for both audience and genre. In one such implementation, a screen provides a first option that includes Friends and Thriller, and a second option that includes Partner and Science Fiction.
- context is indicated or described, for example, by context categories that describe an activity.
- each activity (for example, consuming content such as a movie) can have its own context categories.
- One manner of determining context categories is to answer the common questions of “who”, “what”, “where”, “when”, “why”, and “how”. For example, if the activity is defined as consuming content, the common questions can result in a variety of context categories, as discussed below:
- the audience is a context category.
- separate context categories can be used for demographic information such as age, gender, occupation, education achieved, location of upbringing, and previously observed behavior for an individual in the audience.
- the genre of the content is a context category.
- separate context categories can be used for the length of the content, and the maturity rating of the content (for example, G, PG-13, or R).
- the location is a context category and can have values such as, for example, in a home, in an auditorium, in a vehicle such as a plane or car, in the Deep South, or in the North East. Additionally, or alternatively, separate context categories can be used for room characteristics (for example, living room, auditorium, or airplane cabin) and geographical location (for example, Deep South).
- the day-and-time is a context category.
- separate context categories can be used for the day, the time, the calendar season (winter, spring, summer, or fall), and the holiday season (for example, Christmas, Thanksgiving, or Fourth of July), as discussed further below.
- the occasion is a context category and can have values such as, for example, a wedding anniversary, a child's birthday party, or a multi-generational family reunion.
- the medium being used is a context category and can have values such as, for example, a small screen, a large screen, a mobile device, a low-speed connection, a high-speed connection, or surround sound. Additionally, or alternatively, separate context categories can be used for screen size, connection speed, and sound quality.
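- The context categories enumerated above can be represented, for example, as a simple record. The following hypothetical Python sketch (the type name, field names, and values are illustrative assumptions) maps each of the common questions to one category:

```python
# Hypothetical sketch: a context as category/value pairs answering the
# common questions. Field names and values are illustrative assumptions.
from dataclasses import dataclass, asdict

@dataclass
class ViewingContext:
    audience: str      # who is consuming the content
    genre: str         # what kind of content
    location: str      # where it is being consumed
    day_and_time: str  # when it is being consumed
    occasion: str      # why (the occasion for viewing)
    medium: str        # how (the device or connection used)

ctx = ViewingContext(audience="Friends", genre="Thriller",
                     location="living room", day_and_time="Friday Night",
                     occasion="birthday party", medium="large screen")
print(asdict(ctx))
```

- Splitting a category into finer categories, as discussed above (for example, separating day-and-time into day, time, and season), would simply add fields to such a record.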
- presentation devices include, for example, a television (“TV”) (with or without picture-in-picture (“PIP”) functionality), a computer display, a laptop display, a personal digital assistant (“PDA”) display, a cell phone display, and a tablet (for example, an iPad) display.
- the display devices are, in different implementations, either a primary or a secondary screen.
- display devices typically provide a visual presentation; still other implementations, however, use presentation devices that provide a different, or additional, sensory presentation.
- presentation devices provide, for example, (i) an auditory presentation using, for example, a speaker, or (ii) a haptic presentation using, for example, a vibration device that provides, for example, a particular vibratory pattern, or a device providing other haptic (touch-based) sensory indications.
- Various implementations provide content recommendations based on other contextual information.
- One category of such information includes, for example, an emotional feeling of the user. For example, if the user is happy, sad, lonely, etc., the system can provide a different set of recommendations appropriate to the emotional state of the user.
- the system provides, based on, for example, user history or objective input from other users, a rank-ordered set of genres and/or content based on the day, the time, the audience, and the user's emotional state.
- Certain implementations provide indicators of a calendar season that include “summer”, “fall”, “winter”, and “spring”. Certain other implementations provide indicators of a holiday season that include “Christmas”, “Thanksgiving”, “Halloween”, and “Valentine's Day”. Obviously, certain implementations include both categories and their related values. As can be expected, a rank-ordering of movie genres can be expected to change based on the season. Additionally, a rank-ordering of movies within a genre can be expected to change based on the season.
- Various implementations receive user input identifying a value, or a selection, for a particular context category. Other implementations access a selection, or input, in other manners. For example, certain implementations receive input from other members of an audience using, for example, any of a variety of “second screens” such as, for example, a tablet or a smartphone. As another example, certain implementations use default selections when no user input is available or received. As another example, certain implementations access use profiles, access databases from the Internet, or access other remote sources, for input or selections.
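- The fallback sources described above (user input, then profiles or defaults) can be sketched, for example, as a simple ordered lookup. In this hypothetical Python sketch, the function name, categories, and values are illustrative assumptions:

```python
# Hypothetical sketch: access a selection for a context category from the
# first available source. Names and values are illustrative assumptions.
def access_selection(category, user_input=None, profile=None, defaults=None):
    # Try each source in priority order: live user input, stored profile,
    # then default values.
    for source in (user_input, profile, defaults):
        if source and category in source:
            return source[category]
    return None

profile = {"audience": "Partner"}
defaults = {"audience": "Alone", "genre": "Drama"}
print(access_selection("audience", profile=profile, defaults=defaults))  # Partner
print(access_selection("genre", profile=profile, defaults=defaults))     # Drama
```

- A remote database accessed over a network would simply appear as another source in the priority order.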
- FIG. 6 anticipates receiving a single selection of audience, and FIG. 7 anticipates receiving a single selection of genre.
- Other implementations accept or even expect multiple selections.
- one implementation of FIG. 6 allows a user to select two audiences, and then provides a genre recommendation based on the combined audiences. Thus, if a user is going to watch a movie with her partner and some friends, the user could select both Friends 418 and Partner 416 , and the system would recommend genres based on this combined audience.
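- Recommending genres for such a combined audience can be sketched, for example, by averaging each genre's likelihood across the selected audiences. This hypothetical Python sketch uses illustrative likelihood values, and the combination rule shown (simple averaging) is one possible assumption among many:

```python
# Hypothetical sketch: rank genres for a combined audience by averaging each
# genre's likelihood across the selected audiences. Values are illustrative.
GENRE_GIVEN_AUDIENCE = {
    "Friends": {"Thriller": 0.5, "Science Fiction": 0.3, "Comedy": 0.2},
    "Partner": {"Science Fiction": 0.4, "Comedy": 0.35, "Thriller": 0.25},
}

def combined_ranking(audiences):
    # Collect every genre known for any selected audience.
    genres = set().union(*(GENRE_GIVEN_AUDIENCE[a] for a in audiences))
    # Average each genre's likelihood over the selected audiences.
    score = {g: sum(GENRE_GIVEN_AUDIENCE[a].get(g, 0.0) for a in audiences)
                / len(audiences)
             for g in genres}
    return sorted(genres, key=score.get, reverse=True)

print(combined_ranking(["Friends", "Partner"]))
# Thriller (0.375) > Science Fiction (0.35) > Comedy (0.275)
```

- Other combination rules (for example, taking the minimum likelihood, so that no one in the audience strongly dislikes the pick) are equally plausible designs.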
- this application provides multiple figures, including the block diagrams of FIGS. 1-3 and 16 , the pictorial representations of FIGS. 4-11 , and the flow diagrams of FIGS. 12-15 . Each of these figures provides disclosure for a variety of implementations.
- FIG. 1 also presents a flow diagram for performing the functions of the blocks of FIG. 1 .
- for example, the block for the content source 102 also represents the operation of providing content, and the block for the broadcast affiliate manager 104 also represents the operation of receiving broadcast content and providing the content on a scheduled delivery to the delivery network 1 106 .
- Other blocks of FIG. 1 are similarly interpreted in describing this flow process.
- FIGS. 2-3 and 16 can also be interpreted in a similar fashion to describe respective flow processes.
- the flow diagrams certainly describe a flow process.
- the flow diagrams provide an interconnection between functional blocks of a system or apparatus for performing the flow process.
- reference element 1510 also represents a block for performing the function of providing a user an ordered set of options for a given context category.
- Other blocks of FIG. 15 are similarly interpreted in describing this system/apparatus.
- FIGS. 12-14 can also be interpreted in a similar fashion to describe respective systems or apparatuses.
- FIGS. 4-11 certainly describe an output screen shown to a user.
- the screen shots also describe a flow process for interacting with the user.
- FIG. 4 also describes a process of presenting a user with time/day information 402 and 404 , presenting the user with associated audience information 410 , and providing the user with a mechanism for selecting one of the presented audience options 410 .
- FIGS. 5-11 can also be interpreted in a similar fashion to describe respective flow processes.
- Various implementations provide content recommendations based on context. Various other implementations also provide context selections that are ranked according to frequency or likelihood. Various other implementations provide content recommendations that are also ranked according to frequency or likelihood.
- the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
- Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
- Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
- any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B).
- such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
- This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
- a “set” can be represented in various manners, including, for example, in a list, or another visual representation.
- this application also refers to other processors such as, for example, a post-processor or a pre-processor.
- the processors discussed in this application do, in various implementations, include multiple processors (sub-processors) that are collectively configured to perform, for example, a process, a function, or an operation.
- the processor 1630 , the audio processor 206 , the video processor 210 , and the input stream processor 204 , as well as other processing components such as, for example, the controller 214 are, in various implementations, composed of multiple sub-processors that are collectively configured to perform the operations of that component.
- the implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, an apparatus or program).
- An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
- the methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, tablets, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
- Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications.
- Examples of such equipment include an encoder, a decoder, a post-processor, a pre-processor, a video coder, a video decoder, a video codec, a web server, a television, a set-top box, a router, a gateway, a modem, a laptop, a personal computer, a tablet, a cell phone, a PDA, and other communication devices.
- the equipment may be mobile and even installed in a mobile vehicle.
- the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette (“CD”), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory (“RAM”), or a read-only memory (“ROM”).
- the instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination.
- a processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
- implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
- the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
- a signal may be formatted to carry as data the rules for writing or reading syntax, or to carry as data the actual syntax-values generated using the syntax rules.
- Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
- the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
- the information that the signal carries may be, for example, analog or digital information.
- the signal may be transmitted over a variety of different wired or wireless links, as is known.
- the signal may be stored on a processor-readable medium.
Abstract
Various implementations provide one or more recommendations for content, for example, to a user, based on one or more context categories. In one particular implementation, an ordered set of options is provided for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories. An ordered set of options is provided for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category. In various implementations, a user provides a selection for the one or more other context categories, the context category, and/or the one or more additional context categories.
Description
- This application claims the benefit of U.S. provisional application No. 61/707,077, filed Sep. 28, 2012, and titled “Context-based Content Recommendations”, the contents of which are hereby incorporated by reference herein for all purposes.
- Implementations are described that relate to providing recommendations. Various particular implementations relate to providing context-based recommendations for various forms of content to be consumed by a user.
- Home entertainment systems, including television and media centers, are converging with the Internet and providing access to a large number of available sources of content, such as video, movies, TV programs, music, etc. This expansion in the number of available sources necessitates a new strategy for navigating a media interface associated with such systems and making content recommendations and selections.
- The large number of possible content sources creates an interface challenge that has not yet been successfully solved in the field of home media entertainment. This challenge involves successfully presenting users with a large number of elements (programs, sources, etc.) without the need to tediously navigate through multiple display pages or hierarchies of content.
- Further, most existing search paradigms make an assumption that the user knows what they are looking for when they start, whereas often, an alternate mechanism is more desirable or appropriate. One approach for allowing a process of discovery and cross linkage is the use of ratings. Under this approach a user rates content and a recommendation engine recommends additional content related to the rated content. For example, if a user gives an action movie a five star rating and a horror movie a one star rating, a conventional recommendation engine is likely to recommend other action movies to the user rather than other horror movies. A drawback to this approach is that recommendations tend to be skewed to particular movie genres until a large enough rating database is created over multiple movie genres (for example, action, horror, romance, etc.) by the user. Furthermore, another drawback is that even if a large rating database is created by a user, there still may be inaccurate or non-relevant recommendations since the rating information may have been inaccurately collected from the user. For example, if a user rates the first five horror movies presented for rating as one-star movies, the conventional recommendation engine may stop recommending horror movies to the user. However, the user may just not have liked the first five horror movies presented and may actually desire to have other horror movies brought to his or her attention.
- According to a general aspect, an ordered set of options is provided for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories. An ordered set of options is provided for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category.
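- As a loose illustration of this general aspect (not a definitive implementation of any claim), the chained ordering can be sketched in code; every category name, option, and score below is invented for the example:

```python
# Hypothetical scores expressing how well each option of a context category
# fits a previously determined option of another category.
COMPANION_SCORES = {  # keyed by a previously determined "Time" option
    "Friday Night": {"Friends": 0.9, "Partner": 0.7, "Alone": 0.4, "Family&Kids": 0.2},
    "Wednesday Afternoon": {"Alone": 0.8, "Family&Kids": 0.5, "Partner": 0.3, "Friends": 0.2},
}
GENRE_SCORES = {  # keyed by an identified "Companions" option
    "Friends": {"Action": 0.9, "Drama": 0.6, "Romance": 0.2},
    "Partner": {"Romance": 0.9, "Drama": 0.7, "Action": 0.3},
}

def ordered_options(scores):
    """Provide a category's options, ordered best-first for the given context."""
    return sorted(scores, key=scores.get, reverse=True)

# An ordered set of options for one context category ("Companions"), ordered
# based on a previously determined option ("Friday Night") of another
# category ("Time").
companions = ordered_options(COMPANION_SCORES["Friday Night"])

# An ordered set of options for an additional category ("Genre"), ordered
# based on an identification of an option from the provided companions.
genres = ordered_options(GENRE_SCORES[companions[0]])
```

In this sketch, each later category is re-ordered conditionally on the option identified for the earlier category, which is the chaining that the general aspect describes.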
- The details of one or more implementations are set forth in the accompanying drawings and the description below. Even if described in one particular manner, it should be clear that implementations may be configured or embodied in various manners. For example, an implementation may be performed as a method, or embodied as an apparatus, such as, for example, an apparatus configured to perform a set of operations or an apparatus storing instructions for performing a set of operations, or embodied in a signal. Other aspects and features will become apparent from the following detailed description considered in conjunction with the accompanying drawings and the claims.
- FIG. 1 provides a block diagram depicting an implementation of a system for delivering video content.
- FIG. 2 provides a block diagram depicting an implementation of a set-top box/digital video recorder (DVR).
- FIG. 3 provides a pictorial representation of a perspective view of an implementation of a remote controller, tablet, and/or second screen device.
- FIGS. 4-11 provide screen shots of an implementation for recommending content based on context.
- FIGS. 12-15 provide flow diagrams of various process implementations for recommending content based on context.
- FIG. 16 provides a flow diagram of an implementation of a system or apparatus for recommending content based on context.
- The inventor has determined various manners in which, for example, a user interface to a content recommendation system can be more helpful. One implementation provides a movie recommendation and discovery engine that takes the user's context into account by getting information about the current day of the week, time of the day, audience or companion(s), and desired content (for example, movie) genre. Based on this information, the system recommends a set of movies that suits the given context. One or more implementations provide a way to take the user's context into account when recommending movies to watch.
- The definition of context can vary depending on the content that is to be consumed. “Consuming” content has the well-known meaning of experiencing the content by, for example, watching or listening to the content. For different content, certain aspects of the context are more, or less, important. For example, activity and location are typically not as relevant when considering a movie to recommend as when considering music to recommend.
- Various implementations build context categories dynamically and/or automatically, while other implementations rely more on manually built context categories. A context category refers generally to a set of values that can be selected as context. More particularly, a context category often represents a common variable, and includes a set of alternative values for that variable. For example, “Time” is a common variable, and “Friday Night”, and “Saturday Morning” belong to the “Time” category, and may be chosen as values for that variable. As another example, “Genre” is another common variable, and “Action” and “Drama” are possible alternative values for the “Genre” variable.
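- In code form, a context category can be represented, loosely, as a named common variable together with its set of alternative values (a minimal sketch using the examples above):

```python
# Each context category maps a common variable name to its set of
# alternative values (the values shown mirror the examples in the text).
context_categories = {
    "Time": ["Friday Night", "Saturday Morning"],
    "Genre": ["Action", "Drama"],
}

# Selecting a context means choosing one value per category, for example:
chosen = {"Time": "Friday Night", "Genre": "Action"}
assert all(value in context_categories[name] for name, value in chosen.items())
```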
- Dynamically building categories means that the categories are built based on the user input. Because the user input is dynamic, the building of the context categories is dynamic. For example, if a user selects “Friday Night with Friends”, the category “Genre” will be built algorithmically in runtime, based on those selections. “Building” the context category “Genre” refers to determining which values (elements) to include in the category “Genre”, and how to rank-order those values.
- In various implementations, the context categories are built automatically. This means that there is, at least primarily, no user intervention, aside from providing the user input of, for example, day, time, and companions, in the creating of categories. Rather, in a pure automatic system, all of the decisions are made by algorithms, not by people. For example, recommending “Action” and “Drama” movies (this is building the context category of “Genre”) on “Friday Night with Friends” was not a decision made by humans directly. The decision is based on data (for example, from previous user studies) and algorithms.
- Other implementations build context categories manually by, for example, paying specialists to decide that “On Friday Nights with Friends”, a user should be watching “Action” and “Drama”.
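- The dynamic/automatic building described above can be sketched as follows; the study data, ratings, and inclusion threshold are hypothetical, since the application does not prescribe a particular algorithm:

```python
from collections import defaultdict

# Hypothetical (context, genre, rating) records, standing in for data
# gathered from prior user studies.
STUDY_DATA = [
    ("Friday Night with Friends", "Action", 5),
    ("Friday Night with Friends", "Action", 4),
    ("Friday Night with Friends", "Drama", 4),
    ("Friday Night with Friends", "Romance", 2),
]

def build_genre_category(context, threshold=3.0):
    """Build the "Genre" context category for a context at runtime: decide
    which genre values to include, and rank-order them by average rating."""
    ratings = defaultdict(list)
    for ctx, genre, rating in STUDY_DATA:
        if ctx == context:
            ratings[genre].append(rating)
    averages = {g: sum(r) / len(r) for g, r in ratings.items()}
    # Keep only genres whose average clears the (invented) threshold,
    # ordered from best to worst average rating.
    return [g for g in sorted(averages, key=averages.get, reverse=True)
            if averages[g] >= threshold]

print(build_genre_category("Friday Night with Friends"))  # → ['Action', 'Drama']
```

Here the category is built algorithmically at runtime from the data, with no human deciding directly which genres appear for the context.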
- The terms “audience” and “companions” are generally used interchangeably in this application to refer to the set of people consuming the content. However, in other implementations, the terms “audience” and “companions” can refer to distinct context categories. Some values for this context category in various implementations discussed in this application may, strictly speaking, refer to companions and not include the user (for example, “Partner”). Other values may refer to the entire audience including the user or might not include the user (for example, “Family&Kids” or “Friends” may or may not include the user). However, in typical implementations, the content (for example, movie) recommendations are indeed based on the entire audience including the user, whether or not the value for the “audience” or “companions” context category specifically includes the user.
- One or more implementations include two main parts: a movie selection system and a user interface. For one or more particular implementations, each will be described below, with reference to the figures. Variations of the movie selection and the user interface are contemplated.
- Movie Selection
- In this phase, we find which movies can be considered to be the best to watch in a given context. A variety of implementations exist, many of which are dynamic and/or automatic, in whole or in part. Various implementations use the following process:
- 1. We ask a group of people, given a certain context and a limited set of movies, to decide which movies they find appropriate for the given context. In one example, the given context is that it is Friday night, and the individual (each individual answers independently) is with his/her friends. The individuals in the group are each asked if they would watch, for example, the movie “The Dark Knight”. This provides, for example, a selection of movies for each of several different day/time contexts.
- 2. From the data acquired in step 1, we are provided data describing which specific movies people watch in a specific context (for example, a day/time/companions context). In order to make this information useful, and to help users navigate this information, we build upon this information. In various implementations, we build upon this information by aggregating the selected movies by their genres, and by performing statistical tests to identify which genres have the best average rating for a certain context. For example, if a large part of the users said that they would watch “Inception” and “Signs” on a “Friday Night with their Partner”, then “Science Fiction” would be selected as a recommended genre for that context. Other implementations use a variety of tools and techniques to build upon the information from step 1.
- 3. In the previous step (step 2), we identified which genres are recommended for each context. In this step (step 3), we identify which movies we should recommend to the user (and the rank-order of those movies) for each combination of context and genre. Using the data from step 1, we gather the top-rated movies in the given context that belong to the desired genre (from step 2). If the resulting list is smaller than desired, we add to the list (bootstrap the list) by gathering additional movies that are the most similar to the ones already in the list. For example, if “Finding Nemo” is a good movie to watch in a given context, then it is likely that “Cars” and “Toy Story” will also be good movies (assuming, for example, that the genre is animation) for that context. A variety of tools and techniques are available for use in performing the gathering (bootstrapping) of additional movie titles for a given context and genre. Such tools and techniques include, for example, categorizations, ratings, and reviews of movies.
- User Interface
- The user interface of various implementations allows users to intuitively navigate through the results from the previous phase and to find movies that are recommended for their current context. One implementation of the process is as follows:
- 1. The system automatically recognizes the current day of the week and time of the day and presents a list of companions (Alone, Friends, Family&Kids, Partner) ordered by their expected frequency (most expected companions first) (see, for example, FIGS. 4 and 6). Users are able to change the day and/or time information (see, for example, FIG. 5).
- 2. After selecting the time (see, for example, FIG. 5) and the companion(s) (see, for example, FIG. 6), users can choose among the most recommended genres of movies for the given context (see, for example, FIGS. 7 and 8). Genres are ordered by their recommendation score (calculated, for example, in step 2 of the Movie Selection phase). Recall that in step 2 of the Movie Selection phase we identified the top genres, and because we are averaging the ratings of the movies belonging to each genre, we have a score that we can use to sort the list.
- 3. After selecting the desired genre, a list of recommended movies is presented (determined, for example, in step 3 of the Movie Selection phase) (see, for example, FIGS. 9 and 11). Users can get more information about the movie, such as title, poster, description, genres, and links to external sources (see, for example, FIG. 10). Such external sources include, for example, IMDB (http://www.imdb.com/), Amazon (http://www.amazon.com/), Netflix (https://www.netflix.com/), and AllMovie (http://www.allmovie.com/).
- 4. At any time, users can navigate back and change their previous selections (see, for example, FIG. 11).
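- The four-step interface flow above can be sketched loosely in code. The context tables, the day/time boundaries, and all titles below are hypothetical illustrations, not values taken from the described implementations:

```python
from datetime import datetime

# Illustrative tables (invented for this sketch): companion orderings per
# day/time context, recommended genres per full context, and rank-ordered
# movies per context-and-genre combination.
COMPANIONS = {
    ("Wednesday", "Afternoon"): ["Alone", "Family&Kids", "Partner", "Friends"],
    ("Friday", "Night"): ["Friends", "Partner", "Alone", "Family&Kids"],
}
GENRES = {("Friday", "Night", "Friends"): ["Action", "Drama"]}
MOVIES = {("Friday", "Night", "Friends", "Action"): ["The Dark Knight", "Inception"]}

def detect_day_and_time(now=None):
    """Step 1: automatically recognize the current day of the week and
    (under a hypothetical partition) the portion of the day."""
    now = now or datetime.now()
    part = ("Morning" if now.hour < 12 else
            "Afternoon" if now.hour < 18 else
            "Evening" if now.hour < 21 else "Night")
    return now.strftime("%A"), part

# Steps 2-4: the (detected or user-adjusted) day/time orders the companion
# list, the identified companion orders the genre list, and the identified
# genre yields the rank-ordered movie recommendations.
day, time_of_day = "Friday", "Night"              # as if set by the user
companion = COMPANIONS[(day, time_of_day)][0]     # user picks top suggestion
genre = GENRES[(day, time_of_day, companion)][0]  # user picks top genre
recommendations = MOVIES[(day, time_of_day, companion, genre)]
```

Navigating back (step 4) simply amounts to re-running the later lookups with a changed earlier selection.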
- FIGS. 1-3 provide an implementation of a system and environment in which movie recommendations can be provided. Other systems and environments are envisioned, and the examples associated with FIGS. 1-3 are not intended to be exhaustive or restrictive.
- Referring to
FIG. 1, a block diagram of an embodiment of a system 100 for delivering content to a home or end user is shown. The content originates from a content source 102, such as a movie studio or production house. The content may be supplied in at least one of two forms. One form may be a broadcast form of content. The broadcast content is provided to the broadcast affiliate manager 104, which is typically a national broadcast service, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), etc. The broadcast affiliate manager may collect and store the content, and may schedule delivery of the content over a delivery network, shown as delivery network 1 (106). Delivery network 1 (106) may include satellite link transmission from a national center to one or more regional or local centers. Delivery network 1 (106) may also include local content delivery using local delivery systems such as over the air broadcast, satellite broadcast, or cable broadcast. The locally delivered content is provided to a receiving device 108 in a user's home, where the content will subsequently be searched by the user. It is to be appreciated that the receiving device 108 can take many forms and may be embodied as a set top box/digital video recorder (DVR), a gateway, a modem, etc. Further, the receiving device 108 may act as an entry point, or gateway, for a home network system that includes additional devices configured as either client or peer devices in the home network.
- A second form of content is referred to as special content. Special content may include content delivered as premium viewing, pay-per-view, or other content otherwise not provided to the broadcast affiliate manager, for example, movies, video games, or other video elements. The special content can originate from the same, or from a different, content source (for example, content source 102) as the broadcast content provided to the broadcast affiliate manager 104. In many cases, the special content may be content requested by the user. The special content may be delivered to a content manager 110. The content manager 110 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service. The content manager 110 may also incorporate Internet content into the delivery system. The content manager 110 may deliver the content to the user's receiving device 108 over a separate delivery network, delivery network 2 (112). Delivery network 2 (112) may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of delivery network 2 (112) and content from the content manager 110 may be delivered using all or parts of delivery network 1 (106). In addition, the user may also obtain content directly from the Internet via delivery network 2 (112) without necessarily having the content managed by the content manager 110.
- Several adaptations for utilizing the separately delivered content may be possible. In one possible approach, the special content is provided as an augmentation to the broadcast content, providing alternative displays, purchase and merchandising options, enhancement material, etc. In another embodiment, the special content may completely replace some programming content provided as broadcast content. Finally, the special content may be completely separate from the broadcast content, and may simply be a media alternative that the user may choose to utilize. For instance, the special content may be a library of movies that are not yet available as broadcast content.
- The receiving device 108 may receive different types of content from one or both of delivery network 1 and delivery network 2. The receiving device 108 processes the content, and provides a separation of the content based on user preferences and commands. The receiving device 108 may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to FIG. 2. The processed content (at least for video content) is provided to a display device 114. The display device 114 may be a conventional 2-D type display or may alternatively be an advanced 3-D display.
- The receiving device 108 may also be interfaced to a second screen such as a touchscreen control device 116. The touchscreen control device 116 may be adapted to provide user control for the receiving device 108 and/or the display device 114. The touchscreen control device 116 may also be capable of displaying video content. The video content may be graphics entries, such as user interface entries (as discussed below), or may be a portion of the video content that is delivered to the display device 114. The touchscreen control device 116 may interface to the receiving device 108 using any well known signal transmission system, such as infra-red (IR) or radio frequency (RF) communications, and may include standard protocols such as the infra-red data association (IRDA) standard, Wi-Fi, Bluetooth and the like, or any proprietary protocol. Operations of the touchscreen control device 116 will be described in further detail below.
- In the example of FIG. 1, the system 100 also includes a back end server 118 and a usage database 120. The back end server 118 includes a personalization engine that analyzes the usage habits of a user and makes recommendations based on those usage habits. The usage database 120 is where the usage habits for a user are stored. In some cases, the usage database 120 may be part of the back end server 118. In the present example, the back end server 118 (as well as the usage database 120) is connected to the system 100 and accessed through the delivery network 2 (112).
- Referring to
FIG. 2, a block diagram of an embodiment of a receiving device 200 is shown. Receiving device 200 may operate similar to the receiving device described in FIG. 1 and may be included, for example, as part of a gateway device, modem, set-top box, or other similar communications device. The device 200 shown may also be incorporated into other systems including an audio device or a display device. In either case, several components necessary for complete operation of the system are not shown in the interest of conciseness, as they are well known to those skilled in the art.
- In the device 200 shown in FIG. 2, the content is received by an input signal receiver 202. The input signal receiver 202 may be one of several known receiver circuits used for receiving, demodulating, and decoding signals provided over one of the several possible networks, including over the air, cable, satellite, Ethernet, fiber, and phone line networks. The desired input signal may be selected and retrieved by the input signal receiver 202 based on user input provided through a control interface or touch panel interface 222. Touch panel interface 222 may include an interface for a touch screen device. Touch panel interface 222 may also be adapted to interface to a cellular phone, a tablet, a mouse, a high end remote, or the like.
- The decoded output signal is provided to an input stream processor 204. The input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream. The audio content is provided to an audio processor 206 for conversion from the received format, such as a compressed digital signal, to an analog waveform signal. The analog waveform signal is provided to an audio interface 208 and further to the display device or an audio amplifier. Alternatively, the audio interface 208 may provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or an alternate audio interface such as the Sony/Philips Digital Interconnect Format (SPDIF). The audio interface may also include amplifiers for driving one or more sets of speakers. The audio processor 206 also performs any necessary conversion for the storage of the audio signals.
- The video output from the input stream processor 204 is provided to a video processor 210. The video signal may be one of several formats. The video processor 210 provides, as necessary, a conversion of the video content, based on the input signal format. The video processor 210 also performs any necessary conversion for the storage of the video signals.
- A storage device 212 stores audio and video content received at the input. The storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, for example, navigation instructions such as fast-forward (FF) and rewind (Rew), received from a user interface 216 and/or touch panel interface 222. The storage device 212 may be a hard disk drive, one or more large capacity integrated electronic memories, such as static RAM (SRAM) or dynamic RAM (DRAM), or may be an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive.
- The converted video signal, from the video processor 210, either originating from the input or from the storage device 212, is provided to the display interface 218. The display interface 218 further provides the display signal to a display device of the type described above. The display interface 218 may be an analog signal interface such as red-green-blue (RGB) or may be a digital interface such as HDMI. It is to be appreciated that the display interface 218 will generate the various screens for presenting the search results (for example, as described in more detail below with respect to FIGS. 4-11).
- The controller 214 is interconnected via a bus to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, the touch panel interface 222, and the user interface 216. The controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display. The controller 214 also manages the retrieval and playback of stored content. Furthermore, as will be described below, the controller 214 performs searching of content and the creation and adjusting of the displays representing the context and/or the content, for example, as described below with respect to FIGS. 4-11.
- The controller 214 is further coupled to control memory 220 (for example, volatile or non-volatile memory, including RAM, SRAM, DRAM, ROM, programmable ROM (PROM), flash memory, electronically programmable ROM (EPROM), electronically erasable programmable ROM (EEPROM), etc.) for storing information and instruction code for controller 214. Control memory 220 may store instructions for controller 214. Control memory may also store a database of elements, such as graphic elements representing context values or content. The database may be stored as a pattern of graphic elements, such as graphic elements containing content, various graphic elements used for generating a displayable user interface for display interface 218, and the like. Alternatively, the memory may store the graphic elements in identified or grouped memory locations and use an access or location table to identify the memory locations for the various portions of information related to the graphic elements. Additional details related to the storage of the graphic elements will be described below. Further, the implementation of the control memory 220 may include several possible embodiments, such as a single memory device or, alternatively, more than one memory circuit communicatively connected or coupled together to form a shared or common memory. Still further, the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
- Referring to
FIG. 3, the user interface process of various implementations employs an input device that can be used to provide input, including, for example, selection of day, time, audience, and/or genre. To allow for this, a tablet or touch panel device 300 (which is, for example, the same as the touchscreen control device 116 shown in FIG. 1 and/or is an integrated example of receiving device 108 and touchscreen control device 116) may be interfaced via the user interface 216 and/or touch panel interface 222 of the receiving device 200. The touch panel device 300 allows operation of the receiving device or set top box based on hand movements, or gestures, and actions translated through the panel into commands for the set top box or other control device.
- In one embodiment, the touch panel device 300 may simply serve as a navigational tool to navigate the display (for example, a navigational tool to navigate a display of context options and movie recommendations that is displayed on a TV). In other embodiments, the touch panel device 300 will additionally serve as the display device, allowing the user to more directly interact with the navigation through the display of content. The touch panel device 300 may be included as part of a remote control device containing more conventional control functions such as activator and/or actuator buttons. The touch panel device 300 can also include at least one camera element. Note that various implementations employ a large screen TV for the display of, for example, context options and movie recommendations, and employ a user input device similar to a remote control to allow a user to navigate through the display.
- Referring to
FIG. 4, a screen shot 400 is shown. In the screen shot 400, the system has automatically detected a current day of the week 402 and a current time of day (or at least a current portion of the day, such as, for example, morning, afternoon, or evening) 404. The implementation then provides a rank-ordered list 410 of possible companions (that is, an audience for the content). The ordered list 410 of the screen shot 400 includes an “Alone” option 412 (a “face” icon), a “Family&Kids” option 414 (an icon with two people on the left and one person on the right), a “Partner” option 416 (an icon of two interlocked rings), and a “Friends” option 418 (an icon of two glasses toasting each other, with a star at the point of contact between the glasses). The companion list 410 is ordered (also referred to as sorted) by the expected frequencies of the possible companions 412-418 on the indicated day (Wednesday) 402 and at the indicated time 404 (afternoon). The companion list 410 is ordered from left to right in decreasing order of expected frequency. For example, on Wednesday afternoon, the system believes that it is most likely that the user will be watching the movie alone 412. The next most likely companions, in order from greatest likelihood to least likelihood, are “Family&Kids” 414, “Partner” 416, and “Friends” 418. As with other recommendations and ordering, the frequency or likelihood can be based on, for example, observed habits of the user that have been tracked, on a profile provided by the user, and/or on objective information provided for other users or groups of people.
- Referring to
FIG. 5 , a screen shot is shown in which the user can change the indicated day and/or time. The prompt can be provided automatically or in response to a number of actions, including, for example, (i) the user hovering over the day and/or time fields with an input device such as, for example, a mouse or a finger, and/or (ii) the user selecting the day and/or time fields with an input device. - The screen shot of
FIG. 5 includes a window 510 overlaying the screen shot 400. In various implementations, the overlaid screen shot 400 that is layered under the window 510 is shaded. The window 510 includes an indicator of the selected day 512 (shown as “Friday”), an indicator of the selected time 514 (shown as “Night”), and various controls. Two controls are provided for setting the day 512, including a “+” icon 520 for incrementing the day (for example, from Friday to Saturday) and a “−” icon 521 for decrementing the day (for example, from Friday to Thursday). Two analogous controls are also provided for setting the time 514, including a “+” icon 530 for incrementing the time (for example, from Night to Morning to Afternoon to Evening) and a “−” icon 531 for decrementing the time (for example, from Night to Evening). - The
window 510 also includes two operational buttons. A “Close” button 540 closes the window 510, which is analogous to exiting the window without changing anything, and a “Set” button 545 sets the system to the selected day 512 and the selected time 514. - Referring to
FIG. 6, a screen shot 600 is shown that provides an ordered list 610 of audiences that is now based on the new selection of day 512 and time 514 from FIG. 5. As with the list 410, the list 610 is ordered from left to right in decreasing order of expected frequency. However, FIG. 6 provides a different ordering in the list 610 of audiences than does the list 410 in FIG. 4. This is because the user does not have the same likelihood of watching movies with various companions on Wednesday Afternoon as on Friday Night. Specifically, on Friday night, the system believes that it is most likely that the user will be watching a movie with Friends 418. The remaining displayed options for audiences, in order from most likely to least likely, are “Partner” 416, “Alone” 412, and “Family&Kids” 414. - Other implementations have different audience options, not just ordering differences among the options, for different days/times. For example, other audience options include, in various implementations, “Movie Club”, “Church Group”, and “Work Friends”.
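The frequency-based ordering illustrated by FIGS. 4 and 6 can be sketched as a small tally over tracked viewing habits. This is only one possible implementation, and the history entries below are illustrative placeholders, not data from the patent.

```python
from collections import Counter

# Hypothetical tracked habits: (day, portion of day, companion) observations.
# All entries are invented for illustration.
history = [
    ("Wednesday", "Afternoon", "Alone"),
    ("Wednesday", "Afternoon", "Alone"),
    ("Wednesday", "Afternoon", "Family&Kids"),
    ("Friday", "Night", "Friends"),
    ("Friday", "Night", "Friends"),
    ("Friday", "Night", "Partner"),
]

# Default companion options; the default order breaks ties for
# companions never observed in the given slot.
COMPANIONS = ["Alone", "Family&Kids", "Partner", "Friends"]

def ordered_companions(day, time_of_day):
    """Order companion options by observed frequency for the given slot,
    most frequent first."""
    counts = Counter(c for d, t, c in history
                     if d == day and t == time_of_day)
    return sorted(COMPANIONS,
                  key=lambda c: (-counts[c], COMPANIONS.index(c)))

print(ordered_companions("Wednesday", "Afternoon"))
# ['Alone', 'Family&Kids', 'Partner', 'Friends']
print(ordered_companions("Friday", "Night"))
# ['Friends', 'Partner', 'Alone', 'Family&Kids']
```

With this toy history the two orderings reproduce the lists 410 and 610 described for FIGS. 4 and 6.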
- Referring to
FIG. 7, a screen shot 700 is shown that presents an ordered list 710 of movie genres. The ordered list 710 is based on the selected context elements from, for example, FIGS. 4-6. The ordered list 710 includes a context set 720 displaying the selected context elements. The context set 720 includes a day/time element 722 and an audience element 724. The day/time element 722 is a generic element (a “clock” icon) that indicates that the day and time have been selected, but the day/time element 722 does not indicate what the selected day and the selected time are. The audience element 724, however, indicates that the selected audience is Friends. The audience element 724 provides the indication of the audience by using a smaller version of the toasting glasses icon that is used for the Friends option 418 from FIGS. 4 and 6. - In various implementations, the elements of the context set 720 present the name of the selection when a user “hovers” over the icon using, for example, a mouse or other pointing device. For example, when hovering over the clock icon, such implementations provide a small text box that displays the selected day/time, such as, for example, “Friday Night”. As another example, when hovering over the “Friends” icon, such implementations provide a small text box that displays the selected audience, which is “Friends” in this example.
- The
list 710 of movie genres includes four options for movie genres, listed in order (from left to right) of most likely to least likely. Those options are (i) a Thriller genre 732 (shown by an icon of a ticking bomb), (ii) a Crime genre 734 (shown by an icon of a rifle scope), (iii) a Science Fiction genre 736 (shown by an icon of an atom), and (iv) an Action genre 738 (shown by an icon of a curving highway). That is, the system believes that on Friday night, if the user is watching a movie with friends, then the most likely movie genres to be watched are, in decreasing order of likelihood, thriller, crime, science fiction, and action. - Referring to
FIG. 8, a screen shot 800 is shown that presents a variation of FIG. 7. In FIG. 8, a different audience has been selected. In particular, “Partner” has been selected to replace “Friends”. The new audience selection is shown in a context set 820 that includes the generic day/time element 722 and an audience element 824. The audience element 824 indicates that the selected audience is Partner because the audience element 824 uses a smaller version of the interlocking rings icon used in the Partner option 416 from FIGS. 4 and 6. - The screen shot 800 includes a new ordered
list 810 of movie genres that is based on the new audience that has been selected. The list 810 provides the following genres, in order from most likely to least likely: (i) the Science Fiction genre, (ii) a Fantasy genre 842 (shown by an icon of a magic wand with a star on top), (iii) a Comedy genre 844 (shown by an icon of a smiley face), and (iv) a Drama genre 846 (shown by an icon of a heartbeat as typically shown on a heart rate monitor used with an electrocardiogram). By comparing the list 710 with the list 810, it is clear that the system believes different movie genres are more, or less, likely to be selected by the different audiences. Indeed, the list 710 and the list 810 have different genres, and not just a different ordering of the same set of genres. - Referring to
FIG. 9, a screen shot 900 is shown that presents movie recommendations for the selected context. The selected context is shown with a context set 920 that includes the generic day/time element 722, the audience element 824, and a genre element 926, which is a smaller version of the atom icon used to represent the Science Fiction genre 736. - As described earlier, various implementations display a text box with the name of a selected context element when a user hovers over that element in the context set 920. For example, when hovering over the
genre element 926, such implementations provide a small text box that displays the selected genre, such as, for example, “Science Fiction”. - The screen shot 900 includes an ordered
set 910 of eight movie recommendations, with the highest recommendation at the top-left, and the lowest recommendation at the bottom-right. The set 910 includes, from highest recommendation to lowest recommendation: (i) a first recommendation 931, which is “Inception”, (ii) a second recommendation 932, which is “Children of Men”, (iii) a third recommendation 933, which is “Signs”, (iv) a fourth recommendation 934, which is “Super 8”, (v) a fifth recommendation 935, which is “Déjà vu”, (vi) a sixth recommendation 936, which is “Moon”, (vii) a seventh recommendation 937, which is “Knowing”, and (viii) an eighth recommendation 938, which is “Happening”. The eight recommendations are the movies that the system has selected as being the most likely to be selected for viewing by the user in the selected context. - More, or fewer, recommendations can be provided in different implementations. Additionally, the movies can be presented in various orders, including, for example, (i) ordered from highest to lowest recommendation from top to bottom and left to right, such that the highest recommendation is top-left (reference element 931) and the second highest recommendation is bottom-left (reference element 935), etc., (ii) ordered with the highest recommendations near the middle, (iii) ordered alphabetically, or (iv) randomly arranged. The screen shot 910 shows movie posters; however, other implementations merely list the titles.
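The alternative grid orderings just described can be sketched with a small layout helper. The function name and default grid width are assumptions for illustration; only the two rank-driven variants from the text (left-to-right and top-to-bottom first) are shown.

```python
def arrange(titles, columns=4, mode="row_major"):
    """Place a ranked list of titles (best first) into a grid.

    "row_major" fills left to right, top to bottom (highest at top-left,
    as in the ordered set 910); "column_major" fills top to bottom first,
    so the second-highest title lands at the bottom-left.
    """
    rows = -(-len(titles) // columns)  # ceiling division
    grid = [[None] * columns for _ in range(rows)]
    for i, title in enumerate(titles):
        if mode == "row_major":
            r, c = divmod(i, columns)
        else:  # column_major
            c, r = divmod(i, rows)
        grid[r][c] = title
    return grid

ranked = ["Inception", "Children of Men", "Signs", "Super 8",
          "Déjà vu", "Moon", "Knowing", "Happening"]
print(arrange(ranked)[0])                          # top row, row-major fill
print(arrange(ranked, mode="column_major")[1][0])  # bottom-left slot
```

In the column-major variant the bottom-left cell holds the second-highest recommendation, matching option (i) in the text.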
- The user is able to select a movie from the screen shot 910. Upon selection, one or more of a variety of operations may occur, including, for example, playing the movie, receiving information about the movie, receiving a payment screen for paying for the movie, etc.
- The user has other options in various implementations, besides selecting a displayed movie poster. For example, certain implementations allow a user to remove movies from the list of recommendations using, for example, a close button associated with the movie's poster. In various such implementations, another movie is recommended and inserted as a replacement for the removed movie poster. Some implementations remember the user's selections and base future recommendations, in part, on these selections. Other implementations also allow more, or fewer, than eight movie posters to be displayed at a given time.
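The remove-and-replace behavior described above can be sketched as follows. The class name and sample titles are hypothetical; a real implementation would also feed the remembered dismissals back into the ranking model.

```python
# A minimal sketch of dismissable recommendations: removing a title slides
# the next-ranked candidate into view, and dismissals are remembered so
# they can inform future recommendations.
class RecommendationList:
    def __init__(self, ranked_candidates, visible=8):
        self.candidates = list(ranked_candidates)  # best first
        self.visible = visible
        self.dismissed = []  # remembered user feedback

    def shown(self):
        """The posters currently displayed."""
        return self.candidates[:self.visible]

    def dismiss(self, title):
        """Close button on a poster: remove it and remember the choice."""
        self.candidates.remove(title)
        self.dismissed.append(title)

recs = RecommendationList(["Inception", "Moon", "Knowing", "Signs"], visible=3)
recs.dismiss("Moon")
print(recs.shown())  # ['Inception', 'Knowing', 'Signs']
```

Because the candidate pool is longer than the visible window, dismissing a poster immediately surfaces the next-best title without recomputing the whole list.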
- Referring to
FIG. 10, a window 1000 is displayed after a user has selected the sixth movie recommendation 936 (the movie “Moon”) from the screen shot 910. In FIG. 10, the window 1000 is overlaying the screen shot 900. In various implementations, the overlaid screen shot 900 that is layered underneath the window 1000 is shaded. - In this implementation, information about the selected movie is provided to the user, as shown in the
window 1000. The window 1000 includes: (i) the movie title and year of release 1010, (ii) the movie poster 1020, (iii) a summary 1030 of the movie, and (iv) a set 1040 of options for viewing the movie. - The
set 1040 includes, in this implementation, four links to external sources of the selected movie “Moon”. The set 1040 includes (i) an AllMovie button 1042 to select AllMovie (http://www.allmovie.com/) as the external source, (ii) an IMDB button 1044 to select IMDB (http://www.imdb.com/) as the external source, (iii) an Amazon button 1046 to select Amazon (http://www.amazon.com/) as the external source, and (iv) a Netflix button 1048 to select Netflix (https://www.netflix.com/) as the external source. - A user is also able to navigate back to the selection screen of the screen shot 900. By selecting a part of the overlaid screen shot 900, in
FIG. 10, the user is able to navigate back to the previous screen of FIG. 9. - Referring to
FIG. 11, the screen shot 900 is shown again as a result of the user selecting (for example, clicking within) the overlaid screen shot 900 in FIG. 10. Recall that the screen shot 900 provided the science fiction movie recommendations. The context set 920 serves, in part, as a history of the user's selections. Each of the icons in the context set 920 is selectable. For example, selecting the audience element 824 in the context set 920 of FIG. 11 navigates back to the screen shot 600, which provides the audience recommendations. -
FIG. 11 also includes a “Partner” word icon 1110 (also referred to as a text box) that is displayed, for example, when the user hovers over the audience element 824 of the context set 920. The audience element 824 is the “Partner” option, so the system provides a viewable name with the word icon 1110 as a guide to the user. - Referring to
FIG. 12, a one-block flow diagram is provided that describes a process for recommending content according to one or more implementations. FIG. 12 provides a process that includes providing a content recommendation based on context. The content is, in various implementations, one or more of movies, music, sitcoms, serial shows, sports games, documentaries, advertisements, and entertainment. Clearly, various of these categories can overlap and/or be hierarchically structured in different ways. For example, documentaries can be one genre of movies, and movies and sports games can be two genres of entertainment. Alternatively, documentaries, movies, and sports games can be three separate genres of entertainment. - Referring to
FIG. 13, a one-block flow diagram is provided that describes a process for recommending content according to one or more implementations. FIG. 13 provides a process that includes providing a content recommendation based on one or more of the following context categories: the user, the day, the time, the audience (also referred to as companions), and/or the genre. Note that the genre is often dependent on the type of content (for example, movies) that is being recommended. - Referring to
FIG. 14, a one-block flow diagram is provided that describes a process for providing selections for a context category based on other context categories. For example, selections for the context categories of audience and/or genre can be provided. Further, the selections can be determined and rank-ordered based on one or more of the user, the day, and/or the time. It should be clear that the process of FIG. 14 is integrated, in various implementations, into the processes of FIGS. 12 and 13. - Referring to
FIG. 15, a process 1500 is provided. The process 1500 includes providing a set of options for a given context category, ordered based on a value for one or more other context categories (1510). In one particular implementation, this includes providing a user an ordered set of options for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories. The operation 1510 is performed in various implementations using, for example, one of the screen shots from any of FIGS. 4 and 6-8. For example, FIG. 4 provides a list 410 based on the context for the day and time. - The
process 1500 further includes providing a set of options for one or more additional context categories, ordered based on an option for the given context category (1520). Continuing with the example discussed above, in one particular implementation, the operation 1520 includes providing an ordered set of options for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category. The operation 1520 is performed in various implementations using, for example, one of the screen shots from any of FIGS. 7-8. - Variations of the
process 1500 further include receiving user input identifying one of the provided options for (i) the one or more other context categories, and/or (ii) the one or more additional context categories. This user input operation is performed in various implementations, for example, as discussed above in moving from FIG. 6 to FIG. 7 or 8. - The
process 1500 can be performed using, for example, the structure provided in FIGS. 1-3. For example, the operations 1510 and 1520 are performed, in various implementations, by the receiving device 108 or the STB/DVR 200 to provide the sets of options on the display device 114, the touchscreen control device 116, or the device of FIG. 3. Additionally, a user input operation can be performed using the touchscreen control device 116 or the device of FIG. 3 to receive the user input. - Referring to
FIG. 16, a system or apparatus 1600 is shown that includes three components, one of which is optional. FIG. 16 includes an optional user input device 1610, a presentation device 1620, and a processor 1630. In various implementations, these three components 1610-1630 are integrated into a single device, such as, for example, the device of FIG. 3. Particular implementations integrate these three components 1610-1630 in a tablet used as a second screen while watching television. In certain tablets, the user input device 1610 includes at least a touch-sensitive portion of a screen, the presentation device 1620 includes at least a presentation portion of the screen, and a processor 1630 is housed within the tablet to receive and interpret the user input, and to control the presentation device 1620. - In other implementations, however,
FIG. 16 depicts a distributed system in which the processor 1630 is distinct from, and remotely located with respect to, one or more of the user input device 1610 and the presentation device 1620. For example, in one implementation, the user input device 1610 is a remote control that communicates with a set-top box, the presentation device 1620 is a TV controlled by the set-top box, and the processor 1630 is located in the set-top box. - In another distributed implementation, the
presentation device 1620 and the user input device 1610 are integrated into a second screen such as, for example, a tablet. The processor 1630 is in an STB. The STB controls both the tablet and a primary screen TV. The tablet receives and displays screen shots from the STB, providing movie recommendations. The tablet accepts input from the user, who interacts with the content on the screen shots, and transmits that input to the STB. The STB does the processing for the movie recommendation system, although various implementations do have a processor in the tablet. - The
processor 1630 of FIG. 16 is, for example, any of the options for a processor described throughout this application. The processor 1630 can also be, or include, for example, the processing components inherent in the devices shown or described with respect to FIGS. 1-3. - The
presentation device 1620 is, for example, any device suitable for providing any of the sensory indications described throughout this application. Such devices include, for example, all user interface devices described throughout this application. Such devices also include, for example, the display components shown or described with respect to FIGS. 1-3. - The system/
apparatus 1600 is used, in various implementations, to perform one or more of the processes shown in FIGS. 12-15. For example, in one implementation of the process of FIG. 12, the processor 1630 provides a content recommendation, based on context, on the presentation device 1620. As another example, in one implementation of the process of FIG. 13, the processor 1630 provides a recommendation based on one or more of user, day, time, audience/companions, or genre. As another example, in one implementation of the process of FIG. 14, the processor 1630 provides selections for audience/companions and/or genre that are ordered based on user, day, and/or time. Other implementations also combine one or more of the processes of FIGS. 12-14 using the system/apparatus 1600. As another example, in one implementation of the process of FIG. 15, the processor 1630 provides the two sets of options in the operations 1510 and 1520. The user input device 1610 can receive the user input in those implementations that receive user input. - The system/
apparatus 1600 is also used, in various implementations, to provide one or more of the screen shots of FIGS. 4-11. For example, in one implementation, the processor 1630 provides the screen shots of FIGS. 4-11 on the presentation device 1620, and receives user input through the user input device 1610. In this implementation, the presentation device 1620 and the user input device 1610 are included in an integrated touch screen device, such as, for example, a tablet. - Various implementations of the system/
apparatus 1600 include only the presentation device 1620 and the processor 1630, and do not include the user input device 1610. Such systems are able to make content recommendations on the presentation device 1620. Additionally, such implementations are able to access selections for context categories using one or more of, for example, (i) default values, (ii) values from profiles, and/or (iii) values accessed over a network. - Additional implementations provide a user with options for selecting values for multiple context categories at the same time. For example, upon receiving user selection of time and day in
FIG. 5 , an implementation provides a user with rank-ordered options for both audience and genre. In one such implementation, a screen provides a first option that includes Friends and Thriller, and a second option that includes Partner and Science Fiction. - Various implementations discuss context. As previously discussed, context is indicated or described, for example, by context categories that describe an activity. Each activity (for example, consuming content such as a movie) can have its own context categories. One manner of determining context categories is to answer the common questions of “who”, “what”, “where”, “when”, “why”, and “how”. For example, if the activity is defined as consuming content, the common questions can result in a variety of context categories, as discussed below:
- “Who” is consuming the content? For example, the audience is a context category. Additionally, or alternatively, separate context categories can be used for demographic information such as age, gender, occupation, education achieved, location of upbringing, and previously observed behavior for an individual in the audience.
- “What” content is being consumed? For example, the genre of the content is a context category. Additionally, or alternatively, separate context categories can be used for the length of the content, and the maturity ranking of the content (for example, G, PG-13, or R).
- “Where” is the content being consumed? For example, the location is a context category and can have values such as, for example, in a home, in an auditorium, in a vehicle such as a plane or car, in the Deep South, or in the North East. Additionally, or alternatively, separate context categories can be used for room characteristics (for example, living room, auditorium, or airplane cabin) and geographical location (for example, Deep South).
- “When” is the content being consumed? For example, the day-and-time is a context category. Additionally, or alternatively, separate context categories can be used for the day, the time, the calendar season (winter, spring, summer, or fall), and the holiday season (for example, Christmas, Thanksgiving, or Fourth of July), as discussed further below.
- “Why” is the content being consumed? For example, the occasion is a context category and can have values such as, for example, a wedding anniversary, a child's birthday party, or a multi-generational family reunion.
- “How” is the content being consumed? For example, the medium being used is a context category and can have values such as, for example, a small screen, a large screen, a mobile device, a low-speed connection, a high-speed connection, or surround sound. Additionally, or alternatively, separate context categories can be used for screen size, connection speed, and sound quality.
- Other manners of determining context categories may also be used.
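The six questions above suggest one possible data model for a context. The class and field names below are assumptions chosen for illustration, not terms from the claims; they simply group the example categories under the question each answers.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewingContext:
    """A hypothetical grouping of the who/what/where/when/why/how
    context categories for a content-consumption activity."""
    audience: Optional[str] = None      # who
    genre: Optional[str] = None         # what
    location: Optional[str] = None      # where
    day: Optional[str] = None           # when
    time_of_day: Optional[str] = None   # when
    occasion: Optional[str] = None      # why
    medium: Optional[str] = None        # how

    def selected(self):
        """Only the categories that have been given a value so far."""
        return {k: v for k, v in self.__dict__.items() if v is not None}

ctx = ViewingContext(day="Friday", time_of_day="Night", audience="Friends")
print(ctx.selected())
```

A recommendation engine could condition its rank-ordering on exactly the subset returned by `selected()`, leaving unset categories to defaults or profile values.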
- Different implementations vary one or more of a number of features. Some of those features, and their variations, are described below:
- Various implementations use different presentation devices. Such presentation devices include, for example, a television (“TV”) (with or without picture-in-picture (“PIP”) functionality), a computer display, a laptop display, a personal digital assistant (“PDA”) display, a cell phone display, and a tablet (for example, an iPad) display. The display devices are, in different implementations, either a primary or a secondary screen. Still other implementations use presentation devices that provide a different, or additional, sensory presentation. Display devices typically provide a visual presentation. However, other presentation devices provide, for example, (i) an auditory presentation using, for example, a speaker, or (ii) a haptic presentation using, for example, a vibration device that provides, for example, a particular vibratory pattern, or a device providing other haptic (touch-based) sensory indications.
- Various implementations provide content recommendations based on other contextual information. One category of such information includes, for example, an emotional feeling of the user. For example, if the user is happy, sad, lonely, etc., the system can provide a different set of recommendations appropriate to the emotional state of the user. In one particular implementation, the system provides, based on, for example, user history or objective input from other users, a rank-ordered set of genres and/or content based on the day, the time, the audience, and the user's emotional state.
- As discussed above, another example of additional contextual information is “season”. Certain implementations provide indicators of a calendar season that include “summer”, “fall”, “winter”, and “spring”. Certain other implementations provide indicators of a holiday season that include “Christmas”, “Thanksgiving”, “Halloween”, and “Valentine's Day”. Obviously, certain implementations include both categories and their related values. A rank-ordering of movie genres can be expected to change based on the season. Additionally, a rank-ordering of movies within a genre can be expected to change based on the season.
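Season can be folded into the rank-ordering in the same way as any other context category, for example by filtering a history of choices on whichever context values are set. The log entries below are invented solely to show the ordering shift.

```python
from collections import Counter

# Hypothetical log pairing each chosen genre with the context it was
# chosen in; entries are illustrative placeholders.
log = [
    ({"season": "Winter"}, "Drama"),
    ({"season": "Winter"}, "Drama"),
    ({"season": "Winter"}, "Comedy"),
    ({"season": "Summer"}, "Action"),
    ({"season": "Summer"}, "Comedy"),
    ({"season": "Summer"}, "Comedy"),
]

def ranked_genres(context):
    """Rank genres by how often they were chosen in matching contexts;
    the context dict can hold any subset of context categories."""
    counts = Counter(genre for ctx, genre in log
                     if all(ctx.get(k) == v for k, v in context.items()))
    return [g for g, _ in counts.most_common()]

print(ranked_genres({"season": "Winter"}))  # ['Drama', 'Comedy']
print(ranked_genres({"season": "Summer"}))  # ['Comedy', 'Action']
```

The same context dict could carry day, time, or audience keys alongside season, so one ranking routine serves all of the categories discussed above.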
- Various implementations, as should be clear from earlier statements, base genre recommendations and/or movie recommendations on contextual information that is different from that described in
FIGS. 4-11 . - Various implementations receive user input identifying a value, or a selection, for a particular context category. Other implementations access a selection, or input, in other manners. For example, certain implementations receive input from other members of an audience using, for example, any of a variety of “second screens” such as, for example, a tablet or a smartphone. As another example, certain implementations use default selections when no user input is available or received. As another example, certain implementations access use profiles, access databases from the Internet, or access other remote sources, for input or selections.
- Various implementations describe receiving a single value or selection for a particular context category. For example,
FIG. 6 anticipates receiving a single selection of audience, and FIG. 7 anticipates receiving a single selection of genre. Other implementations, however, accept or even expect multiple selections. For example, one implementation of FIG. 6 allows a user to select two audiences, and then provides a genre recommendation based on the combined audiences. Thus, if a user is going to watch a movie with her partner and some friends, the user could select both Friends 418 and Partner 416, and the system would recommend genres based on this combined audience. - This application provides multiple figures, including the block diagrams of
FIGS. 1-3 and 16, the pictorial representations of FIGS. 4-11, and flow diagrams of FIGS. 12-15. Each of these figures provides disclosure for a variety of implementations. - For example, the block diagrams certainly describe an interconnection of functional blocks of an apparatus or system. However, it should also be clear that the block diagrams provide a description of a process flow. As an example,
FIG. 1 also presents a flow diagram for performing the functions of the blocks of FIG. 1. For example, the block for the content source 102 also represents the operation of providing content, and the block for the broadcast affiliate manager 104 also represents the operation of receiving broadcast content and providing the content on a scheduled delivery to the delivery network 1 106. Other blocks of FIG. 1 are similarly interpreted in describing this flow process. Further, FIGS. 2-3 and 16 can also be interpreted in a similar fashion to describe respective flow processes. - For example, the flow diagrams certainly describe a flow process. However, it should also be clear that the flow diagrams provide an interconnection between functional blocks of a system or apparatus for performing the flow process. For example,
reference element 1510 also represents a block for performing the function of providing a user an ordered set of options for a given context category. Other blocks of FIG. 15 are similarly interpreted in describing this system/apparatus. Further, FIGS. 12-14 can also be interpreted in a similar fashion to describe respective systems or apparatuses. - For example, the screen shots of
FIGS. 4-11 certainly describe an output screen shown to a user. However, it should also be clear that the screen shots describe a flow process for interacting with the user. For example, FIG. 4 also describes a process of presenting a user with time/day information 402 and 404 and audience information 410, and providing the user with a mechanism for selecting one of the presented audience options 410. Further, FIGS. 5-11 can also be interpreted in a similar fashion to describe respective flow processes. - We have thus provided a number of implementations. Various implementations provide content recommendations based on context. Various other implementations also provide context selections that are ranked according to frequency or likelihood. Various other implementations provide content recommendations that are also ranked according to frequency or likelihood.
- It should be noted, however, that variations of the described implementations, as well as additional applications, are contemplated and are considered to be within our disclosure. Additionally, features and aspects of described implementations may be adapted for other implementations.
- Reference to “one embodiment” or “an embodiment” or “one implementation” or “an implementation” of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
- Additionally, this application or its claims may refer to “determining” various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
- Further, this application or its claims may refer to “accessing” various pieces of information. Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
- It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C” and “at least one of A, B, or C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
- Various implementations refer to a set of options for a context category. A “set” can be represented in various manners, including, for example, in a list, or another visual representation.
- Additionally, many implementations may be implemented in a processor, such as, for example, a post-processor or a pre-processor. The processors discussed in this application do, in various implementations, include multiple processors (sub-processors) that are collectively configured to perform, for example, a process, a function, or an operation. For example, the processor 1630, the audio processor 206, the video processor 210, and the input stream processor 204, as well as other processing components such as, for example, the controller 214, are, in various implementations, composed of multiple sub-processors that are collectively configured to perform the operations of that component.
- The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, tablets, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
- Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications. Examples of such equipment include an encoder, a decoder, a post-processor, a pre-processor, a video coder, a video decoder, a video codec, a web server, a television, a set-top box, a router, a gateway, a modem, a laptop, a personal computer, a tablet, a cell phone, a PDA, and other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle.
- Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette (“CD”), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory (“RAM”), or a read-only memory (“ROM”). The instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
- As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading syntax, or to carry as data the actual syntax-values generated using the syntax rules. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.
- A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application.
Claims (20)
1. A method comprising:
providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
2. The method of claim 1 further comprising:
providing one or more content recommendations to the user based on (i) the identification of the option for the context category, and (ii) an identification of an option from the provided options for the one or more additional context categories.
3. The method of claim 1 wherein:
the set of options for the context category is ordered based on likelihood of selection, and is provided in an order reflecting likelihood of selection, and the likelihood is based on the previously determined option for the one or more other context categories, and
the set of options for the one or more additional context categories is ordered based on likelihood of selection, and is provided in an order reflecting likelihood of selection, and the likelihood is based on the identification of the option for the context category.
4. The method of claim 1 wherein providing the one or more content recommendations comprises providing the one or more content recommendations in an order reflecting likelihood of selection.
5. The method of claim 1 wherein at least one of the context category or the one or more additional context categories includes one or more of (i) day of the week for intended content consumption, (ii) time of the day for intended content consumption, (iii) season for intended content consumption, (iv) emotional feeling of a user, (v) the intended audience that will be consuming the content, or (vi) the genre of the content.
6. The method of claim 1 wherein:
the context category includes the intended audience that will be consuming the content, and
the one or more additional context categories includes the genre of the content.
7. The method of claim 1 wherein:
the one or more other context categories include one or more of (i) day of the week for intended content consumption, or (ii) time of the day for intended content consumption.
8. The method of claim 1 wherein providing the one or more content recommendations is further based on one or more of (i) tracked information from a user's behavior and/or (ii) collected information from users.
9. The method of claim 1 wherein providing the one or more content recommendations is further based on one or more of extrapolations and/or machine learning applied to input from one or more of (i) tracked information from a user's behavior and/or (ii) collected information from users.
10. The method of claim 1 further comprising receiving a user input as the identification of the option for the context category.
11. The method of claim 2 further comprising receiving a user input as the identification of the option for the one or more additional context categories.
12. An apparatus configured to perform one or more of the methods of claim 1.
13. The apparatus of claim 12 comprising one or more processors collectively configured to perform one or more of the methods.
14. An apparatus comprising:
means for providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
means for providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
15. An apparatus comprising:
a presentation device; and
a processor configured to provide on the presentation device an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories, wherein the processor is further configured to provide on the presentation device an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
16. The apparatus of claim 15 further comprising a user input device configured to receive a user input as the identification of the option for the context category.
17. The apparatus of claim 16 wherein the presentation device and the user input device are integrated into a single unit.
18. An apparatus comprising one or more processors collectively configured to perform the following operations:
providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
19. A processor readable medium having stored thereon instructions for causing one or more processors to collectively perform the following operations:
providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
20. A processor readable medium having stored thereon instructions for causing one or more processors to collectively perform one or more of the methods of claim 1.
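The ordered-option mechanism recited in claims 1-3 can be sketched as follows. This is an illustrative assumption, not code from the application: the category names ("audience", "genre"), the session history, and the frequency-counting heuristic for "likelihood of selection" are all hypothetical stand-ins for whatever model an implementation actually uses.

```python
from collections import Counter

# Hypothetical selection history: each entry records the option a user
# chose for each context category in one content-selection session.
HISTORY = [
    {"audience": "family", "genre": "animation"},
    {"audience": "family", "genre": "animation"},
    {"audience": "family", "genre": "comedy"},
    {"audience": "adults", "genre": "drama"},
    {"audience": "adults", "genre": "thriller"},
]

def ordered_options(category, prior_selections, history=HISTORY):
    """Return the options for `category`, ordered by likelihood of
    selection given the options already identified for other context
    categories (claims 1 and 3).

    Sessions consistent with the prior selections drive the primary
    ordering; ties (including never-co-selected options) fall back to
    overall frequency across all sessions.
    """
    # Sessions whose recorded options match every previously chosen option.
    consistent = [s for s in history
                  if all(s.get(c) == o for c, o in prior_selections.items())]
    counts = Counter(s[category] for s in consistent if category in s)
    overall = Counter(s[category] for s in history if category in s)
    # Most likely given the prior selections first; break ties by
    # overall popularity (Counter returns 0 for unseen options).
    return sorted(overall, key=lambda o: (-counts[o], -overall[o]))
```

For example, `ordered_options("genre", {"audience": "family"})` yields `["animation", "comedy", "drama", "thriller"]`: the genres co-selected with "family" lead, and the rest follow in overall-frequency order, matching the claim-3 notion of providing options in an order reflecting likelihood of selection.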
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/431,481 US20150249865A1 (en) | 2012-09-28 | 2012-12-17 | Context-based content recommendations |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261707077P | 2012-09-28 | 2012-09-28 | |
US14/431,481 US20150249865A1 (en) | 2012-09-28 | 2012-12-17 | Context-based content recommendations |
PCT/US2012/070017 WO2014051644A1 (en) | 2012-09-28 | 2012-12-17 | Context-based content recommendations |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150249865A1 true US20150249865A1 (en) | 2015-09-03 |
Family
ID=47521167
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/431,481 Abandoned US20150249865A1 (en) | 2012-09-28 | 2012-12-17 | Context-based content recommendations |
US13/858,180 Expired - Fee Related US9243484B1 (en) | 2012-09-14 | 2013-09-13 | Oil field steam generation using untreated water |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/858,180 Expired - Fee Related US9243484B1 (en) | 2012-09-14 | 2013-09-13 | Oil field steam generation using untreated water |
Country Status (6)
Country | Link |
---|---|
US (2) | US20150249865A1 (en) |
EP (1) | EP2901708A1 (en) |
JP (1) | JP2016502691A (en) |
KR (1) | KR20150065686A (en) |
CN (1) | CN104813680A (en) |
WO (1) | WO2014051644A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016153865A1 (en) | 2015-03-25 | 2016-09-29 | Thomson Licensing | Method and apparatus for providing content recommendation |
US10402410B2 (en) | 2015-05-15 | 2019-09-03 | Google Llc | Contextualizing knowledge panels |
JP6545743B2 (en) * | 2017-03-07 | 2019-07-17 | シャープ株式会社 | Display device, television receiver, display control method, display control program, control device, control method, control program, and recording medium |
WO2019058724A1 (en) * | 2017-09-21 | 2019-03-28 | シャープ株式会社 | Information processing device, portable terminal device, content recommendation method, and control program |
JP7134699B2 (en) * | 2018-05-11 | 2022-09-12 | 株式会社Nttドコモ | Information processing device and program |
JP7134698B2 (en) | 2018-05-11 | 2022-09-12 | 株式会社Nttドコモ | Information processing device and program |
US11359923B2 (en) | 2019-03-29 | 2022-06-14 | Volvo Car Corporation | Aligning content playback with vehicle travel |
US11200272B2 (en) | 2019-03-29 | 2021-12-14 | Volvo Car Corporation | Dynamic playlist priority in a vehicle based upon user preferences and context |
US11688293B2 (en) | 2019-03-29 | 2023-06-27 | Volvo Car Corporation | Providing educational media content items based on a determined context of a vehicle or driver of the vehicle |
CA3098744A1 (en) * | 2019-11-12 | 2021-05-12 | Innotech Alberta Inc. | Electrical vapor generation methods and related systems |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030130979A1 (en) * | 2001-12-21 | 2003-07-10 | Matz William R. | System and method for customizing content-access lists |
US20030197740A1 (en) * | 2002-04-22 | 2003-10-23 | Nokia Corporation | System and method for navigating applications using a graphical user interface |
US20040158870A1 (en) * | 2003-02-12 | 2004-08-12 | Brian Paxton | System for capture and selective playback of broadcast programs |
US8229977B1 (en) * | 2010-03-05 | 2012-07-24 | Sprint Communications Company L.P. | Web site deployment framework |
US20130167168A1 (en) * | 2006-07-31 | 2013-06-27 | Rovi Guides, Inc. | Systems and methods for providing custom movie lists |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2864502A (en) * | 1954-04-26 | 1958-12-16 | H2 Oil Engineering Corp | Methods and means for the treatment of oil, gas and water emulsions |
US4513733A (en) | 1982-11-12 | 1985-04-30 | The Babcock & Wilcox Company | Oil field steam production and use |
US6536523B1 (en) * | 1997-01-14 | 2003-03-25 | Aqua Pure Ventures Inc. | Water treatment process for thermal heavy oil recovery |
US20030126130A1 (en) * | 2001-12-31 | 2003-07-03 | Koninklijke Philips Electronics N.V. | Sort slider with context intuitive sort keys |
JP3964728B2 (en) * | 2002-05-02 | 2007-08-22 | 日本電信電話株式会社 | Information retrieval method and apparatus, execution program for the method, and recording medium recording the execution program for the method |
EA009398B1 (en) | 2003-11-26 | 2007-12-28 | Акватек Интернэшнл Корпорейшн | Method for production of high pressure steam from produced water |
US7736518B2 (en) * | 2005-02-14 | 2010-06-15 | Total Separation Solutions, Llc | Separating mixtures of oil and water |
US20070185899A1 (en) * | 2006-01-23 | 2007-08-09 | Msystems Ltd. | Likelihood-based storage management |
GB2448874A (en) * | 2007-04-30 | 2008-11-05 | Hewlett Packard Development Co | Context based media recommender |
BRPI0814085B1 (en) * | 2007-07-19 | 2021-01-05 | Shell Internationale Research Maatschappij B.V. | seawater processing system and method |
JP5896741B2 (en) * | 2008-07-23 | 2016-03-30 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Displaying music metadata at multiple hierarchy levels |
US9367618B2 (en) * | 2008-08-07 | 2016-06-14 | Yahoo! Inc. | Context based search arrangement for mobile devices |
US8746336B2 (en) | 2009-02-06 | 2014-06-10 | Keith Minnich | Method and system for recovering oil and generating steam from produced water |
US9114406B2 (en) | 2009-12-10 | 2015-08-25 | Ex-Tar Technologies | Steam driven direct contact steam generation |
US20120160187A1 (en) | 2010-12-23 | 2012-06-28 | Paxton Corporation | Zero emission steam generation process |
2012
- 2012-12-17 WO PCT/US2012/070017 patent/WO2014051644A1/en active Application Filing
- 2012-12-17 US US14/431,481 patent/US20150249865A1/en not_active Abandoned
- 2012-12-17 JP JP2015534453A patent/JP2016502691A/en active Pending
- 2012-12-17 EP EP12812782.6A patent/EP2901708A1/en not_active Ceased
- 2012-12-17 CN CN201280076106.9A patent/CN104813680A/en active Pending
- 2012-12-17 KR KR1020157007733A patent/KR20150065686A/en not_active Application Discontinuation

2013
- 2013-09-13 US US13/858,180 patent/US9243484B1/en not_active Expired - Fee Related
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150350600A1 (en) * | 2012-10-31 | 2015-12-03 | Hewlett-Packard Development Company, L.P. | Visual call apparatus and method |
US20140143737A1 (en) * | 2012-11-20 | 2014-05-22 | Samsung Electronics Company, Ltd. | Transition and Interaction Model for Wearable Electronic Device |
US11372536B2 (en) * | 2012-11-20 | 2022-06-28 | Samsung Electronics Company, Ltd. | Transition and interaction model for wearable electronic device |
US11070860B2 (en) * | 2013-02-14 | 2021-07-20 | Comcast Cable Communications, Llc | Content delivery |
US11204958B2 (en) * | 2013-03-15 | 2021-12-21 | Pandora Media, Llc | System and method of personalizing playlists using memory-based collaborative filtering |
US20150338928A1 (en) * | 2014-05-26 | 2015-11-26 | Samsung Electronics Co., Ltd. | Display apparatus and controlling method thereof |
US9965049B2 (en) * | 2014-05-26 | 2018-05-08 | Samsung Electronics Co., Ltd. | Display apparatus and controlling method thereof |
US11722848B2 (en) | 2014-06-16 | 2023-08-08 | Comcast Cable Communications, Llc | User location and identity awareness |
US11622160B2 (en) | 2014-08-11 | 2023-04-04 | Comcast Cable Communications, Llc | Merging permissions and content access |
US20170127102A1 (en) * | 2015-10-30 | 2017-05-04 | Le Holdings (Beijing) Co., Ltd. | Method and electronic device for video recommendation |
WO2017071244A1 (en) * | 2015-10-30 | 2017-05-04 | 乐视控股(北京)有限公司 | Mobile phone screen-based video recommendation method and system |
CN105812830A (en) * | 2016-03-11 | 2016-07-27 | 传成文化传媒(上海)有限公司 | Recommendation method and system of hotel service content |
US11038932B2 (en) | 2016-12-31 | 2021-06-15 | Turner Broadcasting System, Inc. | System for establishing a shared media session for one or more client devices |
US10694231B2 (en) | 2016-12-31 | 2020-06-23 | Turner Broadcasting System, Inc. | Dynamic channel versioning in a broadcast air chain based on user preferences |
US11917217B2 (en) | 2016-12-31 | 2024-02-27 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode based on user selection |
US10425700B2 (en) * | 2016-12-31 | 2019-09-24 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on real-time or near-real-time content context analysis |
US11665398B2 (en) | 2016-12-31 | 2023-05-30 | Turner Broadcasting System, Inc. | Creation of channels using pre-encoded media assets |
US10645462B2 (en) | 2016-12-31 | 2020-05-05 | Turner Broadcasting System, Inc. | Dynamic channel versioning in a broadcast air chain |
US10965967B2 (en) | 2016-12-31 | 2021-03-30 | Turner Broadcasting System, Inc. | Publishing a disparate per-client live media output stream based on dynamic insertion of targeted non-programming content and customized programming content |
US10992973B2 (en) | 2016-12-31 | 2021-04-27 | Turner Broadcasting System, Inc. | Publishing a plurality of disparate live media output stream manifests using live input streams and pre-encoded media assets |
US11503352B2 (en) | 2016-12-31 | 2022-11-15 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on external data |
US11051061B2 (en) | 2016-12-31 | 2021-06-29 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream using pre-encoded media assets |
US11051074B2 (en) | 2016-12-31 | 2021-06-29 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams using live input streams |
US10856016B2 (en) | 2016-12-31 | 2020-12-01 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode based on user selection |
US10750224B2 (en) | 2016-12-31 | 2020-08-18 | Turner Broadcasting System, Inc. | Dynamic scheduling and channel creation based on user selection |
US11134309B2 (en) | 2016-12-31 | 2021-09-28 | Turner Broadcasting System, Inc. | Creation of channels using pre-encoded media assets |
US11109086B2 (en) | 2016-12-31 | 2021-08-31 | Turner Broadcasting System, Inc. | Publishing disparate live media output streams in mixed mode |
US10939169B2 (en) | 2017-05-25 | 2021-03-02 | Turner Broadcasting System, Inc. | Concurrent presentation of non-programming media assets with programming media content at client device |
US11297386B2 (en) | 2017-05-25 | 2022-04-05 | Turner Broadcasting System, Inc. | Delivery of different services through different client devices |
US10827220B2 (en) | 2017-05-25 | 2020-11-03 | Turner Broadcasting System, Inc. | Client-side playback of personalized media content generated dynamically for event opportunities in programming media content |
US11095942B2 (en) | 2017-05-25 | 2021-08-17 | Turner Broadcasting System, Inc. | Rules-based delivery and presentation of non-programming media items at client device |
US11228809B2 (en) | 2017-05-25 | 2022-01-18 | Turner Broadcasting System, Inc. | Delivery of different services through different client devices |
US11245964B2 (en) | 2017-05-25 | 2022-02-08 | Turner Broadcasting System, Inc. | Management and delivery of over-the-top services over different content-streaming systems |
US11109102B2 (en) | 2017-05-25 | 2021-08-31 | Turner Broadcasting System, Inc. | Dynamic verification of playback of media assets at client device |
US11051073B2 (en) | 2017-05-25 | 2021-06-29 | Turner Broadcasting System, Inc. | Client-side overlay of graphic items on media content |
US10924804B2 (en) | 2017-05-25 | 2021-02-16 | Turner Broadcasting System, Inc. | Dynamic verification of playback of media assets at client device |
US11082734B2 (en) | 2018-12-21 | 2021-08-03 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream that complies with distribution format regulations |
US10880606B2 (en) | 2018-12-21 | 2020-12-29 | Turner Broadcasting System, Inc. | Disparate live media output stream playout and broadcast distribution |
US20200204834A1 (en) | 2018-12-22 | 2020-06-25 | Turner Broadcasting Systems, Inc. | Publishing a Disparate Live Media Output Stream Manifest That Includes One or More Media Segments Corresponding to Key Events |
US10873774B2 (en) | 2018-12-22 | 2020-12-22 | Turner Broadcasting System, Inc. | Publishing a disparate live media output stream manifest that includes one or more media segments corresponding to key events |
US20230217075A1 (en) * | 2020-09-23 | 2023-07-06 | Samsung Electronics Co., Ltd. | Display device and control method therefor |
Also Published As
Publication number | Publication date |
---|---|
CN104813680A (en) | 2015-07-29 |
JP2016502691A (en) | 2016-01-28 |
WO2014051644A1 (en) | 2014-04-03 |
US9243484B1 (en) | 2016-01-26 |
EP2901708A1 (en) | 2015-08-05 |
KR20150065686A (en) | 2015-06-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150249865A1 (en) | Context-based content recommendations | |
US10963498B2 (en) | Systems and methods for automatic program recommendations based on user interactions | |
JP5619621B2 (en) | System and method for selecting media assets to be displayed on a screen of an interactive media guidance application | |
JP2020115355A (en) | System and method of content display | |
JP5328658B2 (en) | Present media guidance search results based on relevance | |
US8285726B2 (en) | Presenting media guidance search results based on relevancy | |
US7996399B2 (en) | Presenting media guidance search results based on relevancy | |
US20130007618A1 (en) | Systems and methods for mixed-media content guidance | |
US20130054319A1 (en) | Methods and systems for presenting a three-dimensional media guidance application | |
US20130347033A1 (en) | Methods and systems for user-induced content insertion | |
US20140172891A1 (en) | Methods and systems for displaying location specific content | |
US20140298215A1 (en) | Method for generating media collections | |
JP2013513304A (en) | System and method for determining proximity of media objects in a 3D media environment | |
JP5766220B2 (en) | Present media guidance search results based on relevance | |
JP2014508984A (en) | Method and apparatus for providing media recommendations | |
US20150363500A1 (en) | Method and system for content discovery | |
US20140245353A1 (en) | Methods and systems for displaying media listings | |
CN106687957B (en) | Method and apparatus for search query construction | |
WO2015153125A1 (en) | System and method for interactive discovery for cold-start recommendation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |