US20160182971A1 - Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game - Google Patents

Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game

Info

Publication number
US20160182971A1
US20160182971A1 (application US14/976,493)
Authority
US
United States
Prior art keywords
data
region
display
video
additional information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/976,493
Inventor
Luis M. Ortiz
Richard H. Krukar
Kermit D. Lopez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Flick Intelligence LLC
Original Assignee
Flickintel LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/976,148 external-priority patent/US9508387B2/en
Priority claimed from US13/345,382 external-priority patent/US8751942B2/en
Priority claimed from US13/413,859 external-priority patent/US9465451B2/en
Application filed by Flickintel LLC filed Critical Flickintel LLC
Priority to US14/976,493 priority Critical patent/US20160182971A1/en
Publication of US20160182971A1 publication Critical patent/US20160182971A1/en
Assigned to Flick Intelligence, LLC. Assignment of assignors interest (see document for details). Assignors: FLICK INTEL, LLC
Priority to US16/845,604 priority patent/US11496814B2/en
Legal status: Abandoned (current)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03542Light pens for emitting or receiving light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4126The peripheral being portable, e.g. PDAs or mobile phones
    • H04N21/41265The peripheral being portable, e.g. PDAs or mobile phones having a remote control device for bidirectional communication between the remote control device and client device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N21/42207
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42208Display device provided on the remote control
    • H04N21/42209Display device provided on the remote control for displaying non-command information, e.g. electronic program guide [EPG], e-mail, messages or a second television channel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/4222Remote control device emulator integrated into a non-television apparatus, e.g. a PDA, media center or smart toy
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42222Additional components integrated in the remote control device, e.g. timer, speaker, sensors for detecting position, direction or movement of the remote control, microphone or battery charging device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42224Touch pad or touch panel provided on the remote control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43079Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of additional data with content streams on multiple devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4314Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/47815Electronic shopping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/61Network physical structure; Signal processing
    • H04N21/6106Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
    • H04N21/6125Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4784Supplemental services, e.g. displaying phone caller identification, shopping application receiving rewards

Definitions

  • Embodiments relate to video content, video displays, and video compositing. Embodiments also relate to computer systems, user input devices, databases, and computer networks. Embodiments also relate to environments in which mobile device users can access supplemental data related to entertainment being displayed on multimedia devices. Embodiments are also related to methods, systems and processor-readable media for supporting entertainment programming identification and data retrieval.
  • It is one aspect of the disclosed embodiments to provide methods, systems and applications for supporting the retrieval of information about a scene (e.g., live or recorded video programming) being displayed on a display screen without disturbing the scene as it is being rendered on the display screen.
  • a method can include, for example, registering at least one wireless hand held device with a server, service or controller providing background information about a scene and/or associated with at least one multimedia display.
  • a program operating on a mobile device can enable a mobile device with a digital camera to capture a picture of a scene, video, or a scene from video rendering (being displayed) on a display screen (e.g., a flat panel display), determine the identity of the scene or video by comparing the captured picture with images of programming stored in a database, and provide the mobile device with access to data related to the identified scene, which can include information about the scene (title, date, location, actors/characters, product placement, history, statistics) for review on or through the mobile device.
  • the database can be associated with a remote server that can be accessed over a data communications network (wired or wireless) by the mobile device.
  • Providing supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held mobile device, based on a selection (capture) of the data rendering on the at least one multimedia display, can require registration of the wireless hand held mobile device with the remote server (e.g., a service provided remotely to registered users).
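  • For illustration only, a minimal Python sketch of the registration, image capture, identification, and supplemental-data retrieval flow described above is shown below; the server URL, endpoint paths, and JSON field names are hypothetical assumptions and do not appear in the original disclosure.

```python
# Hypothetical sketch of the capture -> identify -> supplemental-data flow.
# Endpoint paths and field names are assumptions for illustration only.
import requests

SERVER = "https://example-flick-service.test/api"  # placeholder URL

def register_device(device_id: str, user_profile: dict) -> str:
    """Register the wireless hand held device and return a session token."""
    resp = requests.post(f"{SERVER}/register",
                         json={"device_id": device_id, "profile": user_profile})
    resp.raise_for_status()
    return resp.json()["session_token"]

def identify_scene(token: str, image_path: str) -> dict:
    """Upload a captured frame; the server matches it against stored programming images."""
    with open(image_path, "rb") as f:
        resp = requests.post(f"{SERVER}/identify",
                             headers={"Authorization": f"Bearer {token}"},
                             files={"frame": f})
    resp.raise_for_status()
    return resp.json()  # e.g. {"program_id": ..., "title": ..., "timestamp": ...}

def fetch_supplemental_data(token: str, program_id: str) -> dict:
    """Retrieve supplemental data (actors, products, history, statistics) for the match."""
    resp = requests.get(f"{SERVER}/programs/{program_id}/supplemental",
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = register_device("hhd-120", {"language": "en"})
    match = identify_scene(token, "captured_frame.jpg")
    print(fetch_supplemental_data(token, match["program_id"]))
```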
  • a step can be implemented for obtaining supplemental information from an event in the form of at least one of a movie, television program, recorded event, a live event, sporting event, or a multimedia game.
  • a step can be implemented for manipulating, via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device.
  • a step can be implemented for storing the supplemental data in a memory in response to a user input via the at least one wireless hand held device.
  • a step can be implemented for pushing the supplemental data for rendering via at least one other multimedia display from the wireless handheld mobile device.
  • a step can be implemented for pushing the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display.
  • a step can be provided for filtering the supplemental data for rendering via a multimedia display.
  • a system can be implemented for supporting scene identification and related data retrieval with a mobile device that can include any of an integrated digital camera, a processor, wireless data communications, and a location determination module (e.g., GPS).
  • the mobile device can be configured for selecting at least one profile icon for use as a cursor during interaction of the at least one wireless hand held device with the at least one multimedia display during rendering of an event as data on the at least one multimedia display; and providing supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held device based on a selection of the data rendered on the at least one multimedia display marked by the cursor utilizing the at least one wireless hand held device.
  • such instructions can be further configured for providing the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game.
  • such instructions can be further configured for manipulating, via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device.
  • such instructions can be further configured for storing the supplemental data in a memory in response to a user input via the at least one wireless hand held device.
  • such instructions can be configured for pushing the supplemental data for rendering via at least one other multimedia display from a remote server via the mobile device or directly from the server to the at least one other multimedia display over a data network.
  • such features can be further configured for pushing the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display.
  • such instructions can be further configured for filtering the supplemental data for rendering via a multimedia display.
  • a processor-readable medium in a mobile device can be implemented for storing code representing instructions or an application to cause a processor on the mobile device to facilitate image capturing, image matching via a remote server, access to and retrieval of supplemental information related to programming matching the captured image, and rendering of the supplemental data on the mobile device or via a secondary display screen.
  • the mobile device can support bidirectional communications and data sharing.
  • remote servers can include code to register at least one wireless hand held device with the remote server to gain access to image-program matching capabilities and access supplemental information related to programming that matches images of events rendering on multimedia display screens located near (within image capture and/or WIFI range of) the mobile devices.
  • a profile can be associated with a registered mobile device/user.
  • location information from a mobile device and/or profile information can be utilized by a remote server to locate at least one additional remote server to support supplemental data access by the mobile device.
  • a remote server located closer to the mobile device can reduce transmission inefficiencies associated with data networks and long distances.
  • an additional remote server can be selected based on a profile associated with the registered user/mobile device so that information can be provided in a different format pre-selected by or related to the user/mobile device, such as a different language (e.g., in French or Spanish, instead of English).
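  • A minimal sketch of how such a selection might work is shown below; the candidate server list, coordinates, and the haversine-based nearest-server rule are illustrative assumptions rather than the disclosed method.

```python
# Hypothetical sketch: choose an additional remote server close to the mobile
# device that also supports the user's preferred language.  The server list
# and coordinates are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

CANDIDATE_SERVERS = [
    {"host": "us-west.example.test", "lat": 37.4, "lon": -122.1, "languages": {"en", "es"}},
    {"host": "eu-west.example.test", "lat": 48.9, "lon": 2.4,    "languages": {"en", "fr"}},
]

def _distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def select_server(device_lat, device_lon, preferred_language):
    """Return the nearest candidate server that can serve the profile's language."""
    usable = [s for s in CANDIDATE_SERVERS if preferred_language in s["languages"]]
    if not usable:
        usable = CANDIDATE_SERVERS          # fall back to any server
    return min(usable, key=lambda s: _distance_km(device_lat, device_lon, s["lat"], s["lon"]))

print(select_server(34.0, -118.2, "es")["host"])   # -> us-west.example.test
```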
  • such code can further comprise code to provide the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game.
  • such code can comprise code to manipulate, via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device.
  • such code can comprise code to store the supplemental data in a memory in response to a user input via the at least one wireless hand held device.
  • such code can comprise code to push the supplemental data for rendering via at least one other multimedia display.
  • such code can comprise code to push the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display.
  • such code can comprise code to filter the supplemental data for rendering via a multimedia display.
  • a method can be implemented for displaying additional information about a displayed point of interest. Such a method may include selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region, and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device. In still other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a gyro-controlled pointing device. In yet other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a laser pointer.
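  • The following sketch illustrates one possible way to map a region selected within a particular frame to a point of interest and hand its additional information to a secondary display; the region data, frame numbers, and rendering stand-in are assumptions for illustration only.

```python
# Hypothetical sketch: map a selected region within a particular frame to a
# point of interest and hand its additional information to a secondary display.
from dataclasses import dataclass

@dataclass
class AnnotatedRegion:
    frame: int                                    # frame in which the region is selectable
    x0: float; y0: float; x1: float; y1: float    # normalized screen coordinates
    info: dict                                    # additional information about the point of interest

REGIONS = [
    AnnotatedRegion(frame=1200, x0=0.10, y0=0.20, x1=0.35, y1=0.55,
                    info={"name": "Lead actor", "bio": "..."}),
]

def select_region(frame: int, x: float, y: float):
    """Return the point-of-interest info for the region selected at (x, y) in 'frame'."""
    for r in REGIONS:
        if r.frame == frame and r.x0 <= x <= r.x1 and r.y0 <= y <= r.y1:
            return r.info
    return None

def show_on_secondary_display(info: dict) -> None:
    """Stand-in for pushing the additional information to a secondary display."""
    print("SECONDARY DISPLAY:", info)

info = select_region(frame=1200, x=0.2, y=0.3)
if info:
    show_on_secondary_display(info)
```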
  • the mobile device can be, for example, a Smartphone.
  • the mobile device can be, for example, a pad computing device (e.g., an iPad, an Android tablet computing device, a Kindle (Amazon) device, etc.).
  • the mobile device can be a remote gaming device.
  • a step or operation can be implemented for synchronizing the display with the secondary display through a network.
  • the aforementioned network can be, for example, a wireless network (e.g., Wi-Fi), a cellular communications network, the Internet, etc.
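  • One conceivable way to synchronize the display with the secondary display over such a network is sketched below; the UDP broadcast approach, port number, and message format are assumptions and not part of the original disclosure.

```python
# Hypothetical sketch: keep a secondary display in sync with the primary
# display by periodically broadcasting the elapsed playback time.
import json
import socket
import time

SYNC_PORT = 50007  # assumed port

def make_broadcast_socket() -> socket.socket:
    """Create a UDP socket permitted to broadcast on the local network."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return s

def broadcast_sync(video_start: float, sock: socket.socket) -> None:
    """Primary display side: send the elapsed time from video start."""
    msg = json.dumps({"elapsed": time.time() - video_start}).encode()
    sock.sendto(msg, ("255.255.255.255", SYNC_PORT))

def apply_sync(msg: bytes, local_elapsed: float) -> float:
    """Secondary display side: return the playback offset needed to stay in sync."""
    remote_elapsed = json.loads(msg)["elapsed"]
    return remote_elapsed - local_elapsed

# Example (single process): an offset of +1.5 s means the secondary display lags.
print(apply_sync(json.dumps({"elapsed": 61.5}).encode(), 60.0))
```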
  • a system can be implemented for displaying additional information about a displayed point of interest.
  • a system can include, for example, a memory; and a processor in communication with the memory, wherein the system is configured to perform a method, wherein such a method comprises, for example, selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region; and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a gyro-controlled pointing device.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame being rendered on a touch-sensitive display screen associated with the mobile device for selection via touch sensitivity of the touch-sensitive display by the mobile device user.
  • the mobile device may be, for example, a Smartphone. In still other system embodiments, the mobile device can be a tablet or pad computing device. In yet other system embodiments, the mobile device may be, for example, a remote gaming device. In still other system embodiments, the aforementioned method can include synchronizing the mobile device with a secondary display through a network.
  • FIG. 1 illustrates a high level diagram of portions of a system including a wireless Hand Held Device (HHD) visually near a multimedia display screen, which can capture images of video programming being presented on the display screen and/or be implemented as an active pointer, in accordance with the disclosed embodiments;
  • FIG. 2 illustrates two cursors being displayed over selectable and non-selectable screen areas, in accordance with the disclosed embodiments
  • FIG. 3 illustrates a system that includes at least one HHD and at least one multimedia display, in accordance with the disclosed embodiments
  • FIG. 4 illustrates a system that includes a user accessing video information, video annotation infrastructure and/or video programming identification via an HHD having a video camera and adapted to capture a photographic image of video programming being displayed on the display screen, and/or alternatively via an augmented reality device, in accordance with the disclosed embodiments.
  • FIG. 5 illustrates an annotated group video of a user's social contacts, in accordance with the disclosed embodiments
  • FIG. 6 illustrates a system which includes differences between cursor control and element identification, in accordance with the disclosed embodiments
  • FIG. 7 illustrates a system that depicts additional processes and methods of cursor selection, in accordance with the disclosed embodiments
  • FIG. 8 illustrates a graphical view of a system that includes a multimedia display with an annotated element, in accordance with the disclosed embodiments
  • FIG. 9 illustrates a system utilizing annotated video in a setting having multiple independent users viewing a single display such as a movie screen, in accordance with the disclosed embodiments;
  • FIG. 10 illustrates a system similar to the system of FIG. 9, but adapted for sports venues, music venues, theatrical events, and other events (e.g., political conventions, trade shows, etc.), in accordance with the disclosed embodiments;
  • FIG. 11 illustrates a system, in accordance with the disclosed embodiments.
  • FIG. 12 illustrates a system depicting a Sync Data Request that can be sent to an identification service, in accordance with an embodiment
  • FIG. 13 illustrates a system that includes a Sync Data Response that an identification service can send in response to a sync data request, in accordance with an embodiment
  • FIG. 14 illustrates a system for maintaining unique identifiers for elements that appear in videos and in using those unique identifiers for scattering and gathering user requests and responses, in accordance with the disclosed embodiments.
  • FIG. 15 illustrates a system generally including a multimedia display associated with a controller and which communicates with one or more wireless HHD's, in accordance with the disclosed embodiments.
  • FIG. 16 illustrates a system that generally includes a multimedia display and a secondary multimedia display, in accordance with the disclosed embodiments.
  • FIG. 17 illustrates a system in which an HHD can communicate wirelessly via a data network (e.g., the Internet) with a remote server associated with a database for providing at least one of: identification, based on a captured image provided from an HHD, of an event or video programming rendering on a multimedia display screen; obtaining supplemental data associated with the event or video programming by the HHD once the event or video programming is identified; and storage of the supplemental data in a memory or database for later retrieval, in accordance with the disclosed embodiments.
  • FIG. 18 illustrates a system in which multiple HHD's can communicate with the controller and/or the multimedia displays, and also illustrates that additional multimedia devices such as flat panel display screens can be provided for rendering supplemental data obtained by the HHD from a remote server based on the matching of an image captured by a camera integrated with the HHD and matched by the remote server, in accordance with the disclosed embodiments;
  • FIG. 19 illustrates a system in which multiple HHD's (or a single HHD in some cases) can communicate via a data network 520 with a remote server and a database associated with and/or in communication with the server, in accordance with the disclosed embodiments;
  • FIG. 20 illustrates a graphical view of a system that includes a multimedia display and one or more wireless HHD's, etc., in accordance with the disclosed embodiments;
  • FIG. 21 illustrates a flow chart depicting logical operational steps of a method for registering and associating an HHD with a display screen and displaying supplemental data in accordance with the disclosed embodiments
  • FIG. 22 illustrates a flow chart depicting logical operational steps of a method for transmitting, storing and retrieving supplemental data and registering an HHD with a secondary screen, in accordance with the disclosed embodiments;
  • FIG. 23 illustrates a flow chart depicting logical operational steps of a method for designating and displaying profile icons for selection of content/media of interest and supplemental data thereof, in accordance with the disclosed embodiments;
  • FIG. 24 illustrates a flow chart depicting logical operation steps of a method for capturing with an HHD camera an image of video programming being displayed on a display screen, providing the image to a remote server for comparison to and matching with images of programming stored in a database, identifying a match and the availability of supplemental data to the HHD from the remote server, and enabling HHD access to the supplemental data including elements of interest associated with or tagged in the image of video programming captured by the HHD;
  • FIG. 25 illustrates a schematic diagram of a set that includes a few elements such as actors and some props positioned in front of a “green screen,” in accordance with an example embodiment
  • FIG. 26 illustrates a schematic diagram of an example wherein the scene of FIG. 25 is imaged to produce a final view in which a background view has been composited with a foreground view, in accordance with an example embodiment;
  • FIG. 27 illustrates a schematic diagram of a view changing because the camera moves while imaging a scene, in accordance with an example embodiment
  • FIG. 28 illustrates a schematic diagram of a scenario wherein a user can select an item by selecting a point within the final view (i.e., a spot on the screen), in accordance with an example embodiment
  • FIG. 29 illustrates a block diagram of a system that includes an annotation database accessible via an annotation service, in accordance with an example embodiment.
  • the present invention can be embodied as a method, system, and/or a processor-readable medium. Accordingly, the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all generally referred to herein as a “circuit” or “module.” Furthermore, the embodiments may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer-readable medium or processor-readable medium may be utilized including, for example, but not limited to, hard disks, USB flash drives, DVDs, CD-ROMs, optical storage devices, magnetic storage devices, etc.
  • Computer program code for carrying out operations of the disclosed embodiments may be written in an object oriented programming language (e.g., Java, C++, etc.).
  • the computer program code, however, for carrying out operations of the disclosed embodiments may also be written in conventional procedural programming languages such as the “C” programming language, HTML, XML, etc., or in a visually oriented programming environment such as, for example, Visual Basic.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer.
  • the remote computer may be connected to a user's computer through a local area network (LAN), a wide area network (WAN), or a wireless data network (e.g., WiFi, WiMAX, 802.xx, or a cellular network), or the connection may be made to an external computer via most third-party supported networks (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
  • program modules include, but are not limited to, routines, subroutines, software applications, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and instructions.
  • module may refer to a collection of routines and data structures that perform a particular task or implement a particular abstract data type. Modules may be composed of two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines, and an implementation, which is typically private (accessible only to that module) and which includes source code that actually implements the routines in the module.
  • the term module may also simply refer to an application such as a computer program designed to assist in the performance of a specific task such as word processing, accounting, inventory management, etc. Additionally, the term “module” can also refer in some instances to a hardware component such as a computer chip or other hardware.
  • circuits and other means supported by each block and combinations of blocks can be implemented by special purpose hardware, software or firmware operating on special or general-purpose data processors, or combinations thereof. It should also be noted that, in some alternative implementations, the operations noted in the blocks can occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order. The varying embodiments described herein can also be combined with one another, or portions of such embodiments can be combined with portions of other embodiments in another embodiment.
  • FIG. 1 illustrates a high level diagram of portions of a system 100 including a wireless Hand Held Device (HHD) 120 , located visually near a multimedia display screen, which can capture images of video programming being presented on the display screen and/or be implemented as an active pointer, in accordance with the disclosed embodiments.
  • the HHD 120 may be, for example, a wireless hand held device such as, for example, a Smartphone, a cellular telephone, a remote control device such as a video game control device, a television/set-top box remote control, a tablet computing device (e.g., “iPad”), and so forth.
  • An active pointer is one that emits a signal. Remote controls emitting an ultraviolet signal, a flashlight, or a laser pointer are examples of active pointers.
  • an active pointing application can be incorporated into an HHD such as HHD 120 and can also communicate via wireless bidirectional communications with other devices, such as, for example the multimedia display 121 shown in FIG. 1 .
  • a multimedia display can be, for example, a flat panel television or monitor or other display screen (e.g., a movie theater display screen), which in turn can be, in some scenarios, associated with a controller or set-top box, etc., or can integrate such features therein.
  • a user 122 can hold the HHD 120 , which in turn can emit and receive wireless signals.
  • the multimedia display device 121 generally includes a multimedia display area or multimedia display 126 .
  • the HHD 120 can also include an integrated digital camera and wireless data communications, which are features common in, for example, smartphone devices and tablets that are in wide use today. Configured in this manner, the HHD can capture an image (e.g. photograph) of video programming as it is rendering on the multimedia display 126 .
  • information can be embedded in the emitted signal such as identification information that identifies the pointer or HHD 120 or that identifies the user 122 .
  • the user 122 can be recognized by the HHD 120 or pointer in a variety of ways such as by requiring a passcode tapped out on one or more buttons, a unique gesture detected by accelerometers, or biometric data such as a fingerprint, retinal pattern, or one of the body's naturally occurring and easily measurable signals such as heartbeat that has been shown to often contain unique traits.
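  • A minimal sketch of such user recognition is shown below, assuming a hashed passcode or biometric template is compared at input time; the stored values and hashing scheme are illustrative assumptions and not part of the original disclosure.

```python
# Hypothetical sketch: recognize the HHD user before honoring pointer input,
# using a tapped-out passcode or a biometric hash.
import hashlib
import hmac

REGISTERED_USERS = {
    # user id -> SHA-256 digest of the enrolled passcode / biometric template
    "user-122": hashlib.sha256(b"tap-tap-pause-tap").hexdigest(),
}

def recognize_user(user_id: str, presented_secret: bytes) -> bool:
    """Return True if the presented passcode or biometric sample matches enrollment."""
    expected = REGISTERED_USERS.get(user_id)
    if expected is None:
        return False
    presented = hashlib.sha256(presented_secret).hexdigest()
    return hmac.compare_digest(expected, presented)   # constant-time comparison

print(recognize_user("user-122", b"tap-tap-pause-tap"))   # True
```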
  • One of the datums often encoded in the emitted signal is the state of a selection button.
  • One very simple case would involve the use of a laser pointer as the HHD 120 that is simply on or off.
  • An advantage of embedding a unique identifier in the emitted signal is that multiple pointers can be used simultaneously.
  • a pointer detector 114 can read and track the emitted signal
  • the pointer detector 114 typically has multiple sensors and can determine where the pointer or HHD 120 is pointing.
  • the pointer detector 114 and the multimedia display device presenting the multimedia display 126 can be separate units that are located near one another.
  • a calibration routine can be utilized so that the pointer detector 114 can accurately determine where on the display 126 the pointer is aimed.
  • the output of the pointer detector 114 can include the pointer's aim point and the status of the selection button or other selection actuator.
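  • The sketch below illustrates one possible form of the pointer detector's per-pointer output (unique identifier, aim point, and selection-button state), allowing multiple pointers to be tracked simultaneously; the field names and smoothing step are illustrative assumptions.

```python
# Hypothetical sketch of the pointer detector's output: each decoded sample
# carries the pointer's unique identifier, its aim point on the display, and
# the selection-button state, so several pointers can be tracked at once.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class PointerSample:
    pointer_id: str          # unique identifier embedded in the emitted signal
    aim_x: float             # aim point, normalized display coordinates
    aim_y: float
    select_pressed: bool     # state of the selection button

class PointerDetector:
    """Tracks the latest aim point per pointer, lightly smoothed between samples."""
    def __init__(self, smoothing: float = 0.3) -> None:
        self.smoothing = smoothing
        self._last: Dict[str, Tuple[float, float]] = {}

    def update(self, sample: PointerSample) -> Tuple[float, float, bool]:
        prev = self._last.get(sample.pointer_id, (sample.aim_x, sample.aim_y))
        x = prev[0] + self.smoothing * (sample.aim_x - prev[0])
        y = prev[1] + self.smoothing * (sample.aim_y - prev[1])
        self._last[sample.pointer_id] = (x, y)
        return x, y, sample.select_pressed

detector = PointerDetector()
print(detector.update(PointerSample("pointer-A", 0.40, 0.60, select_pressed=True)))
```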
  • the multimedia display device 121 and hence the multimedia display 126 present video data such as a movie, television show, or sporting event to the user 122 .
  • the video data can be annotated by associating it with annotation selection data.
  • the annotation selection data 110 specifies the times and screen locations at which certain scene elements, each having an element identifier 116 , are displayed.
  • the annotation selection data 110 can be included with the video data as a data channel similar to the red, green, blue channels present in some video formats, can be encoded into blank spaces in an NTSC type signal, can be separately transmitted as a premium service, can be downloaded and stored, or another technique by which the user has access to both the video data and the annotation data can be used.
  • synchronization data 106 can be utilized to ensure that the video data and annotation selection data 110 are in sync.
  • One example of synchronization data is the elapsed time from the video start until the current frame is displayed.
  • the annotation data can specify selectable zones.
  • a simple image can have selectable areas.
  • a video is a timed sequence of images.
  • a selectable zone is a combination of selectable areas and the time periods during which the areas are selectable.
  • a selectable zone has coordinates in both space and time and can be specified as a series of discrete coordinates, a series of temporal extents (time periods) and areal extents (bounding polygons or closed curves), as areal extents that change as a function of time, etc.
  • Areal extents can change as a function of time when a selectable area moves on the display, changes size during a scene, or changes shape during a scene.
  • selectable zone and “cursor coordinate” will be used in this disclosure as having both spatial and temporal coordinates.
  • a selectable zone can be specified in terms of on-screen extents and time periods.
  • a cursor coordinate combines the aim point and a media time stamp.
  • Annotated scene elements can be specified as selectable zones associated with one or more element identifiers, such as, for example, element identifier 116 .
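  • A minimal sketch of a selectable zone with both spatial and temporal extents, and of the hit test for a cursor coordinate (aim point plus media time stamp), is shown below; the zone data and element identifier are illustrative assumptions and not taken from the disclosure.

```python
# Hypothetical sketch of a selectable zone and the cursor-coordinate hit test.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SelectableZone:
    element_id: str                              # identifier of the annotated scene element
    t_start: float; t_end: float                 # temporal extent (seconds of media time)
    x0: float; y0: float; x1: float; y1: float   # areal extent (bounding box)

    def contains(self, x: float, y: float, t: float) -> bool:
        return (self.t_start <= t <= self.t_end
                and self.x0 <= x <= self.x1
                and self.y0 <= y <= self.y1)

ANNOTATION_SELECTION_DATA = [
    SelectableZone("bletch-cola", t_start=120.0, t_end=128.5,
                   x0=0.55, y0=0.10, x1=0.80, y1=0.40),
]

def element_at(x: float, y: float, media_time: float) -> Optional[str]:
    """Return the element identifier under the cursor coordinate, if any."""
    for zone in ANNOTATION_SELECTION_DATA:
        if zone.contains(x, y, media_time):
            return zone.element_id
    return None

print(element_at(0.6, 0.2, 125.0))   # -> 'bletch-cola'
```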
  • U.S. Pub No.: 20080066129 A1 submitted by Katcher et al., titled: “Method and Apparatus for Interaction with Hyperlinks in a Television Broadcast” was filed Nov. 8, 2007 and is herein incorporated by reference in its entirety.
  • U.S. Pub No.: 20080066129 A1 teaches and discloses the annotation of video, video display systems, video data systems, databases, and viewer interaction with annotated video to obtain information about on-screen items. It is for its teachings of video, video display systems, video data systems, databases, viewer interaction with annotated video, and viewer obtainment of information about on-screen items that U.S. Pub No.: 20080066129 A1 is herein incorporated by reference in its entirety.
  • U.S. Pub No.: 20100154007 A1 submitted by Touboul et al., titled: “Embedded Video Advertising Method and System” was filed Apr. 21, 2009 and is herein incorporated by reference in its entirety.
  • U.S. Pub No.: 20100154007 A1 teaches and discloses the annotation of video, video display systems, video data systems, databases, and viewer interaction with annotated video to obtain information about on-screen items. It is for its teachings of video, video display systems, video data systems, databases, viewer interaction with annotated video, and viewer obtainment of information about on-screen items that U.S. Pub No.: 20100154007 A1 is herein incorporated by reference in its entirety.
  • a cursor control module 108 can change the appearance of a displayed cursor 124 based on the cursor coordinate.
  • the aim point can be passed to the cursor control module 108 that also receives synchronization data.
  • the synchronization data can indicate the displayed scene, elapsed time within a scene, elapsed time from video start, the media time stamp, or other data that helps determine the media time stamp.
  • the cursor coordinate can be determined from the aim point and the media time stamp.
  • the cursor control module 108 can examine the annotation selection data for the cursor coordinate to determine if the pointer is aimed at an annotated scene element. In terms of selectable zones, the cursor coordinate lies within a selectable zone or it doesn't. If it does, the cursor control module 108 can cause a “selectable” cursor to be displayed at the aim point. Otherwise, a different cursor can be displayed.
  • the cursor control module 108 can communicate with a media device 104 , which in turn can communicate with the display device 121 .
  • the media device 104 also receives data from the annotated video data 102 and then sends data as, for example, synchronization data 106 to the cursor control module 108 .
  • the pointer detector 114 can transmit, for example, pointer aim point and/or selection aimpoint information 112 to the cursor control module 108 .
  • annotation selection data 110 can be provided to the cursor control module 108 and also to one or more element identifiers, such as, for example, the element identifier 116 , which in turn can send information/data to a video annotation infrastructure.
  • the HHD 120 can capture an image of the scene (e.g., video programming) as it is being rendered (displayed) on the display device 121 .
  • the image can be captured with a digital camera commonly found integrated in HHDs.
  • the captured image can then be used by the HHD to determine the identity of the video programming and obtain additional information about it by communicating with a remote server over a data communication network using wireless communication capabilities also commonly found in HHDs. Additional teachings of video capture, programming identification, and access to related data are further discussed with respect to FIG. 4 below.
  • FIG. 2 illustrates two cursors 125 and 127 being displayed over selectable and non-selectable screen areas, in accordance with the disclosed embodiments.
  • the screen area occupied by the “Bletch Cola” image 123 is within a selectable zone and a solid ring cursor 127 is shown. Other areas are not inside selectable zones and an empty ring (i.e., cursor 125 ) is shown.
  • Any easily distinguished set of cursors can be utilized because the reason for changing the cursor is to alert a user, such as user 122 that the screen element is selectable.
  • aiming at a selectable scene element can result in a “selectable” cursor being displayed.
  • the selectable element can be selected by actuating a trigger or button on the pointing device.
  • Other possibilities include letting the cursor linger over the selectable element, awaiting a “select this” query to pop up and pointing at that, or performing a gesture with the pointing device/HHD 120 .
  • the annotation data can include additional data associated with the selectable areas or selectable zones such as element identifiers, cursor identifiers, cursor bitmaps, and zone executable code.
  • the zone executable code can be triggered when the cursor enters a selectable zone, leaves a selectable zone, lingers in a selectable zone, or when a selection type event occurs when the cursor coordinate is inside a selectable zone.
  • the zone executable code can augment or replace default code that would otherwise be executed when the aforementioned events occur.
  • the default code for a selection event can cause an element identifier to be embedded in an element query that is then sent to a portion of the video annotation infrastructure.
  • the zone executable code can augment the default code by sending tracking information to a market research firm, playing an audible sound, or some other action.
  • the additional data allows for customization based on the specific selectable zone.
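  • One possible shape for the zone executable code described above is sketched below; the hook names (on_enter, on_exit, on_linger, on_select), the Zone class, and the default_select handler are hypothetical, and the sketch only shows how zone-specific code could augment or replace a default selection action:

      def default_select(element_id, send_query):
          # Default behavior: embed the element identifier in an element query
          # and send it toward the video annotation infrastructure.
          send_query({"element_id": element_id})

      class Zone:
          def __init__(self, element_id, on_enter=None, on_exit=None,
                       on_linger=None, on_select=None, replace_default=False):
              self.element_id = element_id
              self.hooks = {"enter": on_enter, "exit": on_exit,
                            "linger": on_linger, "select": on_select}
              self.replace_default = replace_default

          def fire(self, event, send_query):
              if event == "select" and not self.replace_default:
                  default_select(self.element_id, send_query)  # default code
              hook = self.hooks.get(event)
              if hook:
                  hook(self.element_id)                        # zone executable code

      # Example: augment the default selection by also reporting tracking
      # information to a (hypothetical) market research endpoint.
      zone = Zone("bletch-cola", on_select=lambda eid: print("track:", eid))
      zone.fire("select", send_query=lambda q: print("query:", q))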
  • FIG. 3 illustrates a system 300 that includes an HHD 120 and a multimedia display 126 , in accordance with the disclosed embodiments.
  • System 300 of FIG. 3 is similar to system 100 of FIG. 1 with some notable exceptions.
  • System 100 of FIG. 1 includes “annotated video data” (i.e., annotated video data 102 ) wherein the annotation selection data 110 is distributed with the video data.
  • the annotation selection data 110 is obtained from some other source.
  • the media device 104 still provides the synchronization data 106 used for determining the time datum for the cursor coordinate.
  • a passive pointing device such as a hand can be utilized.
  • a device such as, for example, the Microsoft Kinect® can determine an aim point, selection gesture, and other actions by analyzing the user's stance, posture, body position, or movements.
  • HHD 120 may be, as indicated earlier, a Smartphone, game control device, tablet or pad computing device, and so forth. It can be appreciated, for example, the HHD 120 and the pointer detector 113 may be, for example, the same device, or associated with one another.
  • FIG. 3 illustrates an element identification module 115 examining the cursor coordinate and annotation data to determine an element identifier 116 .
  • the element identifier and some user preferences 140 can be combined to form a scene element query 142 .
  • the user preferences can include the user's privacy requests, location data, data from a previous query, or other information. Note that the privacy request can be governed by the law. For example, the US has statutes governing children's privacy and many nations have statutes governing the use and retention of anyone's private data.
  • the user's element query can pass to a query router 144 .
  • the query router 144 can send the query to a number of servers based on the element identifier 116 (e.g., element ID), the user's preferences, etc. Examples include: local servers that provide information tailored to the user's locale; product servers that provide information directly from manufacturers, distributors, or marketers; ad servers, wherein specific queries are routed to certain servers in return for remuneration; and social network servers that share the query or indicated interest amongst the user's friends or a select subset of those friends.
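  • A minimal sketch of such a query router follows; the routing table, server names, and preference keys are invented purely for illustration:

      # Map element identifiers to registered servers (illustrative values only).
      ROUTES = {"bletch-cola": ["product.bletch.example", "ads.example"]}

      def route_query(element_id, preferences):
          servers = list(ROUTES.get(element_id, ["default-info.example"]))
          if preferences.get("share_with_friends"):
              servers.append("social.example")  # social network server
          if preferences.get("locale"):
              servers.append("local." + preferences["locale"] + ".example")  # local server
          query = {"element_id": element_id, **preferences}
          return [(server, query) for server in servers]

      print(route_query("bletch-cola", {"locale": "us-nm", "share_with_friends": True}))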
  • the query router 144 can route data to, for example, one or more scene element databases 148 , which in turn provide information to an information display module 150 .
  • the information display module 150 can assemble the query responses and display them to a user, such as user 122 .
  • FIG. 3 also illustrates a secondary multimedia display 129 on which query results can be presented. One response is an advertisement, another is an offer for a coupon, and a third is an offer for online purchase.
  • the secondary display 129 can be, for example, a cell phone, computer, tablet, television, or other web-connected device.
  • FIG. 4 illustrates a system 400 that includes a user 122 accessing video information, video annotation infrastructure and/or video programming identification with a HHD 120 having a video camera and adapted to capture a photographic image of video programming being displayed on the display screen 121 , and/or alternatively video annotation infrastructure through an augmented reality device 161 , in accordance with the disclosed embodiments.
  • the augmented reality (AR) device 161 can include a local display that the user 122 can observe, a video camera, and other devices for additional functionality.
  • the display is a movie screen and the user 122 is one of many people watching a movie in a theater.
  • the multimedia display device 121 in this example can include a movie screen and projector.
  • the movie screen can include markers that help the AR device 161 determine the screen location and distance.
  • the user 122 can watch the movie through the AR device 161 with the AR device 161 introducing a cursor into the user's view of the movie.
  • the AR device 161 can be one having a forward pointing camera and a display presenting what is in front of the AR device 161 to the user 122 .
  • users can view the world through their own eyes or can view an augmented version by viewing the world through the device.
  • the user 122 is aiming the camera at the display and is, oddly, watching the display on a secondary display 129 almost as if the AR device 161 is a transparent window with the exception that the secondary display 129 can also present annotation data, can zoom in or out, and can overlay various other visual elements within the user's view.
  • the AR device 161 can thus include a number of modules and features, such as, for example, a scene alignment module 160 , pointer aimpoint and/or selection aimpoint 162 , and a cursor control module 164 .
  • Other possible modules and or features can include a synchronization module 166 , and an element identification module 170 , along with an annotation data receiver 172 , annotation selection data 174 , and a user authentication module 178 along with user preferences 180 .
  • other components can include an element identifier 176 , a user's scene element query 182 , a query router 184 , a local scene element database 186 , and an information overlay module 188 .
  • the display 129 can be, in some examples, a local display.
  • the information overlay module can also access, for example, data from a remote scene element database 190 .
  • the theater can transmit synchronization data 168 , for example, as a service to its customers, perhaps as a paid service or premium service.
  • the AR device 161 can include a synchronization module 166 that obtains synchronization data 168 from some other source, perhaps by downloading video fingerprinting data or by processing signals embedded within the movie itself.
  • the scene alignment module 160 can locate the display position from one or more of the markers 153 , 155 , and 157 , by simply observing the screen edges, or some other means.
  • the display position can be used to map points on the local display, such as the cursor, to points on the main display (movie screen). Combining the cursor position and the synchronization data gives the cursor coordinate.
  • Annotation selection data can be downloaded into the AR device 161 as the movie displays, ahead of time, or on-demand. Examining the annotation selection data for the cursor coordinate can yield the element identifier 176 .
  • the user authentication module 178 is depicted within the AR device 161 .
  • any of the disclosed pointing scenarios can include the use of an authentication module such as user authentication module 178 .
  • the passive pointer 113 of FIG. 2 can include authentication by recognizing the user's face, movements, retina, or pre-selected authentication gesture. User authentication is useful because it provides for tying a particular person to a particular purchase, query, or action.
  • the query router 144 can send queries to various data servers perhaps including a local data store within the AR device 161 itself.
  • the query results are gathered and passed to an overlay module for presentment to the user 122 via the HHD 120 and/or via the multimedia display 126 and/or the multimedia display 129 .
  • the local display 129 on the AR device 161 shows the cursor 127 and the query results overlaid on top of the displayed image of the movie screen.
  • a movie is used as an example, but the display the user is viewing through the AR device 161 can be any display such as a television in a home, restaurant, or other social venue.
  • An HHD 120 including a camera can be used instead of an augmented reality device (although the AR device features and functions could be carried out by a smartphone) to capture an image of the scene being displayed on the display device 121 for the purpose of identifying the scene and obtaining additional information related to the scene, such as elements of the scene described above.
  • the HHD 120 can communicate with a remote server over a data communication network, provide the captured image of the scene (video programming displayed on the multimedia display 121 ) where the scene can be identified by matching the captured image to images of video programming stored in a database, e.g., remote scene element database 190 , associated with the remote server.
  • the server can then notify the HHD 120 of the availability of data related to the scene (e.g., information regarding scene elements) if the server can match the scene and identify it.
  • Scene elements can be selected on a touch-sensitive display screen commonly included with HHDs by selection of the area or element of interest within the scene, which can then enable the server to provide additional information about the selected area/scene element.
  • the server and database can be provided in the form of a service where registered users can determine the identity of a scene (e.g., a captured image of live or recorded video programming) being displayed on any screen utilizing a captured image to match, identify, and provide data about or related to the scene.
  • Databases would require continuous updating in the event programs are live. Live scenes (e.g., live football or baseball games) would not likely be tagged or would have limited scene selection capabilities, whereas recorded programs (e.g., movies) would be updated as scene tagging is updated. In either case, additional data can be collected and provided to users. Additionally, advertising content can be provided before, during, or after the provision of scene-related data, thereby enabling revenue generation should the service be free to end users.
  • An application (“App”) supporting this aspect of the embodiments can be downloaded from a server supporting the HHD (e.g., IOS, Android stores).
  • FIG. 5 illustrates an annotated group video 500 of a user's social contacts, in accordance with the disclosed embodiments.
  • Automated facial recognition applications already exist for annotating still images; this video version combines the selectable areas of single images into the selectable zones of annotated video.
  • selection of a scene element equates to selecting a person.
  • the selection event can trigger bringing up the person's contact information, personal notes, and can even initiate contact through telephone, video chat, text message, or some other means. Additionally, lingering the cursor (or a sensed finger over a touch sensitive screen) over a face can bring up that person's information.
  • crowd videos or images can be artificially created.
  • an image of a friend can be isolated within a picture or video to create content with only that one person.
  • the content from a set of friends or contacts can be combined to form a virtual crowd scene.
  • Examples of a “not-connected” or “pending” indicator can include a dimmed or greyed window, a thin border frame, and an overlaid pattern such as the “connection pending” text shown or a connection pending icon.
  • An established connection can be indicated by a different means such as the thick border frame shown in FIG. 5 .
  • FIG. 6 illustrates a system 600 , which includes differences between cursor control and element identification, in accordance with the disclosed embodiments.
  • System 600 includes, for example, a pointer detection module 197 which can send data to cursor control module 108 , which in turn can provide data 192 indicative of cursor style and location, which can be provided to, for example, a display device such as display device 121 or even secondary displays such as multimedia display 129 .
  • Annotation selection data 110 can also be provided to the element identification module 115 .
  • the pointer detection module 197 can determine a cursor coordinate.
  • a cursor coordinate can include both a location on a display screen and a time stamp for synchronizing the time and position of the cursor with the time varying video image.
  • the annotation data 196 can include selection zones such that the cursor control module determines if the cursor coordinate lies within a selection zone and determines the cursor style and location to present on the display device. Note that this example assumes that only default “selectable” and “not-selectable” styles are available.
  • Element identification requires slightly more information and the figure indicates this by showing “annotation selection data” that can be the selection data augmented with element identifiers associated with each selectable zone.
  • the element identification module receives data from a selection event that includes the time stamp and the cursor display location at the moment the selection was made. Recall that a selection can be made with an action as simple as clicking a button on a pointing device.
  • the element identification module 115 can examine the annotation data 196 to determine which, if any, selectable zone was chosen and then determines the element Id associated with that selection zone. The element ID can then be employed for formulating a scene element query 194 .
  • for example, a screen can be 800 pixels wide and 600 pixels tall, and cursor locations on it can be mapped into a normalized coordinate space so that the same selection zones apply regardless of the actual screen resolution.
  • This normalization example is intended to be non-limiting because a wide variety of normalization mappings and schemes are functionally equivalent.
  • windowed display environments wherein a graphical user interface can present a number of display windows on a display device.
  • Windowed display environments generally include directives for mapping a cursor location on a display into a cursor location in a display window. If a video is played within a display window then the cursor location or aim point must be mapped into “window coordinates” before the cursor coordinate can be determined.
  • different window sizes can be compensated for by mapping all displays to a normalized size as discussed above and similarly maintaining the selection zones in that normalized coordinate space.
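  • The normalization and window mapping discussed above can be sketched as follows, assuming a normalized coordinate space running from 0 to 1 in each axis; the function names are illustrative:

      def normalize(x_px, y_px, width_px, height_px):
          # Map a raw cursor location on an e.g. 800x600 screen into 0..1 space,
          # so that selection zones defined in normalized coordinates apply to
          # any screen resolution.
          return x_px / width_px, y_px / height_px

      def window_to_normalized(x_px, y_px, win_left, win_top, win_width, win_height):
          # First translate the screen cursor into window coordinates, then
          # normalize against the window's own size.
          return (x_px - win_left) / win_width, (y_px - win_top) / win_height

      print(normalize(400, 300, 800, 600))                       # (0.5, 0.5)
      print(window_to_normalized(500, 400, 100, 100, 640, 360))  # cursor in a video window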
  • FIG. 7 illustrates a system 700 that depicts additional processes and methods of cursor selection, in accordance with the disclosed embodiments.
  • annotation data can include, for example, a cursor image, template, and/or executable code data.
  • Such information can be provided to a pointer detection module 204 , which in turn can generate cursor style data 212 , including, for example, cursor style and location information 210 , which in turn is transmitted to a display device, such as, for example, multimedia display device 121 and/or other multimedia display devices.
  • the pointer detection module 204 also provides for cursor position data and/or selection data, as shown at block 206 .
  • Such data can be provided to, for example, a cursor style selection module 214 , which can communicate with and retrieve data from, for example, a cursor style database 218 .
  • a cursor style database 218 may be, for example, a remote database retrieved over, for example, a data network such as the Internet.
  • the data shown at block 206 can also be provided to an element identification module 208 , which in turn can generate a scene element query. Note that FIG. 7 is about having different on screen cursors based on what selectable element is under the cursor. Content developers can choose, for example, the cursor and an updateable database allows selection to be altered over time.
  • Cursor styles and actions can be dependent on the selectable or non-selectable zone containing the cursor.
  • a cursor style is the cursor's appearance. Most users are currently familiar with cursor styles such as arrows, insertion points, hands with a pointing finger, Xes, crossed lines, dots, bulls eyes, and others.
  • An example of an insertion point is the blinking vertical bar often used within text editors. The blinking insertion point can be created in a variety of ways including an animated GIF, or executable computer code.
  • An animated GIF is a series of images that are displayed sequentially one after the other in a continuous loop. The blinking cursor example can be made with two pictures, one with a vertical bar and the next without.
  • Another property of cursors is that they often have transparent pixels as well as displayed pixels. That is why users can often see part of what is displayed under the cursor.
  • the insertion point can alternatively be created with a small piece of executable code that draws the vertical bar and then erases it in a continuous loop.
  • the specific cursor to be displayed within a selectable zone can be stored within the annotation data, can be stored within a media device or display device, or can be obtained from a remote server.
  • the cursor enters a selectable zone and thereby triggers a cursor style selection module to query a cursor style database to obtain the cursor to be displayed.
  • the annotation data can contain a directive that the cursor style can be obtained from a local database.
  • the local database can return a directive that the cursor can be obtained from a remote database.
  • the chain of requests can continue until eventually an actual displayable cursor is returned or until an error occurs such as a bad database address, a timeout, or another error. On error, a default cursor can be used. Cursor styles that are successfully fetched can be locally cached.
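  • The chain of cursor-style requests described above might look like the following sketch; the directive prefixes ("cursor:", "local:", "remote:") and the dictionary-backed databases are assumptions made for illustration, with a default cursor used on error and a local cache for successful fetches:

      DEFAULT_CURSOR = "arrow"
      _cursor_cache = {}

      def resolve_cursor(directive, local_db, remote_db, max_hops=4):
          if directive in _cursor_cache:
              return _cursor_cache[directive]
          current, hops = directive, 0
          try:
              while hops < max_hops:
                  if current.startswith("cursor:"):
                      # An actual displayable cursor was returned; cache it.
                      _cursor_cache[directive] = current[len("cursor:"):]
                      return _cursor_cache[directive]
                  if current.startswith("local:"):
                      current = local_db[current[len("local:"):]]
                  elif current.startswith("remote:"):
                      current = remote_db[current[len("remote:"):]]  # could be a network fetch
                  else:
                      break
                  hops += 1
          except KeyError:
              pass  # bad database address, timeout, or similar error
          return DEFAULT_CURSOR

      local_db = {"bletch": "remote:bletch-style"}
      remote_db = {"bletch-style": "cursor:solid-ring"}
      print(resolve_cursor("local:bletch", local_db, remote_db))  # "solid-ring"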
  • the infrastructure for obtaining a cursor style has many components that are identical or similar to those for obtaining element data.
  • moving the cursor into a selectable area results in a “cursor entry” event that triggers the various actions required for changing the cursor appearance.
  • a similar event, “cursor exited” can be triggered when the cursor leaves the selectable zone. Notice that with a simple image display these events only occur when the cursor moves into and out of a selectable area.
  • Video data having a time axis as well as the other axes, can have selectable zones that appear, move, and disappear as the video is displayed. As such, the cursor entered and cursor exited events can occur without cursor movement.
  • Another event is the “hover event” that can occur when the cursor remains within a selectable zone for a set period of time.
  • a hover event can cause a change within or near the selectable zone.
  • One example is that a small descriptive text box appears.
  • FIG. 8 illustrates a graphical view of a system 800 that includes a multimedia display 128 with an annotated element 220 , in accordance with the disclosed embodiments.
  • FIG. 8 thus depicts a change within the selectable area wherein a finger is detected within the selectable zone associated with a can of Bletch Cola.
  • the video can be streamed to the user's display device or the user can be using a device such as that shown in FIG. 4 .
  • an event such as the cursor entered event is triggered and as a result the Bletch Cola is highlighted as shown in the figure.
  • the “Get Some” text box can be displayed in response to a “hover” type event that is triggered when the finger remains within the selectable zone for more than a preset time period.
  • a selectable zone can be highlighted in a variety of ways such as by brightening all the individual pixels (e.g. increasing the RGB color values), brightening pixels having certain values or properties, tinting pixels, or passing the pixels through an image filter.
  • Image processing software such as Photoshop and the GIMP has a wide variety of image filters that can be applied to standard images. These filters can be applied to video data by applying them to each video frame.
  • the selectable zone can define a filter mask for each frame so that the image filtering operations appear within the selectable zone and can even appear to move with the selectable zone.
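  • A simple sketch of per-frame highlighting with a filter mask, assuming frames are held as NumPy RGB arrays; only pixels inside the mask are brightened, so the highlight follows the selectable zone from frame to frame:

      import numpy as np

      def highlight_zone(frame_rgb: np.ndarray, mask: np.ndarray, gain: float = 1.3) -> np.ndarray:
          # Brighten RGB values only where the per-frame zone mask is True.
          out = frame_rgb.astype(np.float32)
          out[mask] = np.clip(out[mask] * gain, 0, 255)
          return out.astype(np.uint8)

      frame = np.full((600, 800, 3), 100, dtype=np.uint8)  # flat grey test frame
      mask = np.zeros((600, 800), dtype=bool)
      mask[200:400, 300:500] = True                        # zone's bounding box in this frame
      highlighted = highlight_zone(frame, mask)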
  • FIG. 9 illustrates a system 810 utilizing annotated video in a setting having multiple independent users viewing a single display such as a movie screen, in accordance with the disclosed embodiments.
  • System 810 generally includes, for example, a movie theater infrastructure 240 and a movie screen display 242 .
  • Movie video data 246 and/or movie annotation data 248 , and/or movie permission data 250 (e.g., can be limited to certain geographic areas and to certain time periods in some embodiments) can be provided to, for example, an HHD 120 , which may be, for example, a Smartphone, a user's AR device, a pad computing device, etc.
  • Coded ticket stubs (e.g., bearing a QR code) can likewise be used to convey permission data to the HHD 120 .
  • an electronic ticket can have additional permissions data, as indicated at block 254 , which also may be provided to the HHD 120 .
  • the venue can display video on a large display while also providing movie annotation data to people watching the video through augmented reality devices as discussed above in reference to FIG. 3 .
  • the venue can also stream video data or annotated video data so that a user with a cell phone type device can receive and view the large display or can view an augmented but smaller version on the cell phone.
  • One of the issues that arise whenever users have cameras within any venue is piracy. For example, people can use augmented reality devices or cell phones to record a movie as they watch it.
  • Recording can be controlled by transmitting permission data to the user's device so that the user can record the movie and/or watch the recording only when located within a certain space or within a certain time period. Examples include allowing the recording to be viewed only within the theater or on the same day as the user viewed a movie. GPS or cell tower triangulation type techniques are often used to determine location.
  • the user's ticket to the venue can contain further permission information so that only paying customers can make recordings.
  • the user can purchase or otherwise obtain additional permissions to expand the zones or times for permissibly viewing the recording.
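  • The kind of permission check described above could be sketched as follows, assuming the permission data carries a venue location, an allowed radius, and a viewing time window; the field names are illustrative:

      from datetime import datetime
      from math import radians, sin, cos, asin, sqrt

      def _distance_km(lat1, lon1, lat2, lon2):
          # Haversine great-circle distance between two lat/lon points.
          dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
          a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
          return 2 * 6371 * asin(sqrt(a))

      def playback_allowed(permission, device_lat, device_lon, now: datetime) -> bool:
          in_zone = _distance_km(permission["venue_lat"], permission["venue_lon"],
                                 device_lat, device_lon) <= permission["radius_km"]
          in_window = permission["valid_from"] <= now <= permission["valid_until"]
          return in_zone and in_window

      perm = {"venue_lat": 35.08, "venue_lon": -106.65, "radius_km": 0.2,
              "valid_from": datetime(2016, 6, 1, 18, 0),
              "valid_until": datetime(2016, 6, 1, 23, 59)}
      print(playback_allowed(perm, 35.081, -106.651, datetime(2016, 6, 1, 21, 30)))  # True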
  • FIG. 10 illustrates a system 820 similar to that of system 810 of FIG. 9 , but adapted for sports venues, music venues, and theatrical events, and other events (e.g., political conventions, trade shows, etc.) in accordance with the disclosed embodiments.
  • System 820 can include, for example, an annotation infrastructure 260 , a sports venue infrastructure 262 , one or more video cameras or other video recording devices and systems 264 with respect to a field of play shown at block 266 .
  • Video data 268 , annotation data 270 and permissions data 272 can be transmitted from a sports venue infrastructure 262 to, for example, an HHD 120 (e.g., user's Smartphone, pad computing device, AR device, etc.).
  • HHD 120 e.g., user's Smartphone, pad computing device, AR device, etc.
  • the venue displays the action on a big screen and the user records that.
  • the venue streams camera data from numerous cameras and the user chooses one or more camera views.
  • users can point their own cameras (such as the AR device's front facing camera) directly at the field of play. Live venues can annotate video data in near real time to thereby provide users with a full FlickIntel experience.
  • FIG. 11 illustrates a system 830 , in accordance with the disclosed embodiments.
  • System 830 includes a number of potential features and modules, such as for example, finger printing, as indicated at block 280 .
  • Video data 120 can be provided to a fingerprinting module 280 and/or a multimedia display device such as, for example, display device 121 .
  • a data acquisition device 290 (e.g., a microphone, camera, etc.) can also be included in system 830 .
  • An identification service 282 can, for example, respond to the synchronization data request 288 and provide data 284 (e.g., media ID, frame/scene estimate, synchronization refinement data).
  • a synchronization module 286 can receive data 284 and data indicative of a synchronization data request and can transmit synchronization data 298 which in turn provides selection data 300 and user language selection 310 .
  • a user request 302 based on selection data 300 can be transmitted to an element identification service 304 , which in turn generates element information and other data 308 (e.g., element app, site, page, profile or tag), which can be provided via messaging 318 (e.g., user email or other messaging).
  • the element identification service 304 also provides for user language selection 310 , which in turn can be provided to a language translation module 292 , which in turn can provide for digital voice translations 294 which are provided to a user audio device 296 .
  • the element identification service can also generate an element offer/coupon 312 which can be provided to a user electronic wallet 314 and a printing service 316 .
  • the element offer/coupon 312 can also be rendered via a user selected display or device 320 .
  • System 830 of FIG. 11 includes aspects of a fingerprinting service.
  • Video data including the audio channel, can be fingerprinted by analyzing it for certain identifying features.
  • Video can be fingerprinted by simplifying the image data by, for example, converting color to grey scale having at least one bit per pixel to thereby greatly reduce the amount of data per video frame.
  • the frame data can also be reduced by transform techniques.
  • one of the current video image compression techniques is based on the discrete cosine transform (DCT).
  • DCT coefficients have also been used in pattern recognition. As such, compressed video already contains DCT descriptors for each scene and those descriptors can be used as fingerprints.
  • video is a sequence of images and thereby presents a timed sequence of descriptors. It is therefore seen that current video compression technology provides one fingerprinting solution.
  • the compression type algorithms can be applied to reduced data such as the grey scaled video or lower resolution video to provide fingerprints that are quicker to produce and match.
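  • A simplified fingerprint in the spirit of the reduction described above can be sketched with NumPy: grey-scale a frame, shrink it to an 8×8 grid, and keep roughly one bit per cell by thresholding at the median, yielding a 64-bit value that can be compared to stored fingerprints with a Hamming distance. (A DCT-based descriptor, as noted above, could be substituted.) The function names are illustrative:

      import numpy as np

      def frame_fingerprint(frame_rgb: np.ndarray, size: int = 8) -> int:
          grey = frame_rgb.mean(axis=2)                   # color to grey scale
          h, w = grey.shape
          blocks = grey[:h - h % size, :w - w % size] \
              .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
          bits = (blocks > np.median(blocks)).flatten()   # about one bit per block
          return int("".join("1" if b else "0" for b in bits), 2)

      def hamming(a: int, b: int) -> int:
          return bin(a ^ b).count("1")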
  • a display device or media device can calculate fingerprints for the video data being presented to a user.
  • a camera such as that of an AR device, can record the displayed video data and fingerprints be calculated from the recording.
  • the fingerprint calculations can be calculated in near real time and submitted to an identification service.
  • the identification service comprises a massive search engine containing fingerprint data for numerous known videos.
  • the identification service accepts the fingerprints for the unknown video, matches them to a known video, and returns synchronization data.
  • the search engine can determine a number of candidates to thereby reduce search size and can continue to receive and process fingerprints of the unknown video until only one candidate remains. Note that two of the candidates can be the same movie at different times.
  • the fingerprint data can be calculated from a single video frame, a sequence of video frames, a portion of the soundtrack, or a combination of these or other data from the video.
  • FIG. 12 illustrates a system 840 depicting a Sync Data Request 359 that can be sent to an identification service, in accordance with an embodiment.
  • a Video Visual and/or Sound Data field 358 can include fingerprint data for the unknown video that is to be identified. Many of the illustrated elements can be used to reduce the size of the identification service's search.
  • the User Id 332 identifies the user such that the user's viewing habits can be tracked. A fan of a certain weekly show is likely to be watching that show at a specific time every week.
  • a Device ID 330 has similar utility. Location and time information can be used to limit an initial search to whatever is being transmitted at that location and time.
  • Media History 340 can be used to target the initial search at one or more shows or genres.
  • Service type 338 can indicate satellite, over-the-air, cable, or streamed service, which when combined with location and time (i.e. block 336 labeled “Location and Time”) greatly limits the search. Service, location, time, and channel in combination are almost certain to limit the search to a single show.
  • User authentication data 350 can be used to verify the user, to limit the identification service to only authorized or subscribing users, and even to tie criminal or unlawful acts, such as piracy or underage viewing, to a specific viewer.
  • the request ID 334 is typically used to identify a matching response.
  • Display Device Id 342 and Address 344 can be used favorably when the display device is a movie screen or similar device at a public venue because many requests from the same venue can be combined and because the identification service might already know what is being displayed by the display device.
  • User preferences 356 can also be designated. Note that device id 330 and the display device id 342 may be unique because the device id 330 can refer to, for example, an AR device or to a device (e.g. Smartphone) with a display slaved to a primary display that has the display device id.
  • Sync response history 356 can indicate recent response from the identification service because it is likely that a user has continued watching the same video. Synch test results 352 can also be provided.
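  • For illustration only, a sync data request along the lines of FIG. 12 might be assembled as a simple structure such as the one below; the field names loosely mirror the figure and every value is made up:

      sync_data_request = {
          "request_id": "req-0001",
          "device_id": "ar-device-17",
          "display_device_id": "theater-screen-3",
          "user_id": "user-42",
          "user_authentication": "<placeholder-token>",
          "location_and_time": {"lat": 35.08, "lon": -106.65,
                                "utc": "2016-06-01T20:15:00Z"},
          "service_type": "cable",          # satellite / over-the-air / cable / streamed
          "channel": 7,
          "media_history": ["weekly-show-x"],
          "sync_response_history": ["resp-0097"],
          "user_preferences": {"language": "es"},
          "video_visual_or_sound_data": [0x9A3F0C21],  # fingerprint(s) of the unknown video
      }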
  • FIG. 13 illustrates a system 850 that includes a Sync Data Response 411 that an identification service can send in response to a sync data request, in accordance with an embodiment.
  • a response Id 402 can be included and used later in a request's sync response history, such as the sync response history 356 shown in FIG. 12 .
  • a request Id 404 can refer to a sync data request for which a sync data response is being sent. Note that a single request can result in numerous responses if the system is configured to process numerous responses.
  • a piracy flag 408 can indicate that the video has been identified and appears to be pirated, stolen, or used in violation of copyright based on the User Authentication, User Id, or other data.
  • a parental block flag 410 can indicate that the video has been identified and that it appears that the user shouldn't be viewing it based on the user's age or on permissions or policies set for the user by the user's parent, guardian, or overseer.
  • the ‘Num Candidates’ field 406 indicates the number of candidate videos being indicated in the data sync response.
  • the identification field 360 indicates that the video has been positively matched.
  • a Media Id 362 and a timestamp are sufficient for identifying a frame in a video. Therefore, the identification field 360 can include nothing more than a media Id 362 and a sync estimate 364 .
  • the sync estimate 364 can be a timestamp or functionally equivalent information that can specify a frame in the identified media. The time difference between sending the request and receiving the response can be indeterminate and significant enough that the sync estimate is not precise enough and the user experience feels delayed, out of sync, jittery, or unresponsive.
  • Test data 368 and a test algorithm 366 can be used to refine the sync estimate.
  • the test data 368 can include the sync estimate and/or some of the fingerprinting data from the identification service 360 .
  • the test algorithm field 366 can specify which test algorithm to use or can include a test algorithm in the form of executable code. In any case, certain embodiments can have a synchronization module that can use a test algorithm on the test data and the video visual or sound data to produce or refine a sync estimate.
  • a sync data field 370 is also shown.
  • the identification service can submit one or more candidates when the video has not been positively identified.
  • Candidate field 380 is thus shown in FIG. 13 and refers to a first candidate, but additional candidate fields can be provided for other candidates.
  • Candidate field 380 thus includes media id 372 , sync estimate 374 , test algorithm 376 , test data 378 , and a test sync field 380 .
  • the candidate field 380 can contain data very similar to that in an identification field 360 such that all of the candidates can be tested to determine which, if any, of the candidates are the unidentified video. In many cases a threshold test, a best match, a likelihood test, some other test or some combination of tests can be used to determine when the video has been positively identified.
  • One interesting case is when a scene has been positively identified but the scene is included in multiple videos. This case can occur when stock footage is used and when different versions exist such as a director's cut and a theatrical release.
  • candidates can be submitted to a synchronization module but with sync estimates that are in the future.
  • the synchronization module can test the candidates anytime the needed fingerprint data from the unidentified video is available. In some cases, such as when a video stream is being viewed as it is downloaded, the needed fingerprint data will become available only after the time offset of the future sync estimate is reached. In other cases, the entire video is available such as when it is on a disk or already downloaded. In these cases the candidate having a sync estimate in the future can be tested immediately because all the needed data is available.
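  • Testing the returned candidates can be sketched as follows, assuming each candidate carries a media id, a sync estimate, and test data consisting of fingerprints near that estimate; a plain threshold test is used here, though a best-match or likelihood test could be substituted as noted above, and the helper names are assumptions:

      def _hamming(a: int, b: int) -> int:
          return bin(a ^ b).count("1")

      def test_candidates(candidates, observed_fingerprints, threshold=5):
          # candidates: dicts with "media_id", "sync_estimate", and "test_data"
          # observed_fingerprints: {offset: [fingerprint, ...]} from the unknown video
          matches = []
          for cand in candidates:
              observed = observed_fingerprints.get(cand["sync_estimate"])
              if observed and all(_hamming(e, o) <= threshold
                                  for e, o in zip(cand["test_data"], observed)):
                  matches.append(cand["media_id"])
          return matches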
  • a sync data request can be submitted without any fingerprinting data (video visual and/or sound data).
  • the other data in the request can contain sufficient information to limit the number of likely candidates. For example, the time, location, service type, and channel can almost certainly be used to fully identify a video and a sync estimate when the video starts on time and proceeds at a predictable rate. Candidates can be returned to further narrow the search and the sync test results field contains data that helps guide the search engine.
  • the sync test results field 352 shown in FIG. 12 can also indicate when the identified video or the candidates do not match the unknown video.
  • the synchronization module 286 can produce synchronization data 298 that can be used to keep the selection zones synchronized with the video and that can be used to synchronize alternative language tracks with the video.
  • Videos are often streamed or distributed with only one or a select few languages.
  • the video has a few audio tracks with different audio tracks having dialog in different languages. Dialog in additional languages can be distributed separately and can provide a good user experience if it is synchronized with the video.
  • Multiple audio tracks can be mixed by, at, or near the user audio device with different tracks having different content. For example, some tracks can include background or other sounds that are not dialog and some other tracks can be dialog in various languages. Mixing data can specify relative sound levels and other details necessary for mixing the tracks.
  • the term “track” refers generically to electronically encoded audio that can include numerous audio channels for surround sound or stereo sound, with some tracks distributed with the video.
  • video, which includes some audio tracks, can be distributed on a data disk, a radio frequency transmission, satellite download, computer data file, or data stream. Additional audio tracks can be distributed separately via a data disk, a radio frequency transmission, satellite download, computer data file, or data stream.
  • the synchronization data 298 can be used to ensure that the additional tracks are played, perhaps after mixing with other tracks, in synchronization with the displayed video.
  • a user can therefore select a video to watch and a preferred language.
  • a system enabled according to one or more embodiments herein can then obtain dialog in the desired language, mix the sound tracks, and present the video to the user. The user will then hear dialog in the desired language.
  • Certain embodiments can also include optionally displayed subtitles that can be rendered into video and overlaid on the video in synchronization with the video.
  • a further aspect is that a user can choose to have different actors speak in different languages. The different actors can each have audio tracks in different languages that can be mixed to provide a multilingual auditory experience.
  • the synchronization data 298 can also be used to keep the selection data synced up with the video.
  • Selection data can be distributed separately from the video just as additional audio tracks can be. This is particularly true when the distributed video predates an enabled system or for some other reason does not already include selection data, annotation data, or synchronization data. Recall that it is the selection data that can specify the selection zones.
  • a selection zone can be identified from a screen coordinate and synchronization data that specifies the video or media being displayed as well as a time, frame, or likely time period.
  • the selection data can reside in either a local data store or a remote data store.
  • a user request can specify what the user wants. Recalling the local display 129 of FIG. 4 , a first user request brought up options for “Bletch Cola” that included “fetch” and “order online”. The options can be displayed to the user in a number of ways including as graphical overlays on a display, local display, or AR display. The graphical overlays are essentially screen elements with their own selection zones that are created in response to the first user request.
  • a second user request, for example a selection of “order online”, can bring up a new set of options and selection zones for ordering some cola from an online merchant. Another possibility is that the user selects the “$1@Marty's Grocer” option to obtain an electronic coupon or printable coupon for use at a merchant location. It is important to note that the various selection zones can be dynamically created and dismissed (perhaps by selecting an area outside the selection zones) and that the selection data for the dynamically created selection zones can be distributed in a wide variety of ways and can be accessed locally or remotely.
  • broadcasters often overlay advertising or information onto displayed video. Examples include station identifiers and graphics of varying levels of intrusiveness advertising television shows at other times. These can also be examples of dynamically created screen elements with selection zones. User action may not have caused these screen elements to appear, but those elements can be selected. The dynamically created screen elements can obscure, occlude, or otherwise overlay other selectable screen elements.
  • the selectable elements and selection zones associated with a video overlay should usually take precedence such that a user selecting an overlaying element does not receive data or a response based on an occluded or not-currently-visible screen element.
  • a user request can imply an element Id by specifying a selection zone or can explicitly include the element Id.
  • element Ids can have ambiguities.
  • An element identification service can resolve ambiguities. For example, two different videos can use the same element Id.
  • An element identification service can use a media Id and an element Id to identify a specific thing.
  • the element identification service can amend the user request to resolve any ambiguities and can send the user request to another service, server, or system to fulfill the user request.
  • the illustrated examples include selecting a language, obtaining an offer for sale or coupon, obtaining information, or being directed to other data sources and services.
  • the obtained information can be provided to the user, perhaps in accordance with the user request or a stored user profile, in email or another messaging service, or to a display device such as the user's cell phone, AR device, tablet, or television screen.
  • a user can be directed to other data services by being directed to an element app, site, page, profile, or tag.
  • An element app is an application that can be downloaded and run on the user's computer or some other computing or media device.
  • a site can be a web site about the identified element and perhaps even a page at an online store that includes a description and sale offer.
  • a page, profile, or tag can refer to locations, information, and data streams within a social media site or service and associated with the identified element.
  • a user can select a menu icon, a menu button, or in some other way cause a menu of viewing options to appear on one of the screens upon which the user can select screen elements.
  • One of the options can provide the user with available voice translations at which time the user selects a language or dialect.
  • FIG. 14 illustrates a system 860 for maintaining unique identifiers for elements that appear in videos and in using those unique identifiers for scattering and gathering user requests and responses, in accordance with the disclosed embodiments.
  • a universal element identifier (UEI) 420 is an identifier with an element descriptor 422 that is uniquely identified with a certain thing, such as “Bletch Cola”, so that whenever that thing appears on screen with a selectable zone, it is associated with the UEI. For example, every advertisement and product placement of Bletch Cola can be associated with that one UEI, thereby greatly simplifying the routing of and responses to user requests.
  • a UEI generator may, for example, generate data for storage in a UEI database 480 .
  • the UEI database 480 can be kept on one or more servers (e.g., server 424 ) with the database storing data including, for example, one or more element records, such as, for example, element record 460 .
  • the UEI database 480 may be responsive to, for example, an element admin request 484 .
  • the element record 460 can be created in response to a request that includes an element descriptor.
  • the request can be sent to a (UEI) server 424 or directly to the database server 480 .
  • the UEI server 424 and the database 480 can be maintained on separate computer systems having extremely limited general access but with widely available read only access for UEI servers.
  • the element record 460 can contain data including but not limited to, for example, the Universal Element Id 474 , registered response servers 472 , element descriptors 470 , registration data 466 , administrator credentials 468 , a cryptographic signature 464 , and an element public key 462 .
  • the UEI server 424 can generate a UEI 426 .
  • a UEI request 420 is an example of an administrative request, an example of which is also shown at block 484 .
  • Element administrators can have read, write, and modify access to element records and should be limited to writing and modifying only those records for which they are authorized. Different records can have different administrators. Element administrators have far more limited access and rights than server or database administrators.
  • Element records can contain admin credentials specifying who can administer the record or similar password or authentication data. The admin credential can also contain contact information, account information, and billing information (if the UEI is a paid service). In many cases the admin credentials can refer to an administrator record in a database table. The important aspects are that only authorized admins can alter the element record, that the admins can be contacted, and that the record can be paid for if it is a paid service. Paid services can have various fees and payment options such as one time fees, subscription fees, prepayment, and automatic subscription renewal.
  • the element descriptor 422 is data describing the thing that is uniquely identified. It can be whatever an element record admin desires such as simply “Bletch Cola” or something far more detailed. Registration data can include a creation date and a log of changes to the record. It can be important that responses to user queries come from valid sources because otherwise a miscreant can inject a spurious and perhaps even dangerous response.
  • a cryptographic signature and element public key are two parts of a public key encryption technique. Selectable zones and UEIs can include or be associated with a public key. Responses to user queries can be signed with the cryptographic signature and verified with the public key. Similarly, the queries can be signed with the public key and verified with the cryptographic signature.
  • the element record 460 can contain a list of registered response servers.
  • a query routing module 426 can obtain the list of registered response servers and direct user queries to one or more of the registered servers as shown as block 448 .
  • a query to one or more unregistered servers is depicted at block 450 , which in turn will provide data to an unregistered data server 456 .
  • Unsigned element data 444 and/or signed element data 446 may be output from the unregistered data server 456 and in turn can be provided to an element data gathering module, which in turn can provide data to an element data resolution module, which in turn can provide element data and presentation directives as shown at block 440 .
  • a typical query 428 may contain, for example a UEI 430 and a query type 432 .
  • the query routing module 426 can be a purpose built hardware device such as an Internet router or can be executable code run by a computing device.
  • the list of registered servers can be obtained directly from the UEI database server 480 or from a different but authorized server (or responder) that has obtained the list. Public key techniques can ensure that the responder is authorized or that the response contains signed and valid data.
  • a man in the middle responder 434 can be authorized or can be relaying signed and authorized information.
  • the man-in-the-middle responder 434 can communicate with a third party responder 452 and also the database/server 480 .
  • the man-in-the-middle responder 434 can provide spurious information and thereby cause the query routing module 436 to send the query to a hostile data server.
  • the makers of Bletch Cola would prefer that their own or only authorized servers respond to user queries having the UEI for their product.
  • a competitor might want to send a different response such as a coupon for their own cola.
  • a truly hostile entity such as a hacker might want to inject a response that compromises the user's media devices, communications devices or related consumer electronics.
  • a network operator or communications infrastructure provider may desire to add, delete, or modify the registered response server list. Such modification is not always nefarious because it may be simply redirecting the queries to a closer server or a caching server.
  • the network operator might also wish to intercept “UEI not found” type responses and inject other information that is presumably helpful to the user.
  • the network operator may intentionally interfere with disclosed systems and services by responding to user queries or registered response service requests with information about its own services, premium data charges, or reasons for not allowing the queries on its network.
  • the query routing module 436 will send the query to at least one registered data/response server that returns element data, perhaps signed, that can be presented to the user.
  • an element gathering module can collect the responses and format them for the user.
  • a response gathering module can cooperate with the routing module such that it knows how many responses are expected and when to timeout on waiting for an expected response.
  • An element data resolution module can inspect the gathered responses and resolve them by removing duplicate data, deleting expired data, applying corrections to data, and other operations. Data can be corrected when it is versioned and the responses contain an older version of the data along with ‘diffs’ that can be applied to older versions to obtain newer versions.
  • the user query can also be sent to an unregistered data server with good or bad intent as discussed above.
  • the unregistered server cannot reply with signed data unless the encryption technique is compromised.
  • the data element gathering module or a similar module can discard unsigned or wrongly signed responses.
  • the module can be configured to accept unsigned responses from certain sources or to accept wrongly signed responses with certain signatures. All responses can be treated the same once accepted, and systems lacking encryption or verification capabilities will most likely accept all responses to user queries.
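  • The gathering and resolution steps described above can be sketched as follows; the verify() callable stands in for whatever signature verification scheme a deployment actually uses (the disclosure does not fix one), and the response fields are illustrative:

      def resolve_responses(responses, verify, allow_unsigned_from=()):
          # responses: dicts with "source", "element_id", "payload" (a string here),
          # and optionally "signature".
          accepted, seen = [], set()
          for resp in responses:
              signed_ok = resp.get("signature") and verify(resp["payload"], resp["signature"])
              if not signed_ok and resp.get("source") not in allow_unsigned_from:
                  continue                              # discard unsigned / wrongly signed data
              key = (resp["element_id"], resp["payload"])
              if key in seen:
                  continue                              # remove duplicate data
              seen.add(key)
              accepted.append(resp)
          return accepted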
  • a screen/display device/associated device that transmits an annotation signal can be provided that includes a media identifier as well as screen annotation data such that local devices (remotes, phones, etc.) can create an “augmented reality” cursor and transmit the cursor icon and location to the display device, etc.
  • the video data may be in a narrative format that does not respond to user inputs except, perhaps, for panning and zooming around a scene.
  • a system can be implemented which includes a display device presenting a display of time varying video data; annotation data comprising time and position based annotations of display elements appearing on the display; a pointing device that transmits a pointer signal wherein the pointing device comprises a selection mechanism (always on or only when button pushed); a pointer sensor that detects the pointer signal and determines a cursor location wherein the pointing device is pointing at the cursor location; a cursor presented on the display at the cursor location; and an annotation identification module that determines when an on screen element is selected.
  • annotated video data can be video data that is associated with, bundled with, or distributed with annotation data.
  • the pointer signal can encode cursor data wherein the cursor data determines the on-screen cursor appearance.
  • the pointer signal can encode a pointer identifier.
  • the on-screen cursor can change appearance when overlying an annotated element.
  • the pointing device can transmit the pointer signal when a pointer actuator is activated.
  • a system can be implemented that includes annotated video data comprising annotation data; an annotation data transmitter that transmits the annotation data to a pointing device (e.g., Kinect or camera based, for example in the context of an AR browser).
  • a pointing device e.g., Kinect or camera based, for example in the context of an AR browser.
  • cameras may detect where a person or passive pointer (such as a specially painted or shaped baton or weapon-type prop) is pointing, and may also detect who the person is via facial recognition, biometric measurement, gesture as password, etc.
  • a contact list for video conferencing can be provided.
  • a navigating screen with a remote control phone as a pointing device can be provided, and the phone camera and display used to aim an on-phone cursor at the display.
  • the phone camera and display can share images, poke phone, and the phone can send screen location data to the display which then treats it as a local selection.
  • an HHD can sync in on a frame/position and communicate with a server.
  • theater/movie “cookies” can be provided to the user's phone so that the user can redisplay/rewatch the movie for xxx time period such that queries can be made outside the theater. Additionally, it is not a requirement that the device (e.g., HHD) be aimed at the movie screen.
  • tagged searches are possible. For example, YouTube can be searched by a tag so that people can tag the videos and then search on the tags.
  • An extension is to tag movie scenes so that a media identifier and offset into the ‘movie’ comes back, perhaps even a link for viewing that scene directly.
  • someone can obtain a tag by selecting a screen element and assemble a search string by combining tags some of which are entered via keyboard and some via element selection.
  • Synchronization data can be obtained through a technique like the fingerprinting type technique described above or more simply by knowing what is being watched (media Id or scene Id) and how far into it the person is (time stamp or frame number).
  • a person can tag a scene or a frame with any of the above mentioned devices (AR device, cell phone, remote control, kinect type device, pointing device, . . . ). Basically, click to select frame/scene, key in or otherwise enter in the tag, and accept.
  • the sync data (includes media Id) is sent off to be stored in a server. People can search the tag database to check out tagged scenes. For example, Joe tags a scene “explosion” and Jill tags the same scene “gas”.
  • a search for “gas explosion” can result in a link to watch the scene.
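  • One non-authoritative way such tag storage and tag search could be sketched (the in-memory store, field names, and 2-second grouping window below are assumptions):

      from collections import defaultdict

      class TagStore:
          # Toy in-memory tag database; a deployed system would keep this on a
          # server. The 2-second grouping window is an arbitrary choice.
          def __init__(self, window_seconds=2.0):
              self.window = window_seconds
              self.tags = []   # (media_id, offset_seconds, tag, user)

          def add(self, media_id, offset_seconds, tag, user):
              self.tags.append((media_id, offset_seconds, tag.lower(), user))

          def search(self, query):
              # Return (media_id, bucket) pairs whose nearby tags cover every
              # word in the query, e.g. "gas explosion".
              words = set(query.lower().split())
              scenes = defaultdict(set)
              for media_id, offset, tag, _user in self.tags:
                  scenes[(media_id, int(offset // self.window))].add(tag)
              return [key for key, tags in scenes.items() if words <= tags]

      store = TagStore()
      store.add("movie-123", 2101.0, "explosion", "Joe")   # Joe tags the scene
      store.add("movie-123", 2101.5, "gas", "Jill")        # Jill tags the same scene
      print(store.search("gas explosion"))                 # -> [('movie-123', 1050)]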
  • a social media tie in is that Joe's friends might see that “Joe tagged a scene with ‘explosion’” along with a single frame from the scene and embedded links to the scene.
  • Joe's tag could have been assembled by selecting Kyle Durden in the scene and requesting a tag (this assumes suitably annotated video) and then adding ‘explosion’.
  • the “Kyle Durden” tag can be the textual name, or a small graphic.
  • the small graphic can be the actor's picture or a symbol or graphic people associate with the actor.
  • any mark (word, logo, design, or combination mark) can be used as a tag. Searching for such a tag requires entering it in some way, such as by “clicking” on it via the pointing device or HHD.
  • FIG. 15 illustrates a system 870 generally including a multimedia display 126 associated with a controller and which communicates with one or more wireless HHD's, in accordance with the disclosed embodiments.
  • the controller 503 can be integrated with the multimedia display 126 or may be a separate device, such as, for example, a set-top box that communicates with the multimedia display 126 .
  • Communications between the multimedia display 126 and the controller 503 may be wireless (e.g., bidirectional wireless communications) or wired (e.g., USB, HDMI, etc.).
  • One or more HHD's, such as, for example, wireless HHD 502, can communicate via wireless bidirectional communications with the controller 503 and/or the multimedia display 126 (e.g., in the case where the controller 503 is integrated with the multimedia display 126).
  • a user of the HHD 502 can utilize the HHD 502 to register the HHD 502 with the multimedia display 126 .
  • the HHD 502 is thus associated via this registration process with the multimedia display 126 .
  • the user can utilize the HHD 502 to move an onscreen graphically displayed cursor to content/media of interest and then “click” the HHD 502 to select that area on the screen of the multimedia display 126, which then prompts a display of data supplemental to the selected content/media in a display of the HHD 502 itself and/or, for example, in a sub-screen 512 within the display 126.
  • the sub-screen would “pop up” and display the supplemental data within display 126 .
  • the supplemental data can be displayed for the user of the HHD 502 within a display area or display screen of the HHD 502 .
  • FIG. 16 illustrates a system 880 that generally includes a multimedia display 126 and a secondary multimedia display 129 , in accordance with the disclosed embodiments.
  • multimedia display 126 can be associated with a controller 503 and multimedia display 129 can be associated with controller 505 .
  • Controller 503 can communicate wirelessly or via wired means with multimedia display 126 and controller 505 can similarly communicate wirelessly or via wired means with multimedia display 129 (e.g., a secondary screen).
  • the wireless HHD 502 can communicate via wireless bidirectional communications means with controller 503 and/or controller 505 and/or directly with multimedia displays 126 and/or 129.
  • the supplemental data can be displayed via display 126 .
  • the supplemental data can be displayed in a window or sub-screen 514 that “pops up” in multimedia display 129.
  • a similar display process for the supplemental data can occur via the window or sub-screen 512 that can “pop up” in multimedia display 126 .
  • FIG. 17 illustrates and represents a system 890 in which HHD 502 can communicate wirelessly via a data network 520 (e.g., the Internet) for storage and/or retrieval of supplemental data in a memory or database 522, in accordance with the disclosed embodiments.
  • a user of HHD 502 may wish to access the supplemental data from, or save it to, the “cloud” for later viewing, rather than displaying the data immediately via a pop-up window or sub-screen 512 or via a display associated with the HHD 502.
  • FIG. 17 further illustrates and represents a system 890 in which an HHD 502 can communicate wirelessly via a data network 520 (e.g., the Internet) with a remote server 522 associated with a database for providing at least one of: identification, based on a captured image provided from the HHD 502, of an event or video programming rendering on a multimedia display screen 126; obtaining of supplemental data associated with the event or video programming by the HHD 502 once the event or video programming is identified; and storage of the supplemental data in a memory or database 522 for later retrieval, in accordance with the disclosed embodiments.
  • FIG. 18 illustrates a system 900 in which multiple HHD's can communicate with the controller 503 and/or the multimedia displays 126 and/or 129 , in accordance with the disclosed embodiments.
  • Each HHD 502 , 504 , 506 , etc. can be registered with the same controller 503 or with other controllers, such as controller 505 discussed previously.
  • each HHD 502, 504 and/or 506, etc. can be registered with multimedia displays 126 and/or 129. All of the HHD's 502, 504, 506, etc. can interact with, for example, the main screen 126 and collaborate with one another (e.g., in a gaming environment).
  • a user of HHD 504 may desire, for example, to display selected supplemental data on display 129 rather than display 126 .
  • a user of HHD 506 may desire to display the supplemental data only on his or her HHD.
  • Each HHD 502 , 504 , 506 , etc. can also independently interact with displayed media on either screen 126 or 129 to retrieve data of personal interest to that particular user of a respective HHD.
  • the ‘screen’, via a controller such as controller 503 and/or 505, can function as a director of two or more HHD's, particularly in the context of a gaming environment.
  • FIG. 19 illustrates a system 910 in which multiple HHD's (or a single HHD in some cases) can communicate with a data network 520 that communicates with a server 524 and a database 522 associated with and/or in communication with server 524, in accordance with the disclosed embodiments.
  • Supplemental data can thus be stored via HHD 502 , 504 and/or 506 , etc. in database 522 or another appropriate memory for later retrieval.
  • data can be retrieved from remote servers such as server 524 via data network 520 (e.g., the Internet).
  • the illustrated system 910 supports communication of HHDs 502 over a data network 520 to communicate with a server 524 for purposes of identifying video programming, based on an image captured by the HHD 502 of video programming being displayed on a multimedia display 126, by matching the image with images of video programming stored in a database 522. If a match is found by the server 524, then the HHD can be notified of the availability of additional data related to the video programming in accordance with features described herein.
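  • A minimal sketch of this image-matching step, using a simple average-hash fingerprint purely as a stand-in for whatever matching technique the server actually employs (the data layout and threshold are illustrative assumptions):

      def average_hash(gray, size=8):
          # 'gray' is a 2D list of grayscale pixel values; sample an 8x8 grid and
          # set a bit for each cell brighter than the mean.
          h, w = len(gray), len(gray[0])
          cells = [gray[r * h // size][c * w // size] for r in range(size) for c in range(size)]
          mean = sum(cells) / len(cells)
          return sum(1 << i for i, v in enumerate(cells) if v >= mean)

      def identify_programming(captured_gray, archive, max_distance=10):
          # 'archive' maps media_id -> list of fingerprints of archived frames.
          query = average_hash(captured_gray)
          best = None
          for media_id, fingerprints in archive.items():
              for fp in fingerprints:
                  distance = bin(query ^ fp).count("1")   # Hamming distance
                  if best is None or distance < best[1]:
                      best = (media_id, distance)
          return best[0] if best and best[1] <= max_distance else None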
  • FIG. 20 illustrates a graphical view of a system 920 that includes a multimedia display 126 and one or more wireless HDD's 502 , 504 , 506 , etc., in accordance with the disclosed embodiments. It can be assumed that the configuration shown in FIG. 20 can be implemented in the context of the other arrangements and systems disclosed herein (e.g., including various controllers, primary and secondary screens, data networks, servers, databases/memory components and so forth). Each user of an HHD can designate his or her own personal profile icon during, for example, the registration process or at a later time. Each profile icon can be displayed as a cursor via a multimedia display such as multimedia display 126 .
  • a user of HHD 502 may select a profile icon 532 in the form of a dollar sign, which is displayed graphically in display 126 .
  • a user of HHD 504 may be, for example, a hunter or outdoorsman and may select a profile icon 534 in the symbol of a deer as his profile icon.
  • a user of HHD 506 may, for example, select a profile icon 538 in the symbol of a skeleton as his or her profile icon.
  • a secondary profile icon may be displayed instead of a primary icon so as not to cause confusion among users of the various HHD's 502, 504, 506, etc. during the same session or event.
  • FIG. 21 illustrates a flow chart depicting logical operational steps of a method 940 for registering and associating an HHD with a display screen and displaying supplemental data, in accordance with the disclosed embodiments.
  • the process can be initiated.
  • an HHD such as, for example, the HHD's 502 , 504 , 506 , etc. can be registered with a multimedia display and/or a controller such as described previously.
  • an icon or cursor can be displayed, which is associated with the HHD.
  • supplemental data can be displayed on the HHD display or on the multimedia display (or both).
  • FIG. 22 illustrates a flow chart depicting logical operational steps of a method 950 for transmitting, storing and retrieving supplemental data and registering an HHD with a secondary screen, in accordance with the disclosed embodiments.
  • the process begins.
  • the HHD can be registered with a secondary screen and/or associated controller.
  • An example of a secondary screen is the multimedia display 129 discussed earlier.
  • supplemental data can be transmitted to a database (e.g., database/memory 522 ) via a data network, (e.g., network 520 , the Internet, etc.), as illustrated at block 576 .
  • the supplemental data can be retrieved for display via the secondary screen, the primary screen, and/or other screens, etc.
  • the process can then terminate, as shown at block 580 .
  • FIG. 23 illustrates a flow chart depicting logical operational steps of a method 930 for designating and displaying profile icons for selection of content/media of interest and supplemental data thereof, in accordance with the disclosed embodiments.
  • the process can be initiated.
  • the HHD (or multiple HHD's) can be registered (together or individually) with a multimedia display screen and/or associated controller.
  • user profiles can be established via the HHD and one or more profile icons selected. Examples of profile icons include icons/cursors 532 , 534 , and 536 shown in FIG. 20 .
  • one or more of the profile icons can be displayed “on screen”.
  • content/media of interest can be selected via the HHD with respect to the profile icon displayed on screen.
  • a profile icon cursor may be moved via the HHD to an area of interest (e.g., an actor on screen).
  • the user then “clicks” an appropriate HHD user control (or touch screen button in the case of an HHD touch screen) to initiate the display of supplemental data (with respect to the content/media of interest).
  • the supplemental data can be displayed on, for example, an HHD display screen, as shown at block 560 .
  • the process can then end as depicted at block 562 .
  • an APP on an HHD can be activated to capture an image of a scene of video programming displayed on a multimedia display and enable access to data associated with the video programming from a remote server if the scene is matched and identified by the remote server.
  • the image of the scene is captured using a digital camera integrated in the HHD.
  • the HHD then wirelessly accesses a remote server to match the captured scene with images of video programs in a database to determine the identity of the scene and the availability of related data, as shown in Block 620 .
  • Wireless communication can be supported by wireless communications hardware integrated in the HHD to support WIFI or cellular communications (e.g., 802.xx, 4g, LTE, and so on).
  • the HHD can receive notification of a match and the availability of related data that can be selectively retrieved from the server by the HHD.
  • the HHD selectively accesses additional data related to the scene.
  • the data can logically be accessed and manipulated consistent with many of the teachings provided herein.
  • a set such as the set 700 illustrated in FIG. 25 can have a few elements such as actors 705 and some props 715 positioned in front of a “green screen” 710 .
  • a green screen 710 is simply a known background that can be easily replaced by other imagery. Green screens were originally a single uniform color. The non-green parts of a scene were, by definition, the foreground 711 and the green parts were the background 712 . Views of background scenes such as mountain vistas, moving traffic, or bustling cities can be captured independently of the foreground views. A final view can be produced by compositing the foreground view 711 and the background view 712 .
  • FIG. 26 illustrates an example wherein the scene of FIG. 25 is imaged to produce final view 713, in which background view 712 has been composited with foreground view 711.
  • Compositing with a green screen amounts to taking the foreground view 711 and replacing everything green with the background view 712 .
  • Computer technology made this process particularly convenient because the green pixels are simply replaced by background pixels.
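  • A toy illustration of that pixel-replacement step (the frame representation and chroma-key test below are assumptions, not a production compositing pipeline):

      def composite(foreground, background, is_green):
          # Frames are 2D lists of (r, g, b) tuples; every pixel the chroma-key
          # test flags as "green" is replaced by the corresponding background pixel.
          return [[bg if is_green(fg) else fg for fg, bg in zip(fg_row, bg_row)]
                  for fg_row, bg_row in zip(foreground, background)]

      # A crude key: the pixel is strongly green and weakly red/blue.
      is_green = lambda p: p[1] > 180 and p[0] < 100 and p[2] < 100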
  • the above example of using a green screen actually produced three content layers.
  • the static set elements are foreground things that don't move such as a table or a sphere resting on the table.
  • the dynamic set elements are foreground things that do move such as the actor and anything the actor manipulates.
  • FIG. 27 illustrates a view changing because the camera moves while imaging a scene. Note that the foreground and background are shifted as indicated by arrows, but are otherwise unchanged. FIG. 27 more clearly illustrates this concept by showing a complete view and a series of presented views. A scene can be much larger than the view imaged by a camera and the camera can pan and tilt to move the view around and thereby capture different areas of the scene.
  • a movie, television show, or video is essentially a sequence of views. More conveniently though, the sequences of views are first combined into acts and the sequence of acts is combined to produce the movie.
  • An act is often the sequence of views of a scene. For example, the views taken from two cameras filming a scene can be cut and spliced to produce an act in a movie.
  • FIG. 28 illustrates two views being captured from the same scene.
  • Virtual scenes are computer generated and can be static, dynamic, foreground, background, or any other layer.
  • the virtual cameras are mathematically defined to produce views of the virtual scenes.
  • a view from a virtual scene can be composited with any other view.
  • Annotating views and scenes is the process of determining which spots on a display correspond to what elements in a scene.
  • Each scene or view can have a “screen map” wherein specific areas in a view are identified as being an actor, consumer product, or other scene element.
  • One very data intensive embodiment would use a different screen map for every frame of every scene in a movie. The user selects a scene element by pointing at it with a pointing device or finger; the point of aim/selection is identified and then submitted to a software module that uses the screen map to determine what thing in the scene corresponds to that point. The identity of the pointed-at thing is then returned for subsequent use, such as finding an actor's biography or purchasing an on screen item.
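  • A minimal sketch of such a per-frame screen map lookup, assuming rectangular regions and purely illustrative identifiers:

      # One screen map per frame: a list of (x0, y0, x1, y1, element_id) regions.
      SCREEN_MAPS = {
          ("movie-123", 41872): [
              (100, 40, 260, 420, "actor:kyle-durden"),
              (500, 300, 560, 380, "product:cola-can"),
          ],
      }

      def element_at(media_id, frame, x, y):
          # Return the element id under the selection point, or None if the point
          # does not fall on an annotated element.
          for x0, y0, x1, y1, element_id in SCREEN_MAPS.get((media_id, frame), []):
              if x0 <= x <= x1 and y0 <= y <= y1:
                  return element_id
          return None

      print(element_at("movie-123", 41872, 530, 340))   # -> product:cola-can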
  • Virtual scenes can be automatically annotated as they are created because the rendering software used to place the scene elements can also record the viewed locations of those elements.
  • any of the views can be annotated before compositing.
  • the actor, table and sphere can be annotated for each of the two images views.
  • the mountainous background view of FIGS. 26 and 27 can be annotated independently.
  • Compositing annotated views together requires that the annotations also be composited.
  • Annotation compositing can be performed by tracking which items are composited on top of other items and producing a final annotation for the final view. Another option is to preserve the annotations for each view. As shown in FIG. 28 , a user can select an item by selecting a point within the final view (i.e., a spot on the screen).
  • a software module can then determine if the point coincides with a foreground view element such as the actor. If not, then the software module can determine if the point coincides with an element in the background. If there are additional layers composited between the foreground and background, then each of those layers is checked in turn beginning with the foremost and progressing to the hindmost. Green screen areas are either not annotated at all or are annotated as green screen, transparency, or some other value that signals the software module to proceed to the next layer.
  • a complete view can also be annotated.
  • the selected point of FIG. 28 does not move within the complete view but does move within the presented views.
  • Vector addition of the view offset and the selection vector locates the selected point within the complete view.
  • a similar mathematical operation provides for locating the selection point when the camera zooms into or out of the scene. As such, annotating the complete view and tracking both the view offset and zoom factor allows the same annotation data to be used as the camera pans, tilts, and zooms through a scene.
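  • A small sketch of that vector addition and zoom correction, assuming (as a convention, not a requirement of the disclosure) that the view offset is the complete-view position of the presented view's top-left corner and the zoom factor scales complete-view units to screen pixels:

      def to_complete_view(selection, view_offset, zoom=1.0):
          # Undo the zoom, then add the view offset to find where the selected
          # screen point sits in the complete (annotated-once) view.
          sx, sy = selection
          ox, oy = view_offset
          return (ox + sx / zoom, oy + sy / zoom)

      # A selection at (320, 180) in a 2x-zoomed view whose top-left corner sits
      # at (1200, 400) of the complete view:
      print(to_complete_view((320, 180), view_offset=(1200, 400), zoom=2.0))  # (1360.0, 490.0)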
  • a complete view of a sports arena can be annotated once for each camera angle and the annotation then used as the background annotation when a game is played. Note that in this case the background is changing and is not composited into the final view but that the same background annotation can be used throughout a gaming season.
  • Annotation compositing and complete view annotation are advantageous because they greatly ease the burden of annotating views.
  • An entire scene can be annotated once and then the view trajectories recorded.
  • View trajectories are records of view positions and zoom factors at different times as a scene is imaged.
  • the amount of annotation data is reduced because view trajectories are much smaller than screen maps.
  • a further advantage is that the different views in different layers (foreground, background, etc.) can be reused in different parts of a presentation or even in completely different presentation. Movies, videos, and sporting events are examples of presentations.
  • An annotation database accessible via an annotation server 800 can annotate every pixel in a frame, can annotate only those pixels overlying certain elements, or can annotate only certain points in select frames. Every annotated point can be termed an “annotation coordinate”. Every piece of annotated media content is identified by a media identifier and every annotation coordinate includes a frame (or subframe) identifier, a coordinate, and an element identifier. Annotation coordinates can also contain the media identifier. Similarly, the user's selected element can be specified by selection data containing a screen coordinate, media content identifier, and frame identifier. The annotation coordinate best matching the selection data has the same media identifier and is closest in both time and space.
  • Closest in space can be a Euclidean, Mahalanobis, or Manhattan distance. Closest in time can be measured in frames or seconds.
  • a useful distance measure for annotation purposes can be a weighted sum of spatial and temporal distance, can step forward or backward through frames seeking a nearby annotation coordinate, or can combine the spatial and temporal data in other ways.
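  • A non-authoritative sketch of such a weighted spatio-temporal match (the weights and tuple layouts are illustrative assumptions):

      def nearest_annotation(selection, coordinates, alpha=1.0, beta=30.0):
          # selection = (x, y, frame, media_id); each coordinate holds
          # (x, y, frame, media_id, element_id). alpha and beta weight spatial
          # (Euclidean, in pixels) versus temporal (in frames) distance.
          sx, sy, sframe, smedia = selection
          best_id, best_d = None, float("inf")
          for x, y, frame, media_id, element_id in coordinates:
              if media_id != smedia:
                  continue                  # must match the same media identifier
              spatial = ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5
              temporal = abs(frame - sframe)
              d = alpha * spatial + beta * temporal
              if d < best_d:
                  best_id, best_d = element_id, d
          return best_id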
  • an element that does not move during a contiguous series of frames can be annotated with a set of annotation coordinates arranged at either end of the frame sequence and spatially centered over the element.
  • a further refinement to overcome irregularities in temporal distance measurement is to track scene changes and to exclude annotation coordinates from other scenes.
  • a yet further refinement is to give each scene in a movie, show, sporting event, etc. a unique media content identifier. This refinement addresses issues that may arise when clips or portions of a show are presented instead of the entire performance.
  • a yet further refinement is to annotate a media clip with a media content identifier and a time offset specifying at what time in the original the media clip begins. For example, a media clip could begin 35 minutes into a movie. In this manner it is far easier to annotate previews, clips, mash-ups, or different cuts (theatrical, director's, unrated, etc.) of a presentation. Mash-ups are a series of clips taken from different presentations or rearrangements of a single presentation. Note that sound tracks, songs, dialog, and other sounds can be clipped, mixed, or otherwise arranged within a mash-up.
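  • Worked as a tiny example (the values are illustrative):

      def clip_to_original(clip_time_seconds, clip_offset_seconds):
          # A clip annotated with the original's media content identifier and a
          # time offset maps trivially back onto the original timeline.
          return clip_offset_seconds + clip_time_seconds

      # A selection 90 seconds into a clip that begins 35 minutes into the movie
      # lands 2190 seconds (36.5 minutes) into the original presentation.
      print(clip_to_original(90, 35 * 60))   # -> 2190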
  • An annotation coordinate can refer to a place or a sound as easily as to an on-screen element by specifying special spatial values as flags for such place elements or sound elements. For example, (−1, −1) could indicate sound while (−2, −2) could indicate location.
  • Compositing can be used to replace scene elements in live or recorded video content.
  • advertisements placed around the stadium such as posters, billboards, signage, and lighted signage.
  • the giant video screens present in many stadiums often show multiple advertisements at the same time as well as images, video, and information about an event (game, concert, race, etc.) occurring at the stadium, about athletes, about attendees, and about stadium activities.
  • the “kiss cam” shows attendees who are encouraged to kiss.
  • Stadium activities can include singing the national anthem, throwing t-shirts and swag into the audience, mascot races, half time shows, etc.
  • Compositing can replace the stadium advertisements with other advertisements or images.
  • Similar compositing can be performed for recorded video.
  • the Brazilian soccer game can be recorded and shown years later with completely different ads composited over the Brazilian beer advertisement.
  • Such compositing can be particularly useful when a product or service cannot be legally advertised or becomes socially unacceptable.
  • An example of that is cigarette ads which have become regulated in some countries and are sometimes viewed with disdain or antipathy.
  • Compositing can also be used to modernize or otherwise replace product placement within movies and television shows.
  • Product placement occurs when an item is placed within a scene as a scene element but is not overtly advertised. Examples include what an actor eats, drinks, or wears while on screen. Examples also include background elements such as an advertisement on a bench that an actor walks past or sits on.
  • Compositing can update a product placement when a can of Tab in decades old video is composited out in favor of a can of Coke, Jolt, etc.
  • Compositing can also be used to adjust product placements based on geographic markets with an American seeing Jolt instead of Tab and an Australian seeing Solo instead of Tab.
  • Geographic markets can be narrowed to the point that people in different parts of the same city, perhaps even in neighboring houses, see different scene elements. Such fine levels of marketing may require location data from a person's mobile or wearable devices.
  • a person's IP (internet protocol) address can also be used to determine location data. Such location data is currently available.
  • compositing opportunities include demographic based or targeted advertisements.
  • a viewer's location or profile can be queried and the composited in product placements and stadium ads targeted at the viewer.
  • targeted advertising via compositing can be used to harass specific individuals or groups when another person or group obtains a right to place products or advertisements into that individual's viewed video content.
  • Compositing also provides a solution wherein the harassed individual opts out of targeted ads or obtains the right to place products or ads into his own viewed video content.
  • “obtains a right” can mean a purchase or license of advertising rights.
  • “obtains a right” can also include stealing or hijacking wherein video content is intercepted and scene elements composited in (or out) without the consent of the video content provider and/or the person viewing the content.
  • Composited out means replaced by something else via compositing.
  • Composited in means replacing something that was originally a scene element via compositing.
  • the composited in scene elements are in a video layer in front of the background and the original foreground.
  • annotations can be composited.
  • the annotation selection data can therefore be composited such that the correct element identifier is obtained.
  • the annotations can be composited when the video is composited or can be annotated downstream, such as at the user's media device (element 104 of FIG. 1).
  • Some embodiments can use local or downloaded annotation data, or composited annotation data, while others can assemble a query containing information sufficient to specify a cursor aim point and a frame of video. Without loss of generality, data sufficient to specify a frame of video is herein referred to as a media tag.
  • the query can be submitted to an annotation server 800 that queries an annotation database to determine the element identifier of the object selected by the user. This scenario is complicated when the frame of video has new scene elements composited in because the media tag of the original video may not be associated with or linked to the composited in material.
  • One solution is to add annotation data for the composited in material to the annotation database (or another annotation database) and to cause the server 800 to find the element identifier for the composited in object, if selected.
  • the media tag can be augmented with a “composited annotation indicator” indicating that there is annotation data corresponding to composited in video.
  • the composited annotation indicator can contain annotation data for the composited in scene elements or can contain a tag, address, URL (uniform resource locator), pointer, or identifier that can be used to access annotation data for the composited in scene elements.
  • This augmented media tag can contain an ordered list of composited annotation indicators if multiple layers of video have been composited over the original video. The list's order can indicate the order in which compositing occurred. Recall that the composited layers can contain large transparent regions through which lower video layers are visible.
  • the media tag can be replaced with a new media tag.
  • the new media tag can reference the annotation data and the media tag of the original (or underlying) video.
  • the underlying video can also have composited in items.
  • Another alternative is for the new media tag to be an ordered list of older media tags with the bottom media tag indicating the original video and the rest indicating video layers composited over one another, and the list order indicating the order in which the compositing occurred. Lists can be ordered by putting them in order or by ranking each list element such that the list can be put in order.
  • the annotation data for the composited in scene elements can indicate which screen coordinates in a frame correspond to the composited in items. In this manner, the determination of an element identifier is similar to that for the non-composited video. If the user selected a composited in item, the element identifier for that item is returned and the annotations for the original video need not be searched (although searching in parallel can lead to higher performance). For example, “Jolt” is composited over “Tab” and the user selects “Jolt” in a particular frame. The annotation data for the composited in scene elements provides the “Jolt” element identifier. The annotation data for the original video provides the “Tab” element identifier.
  • the “Jolt” element identifier is selected because the composited in scene elements overlay the original ones. Note that if the user selected a hat in the original video then the annotation data for the composited in scene elements can return “no item” thereby indicating that the hat's element identifier obtained from the annotations for the original video is the correct element identifier. Another example is that a new scene element may be composited into the video in which case the annotation data for the composited in scene elements provides an element identifier.
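  • A minimal sketch of this topmost-layer-wins resolution, with hypothetical element identifiers and region tests:

      NO_ITEM = "NO_ITEM"

      def resolve_element(layers, x, y):
          # 'layers' is ordered topmost first; each entry maps a point to an
          # element id or to NO_ITEM where that layer is transparent. The first
          # real element id wins, so a composited-in item overlays the original.
          for lookup in layers:
              element_id = lookup(x, y)
              if element_id not in (None, NO_ITEM):
                  return element_id
          return None

      composited = lambda x, y: "product:jolt" if 500 <= x <= 560 and 300 <= y <= 380 else NO_ITEM
      original = lambda x, y: ("product:tab" if 500 <= x <= 560 and 300 <= y <= 380
                               else "prop:hat" if x < 100 else None)

      print(resolve_element([composited, original], 520, 340))   # -> product:jolt
      print(resolve_element([composited, original], 50, 50))     # -> prop:hat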
  • the annotation data can indicate an element identifier that should replace one in the original video.
  • the annotation data for the original video can indicate that the user selected a can of Tab.
  • the annotation data can indicate that the “Jolt” element identifier should be returned instead of the “Tab” element identifier.
  • This alternative requires that lower level annotation data, such as that for the original video, be searched such that lower level element Ids can be replaced.
  • annotation data can be queried as discussed above with the element identifier for the topmost item at the selected coordinates being the correct element identifier.
  • the people at the stadium can see advertisements and activities with their naked eyes, can see selected views on the stadium's giant screen (or screens), or can view their surroundings through mobile devices such as smart phones, tablet computers, glasses with integrated displays, and other wearable devices with display capabilities.
  • an attendee's surroundings can be recorded or streamed from the integrated camera in a mobile device and viewed on the display of the streaming device or another device that is registered with the recording/streaming device.
  • a person can view an augmented version of reality with overlaid information related to tagged content or recognized content.
  • Another augmentation is that the viewer can use the camera or display device to magnify or zoom in on an activity or object.
  • Remote viewers of an event happening at the stadium can also see advertisements and activities.
  • the remote viewers can be provided with video having composited in items.
  • the remote viewers can select a composited in item and receive information about the composited in element.
  • streaming services or other intermediaries can receive video from mobile device cameras (or other cameras) and record it or stream the video to other intermediaries or display devices.
  • the display, camera, and intermediaries would be registered directly or indirectly with one another.
  • indirect registration is when devices or intermediaries are not directly registered with each other, may be unaware of each other, but can receive data from one another.
  • the camera and display may both register with an intermediary that receives the camera video stream and provides it to the display.
  • the video stream can pass through multiple intermediaries in series and in parallel.
  • the video stream can reach multiple display devices. Every one of those intermediaries has the opportunity to composite items and advertisements into the video stream before passing it along.
  • Certain intermediaries can interpret annotation data for the composited in scene elements and composite over those scene elements.
  • Certain other intermediaries can interpret media tags and use the data therein to attempt to register with or contact the camera streaming the video or an intermediary closer, streamwise, to the camera.
  • a method can be implemented for supporting bidirectional communications and data sharing.
  • Such a method can include, for example, registering at least one wireless hand held device with a controller associated with at least one multimedia display; selecting at least one profile icon for use as a cursor during interaction of the at least one wireless hand held device with the at least one multimedia display during rendering of an event as data on the at least one multimedia display; and providing supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held device based on a selection of the data rendered on the at least one multimedia display marked by the cursor utilizing the at least one wireless hand held device.
  • a step can be implemented for providing the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game.
  • a step can be implemented for manipulating via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device.
  • a step can be implemented for storing the supplemental data in a memory in response to a user input via the at least one wireless hand held device.
  • a step can be implemented for pushing the supplemental data for rendering via at least one other multimedia display.
  • a step can be implemented for pushing the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display.
  • a step can be provided for filtering the supplemental data for rendering via a multimedia display.
  • “Filtering” may include, for example, automatically censoring supplemental data based on a user profile or on parameters for censoring or filtering such data. For example, a public place such as a bar or restaurant may wish to prevent certain types of supplemental data (e.g., pornography, violence, etc.) from being displayed on multimedia displays in its establishment. Filtering may occur via the controller (e.g., a set-top box), via controls or settings on the displays themselves, and/or via the HHD accessing media of interest.
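  • A small illustrative filter along these lines (the category names and item fields are assumptions, and whether it runs in a set-top box, the display, or the HHD is a deployment choice):

      BLOCKED_CATEGORIES = {"pornography", "graphic-violence"}   # example venue policy

      def filter_supplemental(items, blocked=BLOCKED_CATEGORIES):
          # Drop supplemental-data items whose category the venue's profile or
          # settings disallow.
          return [item for item in items if item.get("category") not in blocked]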
  • a system can be implemented for supporting bidirectional communications and data sharing.
  • Such a system can include a processor, and a data bus coupled to the processor.
  • such a system can include a computer-usable medium embodying computer code, the computer-usable medium being coupled to the data bus, the computer program code comprising instructions executable by the processor and configured for: registering at least one wireless hand held device with a controller associated with at least one multimedia display; selecting at least one profile icon for use as a cursor during interaction of the at least one wireless hand held device with the at least one multimedia display during rendering of an event as data on the at least one multimedia display; and providing supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held device based on a selection of the data rendered on the at least one multimedia display marked by the cursor utilizing the at least one wireless hand held device.
  • such instructions can be further configured for providing the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game.
  • such instructions can be further configured for manipulating via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device.
  • such instructions can be further configured for storing the supplemental data in a memory in response to a user input via the at least one wireless hand held device.
  • such instructions can be configured for pushing the supplemental data for rendering via at least one other multimedia display.
  • such instructions can be further configured for pushing the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display. In other embodiments, such instructions can be further configured for filtering the supplemental data for rendering via a multimedia display.
  • a processor-readable medium can be implemented for storing code representing instructions to cause a processor to perform a process to support bidirectional communications and data sharing.
  • code can, in some embodiments, comprise code to register at least one wireless hand held device with a controller associated with at least one multimedia display; select at least one profile icon for use as a cursor during interaction of the at least one wireless hand held device with the at least one multimedia display during rendering of an event as data on the at least one multimedia display; and provide supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held device based on a selection of the data rendered on the at least one multimedia display marked by the cursor utilizing the at least one wireless hand held device.
  • such code can further comprise code to provide the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game.
  • such code can comprise code to manipulate via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device.
  • such code can comprise code to store the supplemental data in a memory in response to a user input via the at least one wireless hand held device.
  • such code can comprise code to push the supplemental data for rendering via at least one other multimedia display.
  • such code can comprise code to push the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display.
  • such code can comprise code to filter the supplemental data for rendering via a multimedia display.
  • a method can be implemented for displaying additional information about a displayed point of interest. Such a method may include selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region, and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device. In still other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a gyro-controlled pointing device. In yet other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a laser pointer.
  • the mobile device can be, for example, a Smartphone.
  • the mobile device can be, for example, a pad computing device (e.g., an iPad, an Android tablet computing device, a Kindle (Amazon) device, etc.).
  • the mobile device can be a remote gaming device.
  • a step or operation can be implemented for synchronizing the display with the secondary display through a network.
  • the aforementioned network can be, for example, a wireless network (e.g., a cellular communications network, the Internet, etc.).
  • a system can be implemented for displaying additional information about a displayed point of interest.
  • a system can include, for example, a memory; and a processor in communication with the memory, wherein the system is configured to perform a method, wherein such a method comprises, for example, selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region; and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a gyro-controlled pointing device.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a laser pointer.
  • the mobile device may be, for example, a Smartphone. In still other system embodiments, the mobile device can be a pad computing device. In yet other system embodiments, the mobile device may be, for example, a remote gaming device. In still other system embodiments, the aforementioned method can include synchronizing the display with the secondary display through a network.
  • a computer program product for displaying additional information about a displayed point of interest can be implemented.
  • Such a computer program product can include, for example, a non-transitory storage medium readable by a processor and storing instructions for execution by the processor for performing a method comprising: selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region; and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device.

Abstract

Method, system and computer program product for providing additional information to a handheld device (HHD) about a displayed point of interest in video programming displayed on a multimedia display. An image of the video programming captured by an HHD camera can be used at a remote server to identify the video programming by matching it with archived programming. If identified, additional information related to the video programming can be obtained/provided. A region within a particular frame of displayed video programming can be selected at the HHD to access additional information about a point of interest associated with the region. The additional information can be displayed on the HHD or a secondary display, in response to selecting the region to access the additional information from a remote server.

Description

    CROSS-REFERENCE TO PATENT APPLICATIONS
  • This application claims the benefit of priority under 35 U.S.C. §119(e) as a Continuation-in-Part of U.S. patent application Ser. No. 13/413,859, entitled “Method, System and Computer Program Product for Obtaining and Displaying Supplemental Data About a Display Movie, Show, Event or Video Game,” which was filed on Mar. 7, 2012, and which is incorporated herein in its entirety. U.S. patent application Ser. No. 13/413,859 is a continuation of U.S. patent application Ser. Nos. 12/976,148 and 13/345,382, which claim further priority to provisional filings and are both also incorporated herein in their entirety. U.S. patent application Ser. No. 12/976,148 claims the benefit of priority to U.S. Provisional patent application Ser. No. 61/291,837, entitled “System and Methods for Obtaining Background Data Associated With a Movie, Show or Live Sporting Event,” which was filed on Dec. 31, 2009, and to U.S. Provisional Patent Application Ser. No. 61/419,268, entitled “FlickIntel Annotation Systems and Webcast Infrastructure,” which was filed on Dec. 3, 2010, which are incorporated herein by reference in their entirety. U.S. patent application Ser. No. 13/345,382, entitled “Method, System and Processor-Readable Media for Bidirectional Communications and Data Sharing Between Wireless Hand Held Devices and Multimedia Display Systems,” was filed on Jan. 6, 2012. U.S. patent application Ser. No. 13/345,382 claims the benefit of priority to U.S. Provisional Application Ser. No. 61/581,226, entitled “Method, System and Processor-Readable Media for Bidirectional Communications and Data Sharing Between Wireless Hand Held Devices and Multimedia Display Systems,” which was filed on Dec. 29, 2011, and to U.S. Provisional Application Ser. No. 61/539,945, entitled “FlickIntel System Elements, Modules, Properties and Capabilities,” which was filed on Sep. 27, 2011, which are also both incorporated herein by reference in their entirety.
  • TECHNICAL FIELD
  • Embodiments relate to video content, video displays, and video compositing. Embodiments also relate to computer systems, user input devices, databases, and computer networks. Embodiments also relate to environments in which mobile device users can access supplemental data related to entertainment being displayed on multimedia devices. Embodiments are also related to methods, systems and processor-readable media for supporting entertainment programming identification and data retrieval.
  • BACKGROUND OF THE INVENTION
  • People have watched video content on televisions, big screens and other audio-visual devices for decades. They have also used gaming systems, personal computers, handheld devices, and other devices to enjoy interactive content. They often have questions about places, people and things appearing as the video content is being displayed, and about the music they hear. Databases containing information about the video content, such as the actors in a scene or the music being played, already exist and provide users with the ability to learn more. The problem is that the information is not quickly or easily retrievable and is not tied or synchronized with the video content for easy retrieval.
  • The existing database solutions provide information about elements appearing in a movie or scene, but only in a very general way. A person curious about a scene element can obtain information about the scene (e.g., video programming) and hope that the information mentions the scene element in which the person is interested. Systems and methods that provide people with the ability to identify or locate information about a scene and then select a specific scene element to obtain information about only that element are needed.
  • BRIEF SUMMARY
  • The following summary is provided to facilitate an understanding of some of the innovative features unique to the disclosed embodiment and is not intended to be a full description. A full appreciation of the various aspects of the embodiments disclosed herein can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
  • It is, therefore, one aspect of the disclosed embodiments to provide for methods, systems and applications for supporting the retrieval of information about a scene (e.g., live or recorded video programming) being displayed on a display screen without disturbing the scene as it is being rendered on the display screen.
  • It is another aspect of the disclosed embodiments to provide for methods, systems and processor-readable media enabling the display of supplemental data to wireless hand held devices and/or multimedia displays with respect to content/media of interest once determined from a scene being displayed on a display screen.
  • The aforementioned aspects and other objectives and advantages can now be achieved as described herein. Methods, systems and processor-readable media are disclosed for supporting scene identification, scene related information retrieval, specific element information retrieval based on an identified scene, bidirectional communications and data sharing. A method can include, for example, registering at least one wireless hand held device with a server, service or controller providing background information about a scene and/or associated with at least one multimedia display.
  • In accordance with additional features, a program can operate on a mobile device that enables the mobile device with a digital camera to capture a picture of a scene, video or a scene from video, rendering (being displayed) on a display screen (e.g., a flat panel display), determining the identity of the scene or video by comparison of the captured picture with images of programming stored in a database, and providing the mobile device with access to data related to the identified scene that can include information about the scene (title, date, location, actors/characters, product placement, history, statistics) for review on or through the mobile device. The database can be associated with a remote server that can be accessed over a data communications network (wired or wirelessly) by the mobile device. Providing supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held mobile device based on a selection (capture) of the data rendering on the at least one multimedia display can require registration of the wireless hand held mobile device with the remote server (e.g., a service provided remotely to registered users).
  • In other embodiments, a step can be implemented for obtaining supplemental information from an event in the form of at least one of a movie, television program, recorded event, a live event, sporting event, or a multimedia game. In yet other embodiments, a step can be implemented for manipulating via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device. In still other embodiments, a step can be implemented for storing the supplemental data in a memory in response to a user input via the at least one wireless hand held device. In other embodiments, a step can be implemented for pushing the supplemental data for rendering via at least one other multimedia display from the wireless handheld mobile device. In yet other embodiments, a step can be implemented for pushing the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display. In still other embodiments, a step can be provided for filtering the supplemental data for rendering via a multimedia display.
  • In another embodiment a system can be implemented for supporting scene identification and related data retrieval with a mobile device that can include any of an integrated digital camera, a processor, wireless data communications, and a location determination module (e.g., GPS). The mobile device can be configured for selecting at least one profile icon for use as a cursor during interaction of the at least one wireless hand held device with the at least one multimedia display during rendering of an event as data on the at least one multimedia display; and providing supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held device based on a selection of the data rendered on the at least one multimedia display marked by the cursor utilizing the at least one wireless hand held device.
  • In another embodiment, such instructions can be further configured for providing the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game. In yet another embodiment, such instructions can be further configured for manipulating via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device. In still other embodiments, such instructions can be further configured for storing the supplemental data in a memory in response to a user input via the at least one wireless hand held device. In yet another embodiment, such instructions can be configured for pushing the supplemental data for rendering via at least one other multimedia display from a remote server via the mobile device or directly from the server to the at least one other multimedia display over a data network. In other embodiments, such features can be further configured for pushing the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display. In other embodiments, such instructions can be further configured for filtering the supplemental data for rendering via a multimedia display.
  • In another embodiment, a processor-readable medium in a mobile device can be implemented for storing code representing instructions or an application to cause a processor on the mobile device to facilitate image capturing, image matching via a remote server, access to and retrieval of supplemental information related to programming matching the captured image, and rendering of the supplemental data on the mobile device or via a secondary display screen. To perform processes the mobile device can support bidirectional communications and data sharing.
  • In yet another embodiment, remote servers can include code to register at least one wireless hand held device with the remote server to gain access to image-program matching capabilities and access supplemental information related to programming that matches images of events rendering on multimedia display screens located near (within image capture and/or WIFI range of) the mobile devices. A profile can be associated with a registered mobile device/user.
  • In yet another embodiment, location information from a mobile device and/or profile information can be utilized by a remote server to locate at least one additional remote server to support supplemental data access by the mobile device. For example, an additional remote server located closer to the mobile device can reduce transmission inefficiencies associated with data networks and long distances. Furthermore, an additional remote server can be selected based on a profile associated with the registered user/mobile device so that information can be provided in a different format pre-selected by or related to the user/mobile device, such as a different language (e.g., in French or Spanish, instead of English).
  • In other embodiments, such code can further comprise code to provide the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game. In still other embodiments, such code can comprise code to manipulate, via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device. In yet other embodiments, such code can comprise code to store the supplemental data in a memory in response to a user input via the at least one wireless hand held device. In yet another embodiment, such code can comprise code to push the supplemental data for rendering via at least one other multimedia display. In still other embodiments, such code can comprise code to push the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display. In other embodiments, such code can comprise code to filter the supplemental data for rendering via a multimedia display.
  • In other embodiments, a method can be implemented for displaying additional information about a displayed point of interest. Such a method may include selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region, and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • In other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device. In still other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a gyro-controlled pointing device. In yet other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a laser pointer.
  • In some embodiments, the mobile device can be, for example, a Smartphone. In other embodiments, the mobile device can be, for example, a pad computing device (e.g., an iPad, an Android tablet computing device, a Kindle (Amazon) device, etc.). In still other embodiments, the mobile device can be a remote gaming device. In yet other embodiments, a step or operation can be implemented for synchronizing the display with the secondary display through a network. In other embodiments, the aforementioned network can be, for example, a wireless network (e.g., Wi-Fi), a cellular communications network, the Internet, etc.
  • In another embodiment, a system can be implemented for displaying additional information about a displayed point of interest. Such a system can include, for example, a memory; and a processor in communication with the memory, wherein the system is configured to perform a method, wherein such a method comprises, for example, selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region; and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • In an alternative system, for example, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device. In yet another system, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a gyro-controlled pointing device. In still other system embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame being rendered on a touch-sensitive display screen associated with the mobile device for selection via touch sensitivity of the touch-sensitive display by the mobile device user.
  • In some system embodiments, the mobile device may be, for example, a Smartphone. In still other system embodiments, the mobile device can be a tablet or pad computing device. In yet other system embodiments, the mobile device may be, for example, a remote gaming device. In still other system embodiments, the aforementioned method can include synchronizing the mobile device with a secondary display through a network.
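  • As a rough, non-authoritative sketch of the method above, the fragment below resolves a region selected within a particular frame to its point of interest and "displays" the additional information on a secondary display (here simply a callable). The frame number, region identifier, and annotation table are invented for illustration.

```python
# Hypothetical sketch: selecting a region within a particular frame and showing
# the additional information about its point of interest on a secondary display.
ANNOTATIONS = {
    # (frame_number, region_id) -> additional information about the point of interest
    (1207, "upper_left"): "Bletch Cola -- sponsor page, coupon available",
}

def select_region(frame_number, region_id, secondary_display):
    info = ANNOTATIONS.get((frame_number, region_id))
    if info is not None:
        secondary_display(info)    # render on the secondary display
    return info

select_region(1207, "upper_left", secondary_display=print)
```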
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying figures, in which like reference numerals refer to identical or functionally-similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate the present invention and, together with the detailed description herein, serve to explain the principles of the disclosed embodiments.
  • FIG. 1 illustrates a high level diagram of portions of a system including a wireless Hand Held Device (HHD) visually near a multimedia display screen, which can capture images of video programming being presented on the display screen and/or be implemented as an active pointer, in accordance with the disclosed embodiments;
  • FIG. 2 illustrates two cursors being displayed over selectable and non-selectable screen areas, in accordance with the disclosed embodiments;
  • FIG. 3 illustrates a system that includes at least one HHD and at least one multimedia display, in accordance with the disclosed embodiments;
  • FIG. 4 illustrates a system that includes a user accessing video information, video annotation infrastructure, and/or video programming identification with an HHD having a video camera and adapted to capture a photographic image of video programming being displayed on the display screen, and/or alternatively via an augmented reality device, in accordance with the disclosed embodiments.
  • FIG. 5 illustrates an annotated group video of a user's social contacts, in accordance with the disclosed embodiments;
  • FIG. 6 illustrates a system which includes differences between cursor control and element identification, in accordance with the disclosed embodiments;
  • FIG. 7 illustrates a system that depicts additional processes and methods of cursor selection, in accordance with the disclosed embodiments;
  • FIG. 8 illustrates a graphical view of a system that includes a multimedia display with an annotated element, in accordance with the disclosed embodiments;
  • FIG. 9 illustrates a system utilizing annotated video in a setting having multiple independent users viewing a single display such as a movie screen, in accordance with the disclosed embodiments;
  • FIG. 10 illustrates a system similar to the system of FIG. 9, but adapted for sports venues, music venues, theatrical events, and other events (e.g., political conventions, trade shows, etc.), in accordance with the disclosed embodiments;
  • FIG. 11 illustrates a system, in accordance with the disclosed embodiments;
  • FIG. 12 illustrates a system depicting a Sync Data Request that can be sent to an identification service, in accordance with an embodiment;
  • FIG. 13 illustrates a system that includes a Sync Data Response that an identification service can send in response to a sync data request, in accordance with an embodiment;
  • FIG. 14 illustrates a system for maintaining unique identifiers for elements that appear in videos and in using those unique identifiers for scattering and gathering user requests and responses, in accordance with the disclosed embodiments.
  • FIG. 15 illustrates a system generally including a multimedia display associated with a controller and which communicates with one or more wireless HHD's, in accordance with the disclosed embodiments.
  • FIG. 16 illustrates a system that generally includes a multimedia display and a secondary multimedia display, in accordance with the disclosed embodiments.
  • FIG. 17 illustrates a system in which an HHD can communicate wirelessly via a data network (e.g., the Internet) with a remote server associated with a database for providing at least one of: identification based on a captured image provided from a HHD of event or video programming rendering on a multimedia display screen, for obtaining supplemental data associated with the event or video programming by the HHD once the event or video programming is identified, and storage of the supplemental data in a memory or database for later retrieval, in accordance with the disclosed embodiments.
  • FIG. 18 illustrates a system in which multiple HHD's can communicate with the controller and/or the multimedia displays, and also illustrates that additional multimedia devices such as flat panel display screens can be provided for rendering supplemental data obtained by the HHD from a remote server based on the matching of an image captured by a camera integrated with the HHD and matched by the remote server, in accordance with the disclosed embodiments;
  • FIG. 19 illustrates a system in which multiple HHD's (or a single HHD in some cases) can communicate via a data network 520 with a remote server and database associated with and/or in communication with the server, in accordance with the disclosed embodiments;
  • FIG. 20 illustrates a graphical view of a system that includes a multimedia display and one or more wireless HHD's, etc., in accordance with the disclosed embodiments;
  • FIG. 21 illustrates a flow chart depicting logical operational steps of a method for registering and associating an HHD with a display screen and displaying supplemental data in accordance with the disclosed embodiments;
  • FIG. 22 illustrates a flow chart depicting logical operational steps of a method for transmitting, storing and retrieving supplemental data and registering an HHD with a secondary screen, in accordance with the disclosed embodiments;
  • FIG. 23 illustrates a flow chart depicting logical operational steps of a method for designating and displaying profile icons for selection of content/media of interest and supplemental data thereof, in accordance with the disclosed embodiments;
  • FIG. 24 illustrates a flow chart depicting logical operation steps of a method for capturing with an HHD camera an image of video programming being displayed on a display screen, providing the image to a remote server for comparison to and matching with images of programming stored in a database, identifying a match and the availability of supplemental data to the HHD from the remote server, and enabling HHD access to the supplemental data including elements of interest associated or tagged in the image of video programming captured by the HHD;
  • FIG. 25 illustrates a schematic diagram of a set that includes a few elements such as actors and some props positioned in front of a “green screen,” in accordance with an example embodiment;
  • FIG. 26 illustrates a schematic diagram of an example wherein the scene of FIG. 25 is imaged to produce a Final view in which a background view has been composited with a foreground view, in accordance with an example embodiment;
  • FIG. 27 illustrates a schematic diagram of a view changing because the camera moves while imaging a scene, in accordance with an example embodiment;
  • FIG. 28 illustrates a schematic diagram of a scenario wherein a user can select an item by selecting a point within the final view (i.e., a spot on the screen), in accordance with an example embodiment; and
  • FIG. 29 illustrates a block diagram of a system that includes an annotation database accessible via an annotation service, in accordance with an example embodiment.
  • DETAILED DESCRIPTION
  • The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof.
  • The embodiments now will be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments are shown. The embodiments disclosed herein can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosed embodiments. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which disclosed embodiments belong. It will be further understood that terms such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
  • As will be appreciated by one skilled in the art, the present invention can be embodied as a method, system, and/or a processor-readable medium. Accordingly, the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all generally referred to herein as a “circuit” or “module.” Furthermore, the embodiments may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer-readable medium or processor-readable medium may be utilized including, for example, but not limited to, hard disks, USB Flash Drives, DVDs, CD-ROMs, optical storage devices, magnetic storage devices, etc.
  • Computer program code for carrying out operations of the disclosed embodiments may be written in an object oriented programming language (e.g., Java, C++, etc.). The computer program code, however, for carrying out operations of the disclosed embodiments may also be written in conventional procedural programming languages such as the “C” programming language, HTML, XML, etc., or in a visually oriented programming environment such as, for example, Visual Basic.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer. In the latter scenario, the remote computer may be connected to a user's computer through a local area network (LAN), a wide area network (WAN), or a wireless data network (e.g., Wi-Fi, WiMax, 802.xx, or a cellular network), or the connection may be made to an external computer via most third party supported networks (for example, through the Internet using an Internet Service Provider).
  • The disclosed embodiments are described in part below with reference to flowchart illustrations and/or block diagrams of methods, systems, computer program products, and data structures according to embodiments of the invention. It will be understood that each block of the illustrations, and combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block or blocks.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
  • The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
  • Note that the instructions described herein such as, for example, the operations/instructions and steps discussed herein, and any other processes described herein can be implemented in the context of hardware and/or software. In the context of software, such operations/instructions of the methods described herein can be implemented as, for example, computer-executable instructions such as program modules being executed by a single computer or a group of computers or other processors and processing devices. In most instances, a “module” constitutes a software application.
  • Generally, program modules include, but are not limited to, routines, subroutines, software applications, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and instructions. Moreover, those skilled in the art will appreciate that the disclosed method and system may be practiced with other computer system configurations such as for example, hand-held devices, multi-processor systems, data networks, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, tablet computers (e.g., iPad and other “Pad” computing device), remote control devices, wireless hand held devices, Smartphones, mainframe computers, servers, and the like.
  • Note that the term module as utilized herein may refer to a collection of routines and data structures that perform a particular task or implement a particular abstract data type. Modules may be composed of two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines, and an implementation, which is typically private (accessible only to that module) and which includes source code that actually implements the routines in the module. The term module may also simply refer to an application such as a computer program designed to assist in the performance of a specific task such as word processing, accounting, inventory management, etc. Additionally, the term “module” can also refer in some instances to a hardware component such as a computer chip or other hardware.
  • It will be understood that the circuits and other means supported by each block and combinations of blocks can be implemented by special purpose hardware, software or firmware operating on special or general-purpose data processors, or combinations thereof. It should also be noted that, in some alternative implementations, the operations noted in the blocks can occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order. Moreover, the varying embodiments described herein can be combined with one another, or portions of such embodiments can be combined with portions of other embodiments in another embodiment.
  • FIG. 1 illustrates a high level diagram of portions of a system 100 including a wireless Hand Held Device (HHD) 120, located visually near a multimedia display screen, which can capture images of video programming being presented on the display screen and/or be implemented as an active pointer, in accordance with the disclosed embodiments. Note that the HHD 120 may be, for example, a wireless hand held device such as, for example, a Smartphone, cellular telephone, a remote control device such as a video game control device, a television/set-top box remote control, a tablet computing device (e.g., “iPad”), and so forth. An active pointer is one that emits a signal. Remote controls emitting an ultraviolet signal, a flashlight, or a laser pointer are examples of active pointers. Thus an active pointing application can be incorporated into an HHD such as HHD 120 and can also communicate via wireless bidirectional communications with other devices, such as, for example, the multimedia display 121 shown in FIG. 1. Such a multimedia display can be, for example, a flat panel television or monitor or other display screen (e.g., a movie theater display screen), which in turn can be, in some scenarios, associated with a controller or set-top box, etc., or can integrate such features therein. As shown in FIG. 1, a user 122 can hold the HHD 120, which in turn can emit and receive wireless signals. The multimedia display device 121 (i.e., display device) generally includes a multimedia display area or multimedia display 126. The HHD 120 can also include an integrated digital camera and wireless data communications, which are features common in, for example, smartphone devices and tablets that are in wide use today. Configured in this manner, the HHD can capture an image (e.g., a photograph) of video programming as it is rendering on the multimedia display 126.
  • When connecting and engaging in bidirectional communication with the multimedia display, information can be embedded in the emitted signal, such as identification information that identifies the pointer or HHD 120 or that identifies the user 122. The user 122 can be recognized by the HHD 120 or pointer in a variety of ways, such as by requiring a passcode tapped out on one or more buttons, a unique gesture detected by accelerometers, or biometric data such as a fingerprint, retinal pattern, or one of the body's naturally occurring and easily measurable signals, such as a heartbeat, that has been shown to often contain unique traits. One datum often encoded in the emitted signal is the state of a selection button. One very simple case would involve the use of a laser pointer as the HHD 120 that is simply on or off. An advantage of embedding a unique identifier in the emitted signal is that multiple pointers can be used simultaneously.
  • A pointer detector 114 can read and track the emitted signal. The pointer detector 114 typically has multiple sensors and can determine where the pointer or HHD 120 is pointing. In many cases, the pointer detector 114 and the multimedia display 126 are separate units that can be located near one another. A calibration routine can be utilized so that the pointer detector 114 can accurately determine where on the display 126 the pointer is aimed. The output of the pointer detector 114 can include the pointer's aim point and the status of the selection button or other selection actuator.
  • The multimedia display device 121 and hence the multimedia display 126 present video data such as a movie, television show, or sporting event to the user 122. The video data can be annotated by associating it with annotation selection data. The annotation selection data 110 specifies the times and screen locations at which certain scene elements, each having an element identifier 116, are displayed. The annotation selection data 110 can be included with the video data as a data channel similar to the red, green, blue channels present in some video formats, can be encoded into blank spaces in an NTSC type signal, can be separately transmitted as a premium service, can be downloaded and stored, or another technique by which the user has access to both the video data and the annotation data can be used.
  • It can be an important aspect that the annotation data and the video data be time synchronized because selectable scene elements can move around on the display. Thus, synchronization data 106 can be utilized to ensure that the video data and annotation selection data 110 are in sync. One example of synchronization data is the elapsed time from the video start until the current frame is displayed.
  • The annotation data can specify selectable zones. A simple image can have selectable areas. A video, however, is a timed sequence of images. As such, a selectable zone is a combination of selectable areas and the time periods during which the areas are selectable. A selectable zone has coordinates in both space and time and can be specified as a series of discrete coordinates, a series of temporal extents (time periods) and areal extents (bounding polygons or closed curves), as areal extents that change as a function of time, etc. Areal extents can change as a function of time when a selectable area moves on the display, changes size during a scene, or changes shape during a scene.
  • For brevity, the terms “selectable zone” and “cursor coordinate” will be used in this disclosure as having both spatial and temporal coordinates. As discussed above, a selectable zone can be specified in terms of on-screen extents and time periods. Similarly, a cursor coordinate combines the aim point and a media time stamp. Annotated scene elements can be specified as selectable zones associated with one or more element identifiers, such as, for example, element identifier 116.
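  • One possible (purely illustrative) representation of such a selectable zone, and of the hit test against a cursor coordinate having both a normalized screen position and a media time stamp, is sketched below; the class, field names, and sample values are hypothetical.

```python
# Illustrative data structure only; names, fields, and values are hypothetical.
from dataclasses import dataclass

@dataclass
class SelectableZone:
    element_id: str    # e.g., the identifier carried by element identifier 116
    t_start: float     # media time stamp (seconds) when the zone becomes selectable
    t_end: float       # media time stamp when the zone stops being selectable
    x0: float          # areal extent, expressed as a normalized bounding box
    y0: float
    x1: float
    y1: float

    def contains(self, x, y, t):
        """True when the cursor coordinate (space and time) falls inside the zone."""
        return (self.t_start <= t <= self.t_end and
                self.x0 <= x <= self.x1 and
                self.y0 <= y <= self.y1)

def zone_at(zones, x, y, t):
    """Return the first annotated zone under the cursor coordinate, if any."""
    return next((z for z in zones if z.contains(x, y, t)), None)

zones = [SelectableZone("bletch-cola-can", 61.0, 73.5, 0.55, 0.30, 0.70, 0.60)]
print(zone_at(zones, 0.60, 0.40, 65.2))   # inside the zone -> "selectable" cursor
print(zone_at(zones, 0.10, 0.10, 65.2))   # outside any zone -> default cursor
```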
  • U.S. Pub No.: 20080066129 A1 submitted by Katcher et al., titled: “Method and Apparatus for Interaction with Hyperlinks in a Television Broadcast” was filed Nov. 8, 2007 and is herein incorporated by reference in its entirety. U.S. Pub No.: 20080066129 A1 teaches and discloses the annotation of video, video display systems, video data systems, databases, and viewer interaction with annotated video to obtain information about on-screen items. It is for its teachings of video, video display systems, video data systems, databases, viewer interaction with annotated video, and viewer obtainment of information about on-screen items that U.S. Pub No.: 20080066129 A1 is herein incorporated by reference in its entirety.
  • U.S. Pub No.: 20100154007 A1 submitted by Touboul et al., titled: “Embedded Video Advertising Method and System” was filed Apr. 21, 2009 and is herein incorporated by reference in its entirety. U.S. Pub No.: 20100154007 A1 teaches and discloses the annotation of video, video display systems, video data systems, databases, and viewer interaction with annotated video to obtain information about on-screen items. It is for its teachings of video, video display systems, video data systems, databases, viewer interaction with annotated video, and viewer obtainment of information about on-screen items that U.S. Pub No.: 20100154007 A1 is herein incorporated by reference in its entirety.
  • As also shown in FIG. 1, a cursor control module 108 can change the appearance of a displayed cursor 124 based on the cursor coordinate. The aim point can be passed to the cursor control module 108 that also receives synchronization data. The synchronization data can indicate the displayed scene, elapsed time within a scene, elapsed time from video start, the media time stamp, or other data that helps determine the media time stamp. The cursor coordinate can be determined from the aim point and the media time stamp. The cursor control module 108 can examine the annotation selection data for the cursor coordinate to determine if the pointer is aimed at an annotated scene element. In terms of selectable zones, the cursor coordinate lies within a selectable zone or it doesn't. If it does, the cursor control module 108 can cause a “selectable” cursor to be displayed at the aim point. Otherwise, a different cursor can be displayed.
  • The cursor control module 108 can communicate with a media device 104, which in turn can communicate with the display device 121. The media device 104 also receives data from the annotated video data 102 and then sends data as, for example, synchronization data 106 to the cursor control module 108. The pointer detector 114 can transmit, for example, pointer aim point and/or selection aimpoint information 112 to the cursor control module 108. Additionally annotation selection data 110 can be provided to the cursor control module 108 and also to one or more element identifiers, such as, for example, the element identifier 116, which in turn can send information/data to a video annotation infrastructure.
  • Rather than, or in addition to, engaging in bi-directional communication with a multimedia display to retrieve data about a scene or determine the identity of the scene, the HHD 120 can capture an image of the scene (e.g., video programming) as it is being rendered (displayed) on the display device 121. The image can be captured with a digital camera commonly found integrated in HHDs. The captured image can then be used by the HHD to determine the identity of the video programming and obtain additional information about it by communicating with a remote server over a data communication network using wireless communication capabilities also commonly found in HHDs. Additional teachings of video capture, programming identification, and access to related data are further discussed with respect to FIG. 4 below.
  • FIG. 2 illustrates two cursors 125 and 127 being displayed over selectable and non-selectable screen areas, in accordance with the disclosed embodiments. The screen area occupied by the “Bletch Cola” image 123 is within a selectable zone and a solid ring cursor 127 is shown. Other areas are not inside selectable zones and an empty ring (i.e., cursor 125) is shown. Any easily distinguished set of cursors can be utilized because the reason for changing the cursor is to alert a user, such as user 122, that the screen element is selectable.
  • Returning to FIG. 1, aiming at a selectable scene element can result in a “selectable” cursor being displayed. The selectable element can be selected by actuating a trigger or button on the pointing device. Other possibilities include letting the cursor linger over the selectable element, awaiting a “select this” query to pop up and pointing at that, or performing a gesture with the pointing device/HHD 120.
  • The annotation data can include additional data associated with the selectable areas or selectable zones such as element identifiers, cursor identifiers, cursor bitmaps, and zone executable code. The zone executable code can be triggered when the cursor enters a selectable zone, leaves a selectable zone, lingers in a selectable zone, or when a selection type event occurs when the cursor coordinate is inside a selectable zone. The zone executable code can augment or replace default code that would otherwise be executed when the aforementioned events occur. For example, the default code for a selection event can cause an element identifier to be embedded in an element query that is then sent to a portion of the video annotation infrastructure. The zone executable code can augment the default code by sending tracking information to a market research firm, playing an audible sound, or some other action. In general, the additional data allows for customization based on the specific selectable zone.
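  • A minimal sketch of how zone executable code might augment default selection handling is given below; the registry layout and handler names are hypothetical and only hint at the behavior described above.

```python
# Hypothetical per-zone executable code augmenting the default selection handling.
def default_select(zone):
    print(f"sending element query for {zone['element_id']}")

def bletch_select(zone):
    default_select(zone)                                          # keep the default behavior...
    print("reporting selection to a market-research endpoint")    # ...and augment it

ZONES = {
    "bletch-cola-can": {"element_id": "bletch-cola-can", "on_select": bletch_select},
    "plain-prop":      {"element_id": "plain-prop"},    # no zone code: default applies
}

def dispatch(event, zone_name):
    zone = ZONES[zone_name]
    handler = zone.get("on_" + event, default_select if event == "select" else None)
    if handler:
        handler(zone)

dispatch("select", "bletch-cola-can")
dispatch("select", "plain-prop")
```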
  • FIG. 3 illustrates a system 300 that includes an HHD 120 and a multimedia display 126, in accordance with the disclosed embodiments. System 300 of FIG. 3 is similar to system 100 of FIG. 1 with some notable exceptions. System 100 of FIG. 1 includes “annotated video data” (i.e., annotated video data 102) wherein the annotation selection data 110 is distributed with the video data. In system 300 of FIG. 3, the annotation selection data 110 is obtained from some other source. As in FIG. 1, however, the media device 104 still provides the synchronization data 106 used for determining the time datum for the cursor coordinate. Another difference is that a passive pointing device such as a hand can be utilized. A device such as, for example, the Microsoft Kinect® can determine an aim point, selection gesture, and other actions by analyzing the user's stance, posture, body position, or movements. Of course, HHD 120 may be, as indicated earlier, a Smartphone, game control device, tablet or pad computing device, and so forth. It can be appreciated, for example, that the HHD 120 and the pointer detector 113 may be the same device, or associated with one another.
  • FIG. 3 illustrates an element identification module 115 examining the cursor coordinate and annotation data to determine an element identifier 116. The element identifier and some user preferences 140 can be combined to form a scene element query 142. The user preferences can include the user's privacy requests, location data, data from a previous query, or other information. Note that the privacy request can be governed by the law. For example, the US has statutes governing children's privacy and many nations have statutes governing the use and retention of anyone's private data.
  • The user's element query can pass to a query router 144. The query router 144 can send the query to a number of servers based on the element identifier 116 (e.g., element ID), the user's preferences, etc. Examples include: local servers that provide information tailored to the user's locale; product servers that provide information directly from manufacturers, distributors, or marketers; ad servers, wherein specific queries are routed to certain servers in return for remuneration; and social network servers that share the query or indicated interest amongst the user's friends or a select subset of those friends. The query router 144 can route data to, for example, one or more scene element databases 148, which in turn provide information to an information display module 150. The information display module 150 can assemble the query responses and display them to a user, such as user 122.
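  • A simplified routing sketch follows; the endpoint URLs, preference keys, and element identifier format are hypothetical and merely illustrate scattering a query to several servers.

```python
# Hypothetical sketch of a query router fanning out a scene element query.
def route_query(element_id, preferences):
    """Return the endpoints an element query would be scattered to."""
    targets = ["https://local.example.net/query"]              # locale-tailored results
    if preferences.get("allow_ads", True):
        targets.append("https://ads.example.net/query")        # sponsored responses
    if element_id.startswith("product:"):
        targets.append("https://products.example.net/query")   # manufacturer/distributor data
    if preferences.get("share_with_friends"):
        targets.append("https://social.example.net/query")     # share interest with contacts
    return targets

print(route_query("product:bletch-cola", {"share_with_friends": True}))
```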
  • Note that FIG. 3 also illustrates a secondary multimedia display 129 on which query results can be presented. One response is an advertisement, another is an offer for a coupon, and a third is an offer for online purchase. The secondary display 129 can be, for example, a cell phone, computer, tablet, television, or other web-connected device.
  • FIG. 4 illustrates a system 400 that includes a user 122 accessing video information, video annotation infrastructure, and/or video programming identification with an HHD 120 having a video camera and adapted to capture a photographic image of video programming being displayed on the display screen 121, and/or alternatively accessing video annotation infrastructure through an augmented reality device 161, in accordance with the disclosed embodiments. Here, the augmented reality (AR) device 161 can include a local display that the user 122 can observe, a video camera, and other devices for additional functionality. One example is that the display is a movie screen and the user 122 is one of many people watching a movie in a theater. The multimedia display device 121 in this example can include a movie screen and projector. The movie screen can include markers that help the AR device 161 determine the screen location and distance. The user 122 can watch the movie through the AR device 161 with the AR device 161 introducing a cursor into the user's view of the movie.
  • In the particular embodiment or example shown in FIG. 4, the AR device 161 can be one having a forward pointing camera and a display presenting what is in front of the AR device 161 to the user 122. In essence, users can view the world through their own eyes or can view an augmented version by viewing the world through the device. As seen in FIG. 4, the user 122 is aiming the camera at the display and is, oddly, watching the display on a secondary display 129 almost as if the AR device 161 is a transparent window with the exception that the secondary display 129 can also present annotation data, can zoom in or out, and can overlay various other visual elements within the user's view.
  • The AR device 161 can thus include a number of modules and features, such as, for example, a scene alignment module 160, pointer aimpoint and/or selection aimpoint 162, and a cursor control module 164. Other possible modules and/or features can include a synchronization module 166 and an element identification module 170, along with an annotation data receiver 172, annotation selection data 174, a user authentication module 178, user preferences 180, an element identifier 176, a user's scene element query 182, a query router 184, a local scene element database 186, and an information overlay module 188. The display 129 can be, in some examples, a local display. The information overlay module can also access, for example, data from a remote scene element database 190.
  • In the movie theater scenario discussed above, the theater can transmit synchronization data 168, for example, as a service to its customers, perhaps as a paid service or premium service. The AR device 161 can include a synchronization module 166 that obtains synchronization data 168 from some other source, perhaps by downloading video fingerprinting data or by processing signals embedded within the movie itself.
  • The scene alignment module 160 can locate the display position from one or more of the markers 153, 155, and 157, by simply observing the screen edges, or by some other means. The display position can be used to map points on the local display, such as the cursor, to points on the main display (movie screen). Combining the cursor position and the synchronization data gives the cursor coordinate. Annotation selection data can be downloaded into the AR device 161 as the movie displays, ahead of time, or on-demand. Examining the annotation selection data for the cursor coordinate can yield the element identifier 176.
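  • A greatly simplified mapping sketch appears below. It assumes the screen appears as an axis-aligned rectangle whose corners have already been located (e.g., from the markers or screen edges); a real implementation would likely use a perspective transform instead.

```python
# Simplified sketch: map a point seen on the AR device's local display into
# normalized coordinates on the main display (movie screen).
def map_to_screen(point, screen_corners):
    """Return normalized (0..1) main-screen coordinates, or None if off-screen."""
    (left, top), (right, bottom) = screen_corners      # detected via markers or edges
    x, y = point
    u = (x - left) / (right - left)
    v = (y - top) / (bottom - top)
    return (u, v) if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 else None

# Cursor drawn at pixel (410, 260); the screen spans (100, 80)-(700, 420) in the view.
print(map_to_screen((410, 260), ((100, 80), (700, 420))))
```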
  • The user authentication module 178 is depicted within the AR device 161. In practice, and as discussed in relation to system 100 of FIG. 1, any of the disclosed pointing scenarios can include the use of an authentication module such as user authentication module 178. In some embodiments, the passive pointer 113 of FIG. 2, for example, can include authentication by recognizing the user's face, movements, retina, or pre-selected authentication gesture. User authentication is useful because it provides for tying a particular person to a particular purchase, query, or action.
  • The query router 144 can send queries to various data servers, perhaps including a local data store within the AR device 161 itself. The query results are gathered and passed to an overlay module for presentment to the user 122 via the HHD 120 and/or via the multimedia display 126 and/or the multimedia display 129. The local display 129 on the AR device 161 shows the cursor 127 and the query results overlaid on top of the displayed image of the movie screen. Here, a movie is used as an example, whereas the display the user is viewing through the AR device 161 can be any display, such as a television in a home, restaurant, or other social venue.
  • An HHD 120 including a camera, such as a smartphone or tablet, can be used instead of an augmented reality device (although the AR device features and functions could be carried out by a smartphone) to capture an image of the scene being displayed on the display device 121 for the purpose of identifying the scene and obtaining additional information related to the scene, such as elements of the scene described above. Rather than integrating with or communicating wirelessly with the local infrastructure to accomplish identification and data retrieval, however, the HHD 120 can communicate with a remote server over a data communication network and provide the captured image of the scene (video programming displayed on the multimedia display 121), where the scene can be identified by matching the captured image to images of video programming stored in a database, e.g., remote scene element database 190, associated with the remote server. The server can then notify the HHD 120 of the availability of data related to the scene (e.g., information regarding scene elements) if the server can match the scene and identify it. Scene elements can be selected on a touch-sensitive display screen commonly included with HHDs by selection of the area or element of interest within the scene, which can then enable the server to provide additional information about the selected area/scene element. The server and database can be provided in the form of a service where registered users can determine the identity of a scene (e.g., a captured image of live or recorded video programming) being displayed on any screen, utilizing a captured image to match, identify, and provide data about or related to the scene. Databases would require continuous updating in the event programs are live. Live scenes (e.g., live football or baseball games) would not likely be tagged or would have limited scene selection capabilities, whereas recorded programs (e.g., movies) would be updated as scene tagging is updated. In either case, additional data can be collected and provided to users. Additionally, advertising content can be provided before, during, or after the provision of scene-related data, thereby enabling revenue generation should the service be free to end users. An application (“App”) supporting this aspect of the embodiments can be downloaded from a server supporting the HHD (e.g., iOS, Android stores).
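  • The exchange between the HHD and the remote server can be sketched, in simplified form, as below. The server side is simulated in-process, a content hash stands in for real image matching, and every field and value is hypothetical.

```python
# Protocol sketch only: capture an image of the displayed programming, have a
# remote service match it, then fetch supplemental data if a match is reported.
import hashlib

def image_key(image_bytes):
    """Toy stand-in for server-side image matching: a short content hash."""
    return hashlib.sha256(image_bytes).hexdigest()[:8]

KNOWN_FRAME = b"pixels-of-a-frame-from-a-known-movie"
PROGRAM_DATABASE = {                      # stands in for the remote image database
    image_key(KNOWN_FRAME): {"program": "Example Movie",
                             "supplemental": "cast, props, offers"},
}

def server_identify(image_bytes):
    """Simulated remote server: match the uploaded image and report data availability."""
    entry = PROGRAM_DATABASE.get(image_key(image_bytes))
    if entry is None:
        return {"matched": False}
    return {"matched": True, "program": entry["program"], "data_available": True}

def hhd_request_supplemental(image_bytes):
    """HHD side: upload the captured image, then fetch supplemental data if available."""
    response = server_identify(image_bytes)
    if response.get("data_available"):
        return PROGRAM_DATABASE[image_key(image_bytes)]["supplemental"]
    return None

print(hhd_request_supplemental(KNOWN_FRAME))                  # -> "cast, props, offers"
print(hhd_request_supplemental(b"frame-of-an-unknown-show"))  # -> None
```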
  • FIG. 5 illustrates an annotated group video 500 of a user's social contacts, in accordance with the disclosed embodiments. Automated facial recognition applications already exist for annotating still images; this video version combines the selectable areas of single images into the selectable zones of annotated video. In this application, selection of a scene element equates to selecting a person. The selection event can trigger bringing up the person's contact information and personal notes, and can even initiate contact through telephone, video chat, text message, or some other means. Additionally, lingering the cursor (or a sensed finger over a touch-sensitive screen) over a face can bring up that person's information.
  • An interesting variation is that crowd videos or images can be artificially created. For example, an image of a friend can be isolated within a picture or video to create content with only that one person. The content from a set of friends or contacts can be combined to form a virtual crowd scene. This, combined with today's pinch-to-zoom type technology for touch screens, yields an interesting interface to a contacts database. Selecting a person with the cursor or by tapping a touch screen overlying the display can automatically open a video chat window such as the “Vikram” window, which can have Vikram's image and an indication that Vikram hasn't responded. Examples of a “not-connected” or “pending” indicator can include a dimmed or greyed window, a thin border frame, and an overlaid pattern such as the “connection pending” text shown or a connection pending icon. An established connection can be indicated by a different means such as the thick border frame shown in FIG. 5.
  • FIG. 6 illustrates a system 600, which includes differences between cursor control and element identification, in accordance with the disclosed embodiments. System 600 includes, for example, a pointer detection module 197, which can send data to the cursor control module 108, which in turn can provide data 192 indicative of cursor style and location, which can be provided to, for example, a display device such as display device 121 or even secondary displays such as multimedia display 129. Annotation selection data 110 can also be provided to the element identification module 115. As shown in FIG. 6, the pointer detection module 197 can determine a cursor coordinate. As already discussed, a cursor coordinate can include both a location on a display screen and a time stamp for synchronizing the time and position of the cursor with the time-varying video image. The annotation data 196 can include selection zones such that the cursor control module determines if the cursor coordinate lies within a selection zone and determines the cursor style and location to present on the display device. Note that this example assumes that only default “selectable” and “not-selectable” styles are available.
  • Element identification requires slightly more information, and the figure indicates this by showing “annotation selection data” that can be the selection data augmented with element identifiers associated with each selectable zone. The element identification module receives data from a selection event that includes the time stamp and the cursor display location at the moment the selection was made. Recall that a selection can be made with an action as simple as clicking a button on a pointing device. The element identification module 115 can examine the annotation data 196 to determine which, if any, selectable zone was chosen and then determines the element ID associated with that selection zone. The element ID can then be employed for formulating a scene element query 194.
  • Note that the examples do not account for differently sized displays with different resolutions. Different display sizes can be compensated for by mapping all displays to a normalized size, such as a 1.0 by 1.0 square, and similarly maintaining the selection zones in that normalized coordinate space. The aim point can be translated into normalized space to determine if the cursor location lies within a selectable zone. For example, a screen can be 800 pixels wide and 600 pixels tall. The pixel at screen location (80, 120) would have normalized location (80/800, 120/600) = (0.1, 0.2). This normalization example is intended to be non-limiting because a wide variety of normalization mappings and schemes are functionally equivalent.
  • Also note that the examples do not account for windowed display environments wherein a graphical user interface can present a number of display windows on a display device. Windowed display environments generally include directives for mapping a cursor location on a display into a cursor location in a display window. If a video is played within a display window then the cursor location or aim point must be mapped into “window coordinates” before the cursor coordinate can be determined. Furthermore, different window sizes can be compensated for by mapping all displays to a normalized size as discussed above and similarly maintaining the selection zones in that normalized coordinate space.
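  • The two compensations just described can be sketched as follows; the window geometry in the example is invented for illustration.

```python
# Sketch of the coordinate normalization and window mapping described above.
def normalize(x_px, y_px, screen_w, screen_h):
    """Map a pixel location into the normalized 1.0-by-1.0 coordinate space."""
    return x_px / screen_w, y_px / screen_h

def to_window(x_px, y_px, win_x, win_y, win_w, win_h):
    """Map a display-pixel cursor location into normalized window coordinates."""
    return (x_px - win_x) / win_w, (y_px - win_y) / win_h

print(normalize(80, 120, 800, 600))              # -> (0.1, 0.2), as in the example above
print(to_window(480, 340, 400, 300, 320, 240))   # cursor inside a 320x240 video window
```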
  • FIG. 7 illustrates a system 700 that depicts additional processes and methods of cursor selection, in accordance with the disclosed embodiments. As indicated at block 202 in FIG. 7, annotation data can include, for example, a cursor image, template, and/or executable code data. Such information can be provided to a pointer detection module 204, which in turn can generate cursor style data 212, including, for example, cursor style and location information 210, which in turn is transmitted to a display device, such as, for example, multimedia display device 121 and/or other multimedia display devices. The pointer detection module 204 also provides for cursor position data and/or selection data, as shown at block 206. Such data can be provided to, for example, a cursor style selection module 214, which can communicate with and retrieve data from, for example, a cursor style database 218. Such a database 218 may be, for example, a remote database accessed over, for example, a data network such as the Internet. The data shown at block 206 can also be provided to an element identification module 208, which in turn can generate a scene element query. Note that FIG. 7 is about having different on-screen cursors based on what selectable element is under the cursor. Content developers can choose, for example, the cursor, and an updateable database allows the selection to be altered over time.
  • Cursor styles and actions can be dependent on the selectable or non-selectable zone containing the cursor. A cursor style is the cursor's appearance. Most users are currently familiar with cursor styles such as arrows, insertion points, hands with a pointing finger, Xes, crossed lines, dots, bull's eyes, and others. An example of an insertion point is the blinking vertical bar often used within text editors. The blinking insertion point can be created in a variety of ways including an animated GIF or executable computer code. An animated GIF is a series of images that are displayed sequentially one after the other in a continuous loop. The blinking cursor example can be made with two pictures, one with a vertical bar and the next without. Another property of cursors is that they often have transparent pixels as well as displayed pixels. That is why users can often see part of what is displayed under the cursor. The insertion point can alternatively be created with a small piece of executable code that draws the vertical bar and then erases it in a continuous loop.
  • The specific cursor to be displayed within a selectable zone can be stored within the annotation data, can be stored within a media device or display device, or can be obtained from a remote server. In one example, the cursor enters a selectable zone and thereby triggers a cursor style selection module to query a cursor style database to obtain the cursor to be displayed. In another example, the annotation data can contain a directive that the cursor style can be obtained from a local database. The local database can return a directive that the cursor can be obtained from a remote database. The chain of requests can continue until eventually an actual displayable cursor is returned or until an error occurs such as a bad database address, a timeout, or another error. On error, a default cursor can be used. Cursor styles that are successfully fetched can be locally cached.
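  • A possible resolution chain, with local caching and a default fallback, is sketched below; the dictionaries stand in for the annotation data, local database, and remote database, and all keys and values are hypothetical.

```python
# Hypothetical cursor-style resolution: follow directives until a displayable
# cursor is returned, cache successes, and fall back to a default on error.
DEFAULT_CURSOR = "arrow"
LOCAL_DB = {"ring-cursor": ("redirect", "remote:ring-cursor")}
REMOTE_DB = {"remote:ring-cursor": ("cursor", "solid-ring.gif")}
_cache = {}

def resolve_cursor(directive, max_hops=4):
    if directive in _cache:
        return _cache[directive]
    key, hops = directive, 0
    while hops < max_hops:
        kind, value = LOCAL_DB.get(key) or REMOTE_DB.get(key) or ("error", None)
        if kind == "cursor":
            _cache[directive] = value       # successfully fetched styles are cached
            return value
        if kind == "error":
            break
        key, hops = value, hops + 1         # follow the redirect to the next store
    return DEFAULT_CURSOR                   # bad address, timeout, loop, or other error

print(resolve_cursor("ring-cursor"))   # -> "solid-ring.gif"
print(resolve_cursor("missing"))       # -> "arrow"
```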
  • Note that the infrastructure for obtaining a cursor style has many components that are identical or similar to those for obtaining element data. In many systems, moving the cursor into a selectable area results in a “cursor entry” event that triggers the various actions required for changing the cursor appearance. A similar event, “cursor exited” can be triggered when the cursor leaves the selectable zone. Notice that with a simple image display these events only occur when the cursor moves into and out of a selectable area. Video data, having a time axis as well as the other axes, can have selectable zones that appear, move, and disappear as the video is displayed. As such, the cursor entered and cursor exited events can occur without cursor movement.
  • Another event is the “hover event” that can occur when the cursor remains within a selectable zone for a set period of time. A hover event can cause a change within or near the selectable zone. One example is that a small descriptive text box appears.
  • FIG. 8 illustrates a graphical view of a system 800 that includes a multimedia display 128 with an annotated element 220, in accordance with the disclosed embodiments. FIG. 8 thus depicts a change within the selectable area wherein a finger is detected within the selectable zone associated with a can of Bletch Cola. The video can be streamed to the user's display device or the user can be using a device such as that shown in FIG. 4. At first, an event such as the cursor entered event is triggered and as a result the Bletch Cola is highlighted as shown in the figure. If the finger remains within the zone, then the “Get Some” text box can be displayed in response to a “hover” type event that is triggered when the finger remains within the selectable zone for more than a preset time period. A selectable zone can be highlighted in a variety of ways, such as by brightening all the individual pixels (e.g., increasing the RGB color values), brightening pixels having certain values or properties, tinting pixels, or passing the pixels through an image filter. Image processing software such as Photoshop and the GIMP have a wide variety of image filters that can be applied to standard images. These filters can be applied to video data by applying them to each video frame. Furthermore, the selectable zone can define a filter mask for each frame so that the image filtering operations appear within the selectable zone and can even appear to move with the selectable zone.
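  • The brightening variant can be sketched as below; the frame is a tiny nested list of RGB tuples and the mask is invented, purely to illustrate applying a filter only inside the selectable zone.

```python
# Sketch: brighten only the pixels covered by the selectable zone's filter mask.
def brighten(frame, mask, gain=1.4):
    """Return a copy of the frame with masked pixels brightened (clamped at 255)."""
    out = []
    for y, row in enumerate(frame):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            if mask[y][x]:
                r, g, b = (min(255, int(c * gain)) for c in (r, g, b))
            out_row.append((r, g, b))
        out.append(out_row)
    return out

frame = [[(100, 100, 100), (200, 50, 25)],
         [(10, 10, 10),    (255, 255, 255)]]
mask  = [[False, True],
         [False, True]]     # the selectable zone covers the right-hand column
print(brighten(frame, mask))
```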
  • FIG. 9 illustrates a system 810 utilizing annotated video in a setting having multiple independent users viewing a single display such as a movie screen, in accordance with the disclosed embodiments. System 810 generally includes, for example, a movie theater infrastructure 240 and a movie screen display 242. Movie video data 246 and/or movie annotation data 248, and/or movie permission data 250 (which, e.g., can be limited to certain geographic areas and to certain time periods in some embodiments) can be provided to, for example, an HHD 120, which may be, for example, a Smartphone, a user's AR device, a pad computing device, etc. Coded ticket stubs (e.g., QR code) or an electronic ticket can have additional permissions data, as indicated at block 254, which also may be provided to the HHD 120.
  • An important aspect of this embodiment is that users do not interfere with each other's enjoyment of the movie unless invited to do so. The venue can display video on a large display while also providing movie annotation data to people watching the video through augmented reality devices as discussed above in reference to FIG. 3. The venue can also stream video data or annotated video data so that a user with a cell phone type device can view the large display or can view an augmented but smaller version on the cell phone. One of the issues that arises whenever users have cameras within any venue is piracy. For example, people can use augmented reality devices or cell phones to record a movie as they watch it.
  • Recording can be controlled by transmitting permission data to the user's device so that the user can record the movie and/or watch the recording only when located within a certain space or within a certain time period. Examples include allowing the recording to be viewed only within the theater or on the same day as the user viewed a movie. GPS or cell tower triangulation type techniques are often used to determine location. The user's ticket to the venue can contain further permission information so that only paying customers can make recordings. Furthermore, the user can purchase or otherwise obtain additional permissions to expand the zones or times for permissibly viewing the recording.
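  • One illustrative (and hypothetical) form such a permission check could take is sketched below; the venue coordinates, radius, and expiry time are made up.

```python
# Hypothetical playback-permission check limited to a space and a time period.
from datetime import datetime
import math

PERMISSION = {
    "center": (35.08, -106.65),                    # permitted zone center (lat, lon)
    "radius_km": 0.5,                              # e.g., roughly "inside the theater"
    "valid_until": datetime(2015, 12, 21, 23, 59),
}

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def may_play(device_location, now):
    inside_zone = distance_km(device_location, PERMISSION["center"]) <= PERMISSION["radius_km"]
    in_time = now <= PERMISSION["valid_until"]
    return inside_zone and in_time

print(may_play((35.081, -106.651), datetime(2015, 12, 21, 20, 0)))   # -> True
print(may_play((40.0, -105.0), datetime(2015, 12, 22, 8, 0)))        # -> False
```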
  • FIG. 10 illustrates a system 820 similar to system 810 of FIG. 9, but adapted for sports venues, music venues, theatrical events, and other events (e.g., political conventions, trade shows, etc.), in accordance with the disclosed embodiments. System 820 can include, for example, an annotation infrastructure 260, a sports venue infrastructure 262, and one or more video cameras or other video recording devices and systems 264 with respect to a field of play shown at block 266. Video data 268, annotation data 270, and permissions data 272 (which can be limited to certain geographic areas and to certain time periods) can be transmitted from the sports venue infrastructure 262 to, for example, an HHD 120 (e.g., a user's Smartphone, pad computing device, AR device, etc.).
  • In some embodiments, the venue displays the action on a big screen and the user records that. In other embodiments, the venue streams camera data from numerous cameras and the user chooses one or more camera views. In yet other embodiments, users can point their own cameras (such as the AR device's front facing camera) directly at the field of play. Live venues can annotate video data in near real time to thereby provide users with a full FlickIntel experience.
  • FIG. 11 illustrates a system 830, in accordance with the disclosed embodiments. System 830 includes a number of potential features and modules, such as, for example, fingerprinting, as indicated at block 280. Video data 120 can be provided to a fingerprinting module 280 and/or a multimedia display device such as, for example, display device 121. A data acquisition device 290 (e.g., microphone, camera, etc.) can receive data from display 121 and be subject to a synchronization data request as depicted at block 288. An identification service 282 can, for example, respond to the synchronization data request 288 and provide data 284 (e.g., media ID, frame/scene estimate, synchronization refinement data).
  • A synchronization module 286 can receive data 284 and data indicative of a synchronization data request and can transmit synchronization data 298, which in turn provides selection data 300 and user language selection 310. A user request 302 based on selection data 300 can be transmitted to an element identification service 304, which in turn generates element information and other data 308 (e.g., element app, site, page, profile or tag), which can be provided via messaging 318 (e.g., user email or other messaging). The element identification service 304 also provides for user language selection 310, which in turn can be provided to a language translation module 292, which in turn can provide for digital voice translations 294 which are provided to a user audio device 296. The element identification service can also generate an element offer/coupon 312 which can be provided to a user electronic wallet 314 and a printing service 316. The element offer/coupon 312 can also be rendered via a user selected display or device 320.
  • It is important that the annotation data be synchronized with the displayed video and various synchronization means have been presented. System 830 of FIG. 11 includes aspects of a fingerprinting service. Video data, including the audio channel, can be fingerprinted by analyzing it for certain identifying features. Video can be fingerprinted by simplifying the image data by, for example, converting color to grey scale having at least one bit per pixel to thereby greatly reduce the amount of data per video frame. The frame data can also be reduced by transform techniques. For example, one of the current video image compression techniques is based on the discrete cosine transform (DCT). DCT coefficients have also been used in pattern recognition. As such, compressed video already contains DCT descriptors for each scene and those descriptors can be used as fingerprints. Furthermore, video is a sequence of images and thereby presents a timed sequence of descriptors. It is therefore seen that current video compression technology provides one fingerprinting solution. In addition, the compression type algorithms can be applied to reduced data such as the grey scaled video or lower resolution video to provide fingerprints that are quicker to produce and match.
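  • As a minimal sketch of one such reduced-data fingerprint, the following converts a frame to grey scale, averages over coarse blocks, and keeps one bit per block; the 8x8 block grid and the Hamming-distance comparison are illustrative choices, not requirements of the disclosure:

      import numpy as np

      def frame_fingerprint(frame, size=(8, 8)):
          # Reduce an RGB frame (H x W x 3 uint8) to a compact one-bit-per-block
          # fingerprint; returns a 64-bit integer for size=(8, 8).
          gray = frame.astype(np.float32).mean(axis=2)          # color -> grey scale
          h, w = gray.shape
          bh, bw = h // size[0], w // size[1]
          small = gray[:bh * size[0], :bw * size[1]] \
              .reshape(size[0], bh, size[1], bw).mean(axis=(1, 3))
          bits = (small > small.mean()).flatten()               # one bit per block
          return int("".join("1" if b else "0" for b in bits), 2)

      def fingerprint_distance(fp_a, fp_b):
          # Hamming distance; a small value suggests the same frame.
          return bin(fp_a ^ fp_b).count("1")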
  • A display device or media device can calculate fingerprints for the video data being presented to a user. Alternatively, a camera, such as that of an AR device, can record the displayed video data and fingerprints can be calculated from the recording. The fingerprints can be calculated in near real time and submitted to an identification service. The identification service comprises a massive search engine containing fingerprint data for numerous known videos. The identification service accepts the fingerprints for the unknown video, matches them to a known video, and returns synchronization data. The search engine can determine a number of candidates to thereby reduce search size and can continue to receive and process fingerprints of the unknown video until only one candidate remains. Note that two of the candidates can be the same movie at different times. The fingerprint data can be calculated from a single video frame, a sequence of video frames, a portion of the soundtrack, or a combination of these or other data from the video.
  • FIG. 12 illustrates a system 840 depicting a Sync Data Request 359 that can be sent to an identification service, in accordance with an embodiment. Other embodiments can have additional or fewer elements. A Video Visual and/or Sound Data field 358 can include fingerprint data for the unknown video that is to be identified. Many of the illustrated elements can be used to reduce the size of the identification service's search. The User Id 332 identifies the user such that the user's viewing habits can be tracked. A fan of a certain weekly show is likely to be watching that show at a specific time every week. A Device ID 330 has similar utility. Location and time information can be used to limit an initial search to whatever is being transmitted at that location and time. Note that the time is less useful when the user is watching a recording, in which case the recording time can be included in the request. Media History 340 can be used to target the initial search at one or more shows or genres. Service type 338 can indicate satellite service, over the air, cable, or streamed, which, when combined with location and time (i.e., block 336 labeled “Location and Time”), greatly limits the search. Service, location, time, and channel in combination are almost certain to limit the search to a single show.
  • User authentication data 350 can be used to verify the user, to limit the identification service to only authorized or subscribing users, and even to tie criminal or unlawful acts, such as piracy or underage viewing, to a specific viewer. The request ID 334 is typically used to identify a matching response. Display Device Id 342 and Address 344 can be used favorably when the display device is a movie screen or similar device at a public venue because many requests from the same venue can be combined and because the identification service might already know what is being displayed by the display device. User preferences 356 can also be designated. Note that the device id 330 and the display device id 342 may differ because the device id 330 can refer to, for example, an AR device or to a device (e.g., a Smartphone) with a display slaved to a primary display that has the display device id. Sync response history 356 can indicate recent responses from the identification service because it is likely that a user has continued watching the same video. Sync test results 352 can also be provided.
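  • For illustration only, a sync data request of this general shape might be modeled as follows; the field names and types are assumptions chosen to mirror the elements discussed above rather than a defined wire format:

      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class SyncDataRequest:
          request_id: str
          user_id: Optional[str] = None           # lets the service exploit viewing habits
          device_id: Optional[str] = None
          display_device_id: Optional[str] = None
          location: Optional[tuple] = None        # (latitude, longitude)
          local_time: Optional[str] = None        # ISO 8601 timestamp; recording time if applicable
          service_type: Optional[str] = None      # "satellite" | "over_the_air" | "cable" | "stream"
          channel: Optional[str] = None
          media_history: List[str] = field(default_factory=list)
          sync_response_history: List[str] = field(default_factory=list)
          fingerprints: List[int] = field(default_factory=list)   # video visual and/or sound data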
  • FIG. 13 illustrates a system 850 that includes a Sync Data Response 411 that an identification service can send in response to a sync data request, in accordance with an embodiment. Other embodiments can have additional or fewer elements. A response Id 402 can be included and used later in a request's sync response history, such as the sync response history 356 shown in FIG. 12. A request Id 404 can refer to a sync data request for which a sync data response is being sent. Note that a single request can result in numerous responses if the system is configured to process numerous responses. A piracy flag 408 can indicate that the video has been identified and appears to be pirated, stolen, or used in violation of copyright based on the User Authentication, User Id, or other data. A parental block flag 410 can indicate that the video has been identified and that it appears that the user shouldn't be viewing it based on the user's age or on permissions or policies set for the user by the user's parent, guardian, or overseer. The ‘Num Candidates’ field 406 indicates the number of candidate videos being indicated in the sync data response.
  • The identification field 360 indicates that the video has been positively matched. A Media Id 362 and a timestamp are sufficient for identifying a frame in a video. Therefore, the identification field 360 can include nothing more than a media Id 362 and a sync estimate 364. The sync estimate 364 can be a timestamp or functionally equivalent information that can specify a frame in the identified media. The time difference between sending the request and receiving the response can be indeterminate and significant enough that the sync estimate is not precise enough and the user experience feels delayed, out of sync, jittery, or unresponsive. Test data 368 and a test algorithm 366 can be used to refine the sync estimate. The test data 368 can include the sync estimate and/or some of the fingerprinting data from the identification service. The test algorithm field 366 can specify which test algorithm to use or can include a test algorithm in the form of executable code. In any case, certain embodiments can have a synchronization module that can use a test algorithm on the test data and the video visual or sound data to produce or refine a sync estimate. A sync data field 370 is also shown.
  • The identification service can submit one or more candidates when the video has not been positively identified. Candidate field 380 is thus shown in FIG. 13 and refers to a first candidate, but additional candidate fields can be provided for other candidates. Candidate field 380 thus includes media id 372, sync estimate 374, test algorithm 376, test data 378, and a test sync field 380. The candidate field 380 can contain data very similar to that in the identification field 360 such that all of the candidates can be tested to determine which, if any, of the candidates is the unidentified video. In many cases a threshold test, a best match, a likelihood test, some other test, or some combination of tests can be used to determine when the video has been positively identified.
  • One interesting case is when a scene has been positively identified but the scene is included in multiple videos. This case can occur when stock footage is used and when different versions exist such as a director's cut and a theatrical release. In these cases, candidates can be submitted to a synchronization module but with sync estimates that are in the future. The synchronization module can test the candidates anytime the needed fingerprint data from the unidentified video is available. In some cases, such as when a video stream is being viewed as it is downloaded, the needed fingerprint data will become available only after the time offset of the future sync estimate is reached. In other cases, the entire video is available such as when it is on a disk or already downloaded. In these cases the candidate having a sync estimate in the future can be tested immediately because all the needed data is available.
  • Another case that can occur is when a sync data request is submitted without any fingerprinting data (video visual and/or sound data). The other data in the request can contain sufficient information to limit the number of likely candidates. For example, the time, location, service type, and channel can almost certainly be used to fully identify a video and a sync estimate when the video starts on time and proceeds at a predictable rate. Candidates can be returned to further narrow the search and the sync test results field contains data that helps guide the search engine. The sync test results field 352 shown in FIG. 12, for example, can also indicate when the identified video or the candidates do not match the unknown video.
  • Returning to FIG. 11, the synchronization module 286 can produce synchronization data 298 that can be used to keep the selection zones synchronized with the video and that can be used to synchronize alternative language tracks with the video. Videos are often streamed or distributed with only one or a select few languages. Typically, the video has a few audio tracks with different audio tracks having dialog in different languages. Dialog in additional languages can be distributed separately and can provide a good user experience if it is synchronized with the video. Multiple audio tracks can be mixed by, at, or near the user audio device with different tracks having different content. For example, some tracks can include background or other sounds that are not dialog and some other tracks can be dialog in various languages. Mixing data can specify relative sound levels and other details necessary for mixing the tracks. Here, track refers generically to electronically encoded audio that can include numerous audio channels for surround sound or stereo sound, with some tracks distributed with the video. For example, video, which includes some audio tracks, can be distributed on a data disk, a radio frequency transmission, satellite download, computer data file, or data stream. Additional audio tracks can be distributed separately via a data disk, a radio frequency transmission, satellite download, computer data file, or data stream.
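  • A minimal mixing sketch, assuming all tracks share one sample rate and that the relative levels and alignment offset come from the mixing data and synchronization data respectively (the function signature and numpy representation are illustrative assumptions):

      import numpy as np

      def mix_tracks(tracks, levels, offset_samples=0):
          # tracks: list of float32 sample arrays, one per track (e.g. background
          # sounds, English dialog, Spanish dialog); levels: relative gain per
          # track from the mixing data (0.0 silences a track); offset_samples:
          # shift derived from the synchronization data to align add-on tracks.
          length = max(len(t) for t in tracks)
          mix = np.zeros(length, dtype=np.float32)
          for track, level in zip(tracks, levels):
              mix[:len(track)] += level * track
          if offset_samples > 0:                   # delay the mix relative to the video
              mix = np.concatenate([np.zeros(offset_samples, dtype=np.float32), mix])
          return np.clip(mix, -1.0, 1.0)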
  • The synchronization data 298 can be used to ensure that the additional tracks are played, perhaps after mixing with other tracks, in synchronization with the displayed video. A user can therefore select a video to watch and a preferred language. A system enabled according to one or more embodiments herein can then obtain dialog in the desired language, mix the sound tracks, and present the video to the user. The user will then hear dialog in the desired language. Certain embodiments can also include optionally displayed subtitles that can be rendered into video and overlaid on the video in synchronization with the video. A further aspect is that a user can choose to have different actors speak in different languages. The different actors can each have audio tracks in different languages that can be mixed to provide a multilingual auditory experience.
  • The synchronization data 298 can also be used to keep the selection data synced up with the video. Selection data can be distributed separately from the video just as additional audio tracks can be. This is particularly true when the distributed video predates an enabled system or for some other reason does not already include selection data, annotation data, or synchronization data. Recall that it is the selection data that can specify the selection zones. A selection zone can be identified from a screen coordinate and synchronization data that specifies the video or media being displayed as well as a time, frame, or likely time period. The selection data can reside in either a local data store or a remote data store.
  • A user request can specify what the user wants. Recalling the local display 129 of FIG. 4, a first user request brought up options for “Bletch Cola” that included “fetch” and “order online”. The options can be displayed to the user in a number of ways including as graphical overlays on a display, local display, or AR display. The graphical overlays are essentially screen elements with their own selection zones that are created in response to the first user request. A second user request, for example a selection of “order online”, can bring up a new set of options and selection zones for ordering some cola from an online merchant. Another possibility is that the user selects the “$1@Marty's Grocer” option to obtain an electronic coupon or printable coupon for use at a merchant location. It is important to note that the various selection zones can be dynamically created and dismissed (perhaps by selecting an area outside the selection zones) and that the selection data for the dynamically created selection zones can be distributed in a wide variety of ways and can be accessed locally or remotely.
  • A further note is that broadcasters often overlay advertising or information onto displayed video. Examples include station identifiers and graphics of varying levels of intrusiveness advertising television shows at other times. These can also be examples of dynamically created screen elements with selection zones. User action may not have caused these screen elements to appear, but those elements can be selected. The dynamically created screen elements can obscure, occlude, or otherwise overlay other selectable screen elements. The selectable elements and selection zones associated with a video overlay should usually take precedence such that a user selecting an overlaying element does not receive data or a response based on an occluded or not-currently-visible screen element.
  • A user request can imply an element Id by specifying a selection zone or can explicitly include the element Id. In any case, element Ids can have ambiguities. An element identification service can resolve ambiguities. For example, two different videos can use the same element Id. An element identification service can use a media Id and an element Id to identify a specific thing. The element identification service can amend the user request to resolve any ambiguities and can send the user request to another service, server, or system to fulfill the user request. The illustrated examples include selecting a language, obtaining an offer for sale or coupon, obtaining information, or being directed to other data sources and services. The obtained information can be provided to the user, perhaps in accordance with the user request or a stored user profile, in email or another messaging service, or to a display device such as the user's cell phone, AR device, tablet, or television screen.
  • A user can be directed to other data services by being directed to an element app, site, page, profile, or tag. An element app is an application that can be downloaded and run on the user's computer or some other computing or media device. A site can be a web site about the identified element and perhaps even a page at an online store that includes a description and sale offer. A page, profile, or tag can refer to locations, information, and data streams within a social media site or service and associated with the identified element.
  • Returning to the multilingual example, a user can select a menu icon, a menu button, or in some other way cause a menu of viewing options to appear on one of the screens upon which the user can select screen elements. One of the options can provide the user with available voice translations at which time the user selects a language or dialect.
  • FIG. 14 illustrates a system 860 for maintaining unique identifiers for elements that appear in videos and for using those unique identifiers for scattering and gathering user requests and responses, in accordance with the disclosed embodiments. A universal element identifier (UEI) 420 is an identifier with an element descriptor 422 that is uniquely identified with a certain thing, such as “Bletch Cola”, so that whenever that thing appears on screen with a selectable zone, it is associated with the UEI. For example, every advertisement and product placement of Bletch Cola can be associated with that one UEI, thereby greatly simplifying the routing of and responses to user requests. A UEI generator may, for example, generate data for storage in a UEI database 480.
  • The UEI database 480 can be kept on one or more servers (e.g., server 424) with the database storing data including, for example, one or more element records, such as, for example, element record 460. The UEI database 480 may be responsive to, for example, an element admin request 484. The element record 460 can be created in response to a request that includes an element descriptor. The request can be sent to a (UEI) server 424 or directly to the database server 480. In many cases, to help ensure data security, the UEI server 424 and the database 480 can be maintained on separate computer systems having extremely limited general access but with widely available read only access for UEI servers. The element record 460 can contain data including but not limited to, for example, the Universal Element Id 474, registered response servers 472, element descriptors 470, registration data 466, administrator credentials 468, a cryptographic signature 464, and an element public key 462. The UEI server 424 can generate a UEI 426.
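  • As a sketch only, an element record of this general shape might be modeled as follows; the field names and types are illustrative and simply mirror the record elements listed above:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class ElementRecord:
          universal_element_id: str                 # e.g. an opaque identifier for "Bletch Cola"
          element_descriptors: List[str]            # human-readable descriptions of the thing
          registered_response_servers: List[str]    # servers authorized to answer user queries
          admin_credentials: str                    # reference to the authorized administrator
          registration_data: dict = field(default_factory=dict)   # creation date, change log
          element_public_key: bytes = b""           # used to verify signed responses
          cryptographic_signature: bytes = b""      # signature over the record contents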
  • A UEI request 420 is an example of an administrative request, an example of which is also shown at block 484. Element administrators can have read, write, and modify access to element records and should be limited to writing and modifying only those records for which they are authorized. Different records can have different administrators. Element administrators have far more limited access and rights than server or database administrators. Element records can contain admin credentials specifying who can administer the record or similar password or authentication data. The admin credential can also contain contact information, account information, and billing information (if the UEI is a paid service). In many cases the admin credentials can refer to an administrator record in a database table. The important aspects are that only authorized admins can alter the element record, that the admins can be contacted, and that the record can be paid for if it is a paid service. Paid services can have various fees and payment options such as one time fees, subscription fees, prepayment, and automatic subscription renewal.
  • The element descriptor 422 is data describing the thing that is uniquely identified. It can be whatever an element record admin desires such as simply “Bletch Cola” or something far more detailed. Registration data can include a creation date and a log of changes to the record. It can be important that responses to user queries come from valid sources because otherwise a miscreant can inject a spurious and perhaps even dangerous response. A cryptographic signature and element public key are two parts of a public key encryption technique. Selectable zones and UEIs can include or be associated with a public key. Responses to user queries can be signed with the cryptographic signature and verified with the public key. Similarly, the queries can be signed with the public key and verified with the cryptographic signature.
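  • For illustration, a common arrangement is for the element owner to sign response payloads with a private key while clients verify them against the published element public key; a minimal sketch using an Ed25519 key pair follows (the payload shown is hypothetical):

      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
      from cryptography.exceptions import InvalidSignature

      # The element owner holds the private key; the element public key is
      # published in the element record so anyone can verify responses.
      private_key = Ed25519PrivateKey.generate()
      public_key = private_key.public_key()

      response_payload = b'{"uei": "bletch-cola", "offer": "coupon"}'   # hypothetical data
      signature = private_key.sign(response_payload)

      # A client (or the element data gathering module) verifies before accepting.
      try:
          public_key.verify(signature, response_payload)
          accepted = True
      except InvalidSignature:
          accepted = False   # discard unsigned or wrongly signed responses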
  • The element record 460 can contain a list of registered response servers. A query routing module 426 can obtain the list of registered response servers and direct user queries to one or more of the registered servers as shown at block 448. Note that a query to one or more unregistered servers is depicted at block 450, which in turn will provide data to an unregistered data server 456. Unsigned element data 444 and/or signed element data 446 may be output from the unregistered data server 456 and provided to an element data gathering module, which can provide data to an element data resolution module, which in turn can provide element data and presentation directives as shown at block 440.
  • A typical query 428 may contain, for example, a UEI 430 and a query type 432. The query routing module 426 can be a purpose built hardware device such as an Internet router or can be executable code run by a computing device. The list of registered servers can be obtained directly from the UEI database server 480 or from a different but authorized server (or responder) that has obtained the list. Public key techniques can ensure that the responder is authorized or that the response contains signed and valid data.
  • A man in the middle responder 434 can be authorized or can be relaying signed and authorized information. The man-in-the-middle responder 434 can communicate with a third party responder 452 and also the database/server 480. In the case of no or weak encryption, the man-in-the-middle responder 434 can provide spurious information and thereby cause the query routing module 436 to send the query to a hostile data server. For example, the makers of Bletch Cola would prefer that their own or only authorized servers respond to user queries having the UEI for their product. A competitor might want to send a different response such as a coupon for their own cola. A truly hostile entity such as a hacker might want to inject a response that compromises the user's media devices, communications devices or related consumer electronics.
  • In some scenarios a network operator or communications infrastructure provider may desire to add, delete, or modify the registered response server list. Such modification is not always nefarious because it may be simply redirecting the queries to a closer server or a caching server. The network operator might also wish to intercept “UEI not found” type responses and inject other information that is presumably helpful to the user. In other cases, the network operator may intentionally interfere with disclosed systems and services by responding to user queries or registered response service requests with information about its own services, premium data charges, or reasons for not allowing the queries on its network.
  • Ideally the query routing module 436 will send the query to at least one registered data/response server that returns element data, perhaps signed, that can be presented to the user. When multiple servers are responding, an element data gathering module can collect the responses and format them for the user. A response gathering module can cooperate with the routing module such that it knows how many responses are expected and when to time out while waiting for an expected response. An element data resolution module can inspect the gathered responses and resolve them by removing duplicate data, deleting expired data, applying corrections to data, and other operations. Data can be corrected when it is versioned and the responses contain an older version of the data along with ‘diffs’ that can be applied to older versions to obtain newer versions.
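  • A minimal sketch of such gathering and resolution follows; the send_query helper, the two-second timeout, and the de-duplication key are illustrative assumptions rather than required behavior:

      import concurrent.futures

      def gather_element_data(query, servers, send_query, timeout_s=2.0):
          # Send one query to several registered response servers and resolve the
          # replies.  send_query(server, query) is an assumed helper returning a
          # list of element data records (dicts); slow or failed servers are dropped.
          responses = []
          with concurrent.futures.ThreadPoolExecutor(max_workers=max(1, len(servers))) as pool:
              futures = [pool.submit(send_query, server, query) for server in servers]
              done, _ = concurrent.futures.wait(futures, timeout=timeout_s)
              for future in done:
                  try:
                      responses.extend(future.result())
                  except Exception:
                      pass  # treat an error like a missing response

          # Resolution: skip expired records and keep the newest version per element.
          by_key = {}
          for record in responses:
              if record.get("expired"):
                  continue
              key = (record.get("uei"), record.get("kind"))
              if key not in by_key or record.get("version", 0) > by_key[key].get("version", 0):
                  by_key[key] = record
          return list(by_key.values())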
  • The user query can also be sent to an unregistered data server with good or bad intent as discussed above. The unregistered server cannot reply with signed data unless the encryption technique is compromised. The data element gathering module or a similar module can discard unsigned or wrongly signed responses. Alternatively, the module can be configured to accept unsigned responses from certain sources or to accept wrongly signed responses with certain signatures. All responses can be treated the same once accepted, and systems lacking encryption or verification capabilities will most likely accept all responses to user queries.
  • Based on the foregoing, a number of options and features are possible. For example, a screen/display device/associated device that transmits an annotation signal can be provided that includes a media identifier as well as screen annotation data such that local devices (remotes, phones, etc.) can create an “augmented reality” cursor and transmit the cursor icon and location to the display device, etc. Note that the video data may be in a narrative format that does not respond to user inputs except, perhaps, for panning and zooming around a scene.
  • In some embodiments, a system can be implemented which includes a display device presenting a display of time varying video data; annotation data comprising time and position based annotations of display elements appearing on the display; a pointing device that transmits a pointer signal wherein the pointing device comprises a selection mechanism (always on or only when button pushed); a pointer sensor that detects the pointer signal and determines a cursor location wherein the pointing device is pointing at the cursor location; a cursor presented on the display at the cursor location; and an annotation identification module that determines when an on screen element is selected. Note that annotated video data can be video data that is associated with, bundled with, or distributed with annotation data. In another embodiment, the pointer signal can encode cursor data wherein the cursor data determines the on-screen cursor appearance. In another embodiment, the pointer signal can encode a pointer identifier. In yet other embodiments, the on-screen cursor can change appearance when overlying an annotated element. In still other embodiments, the pointing device can transmit the pointer signal when a pointer actuator is activated.
  • In yet other embodiments, a system can be implemented that includes annotated video data comprising annotation data; an annotation data transmitter that transmits the annotation data to a pointing device (e.g., Kinect or camera based, for example in the context of an AR browser). In a Kinect version, cameras may detect where a person or passive pointer (such as a specially painted or shaped baton or weapon type object) is pointing and may determine who the person is via facial recognition, biometric measurement, gesture as password, etc. In other embodiments, a contact list for video conferencing can be provided. In still other embodiments, a navigating screen with a remote control phone as a pointing device can be provided, and the phone camera and display used to aim an on-phone cursor at the display. Additionally, the phone camera and display can share images, “poke” the phone, and the phone can send screen location data to the display, which then treats it as a local selection. In a movie theater setting, an HHD can synch in on frame/position and communicate with a server. Also, theater/movie “cookies” can be provided to the phone so that the user can redisplay/rewatch the movie for xxx time period such that queries can be made outside the theater. Additionally, it is not a requirement that the device (e.g., HHD) be aimed at the movie screen.
  • Additionally, tagged searches are possible. For example, YouTube can be searched by a tag so that people can tag the videos and then search on the tags. An extension is to tag movie scenes so that a media identifier and offset into the ‘movie’ come back, perhaps even a link for viewing that scene directly. As a further refinement, someone can obtain a tag by selecting a screen element and assemble a search string by combining tags, some of which are entered via keyboard and some via element selection.
  • Synchronization data can be obtained through a technique like the fingerprinting type technique described above or more simply by knowing what is being watched (media Id or scene Id) and how far into it the person is (time stamp or frame number). A person can tag a scene or a frame with any of the above mentioned devices (AR device, cell phone, remote control, kinect type device, pointing device, . . . ). Basically, click to select frame/scene, key in or otherwise enter the tag, and accept. On accept, the tag and the sync data (which includes the media Id) are sent off to be stored on a server. People can search the tag database to check out tagged scenes. For example, Joe tags a scene “explosion” and Jill tags the same scene “gas”. A search for “gas explosion” can result in a link to watch the scene. A social media tie in is that Joe's friends might see that “Joe tagged a scene with ‘explosion’” along with a single frame from the scene and embedded links to the scene. If an actor, “Kyle Durden”, was blown up in the explosion, Joe's tag could have been assembled by selecting Kyle Durden in the scene and requesting a tag (this assumes annotated video) and then adding ‘explosion’. The “Kyle Durden” tag can be the textual name or a small graphic. The small graphic can be the actor's picture or a symbol or graphic people associate with the actor. Similarly, anything having a trademark (or service mark) can use that mark (word, logo, design, or combo mark) as a tag. Searching for such a tag requires entering it in some way such as by “clicking” on it via the pointing device or HHD.
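  • As an illustrative sketch only (the in-memory table layout and the rounding of offsets into one-second scene buckets are assumptions made for the example), tag storage and tag search could look like:

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE scene_tags (media_id TEXT, offset_s REAL, user TEXT, tag TEXT)")

      def tag_scene(media_id, offset_s, user, tag):
          # Store a tag together with the sync data identifying the scene.
          db.execute("INSERT INTO scene_tags VALUES (?, ?, ?, ?)", (media_id, offset_s, user, tag))

      def search_tags(*terms):
          # Return (media_id, offset) pairs whose accumulated tags contain all terms.
          rows = db.execute("SELECT media_id, offset_s, tag FROM scene_tags").fetchall()
          by_scene = {}
          for media_id, offset_s, tag in rows:
              by_scene.setdefault((media_id, round(offset_s)), set()).add(tag.lower())
          return [scene for scene, tags in by_scene.items()
                  if all(term.lower() in tags for term in terms)]

      # Joe and Jill tag the same scene independently; a combined search finds it.
      tag_scene("movie-123", 2710.0, "joe", "explosion")
      tag_scene("movie-123", 2710.4, "jill", "gas")
      print(search_tags("gas", "explosion"))    # [("movie-123", 2710)]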
  • FIG. 15 illustrates a system 870 generally including a multimedia display 126 associated with a controller and which communicates with one or more wireless HHD's, in accordance with the disclosed embodiments. The controller 503 can be integrated with the multimedia display 126 or may be a separate device, such as, for example, a set-top box that communicates with the multimedia display 126. Communications between the multimedia display 126 and the controller 503 may be wireless (e.g., bidirectional wireless communications) or wired (e.g., USB, HDMI, etc.). One or more HHD's, such as, for example, wireless HHD 502, can communicate via wireless bidirectional communications with the controller 503 and/or the multimedia display 126 (e.g., in the case where the controller 503 is integrated with the multimedia display 126).
  • A user of the HHD 502 can utilize the HHD 502 to register the HHD 502 with the multimedia display 126. The HHD 502 is thus associated via this registration process with the multimedia display 126. Once registered and thus associated with that particular multimedia display 126 (and/or the controller 503), the user can utilize the HHD 502 to move an onscreen graphically displayed cursor to content/media of interest and then “click” the HHD 502 to select that area on the screen of the multimedia display 126, which then prompts a display of data supplemental to the selected content/media in a display of the HHD 502 itself and/or, for example, in a sub-screen 512 within the display 126. If within the display 126, the sub-screen would “pop up” and display the supplemental data within display 126. Alternatively, the supplemental data can be displayed for the user of the HHD 502 within a display area or display screen of the HHD 502.
  • FIG. 16 illustrates a system 880 that generally includes a multimedia display 126 and a secondary multimedia display 129, in accordance with the disclosed embodiments. As in FIG. 15, multimedia display 126 can be associated with a controller 503 and multimedia display 129 can be associated with controller 505. Controller 503 can communicate wirelessly or via wired means with multimedia display 126 and controller 505 can similarly communicate wirelessly or via wired means with multimedia display 129 (e.g., a secondary screen). The wireless HHD 502 can communicate via wireless bidirectional communications means with controller 503 and/or controller 505 and/or directly with multimedia displays 126 and/or 129. In the embodiment shown in FIG. 16, the supplemental data can be displayed via display 126, the HHD 502, and/or the secondary display 129. In some embodiments, the supplemental data can be displayed in a window or sub-screen 514 that “pops up” in multimedia display 129. A similar display process for the supplemental data can occur via the window or sub-screen 512 that can “pop up” in multimedia display 126.
  • FIG. 17 illustrates and represents a system 890 in which HHD 502 can communicate wirelessly via a data network 520 (e.g., the Internet) for storage and/or retrieval of supplemental data in a memory or database 522, in accordance with the disclosed embodiments. In some instances, a user of HHD 502 may wish to access from or save the supplemental data to the “cloud” for later viewing, rather than displaying the data immediately via a pop-up window or sub-screen 512 or via a display associated with the HHD 502. FIG. 17 further illustrates and represents a system 890 in which an HHD 502 can communicate wirelessly via a data network 520 (e.g., the Internet) with a remote server 522 associated with a database for providing at least one of identification based on a captured image provided from the HHD 502 of an event or video programming rendering on a multimedia display screen 126, for obtaining supplemental data associated with the event or video programming by the HHD 502 once the event or video programming is identified, and for storage of the supplemental data in a memory or database 522 for later retrieval, in accordance with the disclosed embodiments.
  • FIG. 18 illustrates a system 900 in which multiple HHD's can communicate with the controller 503 and/or the multimedia displays 126 and/or 129, in accordance with the disclosed embodiments. Each HHD 502, 504, 506, etc., can be registered with the same controller 503 or with other controllers, such as controller 505 discussed previously. Similarly, each HHD 502, 504, and/or 506, etc., can be registered with multimedia displays 126 and/or 129. All of the HHD's 502, 504, 506, etc., can interact with, for example, the main screen 126 and collaborate with one another (e.g., in a gaming environment). A user of HHD 504 may desire, for example, to display selected supplemental data on display 129 rather than display 126. A user of HHD 506 may desire to display the supplemental data only on his or her HHD. Each HHD 502, 504, 506, etc. can also independently interact with displayed media on either screen 126 or 129 to retrieve data of personal interest to that particular user of a respective HHD. In this sense, the ‘screen’ via a controller such as controllers 503 and/or 505 can function as a director of two or more HHD's, particularly in the context of a gaming environment.
  • FIG. 19 illustrates a system 910 in which multiple HHD's (or a single HHD in some cases) can communicate with a data network 520 that communicates with a server 524 and database 522 associated with and/or in communication with server 524, in accordance with the disclosed embodiments. Supplemental data can thus be stored via HHD 502, 504 and/or 506, etc. in database 522 or another appropriate memory for later retrieval. Thus data can be retrieved from remote servers such as server 524 via data network 520 (e.g., the Internet). Furthermore, the illustrated system 910 supports communication of HHD 502 over a data network 520 to communicate with a server 524 for purposes of identifying video programming based on an image captured by the HHD 502 of video programming being displayed on a multimedia display 126 by matching the image with images of video programming stored in a database 522. If there is a match found by the server 524, then the HHD can be notified of the availability of additional data related to the video programming in accordance with features described herein.
  • FIG. 20 illustrates a graphical view of a system 920 that includes a multimedia display 126 and one or more wireless HHD's 502, 504, 506, etc., in accordance with the disclosed embodiments. It can be assumed that the configuration shown in FIG. 20 can be implemented in the context of the other arrangements and systems disclosed herein (e.g., including various controllers, primary and secondary screens, data networks, servers, databases/memory components and so forth). Each user of an HHD can designate his or her own personal profile icon during, for example, the registration process or at a later time. Each profile icon can be displayed as a cursor via a multimedia display such as multimedia display 126. Thus, for example, a user of HHD 502 may select a profile icon 532 in the form of a dollar sign, which is displayed graphically in display 126. Similarly, a user of HHD 504 may be, for example, a hunter or outdoorsman and may select a profile icon 534 in the symbol of a deer as his profile icon. A user of HHD 506 may, for example, select a profile icon 536 in the symbol of a skeleton as his or her profile icon. It can be assumed that users may select primary and alternate profile icons, so that in the case of two or more people with the same profile icon, a secondary profile icon may be displayed instead of a primary icon so as not to cause confusion among users of the various HHD's 502, 504, 506, etc. during the same session or event.
  • FIG. 21 illustrates a flow chart depicting logical operational steps of a method 940 for registering and associating an HHD with a display screen and displaying supplemental data, in accordance with the disclosed embodiments. As indicated at block 550, the process can be initiated. Next, as shown at block 552, an HHD such as, for example, the HHD's 502, 504, 506, etc. can be registered with a multimedia display and/or a controller such as described previously. Thereafter, as illustrated at block 555, following registration and association with the display (e.g., via a controller), an icon or cursor can be displayed, which is associated with the HHD. Then, as depicted at block 558, the user of the HHD can select content/media of interest using the HHD with respect to the icon/cursor displayed on the multimedia display screen. Next, as shown at block 560, supplemental data can be displayed on the HHD display or on the multimedia display (or both).
  • FIG. 22 illustrates a flow chart depicting logical operational steps of a method 950 for transmitting, storing and retrieving supplemental data and registering an HHD with a secondary screen, in accordance with the disclosed embodiments. As indicated at block 570, the process begins. Then, as shown at block 572, the HHD can be registered with a secondary screen and/or associated controller. An example of a secondary screen is the multimedia display 129 discussed earlier. Thereafter, in response to a particular user input via the HHD, supplemental data can be transmitted to a database (e.g., database/memory 522) via a data network (e.g., network 520, the Internet, etc.), as illustrated at block 576. Then, as shown at block 578, the supplemental data can be retrieved for display via the secondary screen, the primary screen, and/or other screens, etc. The process can then terminate, as shown at block 580.
  • FIG. 23 illustrates a flow chart depicting logical operational steps of a method 930 for designating and displaying profile icons for selection of content/media of interest and supplemental data thereof, in accordance with the disclosed embodiments. As shown at block 560, the process can be initiated. Next, as shown at block 552, the HHD (or multiple HHD's) can be registered (together or individually) with a multimedia display screen and/or associated controller. Thereafter, as shown at block 554, user profiles can be established via the HHD and one or more profile icons selected. Examples of profile icons include icons/cursors 532, 534, and 536 shown in FIG. 20. Next, as illustrated at block 556, one or more of the profile icons can be displayed “on screen”.
  • Thereafter, as shown at block 558, content/media of interest can be selected via the HHD with respect to the profile icon displayed on screen. For example, a profile icon cursor may be moved via the HHD to an area of interest (e.g., an actor on screen). The user then “clicks” an appropriate HHD user control (or touch screen button in the case of an HHD touch screen) to initiate the display of supplemental data (with respect to the content/media of interest). The supplemental data can be displayed on, for example, an HHD display screen, as shown at block 560. The process can then end as depicted at block 562.
  • Referring to FIG. 24, illustrated is a flow diagram of an alternate process of using an image captured by an HHD of a scene (e.g., video programming) as it is being displayed on a multimedia display, determining the identity of the scene at a remote server, and providing access to additional information related to the scene from a database. As shown in Block 610, an APP on an HHD can be activated to capture an image of a scene of video programming displayed on a multimedia display and enable access to data associated with the video programming from a remote server if the scene is matched and identified by the remote server. The image of the scene is captured using a digital camera integrated in the HHD. The HHD then wirelessly accesses a remote server to match the captured scene with images of video programs in a database to determine the identity of the scene and the availability of related data, as shown in Block 620. Wireless communication can be supported by wireless communications hardware integrated in the HHD to support WIFI or cellular communications (e.g., 802.xx, 4G, LTE, and so on). Then, as shown in Block 630, the HHD can receive notification of a match and the availability of related data that can be selectively retrieved from the server by the HHD. Then, as shown in Block 640, the HHD selectively accesses additional data related to the scene. The data can logically be accessed and manipulated consistent with many of the teachings provided herein.
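  • A minimal sketch of the HHD side of this exchange follows; the endpoint URL, request fields, and JSON reply shape are hypothetical placeholders rather than a defined API:

      import requests   # assumed HTTP client; the endpoint below is hypothetical

      IDENTIFY_URL = "https://example.com/api/identify"      # placeholder service URL

      def identify_scene(image_bytes, device_id):
          # Upload a captured frame and ask whether related data is available.
          reply = requests.post(
              IDENTIFY_URL,
              files={"image": ("scene.jpg", image_bytes, "image/jpeg")},
              data={"device_id": device_id},
              timeout=5,
          )
          reply.raise_for_status()
          result = reply.json()   # e.g. {"match": true, "media_id": "...", "offset_s": 512.3}
          if result.get("match"):
              # Notify the user and let the HHD selectively fetch the additional data.
              return result["media_id"], result.get("offset_s"), result.get("data_url")
          return None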
  • Most views actually contain layers of content. For example, a set such as the set 700 illustrated in FIG. 25 can have a few elements such as actors 705 and some props 715 positioned in front of a “green screen” 710. A green screen 710 is simply a known background that can be easily replaced by other imagery. Green screens were originally a single uniform color. The non-green parts of a scene were, by definition, the foreground 711 and the green parts were the background 712. Views of background scenes such as mountain vistas, moving traffic, or bustling cities can be captured independently of the foreground views. A final view can be produced by compositing the foreground view 711 and the background view 712. FIG. 26 illustrates an example wherein the scene of FIG. 25 is imaged and the background view 712 is composited with the foreground view 711 to produce the final view 713.
  • Compositing with a green screen amounts to taking the foreground view 711 and replacing everything green with the background view 712. Computer technology made this process particularly convenient because the green pixels are simply replaced by background pixels. The above example of using a green screen actually produced three content layers: the background, the dynamic set elements, and the static set elements. The static set elements are foreground things that don't move, such as a table or a sphere resting on the table. The dynamic set elements are foreground things that do move, such as the actor and anything the actor manipulates.
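  • A minimal chroma-key sketch follows; the green-dominance test and its threshold are illustrative choices, not part of the disclosure:

      import numpy as np

      def composite_green_screen(foreground, background, green_threshold=1.3):
          # Replace "green" pixels of the foreground view with the background view.
          # foreground, background: H x W x 3 uint8 frames of the same size.
          fg = foreground.astype(np.float32)
          r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
          # A pixel counts as green screen when its green channel clearly dominates.
          is_green = (g > green_threshold * r) & (g > green_threshold * b) & (g > 80)
          out = foreground.copy()
          out[is_green] = background[is_green]
          return out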
  • FIG. 27 illustrates a view changing because the camera moves while imaging a scene. Note that the foreground and background are shifted as indicated by arrows, but are otherwise unchanged. FIG. 27 more clearly illustrates this concept by showing a complete view and a series of presented views. A scene can be much larger than the view imaged by a camera and the camera can pan and tilt to move the view around and thereby capture different areas of the scene.
  • A movie, television show, or video is essentially a sequence of views. More conveniently though, the sequences of views are first combined into acts and the sequence of acts is combined to produce the movie. An act is often the sequence of views of a scene. For example, the views taken from two cameras filming a scene can be cut and spliced to produce an act in a movie. FIG. 28 illustrates two views being captured from the same scene.
  • Changes in technology have allowed the green screen technique to be replaced by computer algorithms that can automatically replace any element in a view, even moving elements. Similar algorithms can track elements as they move. Furthermore, in the example above a final view is composited from a foreground view and a background view. In practice, many more layers can be composited to form final views.
  • Changes in technology have also allowed virtual scenes to be filmed by virtual cameras. Virtual scenes are computer generated and can be static, dynamic, foreground, background, or any other layer. The virtual cameras are mathematically defined to produce views of the virtual scenes. As with any other view, a view from a virtual scene can be composited with any other view.
  • Annotating views and scenes is the process of determining which spots on a display correspond to what elements in a scene. Each scene or view can have a “screen map” wherein specific areas in a view are identified as being an actor, consumer product, or other scene element. One very data intensive embodiment would use a different screen map for every frame of every scene in a movie. The user selects a scene element by pointing at it with a pointing device or finger; the point of aim/selection is identified and then submitted to a software module that uses the screen map to determine what thing in the scene corresponds to that point. The identity of the pointed-at thing is then returned for subsequent use, such as finding an actor's biography or purchasing an on-screen item.
  • Virtual scenes can be automatically annotated as they are created because the rendering software used to place the scene elements can also record the viewed locations of those elements.
  • The elements in any of the views can be annotated before compositing. Returning to FIG. 25, the actor, table, and sphere can be annotated for each of the two imaged views. Similarly, the mountainous background view of FIGS. 26 and 27 can be annotated independently. Compositing annotated views together requires that the annotations also be composited. Annotation compositing can be performed by tracking which items are composited on top of other items and producing a final annotation for the final view. Another option is to preserve the annotations for each view. As shown in FIG. 28, a user can select an item by selecting a point within the final view (i.e., a spot on the screen). A software module can then determine if the point coincides with a foreground view element such as the actor. If not, then the software module can determine if the point coincides with an element in the background. If there are additional layers composited between the foreground and background, then each of those layers is checked in turn beginning with the foremost and progressing to the hindmost. Green screen areas are either not annotated at all or are annotated as green screen, transparency, or some other value that signals the software module to proceed to the next layer.
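  • A minimal sketch of this foremost-to-hindmost check follows; rectangular zones and a None element identifier marking green screen/transparency are illustrative assumptions:

      def identify_selected_element(layers, x, y):
          # layers: list of annotation layers, foreground first.  Each layer is a
          # list of (zone, element_id) pairs where zone is an (x0, y0, x1, y1)
          # rectangle; element_id of None marks transparency for that zone.
          for layer in layers:                       # foremost layer wins
              for (x0, y0, x1, y1), element_id in layer:
                  if x0 <= x <= x1 and y0 <= y <= y1:
                      if element_id is None:
                          break                      # transparent: fall through to next layer
                      return element_id
          return None                                # nothing annotated at this point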
  • A complete view can also be annotated. The selected point of FIG. 28 does not move within the complete view but does move within the presented views. Vector addition of the view offset and the selection vector locates the selected point within the complete view. A similar mathematical operation (multiplication) provides for locating the selection point when the camera zooms into or out of the scene. As such, annotating the complete view and tracking both the view offset and zoom factor allows the same annotation data to be used as the camera pans, tilts, and zooms through a scene.
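  • As a small worked sketch of this pan/zoom bookkeeping (the convention of dividing the presented-view coordinate by the zoom factor is an assumption; a particular system could define the zoom factor the other way around):

      def to_complete_view(selection_x, selection_y, view_offset, zoom):
          # view_offset: (dx, dy) position of the presented view inside the complete
          # view; zoom: current zoom factor; both taken from the recorded view
          # trajectory for this frame.
          dx, dy = view_offset
          # Vector addition handles pan/tilt; scaling by the zoom factor handles zoom.
          complete_x = dx + selection_x / zoom
          complete_y = dy + selection_y / zoom
          return complete_x, complete_y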
  • An excellent example of a complete view is a sports arena. The views change radically as a game progresses. Many scene elements, such as scoreboards and advertisements, do not move. A complete view of a sports arena can be annotated once for each camera angle and the annotation then used as the background annotation when a game is played. Note that in this case the background is changing and is not composited into the final view but that the same background annotation can be used throughout a gaming season.
  • Annotation compositing and complete view annotation are advantageous because they greatly ease the burden of annotating views. An entire scene can be annotated once and then the view trajectories recorded. View trajectories are records of view positions and zoom factors at different times as a scene is imaged. Just as the annotation task is reduced, the amount of annotation data is reduced because view trajectories are much smaller than screen maps. A further advantage is that the different views in different layers (foreground, background, etc.) can be reused in different parts of a presentation or even in a completely different presentation. Movies, videos, and sporting events are examples of presentations.
  • An annotation database accessible via an annotation server 800, as shown in FIG. 29, can annotate every pixel in a frame, can annotate only those pixels overlying certain elements, or can annotate only certain points in select frames. Every annotated point can be termed an “annotation coordinate”. Every piece of annotated media content is identified by a media identifier and every annotation coordinate includes a frame (or subframe) identifier, a coordinate, and an element identifier. Annotation coordinates can also contain the media identifier. Similarly, the user's selected element can be specified by selection data containing a screen coordinate, media content identifier, and frame identifier. The annotation coordinate best matching the selection data has the same media identifier and is closest in both time and space. Closest in space can be a Euclidean, Mahalanobis, or Manhattan distance. Closest in time can be measured in frames or seconds. A useful distance measure for annotation purposes can be a weighted sum of spatial and temporal distance, can step forward or backward through frames seeking a nearby annotation coordinate, or can combine the spatial and temporal data in other ways. As such, an element that does not move during a contiguous series of frames can be annotated with a set of annotation coordinates arranged at either end of the frame sequence and spatially centered over the element. A further refinement to overcome irregularities in temporal distance measurement is to track scene changes and to exclude annotation coordinates from other scenes.
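  • A minimal sketch of this best-match search, using Euclidean spatial distance and a frame-count temporal distance combined as a weighted sum (the weight shown is an arbitrary illustrative value):

      from math import hypot

      def best_annotation(selection, coords, time_weight=10.0):
          # selection: dict with media_id, frame, x, y.
          # coords: iterable of dicts with media_id, frame, x, y, element_id.
          best, best_score = None, float("inf")
          for c in coords:
              if c["media_id"] != selection["media_id"]:
                  continue                              # must be the same media content
              spatial = hypot(c["x"] - selection["x"], c["y"] - selection["y"])
              temporal = abs(c["frame"] - selection["frame"])
              score = spatial + time_weight * temporal  # weighted sum of space and time
              if score < best_score:
                  best, best_score = c["element_id"], score
          return best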
  • A yet further refinement is to give each scene in a movie, show, sporting event, etc. a unique media content identifier. This refinement addresses issues that may arise when clips or portions of a show are presented instead of the entire performance.
  • A yet further refinement is to annotate a media clip with a media content identifier and a time offset specifying at what time in the original the media clip begins. For example, a media clip could begin 35 minutes into a movie. In this manner it is far easier to annotate previews, clips, mash-ups, or different cuts (theatrical, director's, unrated, etc.) of a presentation. Mash-ups are a series of clips taken from different presentations or rearrangements of a single presentation. Note that sound tracks, songs, dialog, and other sounds can be clipped, mixed, or otherwise arranged within a mashup. An annotation coordinate can refer to a place or a sound as easily as to an on screen element by specifying special spatial elements as flags for such place elements or sound elements. For example, (−1,−1) could indicate sound while (−2,−2) could indicate location.
  • Compositing can be used to replace scene elements in live or recorded video content. Returning to the stadium discussed above, there are many advertisements placed around the stadium such as posters, billboards, signage, and lighted signage. The giant video screens present in many stadiums often show multiple advertisements at the same time as well as images, video, and information about an event (game, concert, race, etc.) occurring at the stadium, about athletes, about attendees, and about stadium activities. For example, the “kiss cam” shows attendees who are encouraged to kiss. Stadium activities can include singing the national anthem, throwing t-shirts and swag into the audience, mascot races, half time shows, etc. Compositing can replace the stadium advertisements with other advertisements or images.
  • In fact, compositing has been used to replace stadium advertisements with other advertisements for some years. Stadium attendees can see actual physical billboards and posters while remote views, such as a TV audience, can see other images. For example, attendees at a Brazilian soccer game might see Portuguese language billboards advertising a Brazilian beer. A viewer in Germany might instead see a German language advertisement for a German beer. The German advertisement has been composited over the Brazilian advertisements for viewers in Germany.
  • Similar compositing can be performed for recorded video. The Brazilian soccer game can be recorded and shown years later with completely different ads composited over the Brazilian beer advertisement. Such compositing can be particularly useful when a product or service cannot be legally advertised or becomes socially unacceptable. An example of that is cigarette ads which have become regulated in some countries and are sometimes viewed with disdain or antipathy.
  • Compositing can also be used to modernize or otherwise replace product placement within movies and television shows. Product placement occurs when an item is placed within a scene as a scene element but is not overtly advertised. Examples include what an actor eats, drinks, or wears while on screen. Examples also include background elements such as an advertisement on a bench that an actor walks past or sits on. Compositing can update a product placement when a can of Tab in decades-old video is composited out in favor of a can of Coke, Jolt, etc. Compositing can also be used to adjust product placements based on geographic markets with an American seeing Jolt instead of Tab and an Australian seeing Solo instead of Tab. Geographic markets can be narrowed to the point that people in different parts of the same city, perhaps even in neighboring houses, see different scene elements. Such fine levels of marketing may require location data from a person's mobile or wearable devices. A person's IP (Internet Protocol) address can also be used to determine location data. Such location data is currently available.
  • Other compositing opportunities include demographic based or targeted advertisements. Returning to the beer can example, a viewer's location or profile can be queried and the composited in product placements and stadium ads targeted at the viewer. Sadly, targeted advertising via compositing can be used to harass specific individuals or groups when another person or group obtains a right to place products or advertisements into that individual's viewed video content. Compositing also provides a solution wherein the harassed individual opts out of targeted ads or obtains the right to place products or ads into his own viewed video content. Here, “obtains a right” can mean a purchase or license of advertising rights. For generality, “obtains a right” can also include stealing or hijacking wherein video content is intercepted and scene elements composited in (or out) without the consent of the video content provider and/or the person viewing the content.
  • The issue now is: What of the annotation tags related to the advertisements and product placements that were composited out? Composited out means replaced by something else via compositing. Composited in means replacing something that was originally a scene element via compositing. As discussed above, the composited in scene elements are in a video layer in front of the background and the original foreground. As also discussed above, annotations can be composited. The annotation selection data can therefore be composited such that the correct element identifier is obtained. The annotations can be composited when the video is composited or can be annotated downstream, such as at the user's media device (element 104 of FIG. 1).
  • Some embodiments can use local or downloaded annotation data, or composited annotation data, while others can assemble a query containing information sufficient to specify a cursor aim point and a frame of video. Without loss of generality, the data sufficient to specify a frame of video is herein referred to as a media tag. The query can be submitted to an annotation server 800 that queries an annotation database to determine the element identifier of the object selected by the user. This scenario is complicated when the frame of video has new scene elements composited in because the media tag of the original video may not be associated with or linked to the composited in material. One solution is to add annotation data for the composited in material to the annotation database (or another annotation database) and to cause the server 800 to find the element identifier for the composited in object, if selected. The media tag can be augmented with a “composited annotation indicator” indicating that there is annotation data corresponding to composited in video. The composited annotation indicator can contain annotation data for the composited in scene elements or can contain a tag, address, URL (uniform resource locator), pointer, or identifier that can be used to access annotation data for the composited in scene elements. This augmented media tag can contain an ordered list of composited annotation indicators if multiple layers of video have been composited over the original video. The list's order can indicate the order in which compositing occurred. Recall that the composited layers can contain large transparent regions through which lower video layers are visible.
  • Alternatively, the media tag can be replaced with a new media tag. The new media tag can reference the annotation data and the media tag of the original (or underlying) video. The underlying video can also have composited in items. Another alternative is for the new media tag to be an ordered list of older media tags with the bottom media tag indicating the original video and the rest indicating video layers composited over one another, and the list order indicating the order in which the compositing occurred. Lists can be ordered by putting them in order or by ranking each list element such that the list can be put in order.
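  • The augmented media tag described above can be pictured with a minimal sketch. The field names and the URL-style annotation references below are assumptions for illustration only; the key point is the ordered list recording the compositing order, bottom layer first.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CompositedAnnotationIndicator:
        annotation_ref: str  # inline annotation data or a tag/URL/pointer used to access it

    @dataclass
    class MediaTag:
        content_id: str        # identifies the original (bottom) video
        frame_time_s: float    # identifies the frame within that video
        composited: List[CompositedAnnotationIndicator] = field(default_factory=list)
        # list order records the order in which compositing occurred, bottom layer first

    tag = MediaTag("soccer:rio-2010", 1803.2)
    tag.composited.append(CompositedAnnotationIndicator("annots://layer/de-beer"))     # first overlay
    tag.composited.append(CompositedAnnotationIndicator("annots://layer/promo-2025"))  # later overlay

    # The topmost layer is the last entry; it is consulted first when resolving a selection.
    print(tag.composited[-1].annotation_ref)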
  • The annotation data for the composited in scene elements can indicate which screen coordinates in a frame correspond to the composited in items. In this manner, the determination of an element identifier is similar to that for the non-composited video. If the user selected a composited in item, the element identifier for that item is returned and the annotations for the original video need not be searched (although searching in parallel can lead to higher performance). For example, “Jolt” is composited over “Tab” and the user selects “Jolt” in a particular frame. The annotation data for the composited in scene elements provides the “Jolt” element identifier. The annotation data for the original video provides the “Tab” element identifier. The “Jolt” element identifier is selected because the composited in scene elements overlay the original ones. Note that if the user selected a hat in the original video then the annotation data for the composited in scene elements can return “no item” thereby indicating that the hat's element identifier obtained from the annotations for the original video is the correct element identifier. Another example is that a new scene element may be composited into the video in which case the annotation data for the composited in scene elements provides an element identifier.
  • Alternatively, the annotation data can indicate an element identifier that should replace one in the original video. For example, the annotation data for the original video can indicate that the user selected a can of Tab. The annotation data can indicate that the “Jolt” element identifier should be returned instead of the “Tab” element identifier. This alternative requires that lower level annotation data, such as that for the original video, be searched such that lower level element identifiers can be replaced.
  • When multiple layers of video are composited over top of one another there can be multiple layers of annotations for composited in elements. The annotation data can be queried as discussed above with the element identifier for the topmost item at the selected coordinates being the correct element identifier.
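  • A minimal sketch of that top-down search follows. The rectangular region representation and the lookup functions are assumptions for illustration, not a required annotation format; layers are searched from the most recently composited down to the original, and the first layer claiming the selected coordinates supplies the element identifier, with “no item” falling through to the layer beneath.

    def lookup(layer, frame_time_s, x, y):
        # Return an element identifier if (x, y) hits an annotated region in this layer, else None ("no item").
        for x0, y0, x1, y1, element_id in layer.get(frame_time_s, []):
            if x0 <= x <= x1 and y0 <= y <= y1:
                return element_id
        return None

    def resolve_element(layers, frame_time_s, x, y):
        # layers is ordered bottom (original annotations) to top (latest composited layer).
        for layer in reversed(layers):
            element_id = lookup(layer, frame_time_s, x, y)
            if element_id is not None:
                return element_id
        return None

    original = {10.0: [(100, 100, 160, 180, "can:tab"), (300, 50, 360, 120, "hat:fedora")]}
    composited = {10.0: [(100, 100, 160, 180, "can:jolt")]}  # overlays only the Tab can

    print(resolve_element([original, composited], 10.0, 120, 150))  # can:jolt
    print(resolve_element([original, composited], 10.0, 320, 80))   # hat:fedora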
  • Returning to the stadium, the people at the stadium can see advertisements and activities with their naked eyes, can see selected views on the stadium's giant screen (or screens), or can view their surroundings through mobile devices such as smart phones, tablet computers, glasses with integrated displays, and other wearable devices with display capabilities. For example, an attendee's surroundings can be recorded or streamed from the integrated camera in a mobile device and viewed on the display of the streaming device or another device that is registered with the recording/streaming device. When viewing through a device, a person can view an augmented version of reality with overlaid information related to tagged content or recognized content. Another augmentation is that the viewer can use the camera or display device to magnify or zoom in on an activity or object.
  • Remote viewers of an event happening at the stadium can also see advertisements and activities. The remote viewers, however, can be provided with video having composited in items. The remote viewers can select a composited in item and receive information about the composited in element.
  • As shown in FIG. 29, streaming services or other intermediaries can receive video from mobile device cameras (or other cameras) and record it or stream the video to other intermediaries or display devices. In this scenario, the display, camera, and intermediaries would be registered directly or indirectly with one another. One example of indirect registration is when devices or intermediaries are not directly registered with each other, may be unaware of each other, but can receive data from one another. For example, the camera and display may both register with an intermediary that receives the camera video stream and provides it to the display. The video stream can pass through multiple intermediaries in series and in parallel. The video stream can reach multiple display devices. Every one of those intermediaries has the opportunity to composite items and advertisements into the video stream before passing it along. Certain intermediaries can interpret annotation data for the composited in scene elements and composite over those scene elements. Certain other intermediaries can interpret media tags and use the data therein to attempt to register with or contact the camera streaming the video or an intermediary closer, streamwise, to the camera.
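  • The chain of intermediaries can be pictured with a minimal sketch. The class names, the string stand-in for the actual compositing operation, and the annotation references below are hypothetical; each intermediary overlays its own scene elements and appends a reference to that layer's annotation data before passing the stream along, so downstream devices can resolve selections of composited in items.

    class Display:
        def handle(self, frame, composited_refs):
            print("render", frame, "annotation layers:", composited_refs)

    class Intermediary:
        def __init__(self, annotation_ref, downstream):
            self.annotation_ref = annotation_ref  # where annotations for this layer's composited elements live
            self.downstream = downstream          # next intermediary or display device

        def handle(self, frame, composited_refs):
            # Stand-in for real compositing: overlay this intermediary's scene elements,
            # then append an indicator so downstream devices can resolve selections of them.
            frame = frame + "+overlay(" + self.annotation_ref + ")"
            self.downstream.handle(frame, composited_refs + [self.annotation_ref])

    chain = Intermediary("annots://stadium-ads",
                         Intermediary("annots://regional-ads", Display()))
    chain.handle("frame-000123", [])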
  • Based on the foregoing, it can be appreciated that a number of embodiments and alternatives are disclosed herein. For example, in an embodiment, a method can be implemented for supporting bidirectional communications and data sharing. Such a method can include, for example, registering at least one wireless hand held device with a controller associated with at least one multimedia display; selecting at least one profile icon for use as a cursor during interaction of the at least one wireless hand held device with the at least one multimedia display during rendering of an event as data on the at least one multimedia display; and providing supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held device based on a selection of the data rendered on the at least one multimedia display marked by the cursor utilizing the at least one wireless hand held device. One way such a flow could be realized is sketched below.
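  • Purely as an illustration of that flow, the sketch below assumes a hypothetical HTTP interface on the controller; the controller address, endpoint paths, and payload fields are not part of the disclosure and are labeled as assumptions in the comments.

    import json
    import urllib.request

    CONTROLLER = "http://settopbox.local"  # assumed address of the display controller

    def post(path, payload):
        request = urllib.request.Request(
            CONTROLLER + path,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())

    # Register the hand held device, pick a profile icon to use as its cursor,
    # then request supplemental data for the on-screen item the cursor marks.
    session = post("/register", {"device_id": "hhd-42"})
    post("/cursor", {"session": session["token"], "profile_icon": "bluebird"})
    info = post("/select", {"session": session["token"], "x": 0.41, "y": 0.63, "t": 1803.2})
    print(info.get("supplemental_data"))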
  • In other embodiments, a step can be implemented for providing the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game. In yet other embodiments, a step can be implemented for manipulating, via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device. In still other embodiments, a step can be implemented for storing the supplemental data in a memory in response to a user input via the at least one wireless hand held device. In other embodiments, a step can be implemented for pushing the supplemental data for rendering via at least one other multimedia display. In yet other embodiments, a step can be implemented for pushing the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display. In still other embodiments, a step can be provided for filtering the supplemental data for rendering via a multimedia display. “Filtering” may include, for example, automatically censoring supplemental data based on a user profile or parameters for censoring or filtering such data. For example, a public place, such as a bar or restaurant, may wish to prevent certain types of supplemental data (e.g., pornography, violence, etc.) from being displayed on multimedia displays in its establishment. Filtering may occur via the controller (set-top box), controls or settings on the displays themselves, and/or via the HHD accessing media of interest.
  • In another embodiment, a system can be implemented for supporting bidirectional communications and data sharing. Such a system can include a processor, and a data bus coupled to the processor. In addition, such a system can include a computer-usable medium embodying computer code, the computer-usable medium being coupled to the data bus, the computer program code comprising instructions executable by the processor and configured for: registering at least one wireless hand held device with a controller associated with at least one multimedia display; selecting at least one profile icon for use as a cursor during interaction of the at least one wireless hand held device with the at least one multimedia display during rendering of an event as data on the at least one multimedia display; and providing supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held device based on a selection of the data rendered on the at least one multimedia display marked by the cursor utilizing the at least one wireless hand held device.
  • In another embodiment, such instructions can be further configured for providing the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game. In yet another embodiment, such instructions can be further configured for manipulating, via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device. In still other embodiments, such instructions can be further configured for storing the supplemental data in a memory in response to a user input via the at least one wireless hand held device. In yet another embodiment, such instructions can be configured for pushing the supplemental data for rendering via at least one other multimedia display. In other embodiments, such instructions can be further configured for pushing the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display. In other embodiments, such instructions can be further configured for filtering the supplemental data for rendering via a multimedia display.
  • In another embodiment, a processor-readable medium can be implemented for storing code representing instructions to cause a processor to perform a process to support bidirectional communications and data sharing. Such code can, in some embodiments, comprise code to register at least one wireless hand held device with a controller associated with at least one multimedia display; select at least one profile icon for use as a cursor during interaction of the at least one wireless hand held device with the at least one multimedia display during rendering of an event as data on the at least one multimedia display; and provide supplemental data from at least one of the at least one multimedia display and/or a remote database to the at least one wireless hand held device based on a selection of the data rendered on the at least one multimedia display marked by the cursor utilizing the at least one wireless hand held device.
  • In other embodiments, such code can further comprise code to provide the event as at least one of a movie, television program, recorded event, a live event, or a multimedia game. In still other embodiments, such code can comprise code to manipulate, via the controller, the data rendered on the at least one multimedia display utilizing the at least one wireless hand held device. In yet other embodiments, such code can comprise code to store the supplemental data in a memory in response to a user input via the at least one wireless hand held device. In yet another embodiment, such code can comprise code to push the supplemental data for rendering via at least one other multimedia display. In still other embodiments, such code can comprise code to push the supplemental data via at least one of the at least one wireless hand held device, a remote server, or the controller associated with the at least one multimedia display. In other embodiments, such code can comprise code to filter the supplemental data for rendering via a multimedia display.
  • In other embodiments, a method can be implemented for displaying additional information about a displayed point of interest. Such a method may include selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region, and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • In other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device. In still other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a gyro-controlled pointing device. In yet other embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a laser pointer.
  • In some embodiments, the mobile device can be, for example, a Smartphone. In other embodiments, the mobile device can be, for example, a pad computing device (e.g., an iPad, an Android tablet computing device, a Kindle (Amazon) device, etc.). In still other embodiments, the mobile device can be a remote gaming device. In yet other embodiments, a step or operation can be implemented for synchronizing the display with the secondary display through a network. In other embodiments, the aforementioned network can be, for example, a wireless network (e.g., a cellular communications network, the Internet, etc.).
  • In another embodiment, a system can be implemented for displaying additional information about a displayed point of interest. Such a system can include, for example, a memory; and a processor in communication with the memory, wherein the system is configured to perform a method, wherein such a method comprises, for example, selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region; and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
  • In an alternative system, for example, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device. In yet another system, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a gyro-controlled pointing device. In still other system embodiments, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a laser pointer.
  • In some system embodiments, the mobile device may be, for example, a Smartphone. In still other system embodiments, the mobile device can be a pad computing device. In yet other system embodiments, the mobile device may be, for example, a remote gaming device. In still other system embodiments, the aforementioned method can include synchronizing the display with the secondary display through a network.
  • In other embodiments, a computer program product for displaying additional information about a displayed point of interest can be implemented. Such a computer program product can include, for example, a non-transitory storage medium readable by a processor and storing instructions for execution by the processor for performing a method comprising: selecting a region within a particular frame of a display to access additional information about a point of interest associated with the region; and displaying the additional information on a secondary display, in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region. In some embodiments of such a computer program product, selecting the region within the particular frame to access the additional information about the point of interest associated with the region can further comprise selecting the region within the particular frame utilizing a mobile device.
  • It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Also, various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.

Claims (20)

What is claimed is:
1. A method for obtaining additional information about a point of interest in a scene of video programming rendering on a multimedia display, comprising:
capturing an image of a scene from video programming rendering on a multimedia display with a handheld device camera;
sending the image to a remote server to identify the video programming by matching the image with images of video programming stored in a database;
receiving notice of a match from the remote server and the availability of supplemental information related to the video programming;
obtaining the supplemental information with the handheld device from at least one of the remote server or a supplemental server containing the supplemental information;
selecting a region within a particular point of interest on the scene of the video programming rendering on a multimedia display to access additional information about the point of interest associated with the region; and
displaying the additional information on at least one of the handheld device or a secondary display in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
2. The method of claim 1, wherein selecting the region within the particular frame to access the additional information about the point of interest associated with the region, further comprises selecting the region within the particular frame utilizing a mobile device.
3. The method of claim 1, wherein selecting the region within the particular frame to access the additional information about the point of interest associated with the region, further comprises selecting the region within the particular frame utilizing a gyro-controlled pointing device.
4. The method of claim 1, wherein selecting the region within the particular frame to access the additional information about the point of interest associated with the region, further comprises selecting the region within the particular frame utilizing a laser pointer.
5. The method of claim 2, wherein the mobile device comprises a Smartphone.
6. The method of claim 2, wherein the mobile device comprises a pad computing device.
7. The method of claim 2, wherein the mobile device comprises a remote gaming device.
8. The method of claim 1, further comprising synchronizing the display with the secondary display through a network.
9. The method of claim 7, wherein the network comprises the Internet.
10. The method of claim 7, wherein the network comprises a wireless network.
11. A system for displaying additional information about a displayed point of interest, comprising:
a hand held device including a memory, a processor, and wireless communication, the handheld device programmed to enable the processor, memory, and wireless communication to operate an application enabling the system to perform a method, the method comprising:
capturing an image of a scene from video programming rendering on a multimedia display with a handheld device camera;
sending the image to a remote server to identify the video programming by matching the image with images of video programming stored in a database;
receiving notice of a match from the remote server and the availability of supplemental information related to the video programming;
obtaining the supplemental information with the handheld device from at least one of the remote server or a supplemental server containing the supplemental information;
selecting a region within a particular point of interest on the scene of the video programming rendering on a multimedia display to access additional information about the point of interest associated with the region; and
displaying the additional information on at least one of the handheld device or a secondary display in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
12. The system of claim 11, wherein selecting the region within the particular frame to access the additional information about the point of interest associated with the region, further comprises selecting the region within the particular frame utilizing a mobile device.
13. The system of claim 11, wherein selecting the region within the particular frame to access the additional information about the point of interest associated with the region, further comprises selecting the region within the particular frame utilizing a gyro-controlled pointing device.
14. The system of claim 11, wherein selecting the region within the particular frame to access the additional information about the point of interest associated with the region, further comprises selecting the region within the particular frame utilizing a laser pointer.
15. The system of claim 12, wherein the mobile device comprises a Smartphone.
16. The system of claim 12, wherein the mobile device comprises a pad computing device.
17. The system of claim 12, wherein the mobile device comprises a remote gaming device.
18. The system of claim 11, wherein the method further comprises synchronizing the display with the secondary display through a network.
19. A computer program product for enabling a handheld device to determine the identity of video programming rendering on a display screen, and retrieve and display additional information about the video programming, the computer program product comprising a non-transitory storage medium readable by a processor and storing instructions for execution by the processor for performing a method comprising:
capturing an image of a scene from video programming rendering on a multimedia display with a handheld device camera;
sending the image to a remote server to identify the video programming by matching the image with images of video programming stored in a database;
receiving notice of a match from the remote server and the availability of supplemental information related to the video programming;
obtaining the supplemental information with the handheld device from at least one of the remote server or a supplemental server containing the supplemental information;
selecting a region within a particular point of interest on the scene of the video programming rendering on a multimedia display to access additional information about the point of interest associated with the region; and
displaying the additional information on at least one of the handheld device or a secondary display in response to selecting the region within the particular frame of the display to access the additional information about the point of interest associated with the region.
20. The computer program product of claim 19, wherein selecting the region within the particular point of interest to access the additional information about the point of interest associated with the region, further comprises selecting the region within the particular point of interest utilizing a display screen integrated in a hand held device.
US14/976,493 2009-12-31 2015-12-21 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game Abandoned US20160182971A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/976,493 US20160182971A1 (en) 2009-12-31 2015-12-21 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US16/845,604 US11496814B2 (en) 2009-12-31 2020-04-10 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US29183709P 2009-12-31 2009-12-31
US41926810P 2010-12-03 2010-12-03
US12/976,148 US9508387B2 (en) 2009-12-31 2010-12-22 Flick intel annotation methods and systems
US201161539945P 2011-09-27 2011-09-27
US201161581226P 2011-12-29 2011-12-29
US13/345,382 US8751942B2 (en) 2011-09-27 2012-01-06 Method, system and processor-readable media for bidirectional communications and data sharing between wireless hand held devices and multimedia display systems
US13/413,859 US9465451B2 (en) 2009-12-31 2012-03-07 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US14/976,493 US20160182971A1 (en) 2009-12-31 2015-12-21 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/413,859 Continuation-In-Part US9465451B2 (en) 2009-12-31 2012-03-07 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/845,604 Continuation US11496814B2 (en) 2009-12-31 2020-04-10 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game

Publications (1)

Publication Number Publication Date
US20160182971A1 true US20160182971A1 (en) 2016-06-23

Family

ID=56131059

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/976,493 Abandoned US20160182971A1 (en) 2009-12-31 2015-12-21 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US16/845,604 Active 2031-01-03 US11496814B2 (en) 2009-12-31 2020-04-10 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/845,604 Active 2031-01-03 US11496814B2 (en) 2009-12-31 2020-04-10 Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game

Country Status (1)

Country Link
US (2) US20160182971A1 (en)

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100045568A1 (en) * 2006-02-21 2010-02-25 AT&T Intellectual Property I,L.P.f/k/a BellSouth Intellectual Property Corporation Methods, Systems, And Computer Program Products For Providing Content Synchronization Or Control Among One Or More Devices
US9459762B2 (en) 2011-09-27 2016-10-04 Flick Intelligence, LLC Methods, systems and processor-readable media for bidirectional communications and data sharing
US9465451B2 (en) 2009-12-31 2016-10-11 Flick Intelligence, LLC Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US9508387B2 (en) 2009-12-31 2016-11-29 Flick Intelligence, LLC Flick intel annotation methods and systems
US20160379056A1 (en) * 2015-06-24 2016-12-29 Intel Corporation Capturing media moments
US9560425B2 (en) 2008-11-26 2017-01-31 Free Stream Media Corp. Remotely control devices over a network without authentication or registration
US9703947B2 (en) 2008-11-26 2017-07-11 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US9716736B2 (en) 2008-11-26 2017-07-25 Free Stream Media Corp. System and method of discovery and launch associated with a networked media device
US20180109820A1 (en) * 2016-10-14 2018-04-19 Spotify Ab Identifying media content for simultaneous playback
US9961388B2 (en) 2008-11-26 2018-05-01 David Harrison Exposure of public internet protocol addresses in an advertising exchange server to improve relevancy of advertisements
US20180130167A1 (en) * 2016-11-10 2018-05-10 Alibaba Group Holding Limited Multi-Display Interaction
US9986279B2 (en) 2008-11-26 2018-05-29 Free Stream Media Corp. Discovery, access control, and communication with networked services
US20180357907A1 (en) * 2016-12-13 2018-12-13 drive.ai Inc. Method for dispatching a vehicle to a user's location
US10231023B2 (en) * 2010-04-01 2019-03-12 Sony Interactive Entertainment Inc. Media fingerprinting for content determination and retrieval
US20190080175A1 (en) * 2017-09-14 2019-03-14 Comcast Cable Communications, Llc Methods and systems to identify an object in content
US20190132622A1 (en) * 2018-08-07 2019-05-02 Setos Family Trust System for temporary access to subscriber content over non-proprietary networks
US10292045B2 (en) * 2017-08-24 2019-05-14 Nanning Fugui Precision Industrial Co., Ltd. Information obtaining method and information obtaining device
CN109844854A (en) * 2016-08-12 2019-06-04 奇跃公司 Word flow comment
US10334324B2 (en) 2008-11-26 2019-06-25 Free Stream Media Corp. Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device
US10334264B2 (en) * 2016-11-18 2019-06-25 Ainsworth Game Technology Limited Method of encoding multiple languages in a video file for a gaming machine
US10419541B2 (en) 2008-11-26 2019-09-17 Free Stream Media Corp. Remotely control devices over a network without authentication or registration
WO2020018382A1 (en) * 2018-07-17 2020-01-23 Vidit, LLC Systems and methods for archiving and accessing of image content
US10567823B2 (en) 2008-11-26 2020-02-18 Free Stream Media Corp. Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device
US10631068B2 (en) 2008-11-26 2020-04-21 Free Stream Media Corp. Content exposure attribution based on renderings of related content across multiple devices
US10764347B1 (en) 2017-11-22 2020-09-01 Amazon Technologies, Inc. Framework for time-associated data stream storage, processing, and replication
CN111738905A (en) * 2020-08-17 2020-10-02 北京理工大学 Large-scale crowd performance display screen auxiliary art modeling generation method
US10878028B1 (en) * 2017-11-22 2020-12-29 Amazon Technologies, Inc. Replicating and indexing fragments of time-associated data streams
US10880340B2 (en) 2008-11-26 2020-12-29 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US10902461B2 (en) * 2018-08-03 2021-01-26 International Business Machines Corporation Environmental modification using tone model analysis
US10944804B1 (en) 2017-11-22 2021-03-09 Amazon Technologies, Inc. Fragmentation of time-associated data streams
US10977693B2 (en) 2008-11-26 2021-04-13 Free Stream Media Corp. Association of content identifier of audio-visual data with additional data through capture infrastructure
US11025691B1 (en) 2017-11-22 2021-06-01 Amazon Technologies, Inc. Consuming fragments of time-associated data streams
US11062483B1 (en) * 2020-01-15 2021-07-13 Bank Of America Corporation System for dynamic transformation of electronic representation of resources
CN113163227A (en) * 2020-01-22 2021-07-23 浙江大学 Method and device for obtaining video data and electronic equipment
US11212594B2 (en) * 2017-04-28 2021-12-28 Konami Digital Entertainment Co., Ltd. Server device and storage medium for use therewith
US11282045B2 (en) * 2016-12-21 2022-03-22 Alibaba Group Holding Limited Methods, devices, and systems for verifying digital tickets at a client
US11301966B2 (en) * 2018-12-10 2022-04-12 Apple Inc. Per-pixel filter
US11360639B2 (en) * 2018-03-27 2022-06-14 Spacedraft Pty Ltd Media content planning system
US11375347B2 (en) * 2013-02-20 2022-06-28 Disney Enterprises, Inc. System and method for delivering secondary content to movie theater patrons
WO2022155107A1 (en) * 2021-01-12 2022-07-21 Iamchillpill Llc. Synchronizing secondary audiovisual content based on frame transitions in streaming content
US11418845B2 (en) * 2013-01-31 2022-08-16 Paramount Pictures Corporation System and method for interactive remote movie watching, scheduling, and social connection
EP4016528A3 (en) * 2016-04-08 2022-10-12 Source Digital, Inc. Media environment driven content distribution platform
US11496814B2 (en) 2009-12-31 2022-11-08 Flick Intelligence, LLC Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US11493348B2 (en) 2017-06-23 2022-11-08 Direct Current Capital LLC Methods for executing autonomous rideshare requests
US20220385711A1 (en) * 2021-05-28 2022-12-01 Flir Unmanned Aerial Systems Ulc Method and system for text search capability of live or recorded video content streamed over a distributed communication network
US11551441B2 (en) * 2016-12-06 2023-01-10 Enviropedia, Inc. Systems and methods for a chronological-based search engine
CN116248937A (en) * 2018-03-26 2023-06-09 索尼公司 Information processing apparatus and information processing method
US11709889B1 (en) * 2012-03-16 2023-07-25 Google Llc Content keyword identification
WO2024063867A1 (en) * 2022-09-22 2024-03-28 Qualcomm Incorporated Multi-source multimedia output and synchronization

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255709A (en) 2020-02-11 2021-08-13 阿里巴巴集团控股有限公司 Image element matching method and device, model training method and device and data processing method

Family Cites Families (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3006314A (en) 1959-12-14 1961-10-31 Outboard Marine Corp Gas tank oil gauge
DE4446139C2 (en) 1993-12-30 2000-08-17 Intel Corp Method and device for highlighting objects in a conference system
US6727887B1 (en) 1995-01-05 2004-04-27 International Business Machines Corporation Wireless pointing device for remote cursor control
JP2957938B2 (en) 1995-03-31 1999-10-06 ミツビシ・エレクトリック・インフォメイション・テクノロジー・センター・アメリカ・インコーポレイテッド Window control system
US20020056136A1 (en) * 1995-09-29 2002-05-09 Wistendahl Douglass A. System for converting existing TV content to interactive TV programs operated with a standard remote control and TV set-top box
US6496981B1 (en) * 1997-09-19 2002-12-17 Douglass A. Wistendahl System for converting media content for interactive TV use
US5995102A (en) 1997-06-25 1999-11-30 Comet Systems, Inc. Server system and method for modifying a cursor image
US6842190B1 (en) 1999-07-06 2005-01-11 Intel Corporation Video bit stream extension with supplementary content information to aid in subsequent video processing
KR100319157B1 (en) 1999-09-22 2002-01-05 구자홍 User profile data structure for specifying multiple user preference items, And method for multimedia contents filtering and searching using user profile data
US6757907B1 (en) * 2000-02-09 2004-06-29 Sprint Communications Company, L.P. Display selection in a video-on-demand system
US6834308B1 (en) 2000-02-17 2004-12-21 Audible Magic Corporation Method and apparatus for identifying media content presented on a media playing device
US7343617B1 (en) * 2000-02-29 2008-03-11 Goldpocket Interactive, Inc. Method and apparatus for interaction with hyperlinks in a television broadcast
US20020049975A1 (en) * 2000-04-05 2002-04-25 Thomas William L. Interactive wagering system with multiple display support
US20020002707A1 (en) 2000-06-29 2002-01-03 Ekel Sylvain G. System and method to display remote content
US6970860B1 (en) 2000-10-30 2005-11-29 Microsoft Corporation Semi-automatic annotation of multimedia objects
US7562012B1 (en) 2000-11-03 2009-07-14 Audible Magic Corporation Method and apparatus for creating a unique audio signature
WO2002082271A1 (en) 2001-04-05 2002-10-17 Audible Magic Corporation Copyright detection and protection system and method
US6968337B2 (en) 2001-07-10 2005-11-22 Audible Magic Corporation Method and apparatus for identifying an unknown work
US7529659B2 (en) 2005-09-28 2009-05-05 Audible Magic Corporation Method and apparatus for identifying an unknown work
US7877438B2 (en) 2001-07-20 2011-01-25 Audible Magic Corporation Method and apparatus for identifying new media content
US20040205482A1 (en) 2002-01-24 2004-10-14 International Business Machines Corporation Method and apparatus for active annotation of multimedia content
US7421660B2 (en) 2003-02-04 2008-09-02 Cataphora, Inc. Method and apparatus to visually present discussions for data mining purposes
AU2003239385A1 (en) * 2002-05-10 2003-11-11 Richard R. Reisman Method and apparatus for browsing using multiple coordinated device
US7197234B1 (en) 2002-05-24 2007-03-27 Digeo, Inc. System and method for processing subpicture data
GB2399983A (en) 2003-03-24 2004-09-29 Canon Kk Picture storage and retrieval system for telecommunication system
US7242389B1 (en) 2003-10-07 2007-07-10 Microsoft Corporation System and method for a large format collaborative display for sharing information
US8065665B1 (en) 2004-02-28 2011-11-22 Oracle America, Inc. Method and apparatus for correlating profile data
US20050251823A1 (en) * 2004-05-05 2005-11-10 Nokia Corporation Coordinated cross media service
EP1610263A1 (en) * 2004-06-18 2005-12-28 Sicpa Holding S.A. Item carrying at least two data storage elements
US8086575B2 (en) 2004-09-23 2011-12-27 Rovi Solutions Corporation Methods and apparatus for integrating disparate media formats in a networked media system
US20060075360A1 (en) * 2004-10-04 2006-04-06 Edwards Systems Technology, Inc. Dynamic highlight prompting apparatus and method
US20060080130A1 (en) 2004-10-08 2006-04-13 Samit Choksi Method that uses enterprise application integration to provide real-time proactive post-sales and pre-sales service over SIP/SIMPLE/XMPP networks
US7818672B2 (en) * 2004-12-30 2010-10-19 Microsoft Corporation Floating action buttons
US7852317B2 (en) 2005-01-12 2010-12-14 Thinkoptics, Inc. Handheld device for handheld vision based absolute pointing system
US20070003223A1 (en) * 2005-04-11 2007-01-04 Phatcat Media, Inc. User initiated access to secondary content from primary video/audio content
US7672968B2 (en) 2005-05-12 2010-03-02 Apple Inc. Displaying a tooltip associated with a concurrently displayed database object
US20060282776A1 (en) 2005-06-10 2006-12-14 Farmer Larry C Multimedia and performance analysis tool
US20070093275A1 (en) * 2005-10-25 2007-04-26 Sony Ericsson Mobile Communications Ab Displaying mobile television signals on a secondary display device
US8402503B2 (en) 2006-02-08 2013-03-19 At& T Intellectual Property I, L.P. Interactive program manager and methods for presenting program content
US20070288640A1 (en) 2006-06-07 2007-12-13 Microsoft Corporation Remote rendering of multiple mouse cursors
US8089455B1 (en) * 2006-11-28 2012-01-03 Wieder James W Remote control with a single control button
US20080208589A1 (en) 2007-02-27 2008-08-28 Cross Charles W Presenting Supplemental Content For Digital Media Using A Multimodal Application
US7938727B1 (en) * 2007-07-19 2011-05-10 Tim Konkle System and method for providing interactive content for multiple networked users in a shared venue
HUE037450T2 (en) * 2007-09-28 2018-09-28 Dolby Laboratories Licensing Corp Treating video information
US8285121B2 (en) 2007-10-07 2012-10-09 Fall Front Wireless Ny, Llc Digital network-based video tagging system
KR101382499B1 (en) 2007-10-22 2014-04-21 삼성전자주식회사 Method for tagging video and apparatus for video player using the same
US8875212B2 (en) * 2008-04-15 2014-10-28 Shlomo Selim Rakib Systems and methods for remote control of interactive video
FR2924507B1 (en) * 2007-11-30 2010-02-26 Thales Sa DEVICE FOR CONTROLLING A COMPUTER POINTER IN A SYSTEM COMPRISING DIFFERENT TYPES OF DISPLAYS
US8229865B2 (en) 2008-02-04 2012-07-24 International Business Machines Corporation Method and apparatus for hybrid tagging and browsing annotation for multimedia content
US20090228492A1 (en) 2008-03-10 2009-09-10 Verizon Data Services Inc. Apparatus, system, and method for tagging media content
WO2009145848A1 (en) * 2008-04-15 2009-12-03 Pvi Virtual Media Services, Llc Preprocessing video to insert visual elements and applications thereof
WO2009143086A2 (en) * 2008-05-17 2009-11-26 Qwizdom, Inc. Digitizing tablet devices, methods and systems
US20100235379A1 (en) 2008-06-19 2010-09-16 Milan Blair Reichbach Web-based multimedia annotation system
US10248931B2 (en) 2008-06-23 2019-04-02 At&T Intellectual Property I, L.P. Collaborative annotation of multimedia content
US20090319884A1 (en) 2008-06-23 2009-12-24 Brian Scott Amento Annotation based navigation of multimedia content
EP2332328A4 (en) * 2008-08-18 2012-07-04 Ipharro Media Gmbh Supplemental information delivery
US8665374B2 (en) * 2008-08-22 2014-03-04 Disney Enterprises, Inc. Interactive video insertions, and applications thereof
JP2010081262A (en) * 2008-09-25 2010-04-08 Sony Corp Device, method and system for processing information, and program
US20100138517A1 (en) 2008-12-02 2010-06-03 At&T Intellectual Property I, L.P. System and method for multimedia content brokering
US20100154007A1 (en) * 2008-12-17 2010-06-17 Jean Touboul Embedded video advertising method and system
US20100235865A1 (en) 2009-03-12 2010-09-16 Ubiquity Holdings Tagging Video Content
US20110066689A1 (en) * 2009-09-14 2011-03-17 Sony Ericsson Mobile Communications Ab Reimbursements for advertisements in communications
US9838744B2 (en) * 2009-12-03 2017-12-05 Armin Moehrle Automated process for segmenting and classifying video objects and auctioning rights to interactive sharable video objects
US9465451B2 (en) 2009-12-31 2016-10-11 Flick Intelligence, LLC Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US9508387B2 (en) 2009-12-31 2016-11-29 Flick Intelligence, LLC Flick intel annotation methods and systems
US20160182971A1 (en) 2009-12-31 2016-06-23 Flickintel, Llc Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US8751942B2 (en) 2011-09-27 2014-06-10 Flickintel, Llc Method, system and processor-readable media for bidirectional communications and data sharing between wireless hand held devices and multimedia display systems
US20110167447A1 (en) * 2010-01-05 2011-07-07 Rovi Technologies Corporation Systems and methods for providing a channel surfing application on a wireless communications device
US20110183654A1 (en) * 2010-01-25 2011-07-28 Brian Lanier Concurrent Use of Multiple User Interface Devices
US8370878B2 (en) * 2010-03-17 2013-02-05 Verizon Patent And Licensing Inc. Mobile interface for accessing interactive television applications associated with displayed content
WO2011149558A2 (en) 2010-05-28 2011-12-01 Abelow Daniel H Reality alternate
US8730354B2 (en) * 2010-07-13 2014-05-20 Sony Computer Entertainment Inc Overlay video content on a mobile device
US20120062468A1 (en) 2010-09-10 2012-03-15 Yu-Jen Chen Method of modifying an interface of a handheld device and related multimedia system
US8913171B2 (en) * 2010-11-17 2014-12-16 Verizon Patent And Licensing Inc. Methods and systems for dynamically presenting enhanced content during a presentation of a media content instance
AU2012289866B2 (en) * 2011-08-04 2015-07-23 Ebay Inc. Content display systems and methods

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7158676B1 (en) * 1999-02-01 2007-01-02 Emuse Media Limited Interactive system
US20020167484A1 (en) * 2001-05-09 2002-11-14 Fujitsu Limited Control system for controlling display device, server, medium and controlling method
US20040208372A1 (en) * 2001-11-05 2004-10-21 Boncyk Wayne C. Image capture and identification system and process
US20040233233A1 (en) * 2003-05-21 2004-11-25 Salkind Carole T. System and method for embedding interactive items in video and playing same in an interactive environment
US8799401B1 (en) * 2004-07-08 2014-08-05 Amazon Technologies, Inc. System and method for providing supplemental information relevant to selected content in media
US20090007023A1 (en) * 2007-06-27 2009-01-01 Sundstrom Robert J Method And System For Automatically Linking A Cursor To A Hotspot In A Hypervideo Stream
US20090138906A1 (en) * 2007-08-24 2009-05-28 Eide Kurt S Enhanced interactive video system and method
US20090077503A1 (en) * 2007-09-18 2009-03-19 Sundstrom Robert J Method And System For Automatically Associating A Cursor with A Hotspot In A Hypervideo Stream Using A Visual Indicator
US20090276805A1 (en) * 2008-05-03 2009-11-05 Andrews Ii James K Method and system for generation and playback of supplemented videos
US20100251285A1 (en) * 2009-03-02 2010-09-30 Irdeto Access B.V. Conditional entitlement processing for obtaining a control word
US20110138317A1 (en) * 2009-12-04 2011-06-09 Lg Electronics Inc. Augmented remote controller, method for operating the augmented remote controller, and system for the same
US20110216206A1 (en) * 2009-12-31 2011-09-08 Sony Computer Entertainment Europe Limited Media viewing

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100045568A1 (en) * 2006-02-21 2010-02-25 AT&T Intellectual Property I,L.P.f/k/a BellSouth Intellectual Property Corporation Methods, Systems, And Computer Program Products For Providing Content Synchronization Or Control Among One Or More Devices
US9866925B2 (en) 2008-11-26 2018-01-09 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US10142377B2 (en) 2008-11-26 2018-11-27 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US9961388B2 (en) 2008-11-26 2018-05-01 David Harrison Exposure of public internet protocol addresses in an advertising exchange server to improve relevancy of advertisements
US10986141B2 (en) 2008-11-26 2021-04-20 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US9560425B2 (en) 2008-11-26 2017-01-31 Free Stream Media Corp. Remotely control devices over a network without authentication or registration
US9591381B2 (en) 2008-11-26 2017-03-07 Free Stream Media Corp. Automated discovery and launch of an application on a network enabled device
US9686596B2 (en) 2008-11-26 2017-06-20 Free Stream Media Corp. Advertisement targeting through embedded scripts in supply-side and demand-side platforms
US9706265B2 (en) 2008-11-26 2017-07-11 Free Stream Media Corp. Automatic communications between networked devices such as televisions and mobile devices
US9703947B2 (en) 2008-11-26 2017-07-11 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US9716736B2 (en) 2008-11-26 2017-07-25 Free Stream Media Corp. System and method of discovery and launch associated with a networked media device
US9838758B2 (en) 2008-11-26 2017-12-05 David Harrison Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US9848250B2 (en) 2008-11-26 2017-12-19 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US9854330B2 (en) 2008-11-26 2017-12-26 David Harrison Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US10419541B2 (en) 2008-11-26 2019-09-17 Free Stream Media Corp. Remotely control devices over a network without authentication or registration
US10334324B2 (en) 2008-11-26 2019-06-25 Free Stream Media Corp. Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device
US10977693B2 (en) 2008-11-26 2021-04-13 Free Stream Media Corp. Association of content identifier of audio-visual data with additional data through capture infrastructure
US9967295B2 (en) 2008-11-26 2018-05-08 David Harrison Automated discovery and launch of an application on a network enabled device
US10425675B2 (en) 2008-11-26 2019-09-24 Free Stream Media Corp. Discovery, access control, and communication with networked services
US10880340B2 (en) 2008-11-26 2020-12-29 Free Stream Media Corp. Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device
US9986279B2 (en) 2008-11-26 2018-05-29 Free Stream Media Corp. Discovery, access control, and communication with networked services
US10032191B2 (en) 2008-11-26 2018-07-24 Free Stream Media Corp. Advertisement targeting through embedded scripts in supply-side and demand-side platforms
US10074108B2 (en) 2008-11-26 2018-09-11 Free Stream Media Corp. Annotation of metadata through capture infrastructure
US10567823B2 (en) 2008-11-26 2020-02-18 Free Stream Media Corp. Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device
US10791152B2 (en) 2008-11-26 2020-09-29 Free Stream Media Corp. Automatic communications between networked devices such as televisions and mobile devices
US10631068B2 (en) 2008-11-26 2020-04-21 Free Stream Media Corp. Content exposure attribution based on renderings of related content across multiple devices
US10771525B2 (en) 2008-11-26 2020-09-08 Free Stream Media Corp. System and method of discovery and launch associated with a networked media device
US9465451B2 (en) 2009-12-31 2016-10-11 Flick Intelligence, LLC Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US11496814B2 (en) 2009-12-31 2022-11-08 Flick Intelligence, LLC Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US9508387B2 (en) 2009-12-31 2016-11-29 Flick Intelligence, LLC Flick intel annotation methods and systems
US10231023B2 (en) * 2010-04-01 2019-03-12 Sony Interactive Entertainment Inc. Media fingerprinting for content determination and retrieval
US9965237B2 (en) 2011-09-27 2018-05-08 Flick Intelligence, LLC Methods, systems and processor-readable media for bidirectional communications and data sharing
US9459762B2 (en) 2011-09-27 2016-10-04 Flick Intelligence, LLC Methods, systems and processor-readable media for bidirectional communications and data sharing
US11709889B1 (en) * 2012-03-16 2023-07-25 Google Llc Content keyword identification
US11818417B1 (en) 2013-01-31 2023-11-14 Paramount Pictures Corporation Computing network for synchronized streaming of audiovisual content
US11418845B2 (en) * 2013-01-31 2022-08-16 Paramount Pictures Corporation System and method for interactive remote movie watching, scheduling, and social connection
US11375347B2 (en) * 2013-02-20 2022-06-28 Disney Enterprises, Inc. System and method for delivering secondary content to movie theater patrons
US20160379056A1 (en) * 2015-06-24 2016-12-29 Intel Corporation Capturing media moments
US10719710B2 (en) * 2015-06-24 2020-07-21 Intel Corporation Capturing media moments of people using an aerial camera system
EP4016528A3 (en) * 2016-04-08 2022-10-12 Source Digital, Inc. Media environment driven content distribution platform
CN109844854A (en) * 2016-08-12 2019-06-04 奇跃公司 Word flow comment
US20180109820A1 (en) * 2016-10-14 2018-04-19 Spotify Ab Identifying media content for simultaneous playback
US10506268B2 (en) * 2016-10-14 2019-12-10 Spotify Ab Identifying media content for simultaneous playback
US20180130167A1 (en) * 2016-11-10 2018-05-10 Alibaba Group Holding Limited Multi-Display Interaction
US10334264B2 (en) * 2016-11-18 2019-06-25 Ainsworth Game Technology Limited Method of encoding multiple languages in a video file for a gaming machine
US20230360394A1 (en) * 2016-12-06 2023-11-09 Enviropedia, Inc. Systems and methods for providing an immersive user interface
US11741707B2 (en) 2016-12-06 2023-08-29 Enviropedia, Inc. Systems and methods for a chronological-based search engine
US11551441B2 (en) * 2016-12-06 2023-01-10 Enviropedia, Inc. Systems and methods for a chronological-based search engine
US20180357907A1 (en) * 2016-12-13 2018-12-13 drive.ai Inc. Method for dispatching a vehicle to a user's location
US10818188B2 (en) * 2016-12-13 2020-10-27 Direct Current Capital LLC Method for dispatching a vehicle to a user's location
US11282045B2 (en) * 2016-12-21 2022-03-22 Alibaba Group Holding Limited Methods, devices, and systems for verifying digital tickets at a client
US11212594B2 (en) * 2017-04-28 2021-12-28 Konami Digital Entertainment Co., Ltd. Server device and storage medium for use therewith
US11493348B2 (en) 2017-06-23 2022-11-08 Direct Current Capital LLC Methods for executing autonomous rideshare requests
US10292045B2 (en) * 2017-08-24 2019-05-14 Nanning Fugui Precision Industrial Co., Ltd. Information obtaining method and information obtaining device
US20190080175A1 (en) * 2017-09-14 2019-03-14 Comcast Cable Communications, Llc Methods and systems to identify an object in content
US10764347B1 (en) 2017-11-22 2020-09-01 Amazon Technologies, Inc. Framework for time-associated data stream storage, processing, and replication
US10878028B1 (en) * 2017-11-22 2020-12-29 Amazon Technologies, Inc. Replicating and indexing fragments of time-associated data streams
US11025691B1 (en) 2017-11-22 2021-06-01 Amazon Technologies, Inc. Consuming fragments of time-associated data streams
US10944804B1 (en) 2017-11-22 2021-03-09 Amazon Technologies, Inc. Fragmentation of time-associated data streams
CN116248937A (en) * 2018-03-26 2023-06-09 索尼公司 Information processing apparatus and information processing method
US11360639B2 (en) * 2018-03-27 2022-06-14 Spacedraft Pty Ltd Media content planning system
WO2020018382A1 (en) * 2018-07-17 2020-01-23 Vidit, LLC Systems and methods for archiving and accessing of image content
US10902461B2 (en) * 2018-08-03 2021-01-26 International Business Machines Corporation Environmental modification using tone model analysis
US20190132622A1 (en) * 2018-08-07 2019-05-02 Setos Family Trust System for temporary access to subscriber content over non-proprietary networks
US11301966B2 (en) * 2018-12-10 2022-04-12 Apple Inc. Per-pixel filter
US20220188989A1 (en) * 2018-12-10 2022-06-16 Apple Inc. Per-pixel filter
US11062483B1 (en) * 2020-01-15 2021-07-13 Bank Of America Corporation System for dynamic transformation of electronic representation of resources
CN113163227A (en) * 2020-01-22 2021-07-23 浙江大学 Method and device for obtaining video data and electronic equipment
CN111738905A (en) * 2020-08-17 2020-10-02 北京理工大学 Large-scale crowd performance display screen auxiliary art modeling generation method
US11483535B2 (en) 2021-01-12 2022-10-25 Iamchillpill Llc. Synchronizing secondary audiovisual content based on frame transitions in streaming content
WO2022155107A1 (en) * 2021-01-12 2022-07-21 Iamchillpill Llc. Synchronizing secondary audiovisual content based on frame transitions in streaming content
US20220385711A1 (en) * 2021-05-28 2022-12-01 Flir Unmanned Aerial Systems Ulc Method and system for text search capability of live or recorded video content streamed over a distributed communication network
WO2024063867A1 (en) * 2022-09-22 2024-03-28 Qualcomm Incorporated Multi-source multimedia output and synchronization

Also Published As

Publication number Publication date
US20200245036A1 (en) 2020-07-30
US11496814B2 (en) 2022-11-08

Similar Documents

Publication Title
US11496814B2 (en) Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US9965237B2 (en) Methods, systems and processor-readable media for bidirectional communications and data sharing
US9465451B2 (en) Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
KR102271191B1 (en) System and method for recognition of items in media data and delivery of information related thereto
US10277861B2 (en) Storage and editing of video of activities using sensor and tag data of participants and spectators
US10650442B2 (en) Systems and methods for presentation and analysis of media content
KR102361213B1 (en) Dynamic binding of live video content
US9026596B2 (en) Sharing of event media streams
US8730354B2 (en) Overlay video content on a mobile device
JP5651231B2 (en) Media fingerprint for determining and searching content
KR101796005B1 (en) Media processing methods and arrangements
CN109416931A (en) Device and method for eye tracking
JP2021509206A (en) Systems and methods for presenting complementary content in augmented reality
US20170048597A1 (en) Modular content generation, modification, and delivery system
US11277668B2 (en) Methods, systems, and media for providing media guidance
KR20150083355A (en) Augmented media service providing method, apparatus thereof, and system thereof
US20130205334A1 (en) Method and apparatus for providing supplementary information about content in broadcasting system
KR20100118896A (en) Method and apparatus for providing information of objects in contents and contents based on the object
WO2022103471A1 (en) Inserting digital contents into a multi-view video
KR100671147B1 (en) Apparatus for experiencing famous scene and Method thereof
KR20180053221A (en) Display device and method for control thereof

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
AS Assignment
Owner name: FLICK INTELLIGENCE, LLC, NEW MEXICO
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FLICK INTEL, LLC;REEL/FRAME:049951/0783
Effective date: 20190601
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION