US20170099514A1 - Gateway multi-view video stream processing for second-screen content overlay - Google Patents


Info

Publication number
US20170099514A1
Authority
US
United States
Prior art keywords
screen
content
screen content
gateway
screen device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/876,419
Other versions
US9628839B1
Inventor
Joseph F. Wodka
Jehan Wickramasuriya
Venugopal Vasudevan
Current Assignee
Arris Enterprises LLC
Original Assignee
Arris Enterprises LLC
Priority date
Filing date
Publication date
Assigned to ARRIS ENTERPRISES, INC. reassignment ARRIS ENTERPRISES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VASUDEVAN, VENUGOPAL, WICKRAMASURIYA, JEHAN, WODKA, JOSEPH F.
Priority to US14/876,419 priority Critical patent US9628839B1
Application filed by Arris Enterprises LLC filed Critical Arris Enterprises LLC
Priority to CA3000847A priority patent CA3000847C
Priority to PCT/US2016/055416 priority patent WO2017062404A1
Priority to DE112016004560.3T priority patent DE112016004560T5
Priority to GB201805025A priority patent GB2558452B
Assigned to ARRIS ENTERPRISES LLC reassignment ARRIS ENTERPRISES LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: ARRIS ENTERPRISES INC
Publication of US20170099514A1 publication Critical patent US20170099514A1
Publication of US9628839B1 publication Critical patent US9628839B1
Application granted granted Critical
Assigned to ARRIS ENTERPRISES LLC reassignment ARRIS ENTERPRISES LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: ARRIS ENTERPRISES, INC.
Assigned to JPMORGAN CHASE BANK, N.A. reassignment JPMORGAN CHASE BANK, N.A. TERM LOAN SECURITY AGREEMENT Assignors: ARRIS ENTERPRISES LLC, ARRIS SOLUTIONS, INC., ARRIS TECHNOLOGY, INC., COMMSCOPE TECHNOLOGIES LLC, COMMSCOPE, INC. OF NORTH CAROLINA, RUCKUS WIRELESS, INC.
Assigned to JPMORGAN CHASE BANK, N.A. reassignment JPMORGAN CHASE BANK, N.A. ABL SECURITY AGREEMENT Assignors: ARRIS ENTERPRISES LLC, ARRIS SOLUTIONS, INC., ARRIS TECHNOLOGY, INC., COMMSCOPE TECHNOLOGIES LLC, COMMSCOPE, INC. OF NORTH CAROLINA, RUCKUS WIRELESS, INC.
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT reassignment WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: ARRIS ENTERPRISES LLC
Assigned to WILMINGTON TRUST reassignment WILMINGTON TRUST SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARRIS ENTERPRISES LLC, ARRIS SOLUTIONS, INC., COMMSCOPE TECHNOLOGIES LLC, COMMSCOPE, INC. OF NORTH CAROLINA, RUCKUS WIRELESS, INC.
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    All classifications fall under H (ELECTRICITY), H04 (ELECTRIC COMMUNICATION TECHNIQUE), H04N (PICTORIAL COMMUNICATION, e.g. TELEVISION); all but H04N 19/597 are within H04N 21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):

    • H04N 19/597: Predictive coding of digital video signals specially adapted for multi-view video sequence encoding
    • H04N 21/4122: Peripherals receiving signals from specially adapted client devices: additional display device, e.g. video projector
    • H04N 21/234327: Reformatting of video elementary streams for distribution or compliance with end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H04N 21/235: Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/23614: Multiplexing of additional data and video streams
    • H04N 21/2365: Multiplexing of several video streams
    • H04N 21/43079: Synchronising the rendering of additional data with content streams on multiple devices
    • H04N 21/4347: Demultiplexing of several video streams
    • H04N 21/4348: Demultiplexing of additional data and video streams
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/440227: Reformatting of video signals for household redistribution, storage or real-time display by decomposing into layers
    • H04N 21/440236: Reformatting by media transcoding, e.g. video transformed into a slideshow of still pictures, audio converted into text
    • H04N 21/44218: Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N 21/4532: Management of client or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N 21/8126: Monomedia components involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N 21/8133: Additional data specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
    • H04N 21/8451: Structuring of content using Advanced Video Coding [AVC]

Definitions

  • Multi-screen solutions display second-screen content on second-screen devices while a user watches first-screen content (e.g., a television show) on a first-screen device (e.g., television).
  • Second-screen applications allow users to interact with their second-screen devices while viewing first-screen content on first-screen devices.
  • For example, a user may watch a television show on a television and then use his or her second-screen device to access second-screen content, such as supplemental content for the television show or advertisements, while the show is playing.
  • The first-screen content may be delivered to the television via a cable television network.
  • The user may then use a content source's application on the second-screen device to access the second-screen content via another communication medium, such as the Internet. For example, while watching a show on a television network, the user may open that network's application to request the second-screen content via the Internet.
  • That is, the second-screen device receives the second-screen content via a communication network different from the one delivering the first-screen content.
  • The resulting latency may cause problems with some content, such as real-time programs (e.g., sports), where latency in the synchronization is not acceptable.
  • FIG. 1 depicts a system for delivering first-screen content and second-screen content using multi-view coding (MVC) extensions according to one embodiment.
  • FIG. 2 depicts a more detailed example of a head-end according to one embodiment.
  • FIG. 3 depicts a more detailed example of a gateway for de-multiplexing the content stream according to one embodiment.
  • FIG. 4 depicts a more detailed example of a second-screen processor according to one embodiment.
  • FIG. 5 depicts a simplified flowchart of a method for delivering second-screen content using MVC extensions according to one embodiment.
  • FIG. 6 illustrates an example of a special purpose computer system configured with the multi-view delivery system, the multi-view stream processor, and the second-screen processor according to one embodiment.
  • Described herein are techniques for a second-screen delivery system using multi-view coding extensions.
  • In the following description, numerous examples and specific details are set forth in order to provide a thorough understanding of particular embodiments.
  • Particular embodiments as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
  • A system uses multi-stream capabilities designed for delivering multi-view content to a first-screen device.
  • The system uses the multi-stream capabilities to enable the second-screen experience.
  • Encoding standards have incorporated multi-stream capabilities.
  • The multi-stream capabilities allow a system to deliver multiple video streams to a single source.
  • For example, a multi-view coding (MVC) extension is used to provide multiple views to a first-screen device.
  • For instance, a three-dimensional (3D) movie includes a main video stream and another stream for a second view. Both streams are sent to the first-screen device, which combines the second view with the main video stream to create the 3D picture.
  • The second view is encoded into a single stream with the main video stream using the MVC extension.
  • Particular embodiments use the MVC extension to provide second-screen content along with the first-screen content.
  • A head-end multiplexes the first-screen content with the second-screen content into a single content stream.
  • The second-screen content is added to the video stream according to the MVC extension requirements.
  • At the customer premise, the gateway de-multiplexes the first-screen content and the second-screen content.
  • The gateway can then send the first-screen content to the first-screen device while caching the second-screen content.
  • When the gateway determines that the second-screen content should be displayed on the second-screen device, the gateway can send the second-screen content to that device for display on its second screen.
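  • The gateway-side flow just described (de-multiplex, forward first-screen content, cache second-screen content until it is due) can be sketched as follows. This is an illustrative sketch only: the `View` record and its `is_second_screen` flag stand in for whatever per-stream metadata the head-end attaches, and are not defined by the patent or by the MVC extension itself.

```python
from dataclasses import dataclass, field

@dataclass
class View:
    """One de-multiplexed view from the multi-view content stream.
    The is_second_screen flag models the (hypothetical) metadata tag."""
    payload: bytes
    is_second_screen: bool
    target_device: str = ""

@dataclass
class Gateway:
    cache: list = field(default_factory=list)

    def handle(self, views):
        """Forward first-screen views immediately; hold second-screen views."""
        to_first_screen = []
        for v in views:
            if v.is_second_screen:
                self.cache.append(v)       # cached until display time
            else:
                to_first_screen.append(v)  # sent straight to the first screen
        return to_first_screen

    def release_second_screen(self):
        """Drain the cache once the gateway decides the content is due."""
        due, self.cache = self.cache, []
        return due
```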
  • FIG. 1 depicts a system 100 for delivering first-screen content and second-screen content using MVC extensions according to one embodiment.
  • System 100 includes a head-end 102 and customer premise location 104 .
  • Head-end 102 may service multiple customer premise locations 104 (not shown).
  • Each customer premise location 104 may be offered a personalized second-screen experience using methods described herein.
  • Head-end 102 may deliver first-screen content to customer premise location 104 .
  • In one embodiment, head-end 102 is part of a cable television network that delivers video content for different television networks via broadcast and also on demand.
  • The first-screen content may be delivered via the cable television network using a variety of communication protocols, such as quadrature amplitude modulation (QAM) or Internet Protocol (IP).
  • A multi-view delivery system 106 includes multiple computing devices that send first-screen content to customer premise location 104.
  • Customer premise location 104 may include a gateway 112, a device that interfaces between an outside wide area network 103 (e.g., a cable network and/or the Internet) and a local area network 105 within location 104.
  • A first-screen (1st screen) device 108 and a set of second-screen devices 110 are connected to gateway 112 via local area network 105.
  • The first-screen device 108 may be considered a primary screen, such as a television, that a user is primarily watching. For example, a user may watch a television show on first-screen device 108.
  • Second-screen (2 nd screen) devices 110 may be secondary screens in which supplemental or second-screen content can be viewed while a user is watching first-screen device 108 . Examples of second-screen devices 110 include mobile devices, such as smartphones, tablets, and laptop computers.
  • Multi-view delivery system 106 may deliver the first-screen content destined for first-screen device 108 for display on a first-screen. Also, multi-view delivery system 106 may deliver second-screen content destined for one or more of second-screen devices 110 .
  • the second-screen content may be considered supplemental content to the first-screen content.
  • the second-screen content may include advertisements, additional information for first-screen content, promotion coupons or offers, and other types of information.
  • Encoding standards such as H.264 and High Efficiency Video Coding (HEVC), among other similar protocols, allow multiple views to be sent in a single video stream.
  • For the H.264/MPEG-4 Advanced Video Coding (AVC) standard, the Joint Video Team of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) standardized an extension of the codec.
  • The extension, referred to as Multi-view Video Coding, is Amendment 4 to the H.264/AVC standard.
  • The extension specifies that multiple video streams can be multiplexed, via different P frames, into a single stream.
  • Other extensions may be used, such as supplemental enhancement information (SEI), which the standard provides to allow metadata to be sent with the first-screen content, as well as other alternative approaches to providing multi-view capabilities.
  • One common use of the MVC extension is to provide at least two multi-view video streams so that a single screen can display three-dimensional video. However, particular embodiments leverage the extension to provide second-screen content. In this case, multi-view delivery system 106 is enhanced to enable delivery of multi-view streams that include both first-screen content and second-screen content.
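  • To make the stream-level separation concrete, the sketch below scans an H.264 Annex-B byte stream and classifies each NAL unit by its type field. The type numbers come from the H.264 specification (6 = SEI, 20 = coded slice extension, the type carrying MVC non-base views); treating type-20 units as candidate second-screen views is this sketch's assumption, not a rule stated in the patent, and a real parser would also need to handle 4-byte start codes and emulation-prevention bytes, which this toy ignores.

```python
# Minimal Annex-B NAL unit scanner: split on 3-byte start codes, then
# read the 5-bit nal_unit_type from the first byte of each unit.
START_CODE = b"\x00\x00\x01"

def split_nal_units(bitstream: bytes):
    """Return the raw NAL units found after Annex-B start codes."""
    return [p for p in bitstream.split(START_CODE) if p]

def classify(nal: bytes) -> str:
    """Classify a NAL unit by its H.264 nal_unit_type."""
    nal_unit_type = nal[0] & 0x1F
    if nal_unit_type == 20:
        return "mvc-extension"   # non-base view: candidate second-screen content
    if nal_unit_type == 6:
        return "sei-metadata"    # may carry second-screen signaling
    return "base-view"           # first-screen content
```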
  • Gateway 112 is specially configured to process second-screen content sent using the MVC extension. Conventionally, a gateway would have received the two multi-view streams of the MVC extension in a single content stream and sent both multi-view streams only to first-screen device 108 (i.e., not to any second-screen devices), because the MVC extension is meant to provide multiple views on a single device. Gateway 112, however, uses a multi-view stream processor 114 to de-multiplex the first-screen content and the second-screen content included in a content stream from multi-view delivery system 106. Multi-view stream processor 114 may analyze the content stream to determine where to send the different streams.
  • In some cases, the content stream may be intended entirely for first-screen device 108, such as when a 3D movie is being watched and the multi-view content includes the additional view.
  • In that case, multi-view stream processor 114 may send both the first-screen content and the multi-view content to first-screen device 108.
  • Both multi-view streams are merged again, encoded using an encoder, and then sent to first-screen device 108.
  • When using the MVC extension to enable the second-screen environment, encoder 308 re-encodes the first-screen content and then sends it to first-screen device 108, which can then display it.
  • A set-top box (STB) 116 may receive the first-screen content, decode it, and display it on first-screen device 108.
  • Multi-view stream processor 114 may determine whether a multi-view stream is for the first screen or the second screen. The determination may be made based on metadata associated with the multi-view stream that indicates whether the multi-view content is first-screen content or second-screen content.
  • A second-screen processor 118 may then send the second-screen content to second-screen device 110 at the appropriate time.
  • To that end, multi-view stream processor 114 may first cache the second-screen content.
  • Second-screen processor 118 synchronizes the display of the second-screen content with the first-screen content being displayed on first-screen device 108.
  • An encoder encodes the second-screen content for sending to second-screen devices 110.
  • Alternatively, a user may request the second-screen content when desired. Other methods of providing the second-screen content may be appreciated and will be described in more detail below.
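  • The synchronization step can be sketched as a timestamp-ordered queue: second-screen processor 118 could release each cached item once the first screen's presentation timestamp (PTS) reaches the item's display time. The class and its API are hypothetical illustrations, not structures named by the patent.

```python
import heapq

class SecondScreenScheduler:
    """Release cached second-screen items in sync with the first-screen
    presentation timestamp. Timestamps are in seconds for simplicity."""

    def __init__(self):
        self._queue = []  # min-heap of (display_pts, payload)

    def cache(self, display_pts: float, payload: str):
        """Hold a second-screen item until its display time."""
        heapq.heappush(self._queue, (display_pts, payload))

    def due(self, current_pts: float):
        """Return every cached item whose display time has arrived."""
        out = []
        while self._queue and self._queue[0][0] <= current_pts:
            out.append(heapq.heappop(self._queue)[1])
        return out
```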
  • Accordingly, the MVC extension is used to send both the first-screen content and the second-screen content as multiple views in a single content stream.
  • An intelligent gateway 112 is used to parse the content stream to separate out the first-screen content and the second-screen content based on metadata.
  • The second-screen content can be sent to second-screen devices 110 without being sent to first-screen device 108.
  • Particular embodiments use gateway 112 because it sits between head-end 102 and first-screen devices 108 and second-screen devices 110.
  • Gateway 112 has the processing power to decode the stream and determine whether one view is for the second-screen devices.
  • Gateway 112 can then re-encode the streams and send separate content streams for the first-screen content and the second-screen content to the appropriate destinations.
  • This allows first-screen devices 108 and second-screen devices 110 to remain unchanged while MVC extensions are used to provide second-screen content. Otherwise, either first-screen device 108 or second-screen devices 110 would have had to receive the single stream containing both the first-screen content and the second-screen content and determine how to process the second-screen content. Gateway 112 sits naturally in between first-screen devices 108 and second-screen devices 110, and can determine how to send the second-screen content.
  • Head-end 102 may multiplex the first-screen content and the second-screen content together into a content stream.
  • FIG. 2 depicts a more detailed example of head-end 102 according to one embodiment.
  • A primary stream processor 202 and a supplemental stream processor 204 may determine the first-screen content and the second-screen content, respectively, to add to the single content stream.
  • Multiple content streams may be processed, such as content streams for multiple television broadcasts, any number of which may include second-screen content.
  • Primary stream processor 202 may receive first-screen content 206 from other content sources in real time via satellite or other networks. In other embodiments, primary stream processor 202 may retrieve the first-screen content from storage 205, which may be cache memory or another type of long-term storage. Although one content stream is described, primary stream processor 202 may be sending multiple content streams for multiple television channels to locations 104. Some of the television channels may have related second-screen content, and some may not.
  • Supplemental stream processor 204 may receive second-screen content from a second-screen content provider 208 .
  • Second-screen content provider 208 may include an advertiser, a service provider, a retailer, or even a cable television provider. Also, second-screen content provider 208 may be the same content source that provided the first-screen content. In one embodiment, second-screen content provider 208 may provide second-screen content to head-end 102 , which is stored in storage 205 at 210 .
  • Second-screen content provider 208 can now target specific user devices, and service providers can provide enhancements to the first-screen content. For example, a service provider could provide player statistics for a sporting event video stream. Supplemental stream processor 204 may then determine which second-screen content is appropriate to send with the first-screen content. In one example, supplemental stream processor 204 determines second-screen content targeted to a user of second-screen device 110 or first-screen device 108. After determining the second-screen content, supplemental stream processor 204 sends the second-screen content to a multiplexer 212.
  • Multiplexer 212 receives the first-screen content and the second-screen content, and multiplexes them together into a single multi-view content stream. Multiplexer 212 may multiplex the first-screen content and the second-screen content based on the MVC extension. Also, metadata to identify the second-screen content as being “second-screen content” or for a specific second-screen device 110 may be added to the content stream. The metadata may be needed because the MVC extension is being used for a purpose other than sending multi-views to a single device. The metadata allows gateway 112 to determine when second-screen content is included in the single content stream. Then, encoder 214 may encode the first-screen content and the second-screen content together into an encoded content stream.
  • encoder 214 encodes the second-screen content using the MVC extension.
  • the second-screen content is sent as a multi-view stream with the first-screen content.
  • Encoder 214 can then send the single encoded content stream through network 103 to customer premise location 104 .
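The head-end flow just described (multiplex the views, tag the extra view with metadata identifying it as second-screen content) might be sketched as follows. This is an illustrative sketch only; the `View` class, field names, and metadata values are assumptions, not part of the patent.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: view 0 carries first-screen content, and each additional
# MVC view is tagged with metadata so gateway 112 can recognize it as
# second-screen content rather than a conventional alternate view.
@dataclass
class View:
    payload: bytes
    metadata: dict = field(default_factory=dict)

def multiplex(first_screen: bytes, second_screen: bytes, target_device: str) -> list:
    """Build a single multi-view stream; all names are illustrative."""
    views = [View(first_screen, {"role": "first-screen"})]
    # The metadata flags the extra view as second-screen content and can name
    # a specific second-screen device as its destination.
    views.append(View(second_screen, {"role": "second-screen",
                                      "target": target_device}))
    return views

stream = multiplex(b"main-broadcast", b"coupons", target_device="tablet-1")
```

The metadata is what lets the gateway distinguish this use of the MVC extension from a traditional multi-view (e.g., 3D) stream intended entirely for the first-screen device.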
  • FIG. 3 depicts a more detailed example of gateway 112 for de-multiplexing the content stream according to one embodiment.
  • Gateway 112 receives the encoded content stream that includes the multiplexed first-screen content and second-screen content. Because of the multiplexing, a de-multiplexer 302 de-multiplexes the content stream to separate the multi-view streams. A decoder 304 can then decode the first-screen content and the second-screen content.
  • Multi-view stream processor 114 can then determine whether the multi-view streams include first-screen content and the second-screen content, or are conventional multi-view streams. For example, depending on the metadata associated with the second-screen content, multi-view stream processor 114 may prepare the second-screen content for forwarding to second-screen device 110 . In other embodiments, the second-screen content may actually be meant for first-screen device 108 (in this case, it would not be referred to as second-screen content and is actually multi-view content being traditionally used). If the content stream included traditional multi-view content, then the first-screen content and the second-screen content may be recombined into a single stream, and then an encoder 308 re-encodes the single stream, which is sent to first-screen device 108 .
  • multi-view stream processor 114 determines where to send the first-screen content and the second-screen content. For example, multi-view stream processor 114 sends the first-screen content to set top box 116 (encoded by encoder 308 ). Then, multi-view stream processor 114 determines where to send the second-screen content. In this embodiment, multi-view stream processor 114 stores the second-screen content in cache memory 306 . Although cache memory is described, any type of storage may be used.
  • second-screen processor 118 may determine when and where to send the second-screen content to second-screen device 110 .
  • Encoder 308 (this may be the same encoder used to encode the single stream with multi-views or a different encoder) may encode the second-screen content into a stream. This stream is different as it only includes second-screen content and is not multiplexed with first-screen content. This type of content stream may be in a format that second-screen device 110 is configured to process (that is, second-screen device 110 does not have to de-multiplex a content stream with both first-screen content and second-screen content). Encoder 308 then sends the second-screen content to second-screen device 110 . It should be noted that encoding may be performed at any time before delivery to second-screen device 110 .
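The gateway-side routing described above (first-screen content goes straight to the set-top box; second-screen content is cached until second-screen processor 118 decides when and where to deliver it) can be sketched roughly as below. The dictionary keys and function name are illustrative assumptions.

```python
# Illustrative gateway-side routing after de-multiplexing: each view's
# metadata decides whether it goes to the set-top box immediately or into
# the cache for later second-screen delivery.
def route_views(views, cache, stb_queue):
    for view in views:
        meta = view.get("metadata", {})
        if meta.get("role") == "second-screen":
            # Cache until the second-screen processor picks a time and device.
            cache.setdefault(meta.get("target", "any"), []).append(view["payload"])
        else:
            # First-screen content is re-encoded and sent straight to the STB.
            stb_queue.append(view["payload"])

cache, stb = {}, []
route_views([{"payload": "show", "metadata": {"role": "first-screen"}},
             {"payload": "recipe", "metadata": {"role": "second-screen",
                                                "target": "tablet-1"}}],
            cache, stb)
```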
  • FIG. 4 depicts a more detailed example of second-screen processor 118 according to one embodiment.
  • Second-screen processor 118 may deliver the second-screen content differently. For example, second-screen processor 118 may forward all second-screen content to second-screen device 110 with metadata indicating how and when to display the second-screen content. Or, second-screen processor 118 may detect different events (e.g., in the first-screen content) and send the second-screen content in a synchronized manner.
  • Second-screen processor 118 also may determine which second-screen device 110 is connected to gateway 112 and determine which second-screen device 110 should be the destination for the second-screen content. For example, second-screen processor 118 maintains a list of devices within location 104 that are associated with a user or users. This information may be determined via a user profile 408 for the user (or multiple user profiles for multiple users). The user profile information may be found in a subscription profile (when using an application supported by the MSO) or provided by a user. Also, second-screen processor 118 may include a second-screen device detector 402 to detect which second-screen devices 110 are active in customer premise location 104 . Second-screen device detector 402 may also track which applications 404 are being used by second-screen devices 110 .
  • second-screen device detector 402 may message with second-screen devices 110 to determine which second-screen devices 110 are active and in what location. This may involve sending a message to application 404 and having a user confirm the activity and location. Also, second-screen device detector 402 may use fingerprinting or application detection methods to maintain the list of devices. For example, second-screen device detector 402 may activate a microphone of second-screen device 110 to detect the audio being output in the location of second-screen device 110. Then, second-screen device 110 may determine a fingerprint of the first-screen content being output by first-screen device 108.
  • a television may be outputting a television show, and second-screen device 110 may take a fingerprint of the audio within a range of second-screen device 110 .
  • Second-screen device 110 or second-screen device detector 402 (or a back-end device) can then determine that a user is watching the television show when the fingerprint matches a fingerprint from the television show. Further, second-screen device detector 402 may detect which application the user is using by intercepting transfers between the application and a wide area network, such as the Internet.
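The fingerprint comparison described above might look like the sketch below. Real deployments use robust acoustic fingerprints (e.g., spectral-peak signatures that tolerate room noise); plain hashing stands in here only to show the lookup the detector performs, and all names are assumptions.

```python
import hashlib

# Minimal stand-in for acoustic fingerprinting: derive a signature from the
# captured audio and look it up against signatures of known first-screen shows.
def fingerprint(audio_samples: bytes) -> str:
    return hashlib.sha256(audio_samples).hexdigest()

# Hypothetical table of fingerprints for currently airing first-screen content.
known_shows = {fingerprint(b"cooking-show-audio"): "Cooking Show"}

def identify(mic_capture: bytes):
    """Return the show the user appears to be watching, or None if no match."""
    return known_shows.get(fingerprint(mic_capture))
```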
  • cache 306 buffers the second-screen content. Also, metadata about the second-screen content may be stored in cache 306. The metadata may include information that can be used to determine when the second-screen content should be output to second-screen device 110.
  • a content delivery processor 406 determines when second-screen content should be provided to second-screen device 110 .
  • Content delivery processor 406 may monitor the first-screen content being sent and metadata for the second-screen content in cache 306 . For example, when a first-screen device renderer requests a change in the content view via a channel change, content delivery processor 406 records the change such that content delivery processor 406 knows the channel first-screen device 108 is watching. Then, content delivery processor 406 can retrieve second-screen content for second-screen device 110 appropriately. For example, content delivery processor 406 may retrieve second-screen content for the current channel at a time defined by the metadata for the second-screen content. This synchronizes the second-screen content with the first-screen content.
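The synchronization decision described above can be sketched as follows: cached items carry a channel and a presentation time in their metadata, and content delivery processor 406 releases an item once the first screen is tuned to that channel and playback has reached the item's time. The field names are illustrative assumptions.

```python
# Illustrative check run by the content delivery processor: which cached
# second-screen items are due for the channel the first screen is showing?
def due_items(cache, current_channel, playback_time):
    due = []
    for item in cache:
        meta = item["metadata"]
        if meta["channel"] == current_channel and meta["show_at"] <= playback_time:
            due.append(item["payload"])
    return due

cache = [{"payload": "coupon", "metadata": {"channel": 7, "show_at": 120.0}},
         {"payload": "recipe", "metadata": {"channel": 7, "show_at": 300.0}}]
```

Because the channel change is observed locally at the gateway, the lookup stays synchronized without the second-screen device contacting an external service.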
  • Content delivery processor 406 may use user profile 408, which second-screen device detector 402 built for users, to personalize the second-screen content delivery.
  • the user profile may store personal information for the user, such as user preferences for second-screen content (e.g., what types of advertisements the user likes to view).
  • Content delivery processor 406 may then determine which second-screen content to provide to second-screen application 404 .
  • Content delivery processor 406 may sit within a protocol stack on gateway 112 to allow it to disseminate second-screen content to various second-screen devices 110 .
  • a software development kit can be used by a second-screen application 404 to allow interaction with content delivery processor 406 in gateway 112 to receive second-screen content.
  • second-screen applications 404 can subscribe to and access different capabilities provided by gateway 112 .
  • the software development kit allows second-screen applications 404 to interface with content delivery processor 406 and request specific second-screen sub-streams based on provided parameters.
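One way the subscription interface above might be shaped is sketched below: an application registers parameters through the SDK, and content delivery processor 406 matches incoming second-screen sub-streams against those parameters. The class and method names are hypothetical, not the patent's actual API.

```python
# Hypothetical subscription surface for second-screen applications.
class ContentDeliveryProcessor:
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, app_id, **params):
        # A second-screen application registers the sub-streams it wants.
        self.subscriptions.append((app_id, params))

    def dispatch(self, substream):
        """Return the app ids whose parameters match this sub-stream."""
        return [app for app, params in self.subscriptions
                if all(substream.get(k) == v for k, v in params.items())]

cdp = ContentDeliveryProcessor()
cdp.subscribe("recipe-app", category="recipes")
cdp.subscribe("coupon-app", category="coupons")
```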
  • content delivery processor 406 may automatically determine which second-screen content to send based on the user profile.
  • gateway 112 can use the user profile for a user and disseminate the appropriate second-screen content to the second-screen devices 110 .
  • one second-screen device 110 may be targeted with first second-screen content based on the user profile and another second-screen device may be provided with general second-screen content that is not targeted.
  • a first second-screen device 110 may receive general coupons, and a second second-screen device may receive personalized recipes.
  • the second-screen content may include sign language-enabled broadcasts in which sign language can be displayed on second-screen devices 110 .
  • the standard method for hearing-impaired services is to provide closed captions or, in some broadcasts, to set up a picture-in-a-picture (PIP) view where a sign language source may appear in the corner of the first-screen device display screen while the first-screen content is displayed in the remainder of the first-screen device display screen.
  • This may not be ideal for viewers that may be in the same household. For example, it may either disrupt the viewing experience for users that do not need the sign-language view or overlay too much of the sign-language view over the first-screen broadcast.
  • the PIP window may be too small to view the sign language.
  • the first-screen content may include the main broadcast program and the second-screen content may include sign language information that is associated with the first-screen content.
  • Gateway 112 may track second-screen devices 110 that are active and determine that a user who is hearing-impaired is watching the first-screen content via any detection process. Gateway 112 may then determine that the sign language information should be sent to this second-screen device. Then, the user can watch the sign-language display on his/her own second-screen device 110 without interrupting the television show. Also, this can be enhanced to allow the user to design the elements of how the sign language view and the first-screen content view should be laid out and cast back to the primary screen renderer. For example, a user can segment the first-screen content and the second-screen content as desired.
  • the second-screen content can be provided without the need for the second-screen application to use any first-screen content detection, such as fingerprint detection. Rather, gateway 112 has access to the first-screen content and can perform this detection itself. Further, second-screen device 110 does not need any over-the-top capabilities as the second-screen content is sent with the first-screen content. This may also help synchronization as the second-screen content arrives with the first-screen content and experiences the same latency.
  • Gateway 112 also allows for new application capabilities that go beyond simply overlaying content on second-screen devices 110 based on first-screen content contexts. For example, extended features may be provided not only by the content source, but also by application developers. For example, a cooking show can produce multi-stream views that include the main program, detailed recipe instructions, and ingredient manufacturer coupons. Hence, a second-screen application designer can create different overlays that allow the user to view the recipe details and store them in their home recipe file while previewing the manufacturer coupons and storing the coupons in user-specific logs at the same time as watching the first-screen content.
  • a user is viewing a channel on first-screen device 108 while accessing application 404 on second-screen device 110 .
  • content delivery processor 406 detects the user is watching certain first-screen content. Then, content delivery processor 406 may send second-screen content to application 404 including metadata for when and how the second-screen content should be presented to the user.
  • the second-screen content may include time-based synchronized advertisements to the first-screen content, promotion offers, such as coupons, or supplemental content, such as detailed episode instructions in the form of additional video.
  • the episode-related material may be cooking instructions or detailed auto inspection information that relates to the first-screen content being viewed.
  • second-screen application 404 can display second-screen content related to the first-screen content without needing a connection to an external source through a wide area network, such as the Internet or an over-the-top connection, different from the connection being used by first-screen device 108. That is, the second-screen content is received via the same communication network and content stream as the first-screen content. Further, second-screen device 110 does not need to rely on external technologies to determine what the user is watching and to retrieve related material. Gateway 112 can detect the existing second-screen devices 110 being used and, through context, build user profile information along with information sent from second-screen applications 404 to determine the appropriate second-screen content to provide to users.
  • gateway 112 may detect which second-screen devices 110 are active. Then, gateway 112 may consult a user profile to determine which second-screen content may be of interest to this user using this second-screen device 110 . For example, if a mobile telephone that is associated with a user #1 is active, and this user likes cooking shows, then gateway 112 may send a message to head-end 102 indicating that user #1 is active and likes cooking shows.
  • head-end 102 may determine that recipe information should be sent to gateway 112 as second-screen content. In this case, head-end 102 may selectively provide second-screen content to different users. This may more efficiently use bandwidth as only second-screen content may be sent based on active second-screen devices 110 and only to users that may be interested in this second-screen content. Alternatively, second-screen content can be always sent with first-screen content.
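The upstream signaling just described (the gateway reporting which devices are active and what their users are interested in, so the head-end multiplexes only relevant second-screen content) might take a shape like the sketch below. The message format and field names are assumptions for illustration only.

```python
import json

# Hypothetical interest report the gateway could send to the head-end so
# second-screen content is only multiplexed for active, interested users.
def build_interest_report(active_devices, profiles):
    report = []
    for device in active_devices:
        prefs = profiles.get(device, {}).get("interests", [])
        report.append({"device": device, "interests": prefs})
    return json.dumps({"active": report})

msg = build_interest_report(["phone-user1"],
                            {"phone-user1": {"interests": ["cooking"]}})
```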
  • FIG. 5 depicts a simplified flowchart 500 of a method for delivering second-screen content using MVC extensions according to one embodiment.
  • gateway 112 receives a content stream including first-screen content and second-screen content.
  • Head-end 102 sent the content stream using the MVC extension configured to be used to provide multi-view content for the first-screen content.
  • gateway 112 separates the first-screen content and the second-screen content from the content stream.
  • Demultiplexer 302 may be used to perform demultiplexing.
  • gateway 112 may decode the first-screen content and the second-screen content.
  • gateway 112 determines that the second-screen content is for a second-screen device.
  • gateway 112 can store the second-screen content in cache 306 .
  • gateway 112 detects a second-screen device actively connected to the gateway. Also, gateway 112 may determine that this second-screen device is the destination for the second-screen content. Then, at 514 , gateway 112 sends the first-screen content to a first-screen device. Also, at 516 , gateway 112 sends the second-screen content to the second-screen device.
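The flowchart steps above can be sketched end to end: receive and separate the views, identify and cache the second-screen content, detect the active second-screen device (512), send the first-screen content (514), and send the second-screen content (516). All function and key names are illustrative assumptions.

```python
# Illustrative end-to-end pass over one multi-view content stream at the gateway.
def process_stream(multiplexed, active_devices):
    cache = {}
    deliveries = []
    for view in multiplexed:                     # receive and separate the views
        meta = view["metadata"]
        if meta.get("role") == "second-screen":  # identify second-screen content
            cache[meta["target"]] = view["payload"]  # cache it
        else:
            deliveries.append(("first-screen", view["payload"]))  # step 514
    for device in active_devices:                # step 512: detect active devices
        if device in cache:
            deliveries.append((device, cache.pop(device)))        # step 516
    return deliveries

out = process_stream(
    [{"payload": "show", "metadata": {"role": "first-screen"}},
     {"payload": "stats", "metadata": {"role": "second-screen",
                                       "target": "phone-1"}}],
    ["phone-1"])
```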
  • FIG. 6 illustrates an example of a special purpose computer system 600 configured with multi-view delivery system 106 , multi-view stream processor 114 , and second-screen processor 118 according to one embodiment.
  • computer system 600-1 describes head-end 102.
  • computer system 600-2 describes gateway 112. Only one instance of computer system 600 will be described for discussion purposes, but it will be recognized that computer system 600 may be implemented for other entities described above, such as multi-view delivery system 106, multi-view stream processor 114, second-screen processor 118, first-screen devices 108, STB 116, and/or second-screen devices 110.
  • Computer system 600 includes a bus 602 , network interface 604 , a computer processor 606 , a memory 608 , a storage device 610 , and a display 612 .
  • Bus 602 may be a communication mechanism for communicating information.
  • Computer processor 606 may execute computer programs stored in memory 608 or storage device 610. Any suitable programming language can be used to implement the routines of particular embodiments, including C, C++, Java, assembly language, etc. Different programming techniques can be employed, such as procedural or object-oriented. The routines can execute on a single computer system 600 or multiple computer systems 600. Further, multiple computer processors 606 may be used.
  • Memory 608 may store instructions, such as source code or binary code, for performing the techniques described above. Memory 608 may also be used for storing variables or other intermediate information during execution of instructions to be executed by processor 606 . Examples of memory 608 include random access memory (RAM), read only memory (ROM), or both.
  • Storage device 610 may also store instructions, such as source code or binary code, for performing the techniques described above. Storage device 610 may additionally store data used and manipulated by computer processor 606 .
  • storage device 610 may be a database that is accessed by computer system 600 .
  • Other examples of storage device 610 include random access memory (RAM), read only memory (ROM), a hard drive, a magnetic disk, an optical disk, a CD-ROM, a DVD, a flash memory, a USB memory card, or any other medium from which a computer can read.
  • Memory 608 or storage device 610 may be an example of a non-transitory computer-readable storage medium for use by or in connection with computer system 600 .
  • the non-transitory computer-readable storage medium contains instructions for controlling a computer system 600 to be configured to perform functions described by particular embodiments.
  • the instructions when executed by one or more computer processors 606 , may be configured to perform that which is described in particular embodiments.
  • Computer system 600 includes a display 612 for displaying information to a computer user.
  • Display 612 may display a user interface used by a user to interact with computer system 600 .
  • Computer system 600 also includes a network interface 604 to provide a data communication connection over a network, such as a local area network (LAN) or wide area network (WAN). Wireless networks may also be used.
  • network interface 604 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
  • Computer system 600 can send and receive information through network interface 604 across a network 614 , which may be an Intranet or the Internet.
  • Computer system 600 may interact with other computer systems 600 through network 614 .
  • client-server communications occur through network 614 .
  • implementations of particular embodiments may be distributed across computer systems 600 through network 614 .
  • Particular embodiments may be implemented in a non-transitory computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or machine.
  • the computer-readable storage medium contains instructions for controlling a computer system to perform a method described by particular embodiments.
  • the computer system may include one or more computing devices.
  • the instructions, when executed by one or more computer processors, may be configured to perform that which is described in particular embodiments.

Abstract

Particular embodiments use a multi-view coding (MVC) extension to provide second-screen content along with the first-screen content. In one embodiment, a head-end multiplexes the first-screen content with the second-screen content into a single content stream. The second-screen content is added to the video stream according to the MVC extension requirements. At the user end, such as at a gateway, instead of sending the first-screen content and second-screen content to the first-screen device, the gateway de-multiplexes the first-screen content and the second-screen content. The gateway can then send the first-screen content to the first-screen device while caching the second-screen content. When the gateway determines that the second-screen content should be displayed on the second-screen device, the gateway can send the second-screen content to the second-screen device for display on the second-screen of the second-screen device.

Description

    BACKGROUND
  • Multi-screen solutions display second-screen content on second-screen devices while a user watches first-screen content (e.g., a television show) on a first-screen device (e.g., television). Second-screen applications allow users to interact with their second-screen devices while viewing first-screen content on first-screen devices. In one example, a user may watch a television show on a television. Then, the user can use his/her second-screen device to access second-screen content, such as supplemental content for the television show or advertisements, while watching the television show. In one example, the first-screen content may be delivered via a cable television network to the television. The user may then use a content source's application on the second-screen device to access the second-screen content via another communication medium, such as the Internet. For example, while watching the television show on a television network, the user may open the television network's application to request the second-screen content via the Internet.
  • While second-screen use has increased, the overall uptake has been limited. Some issues may be limiting the uptake. For example, the user typically has to download an application to view the second-screen content, and in some cases, for each different content source, the user needs to download a different application. For example, a first television network has a first application and a second television network has a second application. Also, there may be problems with the synchronization between the first-screen content and the second-screen content. For example, the second-screen content should be output in coordination with the first-screen content. However, there may be latency in retrieving content for the second screen in response to a first-screen event, and also latency when the second-screen device has to connect via a communication network different from the one delivering the first-screen content. The latency may cause problems with some content, such as real-time programs (e.g., sports), where latency in the synchronization is not acceptable.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts a system for delivering first-screen content and second-screen content using multi-view coding (MVC) extensions according to one embodiment.
  • FIG. 2 depicts a more detailed example of a head-end according to one embodiment.
  • FIG. 3 depicts a more detailed example of a gateway for de-multiplexing the content stream according to one embodiment.
  • FIG. 4 depicts a more detailed example of a second-screen processor according to one embodiment.
  • FIG. 5 depicts a simplified flowchart of a method for delivering second-screen content using MVC extensions according to one embodiment.
  • FIG. 6 illustrates an example of a special purpose computer system configured with the multi-view delivery system, the multi-view stream processor, and the second-screen processor according to one embodiment.
  • DETAILED DESCRIPTION
  • Described herein are techniques for a second-screen delivery system using multi-view coding extensions. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of particular embodiments. Particular embodiments as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
  • Particular embodiments provide a second-screen experience for users on a second-screen device. A system uses multi-stream capabilities designed for delivering multi-view content to a first-screen device. However, the system uses the multi-stream capabilities to enable the second-screen experience. For example, encoding standards have incorporated multi-stream capabilities. The multi-stream capabilities allow a system to deliver multiple video streams to a single source. Typically, a multi-view coding (MVC) extension is used to provide multiple views to a first-screen device. For example, a three dimensional (3D) movie includes a main video stream and another stream for a second view. The main video stream and second view are sent to the first-screen device, which combines the second view with the main video stream to create the 3D picture on the first-screen device. The second view is encoded into a single stream with the main video stream using the MVC extension.
  • Particular embodiments use the MVC extension to provide second-screen content along with the first-screen content. In one embodiment, a head-end multiplexes the first-screen content with the second-screen content into a single content stream. The second-screen content is added to the video stream according to the MVC extension requirements. At the user end, such as at a gateway, instead of sending the first-screen content and second-screen content to the first-screen device, the gateway de-multiplexes the first-screen content and the second-screen content. The gateway can then send the first-screen content to the first-screen device while caching the second-screen content. When the gateway determines that the second-screen content should be displayed on the second-screen device, the gateway can send the second-screen content to the second-screen device for display on the second-screen of the second-screen device.
  • FIG. 1 depicts a system 100 for delivering first-screen content and second-screen content using MVC extensions according to one embodiment. System 100 includes a head-end 102 and customer premise location 104. Head-end 102 may be servicing multiple customer premise locations 104 (not shown). Each customer premise location 104 may be offered a personalized second-screen experience using methods described herein.
  • Head-end 102 may deliver first-screen content to customer premise location 104. In one embodiment, head-end 102 is part of a cable television network that delivers video content for different television networks via broadcast and also on demand. The first-screen content may be delivered via the cable television network using a variety of communication schemes, such as quadrature amplitude modulation (QAM) or Internet Protocol. Although a head-end and cable network are described, other types of networks that can deliver content using the MVC extension can be used.
  • In one embodiment, a multi-view delivery system 106 includes multiple computing devices that send first-screen content to customer premise location 104. Customer premise location 104 may include a gateway 112, which is a device that interfaces with an outside wide area network 103 (e.g., cable network and/or the Internet) and a local area network 105 within location 104. A first-screen (1st screen) device 108 and a set of second-screen devices 110 are connected to gateway 112 via local area network 105. The first-screen device 108 may be considered a primary screen, such as a television, that a user is primarily watching. For example, a user may watch a television show on first-screen device 108. Second-screen (2nd screen) devices 110 may be secondary screens on which supplemental or second-screen content can be viewed while a user is watching first-screen device 108. Examples of second-screen devices 110 include mobile devices, such as smartphones, tablets, and laptop computers.
  • Multi-view delivery system 106 may deliver the first-screen content destined for first-screen device 108 for display on a first-screen. Also, multi-view delivery system 106 may deliver second-screen content destined for one or more of second-screen devices 110. The second-screen content may be considered supplemental content to the first-screen content. For example, the second-screen content may include advertisements, additional information for first-screen content, promotion coupons or offers, and other types of information.
  • An encoding standard, such as H.264, high-efficiency video coding (HEVC), or other similar protocols, allows multiple views to be sent in a single video stream. For example, the Joint Video Team of the International Telecommunication Union (ITU)-T Video Coding Experts Group (VCEG) and the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Moving Picture Experts Group (MPEG) standardized an extension of the H.264/MPEG-4 Advanced Video Coding (AVC) standard. The extension, referred to as multi-view video coding, is amendment 4 to the H.264/AVC standard. Under the extension, multiple video streams can be multiplexed via different P frames into a single stream. Other extensions may be used, such as supplemental enhancement information (SEI), which may be used in the standard to allow metadata to be sent with the first-screen content, as well as other alternative approaches to providing multi-view capabilities.
  • One common use of the extension is to provide at least two multi-view video streams to allow a single screen to display three-dimensional video. However, particular embodiments may leverage the extension to provide second-screen content. In this case, multi-view delivery system 106 is enhanced to enable delivery of multi-view streams that include first-screen content and second-screen content.
  • Also, gateway 112 is specially configured to process the second-screen content that is sent using the MVC extension. For example, conventionally, a gateway would have received the two multi-view streams using the MVC extension in a single content stream. Then, the gateway would have sent both multi-view streams to only first-screen device 108 (e.g., not to any second-screen devices). This is because the MVC extension is meant to provide multi-views on a single device. However, gateway 112 uses a multi-view stream processor 114 to de-multiplex the first-screen content and the second-screen content that is included in a content stream from multi-view delivery system 106. Multi-view stream processor 114 may analyze the content stream to determine where to send the different streams. In some embodiments, the content streams may be intended entirely for first-screen device 108, such as when a 3D movie is being watched and the multi-view content includes the additional view. In this case, multi-view stream processor 114 may send both the first-screen content and the multi-view content to first-screen device 108. For example, both of the multi-view streams are merged again and encoded using an encoder, and then sent to first-screen device 108.
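The routing decision described above can be sketched as follows. This is an illustrative sketch only: the disclosure does not define data structures, so the dict representation of a view and the "role" metadata key are assumptions made for the example.

```python
# Hypothetical sketch of the routing decision made by multi-view stream
# processor 114. The view representation (dicts) and the "role" metadata
# key are assumptions for illustration, not structures from this disclosure.

def route_views(views):
    """Split decoded multi-view streams into first-screen and second-screen groups."""
    first_screen, second_screen = [], []
    for view in views:
        # A view tagged as second-screen content goes to second-screen
        # devices; anything else (e.g., the extra view of a 3D movie)
        # stays with the first-screen device.
        if view.get("role") == "second-screen":
            second_screen.append(view)
        else:
            first_screen.append(view)
    return first_screen, second_screen
```

When `second_screen` comes back empty, both views would be merged and re-encoded for first-screen device 108, matching the 3D-movie case described above.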
  • When using the MVC extension to enable the second-screen environment, encoder 308 re-encodes the first-screen content and then sends the first-screen content to first-screen device 108, which can then display the first-screen content. In one embodiment, a set top box (STB) 116 may receive the first-screen content, decode the content, and then display the content on first-screen device 108. However, instead of sending the multi-view content to first-screen device 108, multi-view stream processor 114 may determine whether the multi-view stream is for the first-screen or the second-screen. The determination may be made based on metadata associated with the multi-view stream that indicates whether the multi-view content is first-screen content or second-screen content.
  • When the multi-view content is second-screen content, a second-screen processor 118 may then send the second-screen content to second-screen device 110 at the appropriate time. For example, multi-view stream processor 114 may first cache the second-screen content. Then, at the appropriate time, second-screen processor 118 synchronizes the display of the second-screen content with first-screen content being displayed on first-screen device 108. For example, an encoder encodes the second-screen content for sending to second-screen devices 110. In other embodiments, a user may request the second-screen content when desired. Other methods of providing the second-screen content may be appreciated and will be described in more detail below.
  • Accordingly, the MVC extension is used to send both the first-screen content and the second-screen content as multiple views in a content stream. An intelligent gateway 112 is used to parse the content stream to separate out the first-screen content and the second-screen content based on metadata. The second-screen content can be sent to second-screen devices 110 without being sent to first-screen device 108. Particular embodiments use gateway 112 because gateway 112 sits between head-end 102 and first-screen devices 108/second-screen devices 110. Gateway 112 has the processing power to decode the stream and determine whether one view is for the second-screen devices. Gateway 112 can then re-encode the streams and send separate content streams for the first-screen content and the second-screen content to the appropriate destinations. This means first-screen devices 108 and second-screen devices 110 do not have to be changed to handle MVC extensions for providing second-screen content. Otherwise, either first-screen device 108 or second-screen devices 110 would have had to receive the single stream with both the first-screen content and the second-screen content and determine how to process the second-screen content. Gateway 112 sits naturally in between head-end 102 and first-screen devices 108/second-screen devices 110, and can determine how to send the second-screen content.
  • Head-End Encoding
  • As mentioned above, head-end 102 may multiplex the first-screen content and the second-screen content together into a content stream. FIG. 2 depicts a more detailed example of head-end 102 according to one embodiment. A primary stream processor 202 and a supplemental stream processor 204 may determine the first-screen content and the second-screen content, respectively, to add to the single content stream. Although only one content stream is described, it will be recognized that multiple content streams may be processed, such as content streams for multiple television broadcasts. Any number of the broadcasts may include second-screen content.
  • Primary stream processor 202 may receive first-screen content 206 from other content sources in real time via satellite or other networks. In other embodiments, primary stream processor 202 may retrieve the first-screen content from storage 205, which may be cache memory or other types of long-term storage. Although one content stream is described, primary stream processor 202 may be sending multiple content streams for multiple television channels to locations 104. Some of the television channels may have related second-screen content, and some may not.
  • Supplemental stream processor 204 may receive second-screen content from a second-screen content provider 208. Second-screen content provider 208 may include an advertiser, a service provider, a retailer, or even a cable television provider. Also, second-screen content provider 208 may be the same content source that provided the first-screen content. In one embodiment, second-screen content provider 208 may provide second-screen content to head-end 102, which is stored in storage 205 at 210.
  • Second-screen content provider 208 can now target specific user devices, and service providers can also provide enhancements to the first-screen content. For example, a service provider could provide player statistics for a sporting event video stream. Supplemental stream processor 204 may then determine which second-screen content is appropriate to send with the first-screen content. In one example, supplemental stream processor 204 determines second-screen content targeted to a user of second-screen device 110 or first-screen device 108. After determining the second-screen content, supplemental stream processor 204 sends the second-screen content to a multiplexer 212.
  • Multiplexer 212 receives the first-screen content and the second-screen content, and multiplexes them together into a single multi-view content stream. Multiplexer 212 may multiplex the first-screen content and the second-screen content based on the MVC extension. Also, metadata to identify the second-screen content as being “second-screen content” or for a specific second-screen device 110 may be added to the content stream. The metadata may be needed because the MVC extension is being used for a purpose other than sending multi-views to a single device. The metadata allows gateway 112 to determine when second-screen content is included in the single content stream. Then, encoder 214 may encode the first-screen content and the second-screen content together into an encoded content stream. In one embodiment, encoder 214 encodes the second-screen content using the MVC extension. In this case, the second-screen content is sent as a multi-view stream with the first-screen content. Encoder 214 can then send the single encoded content stream through network 103 to customer premise location 104.
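The head-end multiplexing step might be sketched as below. The container format shown (a dict of views with a metadata field) is an assumption for illustration only; the actual MVC bitstream syntax is defined by the H.264/AVC standard rather than reproduced here.

```python
# Illustrative sketch of multiplexer 212 at head-end 102: the first-screen
# view and the second-screen view are combined into one logical stream, and
# the extra view is tagged with metadata identifying it as second-screen
# content. The dict layout and key names are assumptions for the example.

def build_multiview_stream(first_screen_frames, second_screen_frames,
                           target_device=None):
    return {
        "views": [
            {"view_id": 0, "frames": first_screen_frames},
            {"view_id": 1, "frames": second_screen_frames,
             # Metadata lets gateway 112 distinguish second-screen content
             # from a conventional multi-view (e.g., 3D) stream, and can
             # optionally address a specific second-screen device.
             "metadata": {"role": "second-screen", "target": target_device}},
        ]
    }
```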
  • Gateway De-Multiplexing
  • FIG. 3 depicts a more detailed example of gateway 112 for de-multiplexing the content stream according to one embodiment. Gateway 112 receives the encoded content stream that includes the multiplexed first-screen content and second-screen content. Because of the multiplexing, a de-multiplexer 302 de-multiplexes the content stream to separate the multi-view streams. A decoder 304 can then decode the first-screen content and the second-screen content.
  • Multi-view stream processor 114 can then determine whether the multi-view streams include first-screen content and the second-screen content, or are conventional multi-view streams. For example, depending on the metadata associated with the second-screen content, multi-view stream processor 114 may prepare the second-screen content for forwarding to second-screen device 110. In other embodiments, the second-screen content may actually be meant for first-screen device 108 (in this case, it would not be referred to as second-screen content and is actually multi-view content being traditionally used). If the content stream included traditional multi-view content, then the first-screen content and the second-screen content may be recombined into a single stream, and then an encoder 308 re-encodes the single stream, which is sent to first-screen device 108.
  • When the content stream includes first-screen content and the second-screen content, multi-view stream processor 114 determines where to send the first-screen content and the second-screen content. For example, multi-view stream processor 114 sends the first-screen content to set top box 116 (encoded by encoder 308). Then, multi-view stream processor 114 determines where to send the second-screen content. In this embodiment, multi-view stream processor 114 stores the second-screen content in cache memory 306. Although cache memory is described, any type of storage may be used.
  • Once the second-screen content has been stored in cache memory 306, second-screen processor 118 may determine when and where to send the second-screen content to second-screen device 110. Encoder 308 (this may be the same encoder used to encode the single stream with multi-views or a different encoder) may encode the second-screen content into a stream. This stream is different as it only includes second-screen content and is not multiplexed with first-screen content. This type of content stream may be in a format that second-screen device 110 is configured to process (that is, second-screen device 110 does not have to de-multiplex a content stream with both first-screen content and second-screen content). Encoder 308 then sends the second-screen content to second-screen device 110. It should be noted that encoding may be performed at any time before delivery to second-screen device 110.
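Putting the pieces of FIG. 3 together, the gateway-side handling after de-multiplexing and decoding might look like the following sketch. The encode and decode steps are elided, and the view and metadata shapes are assumptions; the sketch only shows the branch between conventional multi-view content and second-screen content, plus the caching step.

```python
# Sketch of multi-view stream processor 114 after de-multiplexer 302 and
# decoder 304 have run. Views are modeled as dicts with optional metadata;
# this modeling is an assumption for illustration.

class MultiViewStreamProcessor:
    def __init__(self):
        self.cache = []  # stands in for cache memory 306

    def process(self, views):
        second = [v for v in views
                  if v.get("metadata", {}).get("role") == "second-screen"]
        if not second:
            # Conventional multi-view content (e.g., 3D): recombine all
            # views and re-encode a single stream for first-screen device 108.
            return {"first_screen": views, "cached": 0}
        # Otherwise, send only the primary view to the first screen and
        # cache the second-screen views until second-screen processor 118
        # decides when and where to deliver them.
        primary = [v for v in views if v not in second]
        self.cache.extend(second)
        return {"first_screen": primary, "cached": len(second)}
```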
  • Second-Screen Delivery
  • FIG. 4 depicts a more detailed example of second-screen processor 118 according to one embodiment. Second-screen processor 118 may deliver the second-screen content in different ways. For example, second-screen processor 118 may forward all second-screen content to second-screen device 110 with metadata that is selected based on how and when to display the second-screen content. Or, second-screen processor 118 may detect different events (e.g., in the first-screen content) and send the second-screen content in a synchronized manner.
  • Second-screen processor 118 also may determine which second-screen device 110 is connected to gateway 112 and determine which second-screen device 110 should be the destination for the second-screen content. For example, second-screen processor 118 maintains a list of devices within location 104 that are associated with a user or users. This information may be determined via a user profile 408 for the user (or multiple user profiles for multiple users). The user profile information may be found in a subscription profile (when using an application supported by the MSO) or provided by a user. Also, second-screen processor 118 may include a second-screen device detector 402 to detect which second-screen devices 110 are active in customer premise location 104. Second-screen device detector 402 may also track which applications 404 are being used by second-screen devices 110.
  • In detecting which second-screen devices 110 are active, second-screen device detector 402 may message with second-screen devices 110 to determine which second-screen devices 110 are active and in what location. This may involve sending a message to application 404 and having a user confirm the activity and location. Also, second-screen device detector 402 may use fingerprinting or application detection methods to maintain the list of devices. For example, second-screen device detector 402 may activate a microphone of second-screen device 110 to detect the audio being output in the location of second-screen device 110. Then, second-screen device 110 may determine a fingerprint of the first-screen content being output by first-screen device 108. In one example, a television may be outputting a television show, and second-screen device 110 may take a fingerprint of the audio within a range of second-screen device 110. Second-screen device 110 or second-screen device detector 402 (or a back-end device) can then determine that a user is watching the television show when the fingerprint matches a fingerprint from the television show. Further, second-screen device detector 402 may detect which application the user is using by intercepting transfers between the application and a wide area network, such as the Internet.
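The fingerprint-matching idea can be illustrated with a toy sketch. Real acoustic fingerprinting is far more robust to noise and timing offsets; the hash-of-windows approach below is an assumption chosen only to show how captured microphone audio could be compared against known first-screen content.

```python
import hashlib

# Toy audio-fingerprint sketch: hash fixed-size windows of samples and
# compare the sets of hashes. Production systems use noise-robust acoustic
# fingerprints; this only illustrates the matching step performed by
# second-screen device 110 or second-screen device detector 402.

def fingerprint(samples, window=4):
    return {
        hashlib.sha1(bytes(samples[i:i + window])).hexdigest()
        for i in range(0, len(samples) - window + 1, window)
    }

def is_watching(mic_samples, show_samples, threshold=0.5):
    """Return True when enough of the microphone capture matches the show."""
    mic_fp = fingerprint(mic_samples)
    if not mic_fp:
        return False
    overlap = len(mic_fp & fingerprint(show_samples))
    return overlap / len(mic_fp) >= threshold
```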
  • As discussed above, cache 306 buffers the second-screen content. Also, metadata about the second-screen data may be stored in cache 306. The metadata may include information that can be used to determine when the second-screen content should be output to second-screen device 110.
  • Then, a content delivery processor 406 determines when second-screen content should be provided to second-screen device 110. Content delivery processor 406 may monitor the first-screen content being sent and metadata for the second-screen content in cache 306. For example, when a first-screen device renderer requests a change in the content view via a channel change, content delivery processor 406 records the change such that content delivery processor 406 knows the channel first-screen device 108 is watching. Then, content delivery processor 406 can retrieve second-screen content for second-screen device 110 appropriately. For example, content delivery processor 406 may retrieve second-screen content for the current channel at a time defined by the metadata for the second-screen content. This synchronizes the second-screen content with the first-screen content.
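A minimal sketch of that timing logic follows; the metadata field names (`channel`, `display_at`) are assumed names for whatever channel and timing information the cached second-screen content carries.

```python
# Sketch of content delivery processor 406: track the channel the
# first-screen device is watching, and release cached second-screen items
# whose display time has been reached. Field names are assumptions.

class ContentDeliveryProcessor:
    def __init__(self, cache):
        self.cache = cache          # cached second-screen items (cache 306)
        self.current_channel = None

    def on_channel_change(self, channel):
        # Recorded when the first-screen renderer requests a channel change.
        self.current_channel = channel

    def due_items(self, playback_time):
        return [item for item in self.cache
                if item["channel"] == self.current_channel
                and item["display_at"] <= playback_time]
```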
  • Content delivery processor 406 may use the user profile 408 that second-screen device detector 402 built for users to personalize the second-screen content delivery. The user profile may store personal information for the user, such as preferences for second-screen content (e.g., what types of advertisements the user likes to view). Content delivery processor 406 may then determine which second-screen content to provide to second-screen application 404.
  • Content delivery processor 406 may sit within a protocol stack on gateway 112 to allow it to disseminate second-screen content to various second-screen devices 110. A software development kit can be used by a second-screen application 404 to allow interaction with content delivery processor 406 in gateway 112 to receive second-screen content. For example, second-screen applications 404 can subscribe to and access different capabilities provided by gateway 112. For example, the software development kit allows second-screen applications 404 to interface with content delivery processor 406 and request specific second-screen sub-streams based on provided parameters. In other embodiments, content delivery processor 406 may automatically determine which second-screen content to send based on the user profile.
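The subscription interface that such a software development kit might expose can be sketched as below; the method names and the parameter-matching rule are assumptions, since the disclosure describes the capability but not an API.

```python
# Sketch of the gateway-side subscription interface a second-screen SDK
# might expose. Method names and the parameter-matching rule are
# assumptions for illustration.

class GatewaySDK:
    def __init__(self):
        self.subscribers = {}  # app_id -> requested parameters

    def subscribe(self, app_id, **params):
        # A second-screen application 404 registers the kind of
        # second-screen sub-streams it wants (e.g., kind="recipe").
        self.subscribers[app_id] = params

    def dispatch(self, item):
        """Return the applications whose requested parameters match an item."""
        return [app for app, params in self.subscribers.items()
                if all(item.get(k) == v for k, v in params.items())]
```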
  • Because gateway 112 is context aware via the second-screen device detection, gateway 112 can use the user profile for a user and disseminate the appropriate second-screen content to the second-screen devices 110. For example, when two second-screen devices 110 exist within customer premise location 104 and are active while first-screen device 108 is active, one second-screen device 110 may be targeted with first second-screen content based on the user profile and another second-screen device may be provided with general second-screen content that is not targeted. In one example, when watching a cooking show on the first-screen, a first second-screen device 108 may receive general coupons, and a second second-screen device may receive personalized recipes.
  • In another embodiment, the second-screen content may include sign language-enabled broadcasts in which sign language can be displayed on second-screen devices 110. The standard method for hearing-impaired services is to provide closed caption or in some broadcasts to set up a picture-in-a-picture (PIP) where a sign language source may be in the corner of the first-screen device display screen while the first-screen content is displayed in the remainder of the first-screen device display screen. This may not be ideal for viewers that may be in the same household. For example, it may either disrupt the viewing experience for users that do not need the sign-language view or overlay too much of the sign-language view over the first-screen broadcast. Also, the PIP window may be too small to view the sign language. Using particular embodiments, the first-screen content may include the main broadcast program and the second-screen content may include sign language information that is associated with the first-screen content. Gateway 112 may track second-screen devices 110 that are active and determine that a user who is hearing-impaired is watching the first-screen content via any detection process. Gateway 112 may then determine that the sign language information should be sent to this second-screen device. Then, the user can watch the sign-language display on his/her own second-screen device 110 without interrupting the television show. Also, this can be enhanced to allow the user to design the elements of how the sign language view and the first-screen content view should be laid out and cast back to the primary screen renderer. For example, a user can segment the first-screen content and the second-screen content as desired.
  • The second-screen content can be provided without the need for the second-screen application to use any first-screen content detection, such as fingerprint detection. Rather, gateway 112 has access to the first-screen content and can perform this detection itself. Further, second-screen device 110 does not need any over-the-top capabilities as the second-screen content is sent with the first-screen content. This may also help synchronization as the second-screen content arrives with the first-screen content and experiences the same latency.
  • Gateway 112 also allows for new application capabilities that go beyond simply overlaying content on second-screen devices 110 based on first-screen content context. For example, extended features may be provided not only by the content source, but also by application developers. For example, a cooking show can produce multi-stream views that include the main program, detailed recipe instructions, and ingredient manufacturer coupons. Hence, a second-screen application designer can create different overlays that allow the user to view the recipe details and store them in a home recipe file while previewing the manufacturer coupons and storing the coupons in user-specific logs, all while watching the first-screen content.
  • In one example, a user is viewing a channel on first-screen device 108 while accessing application 404 on second-screen device 110. When the user tunes to the channel to view the first-screen content, content delivery processor 406 detects the user is watching certain first-screen content. Then, content delivery processor 406 may send second-screen content to application 404 including metadata for when and how the second-screen content should be presented to the user. The second-screen content may include time-based synchronized advertisements to the first-screen content, promotion offers, such as coupons, or supplemental content, such as detailed episode instructions in the form of additional video. The episode-related material may be cooking instructions or detailed auto inspection information that relates to the first-screen content being viewed.
  • Accordingly, second-screen application 404 can display second-screen content related to the first-screen content without the need to have a connection to an external source through a wide area network, such as the Internet or an over-the-top connection, different from the connection being used by first-screen device 108. That is, the second-screen content is received via the same communication network and content stream as the first-screen content. Further, second-screen device 110 does not need to rely on external technologies to determine what the user is watching and to retrieve related material. Gateway 112 can detect the existing second-screen devices 110 being used and through context build user profile information along with information sent from second-screen applications 404 to determine the appropriate second-screen content to provide to users.
  • Head-End Enhancements to Personalize User Experience
  • In some embodiments, gateway 112 may detect which second-screen devices 110 are active. Then, gateway 112 may consult a user profile to determine which second-screen content may be of interest to this user using this second-screen device 110. For example, if a mobile telephone that is associated with a user #1 is active, and this user likes cooking shows, then gateway 112 may send a message to head-end 102 indicating that user #1 is active and likes cooking shows.
  • When user #1 requests a cooking show, head-end 102 may determine that recipe information should be sent to gateway 112 as second-screen content. In this case, head-end 102 may selectively provide second-screen content to different users. This may more efficiently use bandwidth as only second-screen content may be sent based on active second-screen devices 110 and only to users that may be interested in this second-screen content. Alternatively, second-screen content can be always sent with first-screen content.
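The head-end's selective decision might reduce to a check like the following sketch; the profile and catalog shapes are assumptions used only to illustrate multiplexing supplemental content when an interested second-screen user is active.

```python
# Sketch of the head-end 102 decision: attach second-screen content for a
# program only when gateway 112 has reported an active second-screen user
# whose profile matches the program's topic. Data shapes are assumptions.

def select_second_screen(active_profiles, program_topic, catalog):
    selected = []
    for user, interests in active_profiles.items():
        if program_topic in interests and program_topic in catalog:
            # Bandwidth is spent only on interested, active users.
            selected.append((user, catalog[program_topic]))
    return selected
```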
  • Method Flow
  • FIG. 5 depicts a simplified flowchart 500 of a method for delivering second-screen content using MVC extensions according to one embodiment. At 502, gateway 112 receives a content stream including first-screen content and second-screen content. Head-end 102 sent the content stream using the MVC extension, which is configured to provide multi-view content for the first-screen content.
  • At 504, gateway 112 separates the first-screen content and the second-screen content from the content stream. De-multiplexer 302 may be used to perform the de-multiplexing. At 506, gateway 112 may decode the first-screen content and the second-screen content.
  • At 508, gateway 112 determines that the second-screen content is for a second-screen device. At 510, gateway 112 can store the second-screen content in cache 306.
  • At 512, gateway 112 detects a second-screen device actively connected to the gateway. Also, gateway 112 may determine that this second-screen device is the destination for the second-screen content. Then, at 514, gateway 112 sends the first-screen content to a first-screen device. Also, at 516, gateway 112 sends the second-screen content to the second-screen device.
  • Computer System
  • FIG. 6 illustrates an example of a special purpose computer system 600 configured with multi-view delivery system 106, multi-view stream processor 114, and second-screen processor 118 according to one embodiment. In one embodiment, computer system 600-1 describes head-end 102. Also, computer system 600-2 describes gateway 112. Only one instance of computer system 600 will be described for discussion purposes, but it will be recognized that computer system 600 may be implemented for other entities described above, such as multi-view delivery system 106, multi-view stream processor 114, and second-screen processor 118, first-screen devices 108, STB 116, and/or second-screen devices 110.
  • Computer system 600 includes a bus 602, network interface 604, a computer processor 606, a memory 608, a storage device 610, and a display 612.
  • Bus 602 may be a communication mechanism for communicating information. Computer processor 606 may execute computer programs stored in memory 608 or storage device 610. Any suitable programming language can be used to implement the routines of particular embodiments including C, C++, Java, assembly language, etc. Different programming techniques can be employed such as procedural or object oriented. The routines can execute on a single computer system 600 or multiple computer systems 600. Further, multiple computer processors 606 may be used.
  • Memory 608 may store instructions, such as source code or binary code, for performing the techniques described above. Memory 608 may also be used for storing variables or other intermediate information during execution of instructions to be executed by processor 606. Examples of memory 608 include random access memory (RAM), read only memory (ROM), or both.
  • Storage device 610 may also store instructions, such as source code or binary code, for performing the techniques described above. Storage device 610 may additionally store data used and manipulated by computer processor 606. For example, storage device 610 may be a database that is accessed by computer system 600. Other examples of storage device 610 include random access memory (RAM), read only memory (ROM), a hard drive, a magnetic disk, an optical disk, a CD-ROM, a DVD, a flash memory, a USB memory card, or any other medium from which a computer can read.
  • Memory 608 or storage device 610 may be an example of a non-transitory computer-readable storage medium for use by or in connection with computer system 600. The non-transitory computer-readable storage medium contains instructions for controlling a computer system 600 to be configured to perform functions described by particular embodiments. The instructions, when executed by one or more computer processors 606, may be configured to perform that which is described in particular embodiments.
  • Computer system 600 includes a display 612 for displaying information to a computer user. Display 612 may display a user interface used by a user to interact with computer system 600.
  • Computer system 600 also includes a network interface 604 to provide data communication connection over a network, such as a local area network (LAN) or wide area network (WAN). Wireless networks may also be used. In any such implementation, network interface 604 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
  • Computer system 600 can send and receive information through network interface 604 across a network 614, which may be an Intranet or the Internet. Computer system 600 may interact with other computer systems 600 through network 614. In some examples, client-server communications occur through network 614. Also, implementations of particular embodiments may be distributed across computer systems 600 through network 614.
  • Particular embodiments may be implemented in a non-transitory computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or machine. The computer-readable storage medium contains instructions for controlling a computer system to perform a method described by particular embodiments. The computer system may include one or more computing devices. The instructions, when executed by one or more computer processors, may be configured to perform that which is described in particular embodiments.
  • As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
  • The above description illustrates various embodiments along with examples of how aspects of particular embodiments may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of particular embodiments as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope hereof as defined by the claims.

Claims (20)

1. A method comprising:
receiving, by a gateway, a content stream including first-screen content and second-screen content, the second-screen content not comprising a second view of the first-screen content, wherein the second-screen content is sent in the content stream using a multi-view coding extension configured to be used to provide multi-view content;
separating, by the gateway, the first-screen content and the second-screen content from the content stream;
determining, by the gateway, whether the second-screen content is for second-screen devices;
selecting, by the gateway, a second-screen device that is connected to the gateway and is a destination for the second-screen content;
sending, by the gateway, the first-screen content to a first-screen device; and
sending, by the gateway, the second-screen content to the second-screen device.
2. The method of claim 1, wherein selecting the second-screen device that is connected to the gateway and the destination for the second-screen content comprises:
selecting a plurality of second-screen devices that are connected to the gateway; and
selecting the second-screen device from the plurality of second-screen devices as the destination for the second-screen content.
3. The method of claim 1, wherein the selecting is performed based on a user profile associated with the second-screen device.
4. The method of claim 1, wherein the selecting is performed based on the second-screen device being detected to be actively within a range of the first-screen device.
5. The method of claim 4, wherein fingerprint detection of the first-screen content is used by the second-screen device to determine the second-screen device is within the range of the first-screen device.
6. The method of claim 1, further comprising:
communicating with the second-screen device to determine a user associated with the second-screen device is watching the first-screen content.
7. The method of claim 6, wherein communicating with the second-screen device comprises:
communicating with an application on the second-screen device to determine the user is watching the first-screen content.
8. The method of claim 1, further comprising:
storing the second-screen content prior to sending the second-screen content to the second-screen device.
9. The method of claim 8, further comprising:
selecting a time to send the second-screen content to the second-screen device; and
sending the second-screen content to the second-screen device at the time.
10. The method of claim 9, wherein selecting the time is based on an event occurring in the first-screen content being sent to the first-screen device.
11. The method of claim 8, further comprising:
selecting metadata for the second-screen device to use to display the second-screen content; and
sending the metadata and the second-screen content to the second-screen device, wherein the second-screen device uses the metadata to determine when to display the second-screen content.
12. The method of claim 1, wherein determining whether the second-screen content is for second-screen devices comprises:
selecting metadata associated with the second-screen content; and
determining whether the second-screen content should be sent to the first-screen device or the second-screen device based on the metadata.
13. The method of claim 1, further comprising: when the gateway determines that the second-screen content is for the first-screen device, sending the second-screen content to the first-screen device instead of the second-screen device.
14. The method of claim 1, wherein separating comprises:
demultiplexing the content stream to determine the first-screen content and the second-screen content.
15. The method of claim 1, wherein the first-screen content and the second-screen content are sent from a single source.
16. The method of claim 1, wherein the first-screen content and the second-screen content are sent from a same network connection in a single content stream.
17. The method of claim 1, further comprising:
decoding the first-screen content and the second-screen content at the gateway; and
re-encoding, by the gateway, the first-screen content for the first-screen device and the second-screen content for the second-screen device in separate content streams.
18. An apparatus comprising:
one or more computer processors; and
a non-transitory computer-readable storage medium containing instructions that, when executed, control the one or more computer processors to be configured for:
receiving a content stream including first-screen content and second-screen content, the second-screen content not comprising a second view of the first-screen content, wherein the second-screen content is sent in the content stream using a multi-view coding extension configured to be used to provide multi-view content;
separating the first-screen content and the second-screen content from the content stream;
determining whether the second-screen content is for second-screen devices;
selecting a second-screen device that is connected to the apparatus and is a destination for the second-screen content;
sending the first-screen content to a first-screen device; and
sending the second-screen content to the second-screen device.
19. A system comprising:
a gateway device configured to be communicatively coupled, via a network, to a head-end device that comprises:
one or more first computer processors; and
a first non-transitory computer-readable storage medium comprising instructions, that when executed, control the one or more first computer processors to be configured for:
multiplexing first-screen content and second-screen content into a single content stream based on a multi-view coding extension configured to be used to provide multi-view content wherein the second-screen content does not comprise a second view of the first-screen content; and
adding metadata to the single content stream, the metadata identifying that the second-screen content is for second-screen devices;
the gateway device comprising:
one or more second computer processors; and
a second non-transitory computer-readable storage medium comprising instructions, that when executed, control the one or more second computer processors to be configured for:
receiving the single content stream including the first-screen content and the second-screen content;
separating the first-screen content and the second-screen content from the single content stream;
determining whether the second-screen content is for second-screen devices based on the metadata;
sending the first-screen content to a first-screen device; and
sending the second-screen content to a second-screen device.
20. The system of claim 19, wherein the gateway device is further configured to be communicatively coupled to a plurality of second-screen devices, wherein the gateway is further configured to select the second-screen device from the plurality of second-screen devices in which to send the second-screen content.
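The claims above describe a head-end that multiplexes second-screen content into a single stream using a multi-view coding extension, and a gateway that separates the stream and routes each part based on metadata. The following is a minimal, hypothetical sketch of that flow, not the patented implementation; all names (`Packet`, `Gateway`, `view_id`, `multiplex`, the metadata keys) are illustrative assumptions, and the multi-view extension is modeled simply as a nonzero view identifier on each packet.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    """One unit of the multiplexed stream. view_id 0 is the base
    (first-screen) view; a nonzero view_id stands in for second-screen
    content carried in the multi-view coding extension."""
    view_id: int
    payload: bytes
    metadata: dict = field(default_factory=dict)

def multiplex(first_screen, second_screen, audience=None):
    """Head-end side (cf. claim 19): combine first- and second-screen
    content into a single stream, tagging second-screen packets with
    metadata identifying them as second-screen content."""
    stream = [Packet(0, chunk) for chunk in first_screen]
    meta = {"target": "second-screen"}
    if audience:
        meta["audience"] = audience
    stream += [Packet(1, chunk, dict(meta)) for chunk in second_screen]
    return stream

class Gateway:
    """Gateway side (cf. claims 1-17): demultiplex the stream and route
    each packet to the first-screen device or to a selected
    second-screen device based on its metadata."""

    def __init__(self, second_screen_devices):
        # Connected second-screen devices mapped to their user profiles.
        self.devices = second_screen_devices

    def select_device(self, meta):
        # Prefer a device whose user profile matches the content's
        # audience hint (cf. claim 3); otherwise fall back to any device.
        for name, profile in self.devices.items():
            if profile == meta.get("audience"):
                return name
        return next(iter(self.devices))

    def route(self, stream):
        routes = []
        for pkt in stream:
            if pkt.view_id == 0 or pkt.metadata.get("target") != "second-screen":
                # First-screen content, or second-screen content flagged
                # for the first screen (cf. claim 13).
                routes.append(("first-screen", pkt.payload))
            else:
                routes.append((self.select_device(pkt.metadata), pkt.payload))
        return routes
```

For example, a stream built with `multiplex([b"f1", b"f2"], [b"stats"], audience="sports")` and routed by a gateway holding `{"tablet": "sports", "phone": "news"}` would send the two video chunks to the first-screen device and the stats overlay to the tablet, since its profile matches the audience hint.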
US14/876,419 2015-10-06 2015-10-06 Gateway multi-view video stream processing for second-screen content overlay Active US9628839B1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US14/876,419 US9628839B1 (en) 2015-10-06 2015-10-06 Gateway multi-view video stream processing for second-screen content overlay
CA3000847A CA3000847C (en) 2015-10-06 2016-10-05 Gateway multi-view video stream processing for second-screen content overlay
PCT/US2016/055416 WO2017062404A1 (en) 2015-10-06 2016-10-05 Gateway multi-view video stream processing for second-screen content overlay
DE112016004560.3T DE112016004560T5 (en) 2015-10-06 2016-10-05 Gateway multi-view video stream processing for second screen content overlay
GB201805025A GB2558452B (en) 2015-10-06 2016-10-05 Gateway multi-view video stream processing for second-screen content overlay

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/876,419 US9628839B1 (en) 2015-10-06 2015-10-06 Gateway multi-view video stream processing for second-screen content overlay

Publications (2)

Publication Number Publication Date
US20170099514A1 true US20170099514A1 (en) 2017-04-06
US9628839B1 US9628839B1 (en) 2017-04-18

Family

ID=57184823

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/876,419 Active US9628839B1 (en) 2015-10-06 2015-10-06 Gateway multi-view video stream processing for second-screen content overlay

Country Status (5)

Country Link
US (1) US9628839B1 (en)
CA (1) CA3000847C (en)
DE (1) DE112016004560T5 (en)
GB (1) GB2558452B (en)
WO (1) WO2017062404A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170374401A1 (en) * 2016-04-11 2017-12-28 Rovi Guides, Inc. Method and systems for enhancing media viewing experiences on multiple devices
US20190208235A1 (en) * 2017-12-28 2019-07-04 Dish Network L.L.C. Remotely generated encoding metadata for local content encoding
US11070873B2 (en) 2017-12-28 2021-07-20 Dish Network L.L.C. Locally generated spot beam replacement

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
EP3070942B1 (en) * 2015-03-17 2023-11-22 InterDigital CE Patent Holdings Method and apparatus for displaying light field video data
US11818181B1 (en) * 2020-07-27 2023-11-14 Rgb Spectrum Systems, methods, and devices for a persistent content sharing platform

Citations (2)

Publication number Priority date Publication date Assignee Title
US20110159929A1 (en) * 2009-12-31 2011-06-30 Broadcom Corporation Multiple remote controllers that each simultaneously controls a different visual presentation of a 2d/3d display
US20150022721A1 (en) * 2013-07-16 2015-01-22 Samsung Electronics Co., Ltd. Multi contents view display apparatus and method for displaying multi contents view

Family Cites Families (62)

Publication number Priority date Publication date Assignee Title
US6144375A (en) * 1998-08-14 2000-11-07 Praja Inc. Multi-perspective viewer for content-based interactivity
US7325245B1 (en) 1999-09-30 2008-01-29 Intel Corporation Linking to video information
US7782363B2 (en) * 2000-06-27 2010-08-24 Front Row Technologies, Llc Providing multiple video perspectives of activities through a data network to a remote multimedia server for selective display by remote viewing audiences
US8468227B2 (en) 2002-12-31 2013-06-18 Motorola Solutions, Inc. System and method for rendering content on multiple devices
US10664138B2 (en) * 2003-03-14 2020-05-26 Comcast Cable Communications, Llc Providing supplemental content for a second screen experience
EP3629575A1 (en) 2005-01-11 2020-04-01 TVNGO Ltd. Method and apparatus for facilitating toggling between internet and tv broadcasts
AU2006272401B2 (en) * 2005-07-22 2011-03-31 Fanvision Entertainment Llc System and methods for enhancing the experience of spectators attending a live sporting event
US9319741B2 (en) * 2006-09-07 2016-04-19 Rateze Remote Mgmt Llc Finding devices in an entertainment system
US20080240010A1 (en) 2007-03-26 2008-10-02 Motorola, Inc. Intelligent orchestration of cross-media communications
US20080320041A1 (en) 2007-06-21 2008-12-25 Motorola, Inc. Adding virtual features via real world accessories
US20130074139A1 (en) 2007-07-22 2013-03-21 Overlay.Tv Inc. Distributed system for linking content of video signals to information sources
US8943425B2 (en) 2007-10-30 2015-01-27 Google Technology Holdings LLC Method and apparatus for context-aware delivery of informational content on ambient displays
US8683067B2 (en) * 2007-12-19 2014-03-25 Nvidia Corporation Video perspective navigation system and method
US20090172745A1 (en) 2007-12-28 2009-07-02 Motorola, Inc. Method and Apparatus Regarding Receipt of Audio-Visual Content Information and Use of Such Information to Automatically Infer a Relative Popularity of That Content
US20090241155A1 (en) 2008-03-18 2009-09-24 Motorola, Inc. Method and Apparatus to Facilitate Automatically Forming an Aggregation of Multiple Different Renderable Content Items
US20090288120A1 (en) 2008-05-15 2009-11-19 Motorola, Inc. System and Method for Creating Media Bookmarks from Secondary Device
US20100115596A1 (en) 2008-10-31 2010-05-06 Motorola, Inc. Method and System for Creating and Sharing Bookmarks of Media Content
US20100121763A1 (en) 2008-11-13 2010-05-13 Motorola, Inc. Method and apparatus to facilitate using a virtual-world interaction to facilitate a real-world transaction
US9253430B2 (en) 2009-01-15 2016-02-02 At&T Intellectual Property I, L.P. Systems and methods to control viewed content
US20110299544A1 (en) * 2010-06-04 2011-12-08 David Lundgren Method and system for managing bandwidth by a broadband gateway
US20100293032A1 (en) 2009-05-12 2010-11-18 Motorola, Inc. System and method for sharing commercial information
US8411746B2 (en) * 2009-06-12 2013-04-02 Qualcomm Incorporated Multiview video coding over MPEG-2 systems
EP2534833B1 (en) * 2010-02-12 2016-04-27 Thomson Licensing Method for synchronized content playback
KR101843592B1 (en) * 2010-04-30 2018-03-29 톰슨 라이센싱 Primary screen view control through kinetic ui framework
US8713604B2 (en) * 2010-06-23 2014-04-29 Echostar Technologies L.L.C. Systems and methods for processing supplemental information associated with media programming
US20120116869A1 (en) 2010-11-08 2012-05-10 Motorola-Mobility, Inc. Coordinating advertising among user devices
US9826270B2 (en) 2011-04-27 2017-11-21 Echostar Ukraine Llc Content receiver system and method for providing supplemental content in translated and/or audio form
CN103493493A (en) * 2011-04-28 2014-01-01 索尼公司 Encoding device and encoding method, and decoding device and decoding method
US20140075471A1 (en) * 2011-05-11 2014-03-13 Echostar Ukraine Llc Apparatus, systems and methods for accessing supplemental information pertaining to a news segment
US8621548B2 (en) * 2011-05-12 2013-12-31 At&T Intellectual Property I, L.P. Method and apparatus for augmenting media services
US20130031581A1 (en) 2011-07-25 2013-01-31 General Instrument Corporation Preparing an alert in a multi-channel communications environment
US20110289532A1 (en) 2011-08-08 2011-11-24 Lei Yu System and method for interactive second screen
KR101473254B1 (en) * 2011-10-12 2014-12-17 주식회사 케이티 Method and device for providing multi angle video to devices
US20130132998A1 (en) 2011-11-21 2013-05-23 General Instrument Corporation Sending a message within a television-content deliver environment
US20130144709A1 (en) 2011-12-05 2013-06-06 General Instrument Corporation Cognitive-impact modeling for users having divided attention
US9554185B2 (en) 2011-12-15 2017-01-24 Arris Enterprises, Inc. Supporting multiple attention-based, user-interaction modes
US20130160036A1 (en) 2011-12-15 2013-06-20 General Instrument Corporation Supporting multiple attention-based, user-interaction modes
US20130173765A1 (en) * 2011-12-29 2013-07-04 United Video Properties, Inc. Systems and methods for assigning roles between user devices
US20130194310A1 (en) 2012-01-26 2013-08-01 General Instrument Corporation Automatically adaptation of application data responsive to an operating condition of a portable computing device
US8995822B2 (en) 2012-03-14 2015-03-31 General Instrument Corporation Sentiment mapping in a media content item
US9106979B2 (en) 2012-03-14 2015-08-11 Arris Technology, Inc. Sentiment mapping in a media content item
US10681427B2 (en) 2012-03-14 2020-06-09 Arris Enterprises Llc Sentiment mapping in a media content item
US8943020B2 (en) * 2012-03-30 2015-01-27 Intel Corporation Techniques for intelligent media show across multiple devices
WO2013152801A1 (en) 2012-04-13 2013-10-17 Telefonaktiebolaget L M Ericsson (Publ) An improved method and apparatus for providing extended tv data
US20130347018A1 (en) * 2012-06-21 2013-12-26 Amazon Technologies, Inc. Providing supplemental content with active media
US9630095B2 (en) 2012-08-14 2017-04-25 Google Technology Holdings LLC Software-application initiation
CN104145434B (en) 2012-08-17 2017-12-12 青岛海信国际营销股份有限公司 The channel switch device of intelligent television
US20140074923A1 (en) 2012-09-12 2014-03-13 General Instrument Corporation Selective content disclosure in an ad-hoc network based on social cohesion
US9635438B2 (en) 2012-09-27 2017-04-25 Arris Enterprises, Inc. Providing secondary content to accompany a primary content item
US8484676B1 (en) 2012-11-21 2013-07-09 Motorola Mobility Llc Attention-based, multi-screen advertisement scheduling
US9544647B2 (en) 2012-11-21 2017-01-10 Google Technology Holdings LLC Attention-based advertisement scheduling in time-shifted content
US20140143043A1 (en) 2012-11-21 2014-05-22 General Instrument Corporation Multi-screen advertisement correlation based on purchases
TWI505698B (en) * 2012-12-06 2015-10-21 Inst Information Industry Synchronous displaying system for displaying multi-view frame and method for synchronously displaying muti-view frame
US9729920B2 (en) 2013-03-15 2017-08-08 Arris Enterprises, Inc. Attention estimation to control the delivery of data and audio/video content
US20150281787A1 (en) * 2013-03-15 2015-10-01 Google Inc. Social Network Augmentation of Broadcast Media
CN105264903B (en) * 2013-06-24 2019-11-05 英特尔公司 Technology and systems for multiple display media presentations
WO2015038657A2 (en) 2013-09-10 2015-03-19 Arris Enterprises, Inc. Creating derivative advertisements
US10796344B2 (en) 2013-09-12 2020-10-06 Arris Enterprises Llc Second screen advertisement correlation using scheduling information for first screen advertisements
US9497497B2 (en) * 2013-10-31 2016-11-15 Verizon Patent And Licensing Inc. Supplemental content for a video program
US20150245081A1 (en) * 2014-02-27 2015-08-27 United Video Properties, Inc. Methods and systems for presenting supplemental content in response to detecting a second device
US20150319509A1 (en) * 2014-05-02 2015-11-05 Verizon Patent And Licensing Inc. Modified search and advertisements for second screen devices
US10540696B2 (en) * 2014-09-30 2020-01-21 At&T Intellectual Property I, L.P. Enhanced shared media experiences

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US20110159929A1 (en) * 2009-12-31 2011-06-30 Broadcom Corporation Multiple remote controllers that each simultaneously controls a different visual presentation of a 2d/3d display
US20150022721A1 (en) * 2013-07-16 2015-01-22 Samsung Electronics Co., Ltd. Multi contents view display apparatus and method for displaying multi contents view

Cited By (7)

Publication number Priority date Publication date Assignee Title
US20170374401A1 (en) * 2016-04-11 2017-12-28 Rovi Guides, Inc. Method and systems for enhancing media viewing experiences on multiple devices
US10425669B2 (en) * 2016-04-11 2019-09-24 Rovi Guides, Inc. Method and systems for enhancing media viewing experiences on multiple devices
US20190208235A1 (en) * 2017-12-28 2019-07-04 Dish Network L.L.C. Remotely generated encoding metadata for local content encoding
US10820023B2 (en) * 2017-12-28 2020-10-27 Dish Network L.L.C. Remotely generated encoding metadata for local content encoding
US11070873B2 (en) 2017-12-28 2021-07-20 Dish Network L.L.C. Locally generated spot beam replacement
US11082726B2 (en) 2017-12-28 2021-08-03 Dish Network L.L.C. Remotely generated encoding metadata for local content encoding
US11800161B2 (en) 2017-12-28 2023-10-24 Dish Network L.L.C. Remotely generated encoding metadata for local content encoding

Also Published As

Publication number Publication date
DE112016004560T5 (en) 2018-06-28
GB201805025D0 (en) 2018-05-09
CA3000847C (en) 2020-09-08
WO2017062404A1 (en) 2017-04-13
GB2558452A (en) 2018-07-11
GB2558452B (en) 2020-01-01
CA3000847A1 (en) 2017-04-13
US9628839B1 (en) 2017-04-18

Similar Documents

Publication Publication Date Title
US11317164B2 (en) Methods, apparatus, and systems for providing media content over a communications network
JP7284906B2 (en) Delivery and playback of media content
CA3000847C (en) Gateway multi-view video stream processing for second-screen content overlay
US9756369B2 (en) Method and apparatus for streaming media data segments of different lengths wherein the segment of different length comprising data not belonging to the actual segment and beginning with key frames or containing key frames only
US9697630B2 (en) Sign language window using picture-in-picture
US10097785B2 (en) Selective sign language location
US20140223502A1 (en) Method of Operating an IP Client
KR102361314B1 (en) Method and apparatus for providing 360 degree virtual reality broadcasting services
CA2795694A1 (en) Video content distribution
US9204123B2 (en) Video content generation
US10204433B2 (en) Selective enablement of sign language display
KR20180064647A (en) System and method for providing hybrid user interfaces
KR20170130883A (en) Method and apparatus for virtual reality broadcasting service based on hybrid network
US10637904B2 (en) Multimedia streaming service presentation method, related apparatus, and related system
US20110242276A1 (en) Video Content Distribution
CA2824708C (en) Video content generation
Macq et al. Application Scenarios and Deployment Domains

Legal Events

Date Code Title Description
AS Assignment

Owner name: ARRIS ENTERPRISES, INC., GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WODKA, JOSEPH F.;WICKRAMASURIYA, JEHAN;VASUDEVAN, VENUGOPAL;SIGNING DATES FROM 20150922 TO 20150930;REEL/FRAME:036739/0888

AS Assignment

Owner name: ARRIS ENTERPRISES LLC, PENNSYLVANIA

Free format text: CHANGE OF NAME;ASSIGNOR:ARRIS ENTERPRISES INC;REEL/FRAME:041995/0031

Effective date: 20151231

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: ARRIS ENTERPRISES LLC, GEORGIA

Free format text: CHANGE OF NAME;ASSIGNOR:ARRIS ENTERPRISES, INC.;REEL/FRAME:049586/0470

Effective date: 20151231

AS Assignment


Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK

Free format text: ABL SECURITY AGREEMENT;ASSIGNORS:COMMSCOPE, INC. OF NORTH CAROLINA;COMMSCOPE TECHNOLOGIES LLC;ARRIS ENTERPRISES LLC;AND OTHERS;REEL/FRAME:049892/0396

Effective date: 20190404

Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK

Free format text: TERM LOAN SECURITY AGREEMENT;ASSIGNORS:COMMSCOPE, INC. OF NORTH CAROLINA;COMMSCOPE TECHNOLOGIES LLC;ARRIS ENTERPRISES LLC;AND OTHERS;REEL/FRAME:049905/0504

Effective date: 20190404

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, CONNECTICUT

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:ARRIS ENTERPRISES LLC;REEL/FRAME:049820/0495

Effective date: 20190404

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: WILMINGTON TRUST, DELAWARE

Free format text: SECURITY INTEREST;ASSIGNORS:ARRIS SOLUTIONS, INC.;ARRIS ENTERPRISES LLC;COMMSCOPE TECHNOLOGIES LLC;AND OTHERS;REEL/FRAME:060752/0001

Effective date: 20211115