WO2017026991A1 - Dynamic caching and predictive maintenance for video streaming - Google Patents

Dynamic caching and predictive maintenance for video streaming

Info

Publication number
WO2017026991A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
caches
network
response time
level
Prior art date
Application number
PCT/US2015/044262
Other languages
French (fr)
Inventor
Salam Akoum
Joydeep Acharya
Original Assignee
Hitachi, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi, Ltd. filed Critical Hitachi, Ltd.
Priority to PCT/US2015/044262 priority Critical patent/WO2017026991A1/en
Publication of WO2017026991A1 publication Critical patent/WO2017026991A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/222 Secondary servers, e.g. proxy server, cable television Head-end
    • H04N 21/2225 Local VOD servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L 65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/75 Media network packet handling
    • H04L 65/765 Media network packet handling intermediate
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/80 Responding to QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23103 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N 21/2402 Monitoring of the downstream path of the transmission network, e.g. bandwidth available
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N 21/2405 Monitoring of the internal components or processes of the server, e.g. server load
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/61 Network physical structure; Signal processing
    • H04N 21/6106 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
    • H04N 21/6131 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via a mobile phone network

Definitions

  • the present disclosure is generally related to wireless technologies, and more specifically, to caching schemes for wireless systems.
  • QoS Quality of Service
  • Such methods include feedback from client devices for monitoring or diagnosing quality such as link gain quality, Transmission Control Protocol (TCP) throughput, and latency.
  • Third party applications in the related art have been used to obtain higher layer feedback.
  • in example implementations, there are systems and methods that first determine, from the video key performance indicators (KPIs) collected from video analytics (e.g., intelligent video analytics reporting), the level of RAN and CN congestion, the level of usage of different caches in the network, and the delays that different videos incur coming from various locations in the network.
  • KPIs video key performance indicators
  • Example implementations utilize dynamic cache optimization and predictive maintenance for the network, depending on the data collected.
  • the dynamic cache selection can allow the media file segments to be transmitted from different caches in the network at a finer granularity than a video session, depending on availability of content and network performance.
  • in example implementations, there is predictive maintenance for the network based on KPIs from the applications.
  • the collected historical data can be based on cache usage and user Quality of Experience (QoE).
  • QoE Quality of Experience
  • the applications can facilitate the collection of information about status of congestion in different parts of the network, and help with network management.
  • the collected information and historical data allows network managers to predict which caches in the network may be in demand as well as which network segments are going to be congested, and proposes optimization of the cache sizes and locations based on the prediction information.
  • aspects of the present disclosure include an apparatus, which can include a memory configured to store status information of one or more video caches of a network for a video and user equipment (UE) information regarding a UE downloading the video from a set of video caches selected from the one or more video caches and regarding one or more other UEs downloading the video; and a processor, configured to determine, from the status information and the UE information regarding the one or more other UEs downloading the video, whether the set of video caches meets a predetermined response time for the UE; and for the set of video caches not meeting the predetermined response time, determine another set of video caches that meet the predetermined response time, and configure the UE to continue downloading the video from the another set of video caches.
  • UE user equipment
  • aspects of the present disclosure further include an apparatus, including a memory configured to store status information of one or more video caches of a network for a video and user equipment (UE) information regarding a UE downloading the video from a set of video caches selected from the one or more video caches and regarding one or more other UEs downloading the video; and a processor, configured to determine usage level of the one or more video caches based on the UE information, and for the determined usage level exceeding a predetermined usage level and for a level of congestion of the one or more video caches exceeding a predetermined congestion level, determine a location in the network associated with a congested one of the one or more video caches; and adjust one of a size and a location of the congested one of the one or more video caches.
  • a memory configured to store status information of one or more video caches of a network for a video and user equipment (UE) information regarding a UE downloading the video from a set of video caches selected from the one or more video
  • aspects of the present disclosure may further include a computer program, storing instructions for executing a process.
  • the instructions may involve processing a status of each cache for a video in a network at an initiation of a video download;
  • the computer program may be stored on a non-transitory computer readable medium to store the instructions for executing a process. Further, the computer program may be stored in the memory of a UE to be loaded to a processor.
  • FIG. 1 illustrates a network with distributed cache content across the network, in accordance with an example implementation.
  • FIG. 2 illustrates a flow diagram for dynamic cache selection for adaptive video streaming, in accordance with an example implementation.
  • FIG. 3 illustrates the hardware components of the video server, in accordance with an example implementation.
  • FIG. 4 illustrates a dynamic cache and capacity optimization in accordance with an example implementation.
  • FIG. 5 illustrates a flow diagram for predictive maintenance for caches in the network, in accordance with an example implementation.
  • FIG. 6 illustrates an example user equipment upon which example implementations can be implemented.
  • FIG. 7 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
  • KPIs video key performance indicators
  • application-level feedback for video optimization is utilized that takes into account frame-level jitter and re-buffering/stalling events.
  • the example implementations use the intelligent feedback for video Quality of Experience (QoE) to implement dynamic caching and predictive maintenance for cellular networks.
  • QoE video Quality of Experience
  • One way to deal with the traffic demand is to reduce the duplicate content transmissions, triggered by users requesting the same content, via adopting intelligent caching strategies inside the PDNs as well as at the RAN.
  • Caching reduces the traffic exchanged at the inter and intra internet service provider levels, and reduces the response time or latency needed to fetch a file.
  • Caching alleviates congestion at the network, reduces the energy consumption, and reduces the peak backhaul capacity required at the RAN side.
  • network efficiency may be impacted by leveraging the ability to monitor cache usage and video key performance indicators (KPIs) for different servers in the network, and forecast the usability and the storage requirements of different servers such that user experience is improved through forecasting network needs. Optimization of the network traffic placement and transmission dynamically according to the network conditions is key to impacting (e.g., improving) the performance of the mobile cellular network.
  • KPIs video key performance indicators
  • Example implementations involve dynamic cache selection and optimization for adaptive video streaming delivery in mobile cellular networks.
  • FIG. 1 shows an example of a cellular network with various network infrastructure components where a number of caches are distributed at different CDN and RAN nodes in the network. These caches are used to download videos to the user depending on the proximity of the cache to the user and the size of the cache such that the network response time is reduced and the video QoE is improved.
  • Each of the CDNs can be associated with a video server to manage the videos to be downloaded as described in FIG. 3. In the related art, the user requests a video from the operator which can be stored in more than one CDN video cache and managed by a video server. The operator selects one of these video sources to provide the requested service at the target video quality or QoE that the user is requesting.
  • the path of the delivery of the video from the cache to the user is selected. This is done at the video chunk level, where the path selection can be changed from one chunk to the other depending on the user mobility or the network quality.
  • example implementations involve having the media file or video downloaded from different caches at a time scale based on the network response time, the availability of the media file in the selected caches, and the social network of the user equipment (UE).
  • the social network of the UE is defined as the group of users that the UE interacts with on social networks platforms such that the UEs belonging to the same groups are likely to download the same media content, such as for example a video tag on social media or a tagged video upload.
  • a monitoring server is provided to determine video quality or QoE between the selected CDN and associated video server, and the UE, wherein the monitoring server may determine a set of caches for each UE.
  • FIG. 2 illustrates a flow diagram for dynamic cache selection for adaptive video streaming, in accordance with an example implementation.
  • the UE obtains information about the status of different caches from the intelligent video QoE reporting at the beginning of download.
  • the UE uses the intelligent video reporting application to monitor the status of all the available caches in the network via downloading test videos.
  • the UE begins downloading video from the preselected set of caches.
  • the best caches as reported from 200 are selected for the UE to download the media file.
  • the UE keeps monitoring the status of the caches that the UE is downloading from with the real video packets at 202 through the use of the application. Further, the UE checks the status of other caches using UE's social context. The UE reports the findings to the monitoring server.
  • the monitoring server also monitors the status of the caches for the UEs belonging to the same group as the current UE.
  • the preselected set of caches can include the caches determined by the test videos from the intelligent reporting analytics to be the caches that have the minimum response time and availability for the UE requested content.
  • the initial or preselected set of caches can also be the closest proxies to the UE by geographical location as determined by the operator, or in accordance with any other desired implementation.
  • a check is performed to determine whether the set of pre-selected caches meets the minimum response time for the UE. If the monitoring server decides that the network response time will be lower if the UE uses a different cache to download the rest of the video, based on the information reported by the UE and the information from the other UEs belonging to the same group (No), the monitoring server changes the preset caches to a new set containing the relevant information at 204. This process is repeated at 200 such that the set of caches is changed dynamically as the media file is being downloaded.
  • the change to the preset cache can include changing the set of caches used for the UE, as indicated in Table 1 below.
  • the network response time is the time required to download the media file from the cache to the UE.
  • Network response time can be a function of the time of delivery (e.g., number of hops × time required at each hop), and the time of fetching (e.g., based on the type of the cache, such as Solid state drive (SSD) versus Hard disk drive (HDD)).
  • SSD Solid state drive
  • HDD Hard disk drive
  • FIG. 3 illustrates the hardware components including the video server, in accordance with an example implementation.
  • a motherboard 300 having a random access memory (RAM) 301 and central processing unit (CPU) 302, storage 303 and network interface 304.
  • Network interface 304 can be configured to communicate with the internet and other elements of the architecture of FIG. 1.
  • Storage 303 may be configured with instructions to facilitate the functionality of the video server, which is loaded into memory 301 and executed by CPU 302.
  • Storage 303 may also include cache storage for holding caches of videos.
  • the video server of FIG. 3 can be represented to the monitoring server as an active cache which can be used for selection in a set.
  • the terms 'video server', 'CDN' and 'proxy cache' may be used interchangeably.
  • Table 1 illustrates an example of how the dynamic cache selection works taking into account the social ties of the UE and the response time from different caches, in accordance with an example implementation.
  • Assumptions include a UE1 attached to a source eNB, and socially tied on a social media network to UE2 and UE5.
  • UE1, UE2, UE5 are to access the same media file (e.g., streaming a soccer game where a favorite team is playing).
  • Proxy caches are distributed throughout the CDN network and the RAN. Test videos are being downloaded from various proxy caches by the intelligent reporting app preinstalled on UE1 - UE6. Three proxy caches (cache 1, cache 2, and cache 3) are available for media file download for UE1.
  • the social network of the UE is defined as the group of users that the UE interacts with on social networks platforms such that the UEs belonging to the same groups are likely to download the same media content, such as for example a video tag on social media or a tagged video upload.
  • UE1, UE2 and UE5 can be friends on a social media platform for example and tagged on the same video uploaded by UE2.
  • Table 1 An example illustrating dynamic cache selection based on social ties and cache response time.
  • the example implementations optimize the delivery of the videos from the CDN to the various proxy caches distributed throughout the network (CDN proxy caches and RAN caches) depending on the popularity of the videos as well as the size of the videos, and subsequently the congestion reported in a particular geographical region. This information is made available from the metrics collected from the intelligent QoE reporting application at the UE or a similar QoS reporting at the eNB, forming the edge of the network.
  • Example implementations optimize where to make the videos available, based on the analysis of the usage of the caches, their sizes and their response time from the intelligent video QoE collection.
  • FIG. 4 illustrates a dynamic cache and capacity optimization in accordance with an example implementation.
  • the monitoring server obtains information about the history of the usage of different caches from the intelligent video QoE reporting application.
  • information about the history of the usage of the different caches from the intelligent video QoE reporting application is first collected, and analyzed.
  • information about the sizes of the various caches from the CDN and the mobile network is then collected from the CDN and the mobile network operator. Assuming a certain threshold for the response time based on a given QoE requirements for the video users, the distribution of the videos to the various caches is optimized such that the overall response time to the various users is minimized.
  • the response time can be expressed as T_resp = T_delivery + T_fetching, where T_delivery is the time required to deliver the video from the cache to the UE, taking into account the number of hops at the wireline and wireless network, and T_fetching is the time required to fetch the video from the cache, depending on the type of the cache (SSD, HDD, ...) and its availability. Assume also that a reasonable threshold for a response time for the UEs to achieve a given QoE is 15 ms, although not limited thereto, as would be understood by those skilled in the art. At the monitoring server, the history of the usage of the different caches is analyzed, and the response time from various caches to the different UEs is monitored.
  • if the T_resp is on average greater than 15 ms due to the T_delivery being large, another cache is made available for that UE and subsequently the group of UEs in proximity to that UE by freeing up more space at another cache, and reallocating videos to other caches.
  • Another solution may employ a virtual distributed cache cloud, where for example, several caches are brought together to form one virtual cache able to serve more UEs.
  • the virtual cache creates the illusion of a larger size cache and allows high availability of requested content in a collaborative fashion between various proxy caches.
  • Example implementations also involve a method and an apparatus for collecting historical and real time information from different parts from the network using the intelligent video QoE reporting, and combining the information to perform analytics.
  • the analytics include (but are not limited to) processing information related to the status of congestion in different parts of the network, the level of usage of different caches, and what kinds of problems different caches are having. This information is then combined and extrapolated to predict congestion in other parts of the network (depending on the network topology).
  • this information can lead to figuring out which caches or other network segments (such as specific routers or interconnect internet service providers etc.) have or will have problems.
  • This information can also be used at the network operator and the CDN level to make decisions about how many and where different caches need to be added to meet the user demands. This fulfills the predictive maintenance spirit of using predictive analytics for cellular networks. This is depicted in the flowchart in FIG. 5.
  • FIG. 5 illustrates a flow diagram for predictive maintenance for caches in the network, in accordance with an example implementation.
  • the monitoring server collects information about quality of streaming for various caches in the network. Such information can be the response time for the different caches, the QoE of the videos from a network delivery perspective, taking into account the number of transmitted bits, the number of retransmissions, and so on.
  • the monitoring server analyzes the information to determine cache usage and level of congestion in different parts of the network.
  • the monitoring server predicts the level of cache usage for future downloads based on historical data analysis. Based on training data (historical data) analysis, the monitoring server can predict the level of cache usage from the size of the video downloads, the popularity of the videos, for example.
  • the monitoring server can predict the level of usage of the cache for future cache planning.
  • the monitoring server checks if the predicted level of usage and congestion is below the predetermined threshold. If so (Yes), then the flow proceeds to 504 wherein no change is made to the location and sizes of the caches in the network. Otherwise, (No), the flow proceeds to 505 wherein the monitoring server decides on the cause of congestion in other parts of the network.
  • the monitoring server anticipates future problems by adding cache storage sizes or modifying cache locations in the network.
  • FIG. 6 illustrates an example user equipment upon which example implementations can be implemented.
  • the UE 600 may involve the following modules: the CPU module 601, the Tx/Rx array 602, the baseband processor 603, and the memory 604.
  • the CPU module 601 can be configured to perform one or more functions, such as execution of the flows as described, for example, in FIG. 2 to execute an application to download a video from a preselected set of caches.
  • the Tx/RX array 602 may be implemented as an array of one or more antennas to communicate with the one or more base stations.
  • the memory 604 can be configured to store congestion information and flow traffic.
  • the baseband digital signal processing (DSP) module can be configured to perform one or more functions, such as to conduct measurements to generate the position reference signal for the serving base station to estimate the location of the UE.
  • DSP digital signal processing
  • CPU module 601 can load an application from memory 604 to execute a computer program.
  • CPU module 601 may be configured to process a status of each cache for a video in a network at an initiation of a video download, wherein the set of caches can be selected from the RAN, the CN, or the transit internet service provider/CDN as illustrated in FIG. 1.
  • CPU module 601 may report the status of each cache in the form of QoE metrics and other information to the monitoring server for the monitoring server to dynamically determine a set of caches to meet the predetermined response time of the UE as indicated in Table 1.
  • the predetermined response time can be set by an administrator of a server, by the owner of the UE, or by any other method as known to one of ordinary skill in the art.
  • CPU module 601 processes the preselected set of caches for the video download based on the information transmitted to the monitoring server and can initiate the download of the video from the preselected set of caches.
  • the preselected set of caches can be set by the monitoring server based on received social media information relating the downloading UE to other UEs, by cache response time and by other methods as illustrated in Table 1.
  • CPU module 601 may also transmit a report indicative of the preselected set of caches not meeting the predetermined response time to the monitoring server, whereupon the monitoring server may transmit a new set of caches for the UE to download from. CPU module 601 may then change the download to the new set of caches when received.
  • FIG. 7 illustrates an example computing environment with an example computer device suitable for use in some example implementations, such as an apparatus to facilitate the functionality of a monitoring server.
  • Computer device 705 in computing environment 700 can include one or more processing units, cores, or processors 710, memory 715 (e.g., RAM, ROM, and/or the like), internal storage 720 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 725, any of which can be coupled on a communication mechanism or bus 730 for communicating information or embedded in the computer device 705.
  • memory 715 e.g., RAM, ROM, and/or the like
  • internal storage 720 e.g., magnetic, optical, solid state storage, and/or organic
  • I/O interface 725 any of which can be coupled on a communication mechanism or bus 730 for communicating information or embedded in the computer device 705.
  • Computer device 705 can be communicatively coupled to input/user interface 735 and output device/interface 740. Either one or both of input/user interface 735 and output device/interface 740 can be a wired or wireless interface and can be detachable.
  • Input/user interface 735 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like).
  • Output device/interface 740 may include a display, television, monitor, printer, speaker, braille, or the like. In some example
  • input/user interface 735 and output device/interface 740 can be embedded with or physically coupled to the computer device 705.
  • other computer devices may function as or provide the functions of input/user interface 735 and output device/interface 740 for a computer device 705.
  • Examples of computer device 705 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
  • highly mobile devices e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like
  • mobile devices e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like
  • devices not designed for mobility e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like.
  • Computer device 705 can be communicatively coupled (e.g., via I/O interface 725) to external storage 745 and network 750 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration.
  • Computer device 705 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • I/O interface 725 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 700.
  • Network 750 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • Computer device 705 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media.
  • Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like.
  • Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • Computer device 705 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments.
  • Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media.
  • the executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
  • Processor(s) 710 can execute under any operating system (OS) (not shown), in a native or virtual environment.
  • OS operating system
  • One or more applications can be deployed that include logic unit 760, application programming interface (API) unit 765, input unit 770, output unit 775, and inter-unit communication mechanism 795 for the different units to communicate with each other, with the OS, and with other applications (not shown).
  • API unit 765 when information or an execution instruction is received by API unit 765, it may be communicated to one or more other units (e.g., logic unit 760, input unit 770, output unit 775).
  • logic unit 760 may be configured to control the information flow among the units and direct the services provided by API unit 765, input unit 770, output unit 775, in some example implementations described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 760 alone or in conjunction with API unit 765.
  • the input unit 770 may be configured to obtain input for the calculations described in the example implementations
  • the output unit 775 may be configured to provide output based on the calculations described in example implementations.
  • I/O interface 725 may be configured to receive information associated with a RAN and to communicate with the RAN, including the eNodeBs and associated UEs.
  • Memory 715 is configured to store information relating the one or more UEs to one or more RAN related metrics based on the information received through the I/O interface 725.
  • Memory 715 may be configured to store status information for one or more video caches of a network for a video and user equipment information regarding a UE downloading the video from a set of caches selected from the one or more video caches, as well as information regarding one or more UEs downloading a video from one or more sets of video caches as illustrated in Table 1.
  • Processor(s) 710 may be configured to determine, from the status information and the UE information regarding the one or more other UEs downloading the video, whether the set of video caches meets a predetermined response time for the UE based on the predetermined response time set by the administrator of the monitoring server or of the UE. Should the set of video caches not meet the predetermined response time, the processor(s) 710 may determine another set of video caches that meet the predetermined response time based on the example implementations above or any other desired implementation, and then communicate with the application of the UE to instruct the UE to continue downloading the video from the new set of video caches as illustrated in FIG. 2. Processor(s) 710 may then update the caches utilized by updating the status information stored in memory 715.
  • processor(s) 710 may be configured to determine the usage level of the one or more video caches based on the UE information as illustrated in Table 1. If a particular cache is overused and exceeds a predetermined usage level as set by the administrator of the monitoring server (e.g., a set number of UEs), and if the level of congestion exceeds a predetermined congestion level as set by the administrator of the monitoring server, the processor(s) 710 may identify the locations in the network that contain the congested caches and adjust the size or the location of the video caches as illustrated in FIG. 4.
  • a predetermined usage level as set by the administrator of the monitoring server
  • Processor(s) 710 may also predict a level of cache usage based on usage history as stored in memory 715 or as retrieved from a database such as a traffic history database. The predicted usage level can be calculated based on the implementations of FIG. 5 or by any other method as desired by the administrator. For the predicted usage level exceeding the predetermined usage level and for a level of congestion of the one or more video caches being below a predetermined congestion level, processor(s) 710 may conduct one of: adding cache storage to the one or more caches or modifying cache locations of the one or more cache locations depending on the desired implementation. Further, processor(s) 710 may configure an application of the UE as described in FIG.
  • processor(s) 710 can determine whether the set of video caches meets a predetermined response time for the UE based on feedback from the application of the UE.
  • processing can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other information storage, transmission or display devices.
  • Example implementations may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs.
  • Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium.
  • a computer-readable storage medium may involve tangible mediums such as, but not limited to, optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information.
  • a computer readable signal medium may include mediums such as carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Example implementations are directed to systems and methods for radio access network (RAN) load balancing that takes into account the end-to-end application requirements. Example implementations involve systems and methods by which RAN level information such as existing base station load and channel conditions can be combined with application information to perform application aware RAN load balancing. By implementation of the examples provided in the present disclosure, end-to-end quality of service (QoS) of all users in the cellular network may be improved. Example implementations may be implemented by a method for dynamic caching and predictive maintenance of the network using intelligent video Quality of Experience (QoE) metrics feedback from the edge of the network.

Description

DYNAMIC CACHING AND PREDICTIVE MAINTENANCE FOR VIDEO
STREAMING
BACKGROUND
Field
[0001] The present disclosure is generally related to wireless technologies, and more specifically, to caching schemes for wireless systems.
Related Art
[0002] Delivery of video content over mobile broadband networks is expected to be the prevailing traffic in mobile networks. Network operators and vendors have been struggling to find ways to meet the exploding demand for data and multimedia content, triggered by the proliferation of tablets and smart phones. The ever-increasing demand for multimedia services creates traffic congestion and a reduction in the quality of service, not only at the radio access network (RAN) side, but also at the core network (CN) and packet data network (PDN) sides. Increasing the wireless resources or offloading to other networks such as wireless local area network (WLAN) may not always be a feasible and reliable option for network operators.
[0003] Several methods for optimizing the Quality of Service (QoS) of users have been implemented in the related art. Such methods include feedback from client devices for monitoring or diagnosing quality such as link gain quality, Transmission Control Protocol (TCP) throughput, and latency. Third party applications in the related art have been used to obtain higher layer feedback.
SUMMARY
[0004] In example implementations, there are systems and methods that first determine, from the video key performance indicators (KPIs) collected from video analytics (e.g., intelligent video analytics reporting), the level of RAN and CN congestion, the level of usage of different caches in the network, and the delays that different videos incur coming from various locations in the network. Example implementations utilize dynamic cache optimization and predictive maintenance for the network, depending on the
data collected. An example of an implementation of video analytics is described in PCT Application No. PCT/US15/44056, filed on August 6, 2015, which is herein incorporated by reference in its entirety for all purposes.
[0005] In example implementations, there is dynamic cache selection and optimization based on KPIs from the applications. The dynamic cache selection can allow the media file segments to be transmitted from different caches in the network at a finer granularity than a video session, depending on availability of content and network performance.
[0006] In example implementations, there is predictive maintenance for the network based on KPIs from the applications. The collected historical data can be based on cache usage and user Quality of Experience (QoE). Further, the applications can facilitate the collection of information about status of congestion in different parts of the network, and help with network management.
[0007] The collected information and historical data allows network managers to predict which caches in the network may be in demand as well as which network segments are going to be congested, and proposes optimization of the cache sizes and locations based on the prediction information.
[0008] Aspects of the present disclosure include an apparatus, which can include a memory configured to store status information of one or more video caches of a network for a video and user equipment (UE) information regarding a UE downloading the video from a set of video caches selected from the one or more video caches and regarding one or more other UEs downloading the video; and a processor, configured to determine, from the status information and the UE information regarding the one or more other UEs downloading the video, whether the set of video caches meets a predetermined response time for the UE; and for the set of video caches not meeting the predetermined response time, determine another set of video caches that meet the predetermined response time, and configure the UE to continue downloading the video from the another set of video caches.
[0009] Aspects of the present disclosure further include an apparatus, including a memory configured to store status information of one or more video caches of a network for a video and user equipment (UE) information regarding a UE downloading the video from a set of video caches selected from the one or more video caches and regarding one or more other UEs downloading the video; and a processor, configured to determine usage level of the one or more video caches based on the UE information, and for the determined usage level exceeding a predetermined usage level and for a level of congestion of the one or more video caches exceeding a predetermined congestion level, determine a location in the network associated with a congested one of the one or more video caches; and adjust one of a size and a location of the congested one of the one or more video caches.
[0010] Aspects of the present disclosure may further include a computer program, storing instructions for executing a process. The instructions may involve processing a status of each cache for a video in a network at an initiation of a video download;
receiving a preselected set of caches for a video based on the status of each cache for the video in the network; downloading the video from the preselected set of caches; and, for the downloading of the video not meeting a predetermined response time, transmitting a report indicative of the preselected set of caches not meeting the predetermined response time. The computer program may be stored on a non-transitory computer readable medium to store the instructions for executing a process. Further, the computer program may be stored in the memory of a UE to be loaded to a processor.
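A minimal Python sketch of this instruction sequence follows, assuming hypothetical helper names (select_preselected_caches, run_download) and invented timing values not taken from the disclosure; it ranks caches by measured response time, walks through the chunks of a download, and records a report whenever the preselected set misses the predetermined response time.

```python
# Hypothetical sketch of the UE-side program flow; names and numbers are
# illustrative only, not part of the original disclosure.

def select_preselected_caches(cache_status_ms, k=2):
    """Pick the k caches with the smallest measured response time."""
    ranked = sorted(cache_status_ms, key=cache_status_ms.get)
    return ranked[:k]

def run_download(cache_status_ms, chunk_times_ms, threshold_ms=15.0):
    """Download chunks from the preselected set; report when the set is too slow."""
    caches = select_preselected_caches(cache_status_ms)
    reports = []
    for chunk_id, measured_ms in enumerate(chunk_times_ms):
        if measured_ms > threshold_ms:
            # A monitoring server could answer such a report with a new cache set.
            reports.append({"chunk": chunk_id, "caches": list(caches),
                            "measured_ms": measured_ms})
    return caches, reports

# Example: cache3 is slow, so it is not preselected; chunk 2 misses 15 ms.
status = {"cache1": 8.0, "cache2": 11.0, "cache3": 40.0}
print(run_download(status, chunk_times_ms=[9.5, 12.0, 21.0]))
```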
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] FIG. 1 illustrates a network with distributed cache content across the network, in accordance with an example implementation.
[0012] FIG. 2 illustrates a flow diagram for dynamic cache selection for adaptive video streaming, in accordance with an example implementation.
[0013] FIG. 3 illustrates the hardware components of the video server, in accordance with an example implementation.
[0014] FIG. 4 illustrates a dynamic cache and capacity optimization in accordance with an example implementation.
[0015] FIG. 5 illustrates a flow diagram for predictive maintenance for caches in the network, in accordance with an example implementation.
[0016] FIG. 6 illustrates an example user equipment upon which example implementations can be implemented.
[0017] FIG. 7 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
DETAILED DESCRIPTION
[0018] The following detailed description provides further details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term "automatic" may involve fully automatic or semiautomatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. The terms enhanced node B (eNB), small cell (SC), base station (BS) and pico cell may be utilized interchangeably throughout the example implementations. The terms traffic and data may also be utilized interchangeably throughout the example implementations. The implementations described herein are also not intended to be limiting, and can be implemented in various ways, depending on the desired implementation.
[0019] Related art applications have not optimized for video key performance indicators (KPIs). Therefore, in example implementations of the present disclosure, application-level feedback for video optimization is utilized that takes into account frame-level jitter and re-buffering/stalling events. The example implementations use the intelligent feedback for video Quality of Experience (QoE) to implement dynamic caching and predictive maintenance for cellular networks.
[0020] One way to deal with the traffic demand is to reduce the duplicate content transmissions, triggered by users requesting the same content, via adopting intelligent caching strategies inside the PDNs as well as at the RAN. Caching reduces the traffic exchanged at the inter and intra internet service provider levels, and reduces the response time or latency needed to fetch a file. Caching alleviates congestion at the network, reduces the energy consumption, and reduces the peak backhaul capacity required at the RAN side. Thus, network efficiency may be impacted by leveraging the ability to monitor cache usage and video key performance indicators (KPIs) for different servers in the network, and forecast the usability and the storage requirements of different servers such that user experience is improved through forecasting network needs. Optimization of the network traffic placement and transmission dynamically according to the network conditions is key to impacting (e.g., improving) the performance of the mobile cellular network.
[0021] Related art solutions for cache selection depend upon the geographical area covered by one or more enhanced node Bs (eNBs). Such solutions may not be suited for mobile users and their network operators. Ways to optimize the delivery, or select the best path from the Content Delivery Network (CDN) to the user based on the quality of the CN and the RAN have been implemented in the related art. The goal of the optimization would be to reduce the network response time, by accounting for routing distance and storage occupancy.
[0022] Example implementations involve dynamic cache selection and optimization for adaptive video streaming delivery in mobile cellular networks.
[0023] Dynamic cache selection (online path selection for video delivery)
[0024] FIG. 1 shows an example of a cellular network with various network infrastructure components where a number of caches are distributed at different CDN and RAN nodes in the network. These caches are used to download videos to the user depending on the proximity of the cache to the user and the size of the cache such that the network response time is reduced and the video QoE is improved. Each of the CDNs can be associated with a video server to manage the videos to be downloaded as described in FIG. 3. In the related art, the user requests a video from the operator which can be stored in more than one CDN video cache and managed by a video server. The operator selects one of these video sources to provide the requested service at the target video quality or QoE that the user is requesting. Once a cache is selected, the path of the delivery of the video from the cache to the user is selected. This is done at the video chunk level, where the path selection can be changed from one chunk to the other depending on the user mobility or the network quality.
[0025] In contrast to the related art, example implementations involve having the media file or video downloaded from different caches at a time scale based on the network response time, the availability of the media file in the selected caches, and the social network of the user equipment (UE). The social network of the UE is defined as the group of users that the UE interacts with on social networks platforms such that the UEs belonging to the same groups are likely to download the same media content, such as for example a video tag on social media or a tagged video upload.
[0026] In example implementations as illustrated in FIG. 1, a monitoring server is provided to determine video quality or QoE between the selected CDN and associated video server, and the UE, wherein the monitoring server may determine a set of caches for each UE.
[0027] FIG. 2 illustrates a flow diagram for dynamic cache selection for adaptive video streaming, in accordance with an example implementation. At 200, the UE obtains information about the status of different caches from the intelligent video QoE reporting at the beginning of download. The UE uses the intelligent video reporting application to monitor the status of all the available caches in the network via downloading test videos.
[0028] At 201, the UE begins downloading video from the preselected set of caches. The best caches as reported from 200 are selected for the UE to download the media file. While downloading from the preselected set of caches, the UE keeps monitoring the status of the caches that the UE is downloading from with the real video packets at 202 through the use of the application. Further, the UE checks the status of other caches using the UE's social context. The UE reports the findings to the monitoring server. The monitoring server also monitors the status of the caches for the UEs belonging to the same group as the current UE. In example implementations, the preselected set of caches can include the caches determined by the test videos from the intelligent reporting analytics to be the caches that have the minimum response time and availability for the UE requested content. The initial or preselected set of caches can also be the closest proxies to the UE by geographical location as determined by the operator, or in accordance with any other desired implementation.
[0029] At 203, a check is performed to determine whether the set of pre-selected caches meets the minimum response time for the UE. If the monitoring server decides that the network response time will be lower if the UE uses a different cache to download the rest of the video, based on the information reported by the UE and the information from the other UEs belonging to the same group (No), the monitoring server changes the preset caches to a new set containing the relevant information at 204. This process is repeated at 200 such that the set of caches is changed dynamically as the media file is being downloaded. The change to the preset cache can include changing the set of caches used for the UE, as indicated in Table 1 below.
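A small sketch of the check at 203 and the re-selection at 204 is shown below; the aggregation rule (averaging the response times reported by the UE and its socially tied UEs) and the 15 ms threshold are illustrative assumptions, not the claimed method.

```python
# Illustrative sketch of the FIG. 2 decision step; all data is made up.

def meets_response_time(current_set, reports, threshold_ms=15.0):
    """reports: list of {cache: response_time_ms} dicts from the UE and its group."""
    for cache in current_set:
        samples = [r[cache] for r in reports if cache in r]
        if samples and sum(samples) / len(samples) > threshold_ms:
            return False
    return True

def reselect(all_caches, reports, threshold_ms=15.0, k=2):
    """Return a new set of caches whose group-average response time is lowest."""
    def avg(cache):
        samples = [r[cache] for r in reports if cache in r]
        return sum(samples) / len(samples) if samples else float("inf")
    ranked = sorted(all_caches, key=avg)
    return [c for c in ranked[:k] if avg(c) <= threshold_ms] or ranked[:1]

# UE1 plus socially tied UE2 and UE5 report response times per cache.
reports = [{"cache1": 22.0, "cache2": 9.0}, {"cache1": 25.0, "cache3": 7.5},
           {"cache2": 10.0, "cache3": 8.0}]
current = ["cache1", "cache2"]
if not meets_response_time(current, reports):
    current = reselect(["cache1", "cache2", "cache3"], reports)
print(current)  # cache1 is dropped in favor of the faster cache3
```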
[0030] In an example implementation, the network response time is the time required to download the media file from the cache to the UE. Network response time can be a function of the time of delivery (e.g., number of hops × time required at each hop), and the time of fetching (e.g., based on the type of the cache, such as Solid state drive (SSD) versus Hard disk drive (HDD)).
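For illustration, a toy model of this response time is sketched below; the per-hop and fetch latencies are invented example numbers, not values from the disclosure.

```python
# Minimal sketch of the response-time model in paragraph [0030].

FETCH_MS = {"SSD": 0.5, "HDD": 6.0}   # assumed fetch latency per cache type

def response_time_ms(hops, per_hop_ms, cache_type):
    t_delivery = hops * per_hop_ms          # time of delivery
    t_fetching = FETCH_MS[cache_type]       # time of fetching
    return t_delivery + t_fetching

# A nearby SSD-backed RAN cache versus a distant HDD-backed CDN cache.
print(response_time_ms(hops=2, per_hop_ms=2.0, cache_type="SSD"))   # 4.5 ms
print(response_time_ms(hops=6, per_hop_ms=3.0, cache_type="HDD"))   # 24.0 ms
```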
[0031] FIG. 3 illustrates the hardware components including the video server, in accordance with an example implementation. In FIG. 3, there is a motherboard 300 having a random access memory (RAM) 301 and central processing unit (CPU) 302, storage 303 and network interface 304. Network interface 304 can be configured to communicate with the internet and other elements of the architecture of FIG. 1. Storage 303 may be configured with instructions to facilitate the functionality of the video server, which is loaded into memory 301 and executed by CPU 302. Storage 303 may also include cache storage for holding caches of videos. The video server of FIG. 3 can be represented to the monitoring server as an active cache which can be used for selection in a set. In the present disclosure, the terms 'video server', 'CDN' and 'proxy cache' may be used interchangeably.
[0032] Table 1 illustrates an example of how the dynamic cache selection works taking into account the social ties of the UE and the response time from different caches, in accordance with an example implementation. Assumptions include a UE1 attached to a source eNB, and socially tied on a social media network to UE2 and UE5. UE1, UE2, and UE5 are to access the same media file (e.g., streaming a soccer game where a favorite team is playing). Proxy caches are distributed throughout the CDN network and the RAN. Test videos are being downloaded from various proxy caches by the intelligent reporting app preinstalled on UE1 - UE6. Three proxy caches (cache 1, cache 2, and cache 3) are available for media file download for UE1.
[0033] The social network of the UE is defined as the group of users that the UE interacts with on social networks platforms such that the UEs belonging to the same groups are likely to download the same media content, such as for example a video tag on social media or a tagged video upload. In the example given in Table 1, UE1, UE2 and UE5 can be friends on a social media platform, for example, and tagged on the same video uploaded by UE2.
[0034] Table 1: An example illustrating dynamic cache selection based on social ties and cache response time.
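As a rough illustration of the social-tie grouping that the Table 1 scenario relies on, the sketch below derives a UE's social group from shared video tags; the tagging data and the grouping rule are assumptions made for the example, not data from Table 1.

```python
# Hypothetical sketch: UEs tagged on the same uploaded video are treated as
# one social group likely to request the same media content.

def social_group(ue, tags):
    """tags: mapping video_id -> set of UEs tagged on that video."""
    group = {ue}
    for tagged in tags.values():
        if ue in tagged:
            group |= tagged
    return group

# UE2 uploaded a video and tagged UE1 and UE5 (the Table 1 scenario);
# UE3 and UE4 are tagged on an unrelated video.
tags = {"video_A": {"UE1", "UE2", "UE5"}, "video_B": {"UE3", "UE4"}}
print(sorted(social_group("UE1", tags)))   # ['UE1', 'UE2', 'UE5']
```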
[0035] Dynamic Cache Content and Capacity Optimization
[0036] In addition to the download of the videos from the caches to the mobile user through the mobile network, the example implementations optimize the delivery of the videos from the CDN to the various proxy caches distributed throughout the network (CDN proxy caches and RAN caches) depending on the popularity of the videos as well as the size of the videos, and subsequently the congestion reported in a particular geographical region. This information is made available from the metrics collected from the intelligent QoE reporting application at the UE or a similar QoS reporting at the eNB, forming the edge of the network.
[0037] Example implementations optimize where to make the videos available, based on the analysis of the usage of the caches, their sizes and their response time from the intelligent video QoE collection. FIG. 4 illustrates a dynamic cache and capacity optimization in accordance with an example implementation.
[0038] At 400, the monitoring server obtains information about the history of the usage of different caches from the intelligent video QoE reporting application. Thus, at the monitoring server, information about the history of the usage of the different caches from the intelligent video QoE reporting application is first collected and analyzed. At 401, information about the sizes of the various caches is then collected from the CDN and the mobile network operator. Assuming a certain threshold for the response time based on given QoE requirements for the video users, the distribution of the videos to the various caches is optimized such that the overall response time to the various users is minimized.
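One way to realize the optimization at 400-401 is a greedy placement that assigns the most popular videos to the caches with the lowest expected response time, subject to the reported cache sizes. The Python sketch below uses this greedy heuristic as an assumption for illustration; it is not asserted to be the optimization of the example implementation, and the data values are illustrative.

def place_videos(videos, caches, resp_ms):
    """videos: list of (video_id, size_gb, popularity)
    caches: dict cache -> free capacity in GB
    resp_ms: dict cache -> expected response time to its user region (ms)"""
    placement = {}
    free = dict(caches)
    for vid, size, _pop in sorted(videos, key=lambda v: -v[2]):   # most popular first
        feasible = [c for c in free if free[c] >= size]
        if not feasible:
            continue                                   # video stays at the origin CDN only
        best = min(feasible, key=lambda c: resp_ms[c])
        placement[vid] = best
        free[best] -= size
    return placement

videos = [("v1", 2.0, 0.9), ("v2", 1.0, 0.6), ("v3", 3.0, 0.8)]
print(place_videos(videos, {"ran_cache": 2.0, "cdn_cache": 4.0},
                   {"ran_cache": 6.0, "cdn_cache": 12.0}))
# {'v1': 'ran_cache', 'v3': 'cdn_cache', 'v2': 'cdn_cache'}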
[0039] Assume for example that the response time T_resp = T_delivery + T_fetching, where T_delivery is the time required to deliver the video from the cache to the UE, taking into account the number of hops over the wireline and wireless network, and T_fetching is the time required to fetch the video from the cache, depending on the type of the cache (SSD, HDD, ...) and its availability. Assume also that a reasonable threshold for a response time for the UEs to achieve a given QoE is 15 ms, although not limited thereto, as would be understood by those skilled in the art. At the monitoring server, the history of the usage of the different caches is analyzed, and the response time from various caches to the different UEs is monitored. If, for example, for a given UE1, the T_resp is on average greater than 15 ms due to the T_delivery being large, another cache is made available for that UE, and subsequently for the group of UEs in proximity to that UE, by freeing up more space at another cache and reallocating videos to other caches.
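The reallocation rule of paragraph [0039] can be sketched in Python as follows, assuming the 15 ms threshold above and assuming that a cache is switched only when the delivery component dominates the average response time. The function names, candidate list, and sample measurements are illustrative assumptions.

THRESHOLD_MS = 15.0

def needs_reallocation(samples):
    """samples: list of (t_delivery_ms, t_fetching_ms) measurements for one UE."""
    avg_resp = sum(d + f for d, f in samples) / len(samples)
    avg_delivery = sum(d for d, _ in samples) / len(samples)
    return avg_resp > THRESHOLD_MS and avg_delivery > avg_resp / 2

def reallocate(ue, current_cache, candidate_caches, est_delivery_ms):
    # Choose the candidate with the lowest estimated delivery time; the monitoring
    # server would then free up space there and reallocate videos accordingly.
    best = min(candidate_caches, key=lambda c: est_delivery_ms[c])
    return best if est_delivery_ms[best] < est_delivery_ms[current_cache] else current_cache

samples_ue1 = [(14.0, 4.0), (16.0, 3.0), (13.0, 5.0)]
if needs_reallocation(samples_ue1):
    new_cache = reallocate("UE1", "cache1",
                           ["cache2", "cache3"],
                           {"cache1": 14.0, "cache2": 6.0, "cache3": 9.0})
    print(new_cache)   # cache2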
[0040] Another solution may employ a virtual distributed cache cloud, where for example, several caches are brought together to form one virtual cache able to serve more UEs. The virtual cache creates the illusion of a larger size cache and allows high availability of requested content in a collaborative fashion between various proxy caches.
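As a rough illustration of the virtual distributed cache cloud, the Python sketch below presents several proxy caches behind a single lookup interface, so that a request succeeds if any member cache holds the video. The class interface and data are assumptions for illustration only.

class VirtualCache:
    def __init__(self, member_caches):
        # member_caches: dict mapping cache name -> set of video ids it holds
        self.members = member_caches

    def locate(self, video_id):
        """Return the name of a member cache holding video_id, or None."""
        for name, contents in self.members.items():
            if video_id in contents:
                return name
        return None

vc = VirtualCache({"ran_cache": {"v1"}, "cdn_cache_a": {"v2", "v3"}})
print(vc.locate("v3"))   # cdn_cache_a
print(vc.locate("v9"))   # None (would be fetched from the origin CDN)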
[0041] Predictive Cache and Network Maintenance
[0042] Example implementations also involve a method and an apparatus for collecting historical and real-time information from different parts of the network using the intelligent video QoE reporting, and combining the information to perform analytics. The analytics include (but are not limited to) processing information related to the status of congestion in different parts of the network, the level of usage of different caches, and the kinds of problems different caches are having. This information is then combined and extrapolated to predict congestion in other parts of the network (depending on the network topology). For example, this information can lead to determining which caches or other network segments (such as specific routers or interconnecting internet service providers, etc.) have or will have problems.
[0043] This information can also be used at the network operator and the CDN level to make decisions about how many caches need to be added, and where, to meet user demands. This fulfills the predictive maintenance aspect of applying predictive analytics to cellular networks. This is depicted in the flowchart of FIG. 5.
[0044] FIG. 5 illustrates a flow diagram for predictive maintenance for caches in the network, in accordance with an example implementation. At 500, the monitoring server collects information about the quality of streaming for various caches in the network. Such information can include the response time for the different caches and the QoE of the videos from a network delivery perspective, taking into account the number of transmitted bits, the number of retransmissions, and so on. At 501, the monitoring server analyzes the information to determine cache usage and the level of congestion in different parts of the network. At 502, the monitoring server predicts the level of cache usage for future downloads based on historical data analysis. Based on training data (historical data) analysis, the monitoring server can predict the level of cache usage from, for example, the size of the video downloads and the popularity of the videos. Using this same feature set (popularity, size of video downloads, frequency of download), the monitoring server can predict the level of usage of the cache for future cache planning. At 503, the monitoring server checks whether the predicted level of usage and congestion is below a predetermined threshold. If so (Yes), the flow proceeds to 504, wherein no change is made to the locations and sizes of the caches in the network. Otherwise (No), the flow proceeds to 505, wherein the monitoring server determines the cause of congestion in other parts of the network. At 506, the monitoring server anticipates future problems by adding cache storage or modifying cache locations in the network.

[0045] FIG. 6 illustrates an example user equipment upon which example implementations can be implemented. The UE 600 may involve the following modules: the CPU module 601, the Tx/Rx array 602, the baseband processor 603, and the memory 604. The CPU module 601 can be configured to perform one or more functions, such as execution of the flows as described, for example, in FIG. 2 to execute an application to download a video from a preselected set of caches. The Tx/Rx array 602 may be implemented as an array of one or more antennas to communicate with the one or more base stations. The memory 604 can be configured to store congestion information and flow traffic. The baseband digital signal processing (DSP) module 603 can be configured to perform one or more functions, such as to conduct measurements to generate the position reference signal for the serving base station to estimate the location of the UE.
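Returning to the FIG. 5 flow of paragraph [0044], the following Python sketch shows the prediction and threshold check at 502-506, assuming a simple weighted-average usage predictor over the named features (popularity, download size, download frequency) and assumed thresholds. The model, weights, and values are illustrative assumptions, not the trained predictor of the example implementation.

def predict_usage(history, weights=(0.5, 0.3, 0.2)):
    """history: list of (popularity, size, frequency) samples, each scaled to [0, 1]."""
    w_pop, w_size, w_freq = weights
    return sum(w_pop * p + w_size * s + w_freq * f for p, s, f in history) / len(history)

def plan(predicted_usage, congestion, usage_th=0.8, congestion_th=0.7):
    if predicted_usage < usage_th and congestion < congestion_th:
        return ["no change to cache locations or sizes"]          # step 504
    return ["determine the cause of congestion",                  # step 505
            "add cache storage or modify cache locations"]        # step 506

history = [(0.95, 0.8, 0.9), (0.9, 0.7, 0.85)]
print(plan(predict_usage(history), congestion=0.4))
# ['determine the cause of congestion', 'add cache storage or modify cache locations']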
[0046] CPU module 601 can load an application from memory 604 to execute a computer program. In accordance with FIG. 2, CPU module 601 may be configured to process a status of each cache for a video in a network at an initiation of a video download, wherein the set of caches can be selected from the RAN, the CN, or the transit internet service provider/CDN as illustrated in FIG. 1. CPU module 601 may report the status of each cache in the form of QoE metrics and other information to the monitoring server for the monitoring server to dynamically determine a set of caches to meet the predetermined response time of the UE as indicated in Table 1. The predetermined response time can be set by an administrator of a server, by the owner of the UE, or by any other method as known to one of ordinary skill in the art.
[0047] CPU module 601 processes the preselected set of caches for the video download based on the information transmitted to the monitoring server and can initiate the download of the video from the preselected set of caches. The preselected set of caches can be set by the monitoring server based on received social media information relating the downloading UE to other UEs, by cache response time and by other methods as illustrated in Table 1.
[0048] When the downloading of the video fails to meet the predetermined response time (e.g., caches do not respond in a sufficient time), CPU module 601 may also transmit a report indicative of the preselected set of caches not meeting the predetermined response time to the monitoring server, whereupon the monitoring server may transmit a new set of caches for the UE to download from. CPU module 601 may then change the download to the new set of caches when received.
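The UE-side behavior of paragraphs [0046]-[0048] can be sketched in Python as follows, with stubbed measurement and reporting callbacks standing in for the intelligent reporting application and the monitoring server. The function names, single-retry policy, and sample data are assumptions for illustration.

PREDETERMINED_RESPONSE_MS = 15.0

def download_with_fallback(video_id, preselected, measure_fn, report_fn):
    """measure_fn(cache, video_id) -> observed response time in ms.
    report_fn(caches) -> replacement cache set returned by the monitoring server."""
    caches = list(preselected)
    for attempt in range(2):                    # one retry with a new cache set
        times = {c: measure_fn(c, video_id) for c in caches}
        if min(times.values()) <= PREDETERMINED_RESPONSE_MS:
            return min(times, key=times.get)    # download proceeds from this cache
        caches = report_fn(caches)              # report the miss, receive a new set
    return None                                 # give up; fall back to the origin CDN

# Example with stubbed measurement and reporting:
chosen = download_with_fallback(
    "v1", ["cache1"],
    measure_fn=lambda c, v: {"cache1": 30.0, "cache2": 9.0}[c],
    report_fn=lambda old: ["cache2"])
print(chosen)   # cache2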
[0049] FIG. 7 illustrates an example computing environment with an example computer device suitable for use in some example implementations, such as an apparatus to facilitate the functionality of a monitoring server. Computer device 705 in computing environment 700 can include one or more processing units, cores, or processors 710, memory 715 (e.g., RAM, ROM, and/or the like), internal storage 720 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 725, any of which can be coupled on a communication mechanism or bus 730 for communicating information or embedded in the computer device 705.
[0050] Computer device 705 can be communicatively coupled to input/user interface 735 and output device/interface 740. Either one or both of input/user interface 735 and output device/interface 740 can be a wired or wireless interface and can be detachable. Input/user interface 735 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like). Output device/interface 740 may include a display, television, monitor, printer, speaker, braille, or the like. In some example
implementations, input/user interface 735 and output device/interface 740 can be embedded with or physically coupled to the computer device 705. In other example implementations, other computer devices may function as or provide the functions of input/user interface 735 and output device/interface 740 for a computer device 705.
[0051] Examples of computer device 705 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
[0052] Computer device 705 can be communicatively coupled (e.g., via I/O interface 725) to external storage 745 and network 750 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration. Computer device 705 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
[0053] I/O interface 725 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and networks in computing environment 700. Network 750 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
[0054] Computer device 705 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media.
Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD-ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
[0055] Computer device 705 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments. Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
[0056] Processor(s) 710 can execute under any operating system (OS) (not shown), in a native or virtual environment. One or more applications can be deployed that include logic unit 760, application programming interface (API) unit 765, input unit 770, output unit 775, and inter-unit communication mechanism 795 for the different units to communicate with each other, with the OS, and with other applications (not shown). The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided.

[0057] In some example implementations, when information or an execution instruction is received by API unit 765, it may be communicated to one or more other units (e.g., logic unit 760, input unit 770, output unit 775). In some instances, logic unit 760 may be configured to control the information flow among the units and direct the services provided by API unit 765, input unit 770, and output unit 775 in some example implementations described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 760 alone or in conjunction with API unit 765. The input unit 770 may be configured to obtain input for the calculations described in the example implementations, and the output unit 775 may be configured to provide output based on the calculations described in example implementations.
[0058] I/O interface 725 may be configured to receive information associated with a RAN and to communicate with the RAN, including the eNodeBs and associated UEs. Memory 715 is configured to store information relating the one or more UEs to one or more RAN related metrics based on the information received through the I/O interface 725.
[0059] Memory 715 may be configured to store status information for one or more video caches of a network for a video and user equipment information regarding a UE downloading the video from a set of caches selected from the one or more video caches, as well as information regarding one or more UEs downloading a video from one or more sets of video caches as illustrated in Table 1.
[0060] Processor(s) 710 may be configured to determine, from the status information and the UE information regarding the one or more other UEs downloading the video, whether the set of video caches meets a predetermined response time for the UE based on the predetermined response time set by the administrator of the monitoring server or of the UE. Should the set of video caches not meet the predetermined response time, the processor(s) 710 may determine another set of video caches that meets the predetermined response time based on the example implementations above or any other desired implementation, and then communicate with the application of the UE to instruct the UE to continue downloading the video from the new set of video caches as illustrated in FIG. 2. Processor(s) 710 may then update the caches utilized by updating the status information stored in memory 715.

[0061] Similarly, processor(s) 710 may be configured to determine the usage level of the one or more video caches based on the UE information as illustrated in Table 1. If a particular cache is overused and exceeds a predetermined usage level as set by the administrator of the monitoring server (e.g., a set number of UEs), and if the level of congestion exceeds a predetermined congestion level as set by the administrator of the monitoring server, the processor(s) 710 may identify the locations in the network that contain the congested caches and adjust the size or the location of the video caches as illustrated in FIG. 4.
[0062] Processor(s) 710 may also predict a level of cache usage based on usage history as stored in memory 715 or as retrieved from a database such as a traffic history database. The predicted usage level can be calculated based on the implementations of FIG. 5 or by any other method as desired by the administrator. For the predicted usage level exceeding the predetermined usage level and for a level of congestion of the one or more video caches being below a predetermined congestion level, processor(s) 710 may conduct one of: adding cache storage to the one or more caches or modifying cache locations of the one or more cache locations, depending on the desired implementation. Further, processor(s) 710 may configure an application of the UE as described in FIG. 6 to download the video from the set of video caches and process the status information of one or more video caches of a network for the video from the UE to formulate the information as illustrated in Table 1. Based on the information, processor(s) 710 can determine whether the set of video caches meets a predetermined response time for the UE based on feedback from the application of the UE.
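A minimal Python sketch combining the two maintenance rules of paragraphs [0061]-[0062] is given below, assuming normalized usage, predicted usage, and congestion levels. The CacheState fields, threshold values, and returned action strings are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CacheState:
    name: str
    location: str
    usage: float        # observed usage level, scaled to [0, 1]
    predicted: float    # predicted usage level, scaled to [0, 1]
    congestion: float   # congestion level, scaled to [0, 1]

USAGE_TH, CONGESTION_TH = 0.8, 0.7

def maintenance_action(c: CacheState) -> str:
    # Reactive rule: observed overuse together with congestion (paragraph [0061]).
    if c.usage > USAGE_TH and c.congestion > CONGESTION_TH:
        return f"adjust size or relocate congested cache {c.name} at {c.location}"
    # Proactive rule: predicted overuse while congestion is still low (paragraph [0062]).
    if c.predicted > USAGE_TH and c.congestion < CONGESTION_TH:
        return f"proactively add storage to {c.name} or modify cache locations"
    return "no action"

print(maintenance_action(CacheState("cache1", "RAN-eNB3", 0.9, 0.95, 0.8)))
print(maintenance_action(CacheState("cache2", "CDN-east", 0.5, 0.85, 0.3)))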
[0063] Finally, some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.
[0064] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as
"processing," "computing," "calculating," "determining," "displaying," or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other information storage, transmission or display devices.
[0065] Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium.
[0066] A computer-readable storage medium may involve tangible mediums such as, but not limited to, optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer-readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
[0067] Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.
[0068] As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices
(hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.
[0069] Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the teachings of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.

Claims

CLAIMS:
1. An apparatus, comprising: a memory configured to store:
status information of one or more video caches of a network for a video;
user equipment (UE) information regarding a UE downloading the video from a set of video caches selected from the one or more video caches; and
a processor, configured to:
determine, based on the status information and the UE information regarding the one or more other UEs downloading the video, whether the set of video caches exceeds a response time for the UE; and
for the set of video caches exceeding the response time:
determine another set of video caches that does not exceed the response time, and,
configure the UE to continue downloading the video from the another set of video caches.
2. The apparatus of claim 1, wherein the processor is further configured to: determine a usage level of the one or more video caches based on the UE information, and
for the determined usage level exceeding a usage level and for a level of congestion of the one or more video caches exceeding a congestion level:
determine a location in the network associated with a congested one of the one or more video caches; and adjust one of a size and a location of the congested one of the one or more video caches.
3. The apparatus of claim 2, wherein the processor is further configured to predict usage level of the one or more video caches based on historical usage of the set of video caches; for the predicted usage level exceeding the usage level and for a level of congestion of the one or more video caches being below the congestion level, conduct one of: adding cache storage to the one or more caches or modifying cache locations of the one or more cache locations.
4. The apparatus of claim 1, wherein the processor is further configured to: configure an application of the UE to download the video from the set of video caches; process the status information of one or more video caches of a network for the video from the UE.
5. The apparatus of claim 4, wherein the processor is further configured to: determine whether the set of video caches meets a response time for the UE based on response time feedback from the application of the UE.
6. An apparatus, comprising: a memory configured to store:
status information of one or more video caches of a network for a video;
user equipment (UE) information regarding a UE downloading the video from a set of video caches selected from the one or more video caches; and
a processor, configured to:
determine usage level of the one or more video caches based on the UE information, and
for the determined usage level exceeding a usage level and for a level of congestion of the one or more video caches exceeding a congestion level:
determine a location in the network associated with a congested one of the one or more video caches; and adjust one of a size and a location of the congested one of the one or more video caches.
7. The apparatus of claim 6, wherein the processor is further configured to: determine, from the status information and the UE information regarding the one or more other UEs downloading the video, whether the set of video caches exceeds a response time for the UE; and
for the set of video caches exceeding the response time:
determine another set of video caches that does not exceed the response time, and,
configure the UE to continue downloading the video from the another set of video caches.
8. The apparatus of claim 7, wherein the processor is further configured to predict usage level of the one or more video caches based on historical usage of the set of video caches; for the predicted usage level exceeding the usage level and for a level of congestion of the one or more video caches exceeding the congestion level, conduct one of: adding cache storage to the one or more caches or modifying cache locations of the one or more cache locations.
9. The apparatus of claim 6, wherein the processor is further configured to: configure an application of the UE to download the video from the set of video caches; process the status information of one or more video caches of a network for the video from the UE.
10. The apparatus of claim 9, wherein the processor is further configured to: determine whether the set of video caches meets a response time for the UE based on response time feedback from the application of the UE.
11. A computer program, storing instructions for executing a process, the
instructions comprising: processing a status of each cache for a video in a network at an initiation of a video download;
receiving a preselected set of caches for a video based on the status of each cache for the video in the network;
downloading the video from the preselected set of caches; and
for the downloading of the video not meeting a response time, transmitting a report indicative of the preselected set of caches not meeting the response time.
12. The computer program of claim 11, wherein the instructions further comprise: for the preselected set of caches not meeting the response time:
obtaining another set of caches for the video download; and
changing the downloading of the video from the preselected set of caches to the another set of caches.
13. The computer program of claim 12, further comprising transmitting social media information indicative of other UEs related over social media to a UE downloading the video, and wherein the another set of caches is based on the social media information.
14. The computer program of claim 11, wherein the preselected set of caches are selected from at least a radio access network (RAN) and a core network (CN).
15. The computer program of claim 11, further comprising transmitting the status of each cache for the video in the network to obtain the preselected set of caches.
PCT/US2015/044262 2015-08-07 2015-08-07 Dynamic caching and predictive maintenance for video streaming WO2017026991A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/044262 WO2017026991A1 (en) 2015-08-07 2015-08-07 Dynamic caching and predictive maintenance for video streaming


Publications (1)

Publication Number Publication Date
WO2017026991A1 true WO2017026991A1 (en) 2017-02-16

Family

ID=57984623


Country Status (1)

Country Link
WO (1) WO2017026991A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060155759A1 (en) * 2004-12-29 2006-07-13 Yahoo! Inc. Scalable cache layer for accessing blog content
US20140047109A1 (en) * 2009-12-22 2014-02-13 At&T Intellectual Property I, L.P. Integrated Adaptive Anycast For Content Distribution
US8825962B1 (en) * 2010-04-20 2014-09-02 Facebook, Inc. Push-based cache invalidation notification


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109729314A (en) * 2018-12-24 2019-05-07 Zhejiang Dahua Technology Co., Ltd. Video processing method and apparatus, electronic device and storage medium
CN112040302A (en) * 2019-06-03 2020-12-04 Alibaba Group Holding Limited Video buffering method and device, electronic equipment and computer readable storage medium
CN112040302B (en) * 2019-06-03 2023-01-03 UCWeb Inc. Video buffering method and device, electronic equipment and computer readable storage medium
CN112752117A (en) * 2020-12-30 2021-05-04 Bigo Technology Pte. Ltd. Video caching method, device, equipment and storage medium


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15901110

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15901110

Country of ref document: EP

Kind code of ref document: A1