Please upload your submission as a ZIP archive. The archive must contain a folder named submit, which must include:
- Algorithm source file: ABR.py
- Machine learning model (optional)
Reference submission: submit_demo.zip
In recent years, a new breed of video services that support live video broadcast has become tremendously popular. These services allow users to broadcast live video over the Internet and interact with their viewers, and have many applications, including journalism and education.
Live video streaming over chunk-based HTTP streaming protocols, such as DASH, faces many new technical challenges compared to on-demand streaming of pre-recorded video. First, it requires low end-to-end latency for real-time interaction between the broadcaster and the viewers, while still maintaining few rebuffering events and high video quality. Second, for a better user experience, it is especially important to ensure the stability of transmission during the live broadcast. Third, these challenges are compounded by the fact that only a few seconds of video are available ahead of the playback position at any moment, unlike on-demand streaming of pre-recorded video, which means there is less information that can be used to make optimal streaming decisions.
To encourage the research community to come together and address the challenges of live video streaming over DASH, we organize a new live video streaming challenge at ACM Multimedia 2019. We will provide a simulator platform, a set of video traces, a set of network traces, and a set of common evaluation metrics, which the challenge participants can use to implement and evaluate their live video streaming algorithms. We hope that the platform and dataset will serve as a common tool for researchers to benchmark their algorithms with each other and thus contribute towards reproducible research.
For this grand challenge, we consider the following scenario for live streaming. A streamer captures and generates a live video stream (either through a mobile phone or a PC). The video stream is uploaded to a transcoding server, which re-encodes the same video into multiple representations, each with a different bitrate and quality level. Each representation is then transmitted to CDN (content delivery network) nodes, which act as edge servers. The client issues pull requests to one of the CDN nodes, indicating which representation to download. The corresponding representation is then sent to the client and is buffered before playback.
Figure 1: Universal framework for live broadcast scenarios
The main task for this grand challenge is to design an algorithm that runs at the client and decides which representation to download, what playback rate to use, and whether to skip any frames.
The client decides which representation to download given the current network throughput.
Ideally, the client downloads the representation with the highest quality and bitrate. Playback of a representation with higher bitrate improves the viewer's quality of experience (QoE). Downloading a higher-bitrate representation, however, fills the buffer more slowly, increasing the risk of the buffer being drained. An empty buffer causes a playback stall, damaging the QoE. Frequent switching between representations of different quality also negatively affects the QoE. The key issue is thus to carefully decide which representation to download so as to improve quality while reducing stalls and the number of quality switches, given the current throughput. This decision is especially challenging in the context of live streaming, as the buffer is kept small to reduce end-to-end latency, which increases the likelihood of stalls.
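The trade-off above can be sketched as a simple throughput-based selection rule. This is a minimal illustrative sketch, not the challenge's reference algorithm: the bitrate ladder, safety factor, and buffer threshold below are all hypothetical placeholders.

```python
# Minimal throughput-based bitrate selection (illustrative sketch).
# BITRATES_KBPS and SAFETY are hypothetical values, not official
# challenge settings.

BITRATES_KBPS = [500, 1200, 2500, 4000]  # hypothetical bitrate ladder
SAFETY = 0.8  # stay below the estimated throughput to hedge against drops

def select_bitrate(throughput_kbps, buffer_s, last_index):
    """Pick the highest bitrate sustainable at the estimated throughput,
    but change quality only one level at a time to limit switches, and
    drop to the lowest level when the buffer is nearly drained."""
    if buffer_s < 1.0:  # buffer about to empty: prioritize avoiding a stall
        return 0
    target = throughput_kbps * SAFETY
    candidate = 0
    for i, rate in enumerate(BITRATES_KBPS):
        if rate <= target:
            candidate = i
    # limit the switch magnitude to one step per decision
    if candidate > last_index + 1:
        candidate = last_index + 1
    elif candidate < last_index - 1:
        candidate = last_index - 1
    return candidate
```

Real ABR algorithms combine such throughput estimates with buffer occupancy and, in this challenge, latency targets; the sketch only shows the basic shape of the decision.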
Current apps for live video broadcasting often support interaction with users and are therefore delay-sensitive. Buffering too many segments increases the end-to-end delay and hurts the interaction between the user and the streamer. On the other hand, buffering too few segments increases the likelihood of a playback stall. To control the end-to-end latency, the client can adopt two mechanisms:
The client can slow down its playback if necessary to avoid a stall or reduce its duration (e.g., when the buffer is about to be drained).
The client can speed up its playback if necessary to catch up and reduce the end-to-end delay.
A stall in playback will inevitably increase the end-to-end latency.
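The two mechanisms can be combined into a simple buffer-driven playback-rate controller. The thresholds and playback rates below are hypothetical examples, not values mandated by the challenge.

```python
# Illustrative buffer-driven playback-rate controller (a sketch; the
# thresholds and rates are hypothetical, not challenge-mandated).

def playback_rate(buffer_s, low=1.0, high=3.0):
    """Slow playback when the buffer is nearly drained (to avoid a
    stall), speed it up when the buffer grows (to cut end-to-end
    latency), and play at normal speed in between."""
    if buffer_s < low:
        return 0.95   # slow down slightly when the buffer is nearly empty
    if buffer_s > high:
        return 1.05   # speed up to catch up with the live edge
    return 1.0        # normal playback
```

The small deviations from 1.0 keep the speed change imperceptible to the viewer while still steering the buffer toward the target latency.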