Introduction
Short video is a new form of user-generated video shared on online social platforms. However, it is worth noting that short video companies incur substantial bandwidth costs. Saving bandwidth overhead without reducing user quality of experience (QoE) has therefore become an important issue. In short video applications, users watch videos in a slide-and-watch mode. To ensure user QoE, the current video and the videos in the recommendation queue need to be preloaded. However, if the user slides away, the downloaded but unwatched data does not contribute to user QoE, which results in wasted bandwidth.
The challenge of reducing the bandwidth wastage is to match the video download mechanism with user viewing behavior and network conditions. This problem is challenging as bandwidth savings conflict with user QoE improvement. Firstly, the download data volume for a certain video is difficult to determine, as user viewing behavior is unknown and difficult to model. Moreover, the user's viewing behavior varies greatly, with many factors influencing it such as content and viewing history. Secondly, conflicts exist between the downloads of different videos and the download sequence of videos is difficult to decide. Finally, selecting the right bitrate can be very challenging, because the network changes dynamically and it is hard to predict.
We will provide a simulator platform, a set of video traces, a set of network traces, and a set of common evaluation metrics, which the challenge participants can use to implement and evaluate their algorithms. We hope that the platform and dataset will serve as a common tool for researchers to benchmark their algorithms with each other and thus contribute towards reproducible research.
Grand Challenge Scenario
For this grand challenge, we consider the following scenario for short video applications. Users generate content and upload the videos, which are then processed and distributed to content delivery network (CDN) nodes. At the client side, in addition to the currently playing video, some other videos are placed in the recommendation queue. Users can thus watch videos in a slide-and-watch mode, that is, the user can slide away while watching the current video and continue with the next one.
To ensure user QoE, the current video is pre-buffered and videos in the recommendation queue need to be preloaded. However, if the user slides away, the downloaded but unwatched data are wasted.
The main task for this grand challenge is to design an algorithm that decides which video chunk to download and at which bitrate; in addition, to reduce bandwidth waste, it should also determine the pause time.
1. Decide which chunk to download next and its bitrate. Based on the network conditions and the buffers of the videos, determine which chunk should be downloaded next and at which bitrate, taking user QoE into consideration.
2. Decide the pause time during which the download process stops. Taking bandwidth waste into account, the download may be paused when the network condition is good.
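The two decisions above can be sketched as a single control step. The sketch below is purely illustrative: the class, field, and function names are hypothetical and do not belong to the simulator's API.

```python
# Illustrative sketch of one control step for a short video preloader.
# All names (VideoState, control_step, ...) are hypothetical, not the challenge API.
from dataclasses import dataclass

@dataclass
class VideoState:
    video_id: int
    buffer_s: float        # seconds downloaded but not yet watched
    next_chunk: int        # index of the next chunk to download
    retention: float       # estimated probability the user is still watching

def control_step(videos, est_bandwidth_kbps, bitrates_kbps,
                 target_buffer_s=4.0, chunk_s=1.0):
    """Return (video_id, chunk_index, bitrate, pause_ms) for the next download."""
    # 1. Pick the video whose next chunk is most valuable:
    #    favor low buffers and high retention probability.
    best = max(videos, key=lambda v: v.retention / (v.buffer_s + chunk_s))
    # 2. If every buffer is comfortably full, pause to avoid wasted bandwidth.
    if all(v.buffer_s >= target_buffer_s for v in videos):
        return None, None, None, 500.0   # pause 500 ms, download nothing
    # 3. Rate-based bitrate choice: highest bitrate below estimated bandwidth.
    feasible = [b for b in bitrates_kbps if b <= est_bandwidth_kbps]
    bitrate = max(feasible) if feasible else min(bitrates_kbps)
    return best.video_id, best.next_chunk, bitrate, 0.0
```

A real solution would of course replace the heuristics in steps 1–3 with learned or model-based estimates of retention and throughput.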
Task Description
For 2022’s Multimedia Grand Challenge, we would like to encourage the multimedia systems community to take on the challenge of designing short video preload algorithms that decide which video chunk to download and at which bitrate for better QoE. To save bandwidth when necessary, the algorithm should also determine the pause time, during which the download process is stopped. The contestants are asked to develop a short video preload algorithm and integrate it into our simulator, which is then evaluated with the given video traces, user traces, and network traces. Finally, the wasted data and the achieved QoE are reported.
Platform
We have set up a multi-video simulator to simulate a video player for this challenge. Given video trace, user trace, and network trace as inputs, the goal is to automatically decide the download chunk and its bitrate as well as the pause time to minimize the bandwidth wastage without reducing the QoE.
The traces provided to the contestants in this grand challenge are divided into three parts as follows:
1. User trace: user retention data for each video.
2. Network trace: sampled from real networks to record the network state.
3. Video trace: describes the chunks of each video, including the chunk sizes.
Details of the platform and dataset can be found at https://github.com/AItransCompetition/Short-Video-Streaming-Challenge. Participants can replace the file solution.py with their own implementation of the control algorithm. The simulator will then go through the video traces, the user retention data corresponding to each video, and the network traces to simulate the playback and viewing process with the uploaded solution.py.
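As an assumption-laden illustration (the actual method names and arguments are defined by the simulator in the repository, so check them there before submitting), a naive solution.py might be structured like this:

```python
# Hypothetical skeleton of solution.py; verify the real method
# signatures against the simulator repository before submitting.
class Algorithm:
    def __init__(self):
        self.bandwidth_estimate = 0.0  # smoothed throughput estimate

    def Initialize(self):
        # Called once before the simulation starts.
        self.bandwidth_estimate = 0.0

    def run(self, delay, rebuf, video_size, end_of_video,
            play_video_id, players):
        # Update the throughput estimate from the last download (EWMA).
        if delay > 0:
            sample = video_size * 8 / delay
            self.bandwidth_estimate = (0.8 * self.bandwidth_estimate
                                       + 0.2 * sample)
        # Deliberately naive baseline: keep downloading the currently
        # playing video at the lowest bitrate, and never pause.
        download_video_id = play_video_id
        bit_rate = 0
        sleep_time = 0.0
        return download_video_id, bit_rate, sleep_time
```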
Evaluation Metrics
The QoE is evaluated as follows. We leverage the simple and widely used QoE model for this grand challenge:

QoE_i = Σ_j ( w1 · bitrate_j − w2 · rebuffer_j − w3 · |bitrate_j − bitrate_{j−1}| ),

which represents the QoE of video i; the sum runs over all watched chunks {j}. The final score of video i on a certain user viewing process and network trace is

S_i = QoE_i − w4 · Σ_k bandwidth_usage_k,

where bandwidth_usage_k is the bandwidth used by chunk k and the sum is calculated over all downloaded chunks. The coefficients w1, w2, w3, and w4 above are set to 1, 1.85, 1, and 0.5, respectively.
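Under this model, the per-video score can be computed as in the sketch below. The chunk-level terms (bitrate utility, rebuffering penalty, and a smoothness penalty between consecutive chunks) follow the standard QoE formulation and are an assumption here; only the coefficients w1 = 1, w2 = 1.85, w3 = 1, and w4 = 0.5 come from the challenge description.

```python
# Coefficients stated in the challenge description.
W1, W2, W3, W4 = 1.0, 1.85, 1.0, 0.5

def video_score(watched, downloaded):
    """watched: list of (bitrate, rebuffer_time) for chunks actually viewed,
    in playback order; downloaded: list of bandwidth_usage values for every
    chunk downloaded (watched or not)."""
    qoe = 0.0
    prev_bitrate = None
    for bitrate, rebuffer in watched:
        qoe += W1 * bitrate - W2 * rebuffer
        if prev_bitrate is not None:
            # Smoothness penalty on consecutive bitrate switches (assumed form).
            qoe -= W3 * abs(bitrate - prev_bitrate)
        prev_bitrate = bitrate
    # Final score: QoE minus the bandwidth-usage penalty.
    return qoe - W4 * sum(downloaded)
```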
For an algorithm submitted by a contestant, we run it under each condition (a video sequence, a network trace, and a user viewing behavior) and get the score S. Then we normalize the scores of all contestants under this condition, that is, calculate Normalized_S = (S − MIN)/(MAX − MIN), where MAX is the maximum score of all contestants under this condition and MIN is the score of the baseline algorithm we implemented. The final score of an algorithm is the sum of Normalized_S under all conditions, denoted as Sum_normal_S. The teams are ranked according to Sum_normal_S, and the rank of a team is the highest ranking of all algorithms it submitted in the same testing phase.
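The normalization and ranking metric described above reduce to a few lines; the tuple layout of the per-condition inputs is an assumption of this sketch.

```python
def normalized_score(score, max_score, baseline_score):
    """Normalized_S = (S - MIN) / (MAX - MIN): MAX is the best contestant
    score under this condition, MIN is the baseline algorithm's score.
    Scores below the baseline come out negative."""
    return (score - baseline_score) / (max_score - baseline_score)

def sum_normal_s(per_condition):
    """per_condition: list of (score, max_score, baseline_score) tuples,
    one per test condition; returns the ranking metric Sum_normal_S."""
    return sum(normalized_score(s, mx, mn) for s, mx, mn in per_condition)
```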
Apart from the Sum_normal_S, we also provide Sum_S which is the sum of the score S under all test conditions. Sum_S is presented to show the performance of the algorithm and is not used for ranking.
Participation
The Challenge is a team-based competition. Each team can have one or more members (up to 3, i.e. 1 team leader and 2 other team members). If there are multiple members in the team, you can use the information of the team leader to register and fill in the information of other members in Team Members. Each individual can only be part of one team.
The Short Video Streaming Multimedia Challenge 2022 is open to the public. Interested individuals, researchers/developers of tertiary education, research institutes or organizations from different sectors and fields are welcome to take part independently or as a team.
Disclaimer: The organizers reserve the right to disqualify participants if the information provided on the application is inaccurate or misleading. The dataset provided by the organizer can only be used for this challenge and its related research activities. Commercial use is strictly prohibited.
At the end of the Challenge, all teams will be ranked based on both objective evaluation and subjective human evaluation criteria described above. The top three performing teams will receive award certificates and cash prizes:
- First Prize: $3000
- Second Prize: $1000
- Third Prize: $500
At the same time, all accepted submissions are qualified for the conference’s grand challenge award competition. Please note that each winning team for the cash prizes is required to open source their proposed solution on Github before qualifying to receive the cash prizes.
Timeline
Unless otherwise stated, all deadlines are at 23:59 CST, UTC+8.
Please follow the instructions on the main conference website.
- Registration Open: March 20, 2022
- Registration Deadline: April 27, 2022
- Competition Begin: April 30, 2022
- Evaluation Begin: May 26, 2022
- Competition Deadline: May 28, 2022
- Paper Submission Deadline: June 18, 2022
* Scores of teams' solutions depend on the evaluation stage. We will open the submission system a few weeks before the start of the competition so that you can try the task. The dataset used in the competition will be activated when the competition begins.
The competition is divided into the following phases:
- Competition-Training
- Competition-Evaluation-1
- Competition-Evaluation-2
- Competition-Evaluation-3
The leaderboard is updated every half hour.
2022.5.6 Update
1. The test video sequence of this test phase contains 9 videos.
2. The execution time of the simulator is accurate to the millisecond.
2022.5.9 Update
1. The network traces used in the current phase and in the final evaluation phase are all collected from the real world, while the public network traces are generated using a hidden Markov chain. If needed, contestants may try to find public real-world network traces for training.
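For intuition, a hidden-Markov bandwidth generator of the kind described might look like the following sketch; the states, mean rates, transition matrix, and noise range are invented for illustration and are not the organizers' actual parameters.

```python
import random

# Illustrative 3-state Markov model of network bandwidth (kbps).
# States, means, and transition probabilities are invented for this sketch.
STATE_MEAN_KBPS = [800.0, 2000.0, 5000.0]          # poor / medium / good
TRANSITIONS = [
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
]

def generate_trace(n_samples, seed=0):
    """Return a list of n_samples bandwidth values drawn from the HMM."""
    rng = random.Random(seed)
    state = rng.randrange(len(STATE_MEAN_KBPS))
    trace = []
    for _ in range(n_samples):
        # Emit the state mean plus +/-20% uniform noise.
        trace.append(STATE_MEAN_KBPS[state] * rng.uniform(0.8, 1.2))
        # Move to the next hidden state.
        state = rng.choices(range(len(TRANSITIONS)),
                            weights=TRANSITIONS[state])[0]
    return trace
```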
2. There is one video sequence, 75 (3 × 25) network traces, and 50 user samples in the first test phase, so the highest possible Sum_normal_S in this phase is 3750 (75 × 50 conditions, each worth at most 1).
2022.5.12 Update
1. The second phase of testing will begin on May 15 (UTC+8) and last until May 24 (UTC+8). As in the first phase, the scores in this phase do not count toward the final score.
2. In the second phase, we will replace the network traces and test videos. There are three categories of networks, 25 traces for each category, and the number of test videos is 9.
2022.5.14 Update
1. To guard against abnormal submissions, we fix the MIN value in the Normalized_S calculation for each test condition by replacing MIN with the score of a simple baseline algorithm we implemented. A score below MIN in a test condition yields a negative Normalized_S.
2022.5.21 Update
1. The final evaluation phase is scheduled for three days from May 26th to May 28th, CST. During the evaluation phase, network traces and video traces are changed every day, and contestants can submit 3 times a day. The final ranking is determined based on the sum of the ranking of the three days.
2. The user's departure time for each video must not be obtained in any way; any team found trying to obtain this departure time will be disqualified from the grand challenge. The same applies to the network traces.
| rank | teamname | rank-day1 | rank-day2 | rank-day3 | rank-sum |
| --- | --- | --- | --- | --- | --- |
| 1 | kuai2022 | 1 | 1 | 1 | 3 |
| 2 | No1 | 2 | 2 | 2 | 6 |
| 3 | sky_light | 5 | 3 | 3 | 11 |
| 4 | EVA00 | 6 | 4 | 5 | 15 |
| 5 | MC2 | 7 | 5 | 4 | 16 |
| 6 | Polaris | 4 | 6 | 6 | 16 |
| 7 | One | 3 | 13 | 7 | 23 |
| 8 | YJGY | 10 | 7 | 8 | 25 |
| 9 | ParttimeJob | 11 | 10 | 13 | 34 |
| 10 | Sail | 9 | 9 | 16 | 34 |
| 11 | Incendio | 12 | 11 | 14 | 37 |
| 12 | XXX | 8 | 18 | 11 | 37 |
| 13 | Forward | 21 | 8 | 10 | 39 |
| 14 | Binary linear equations | 14 | 14 | 15 | 43 |
| 15 | Reparo | 13 | 12 | 18 | 43 |
| 16 | Jump | 16 | 17 | 12 | 45 |
| 17 | YJ_GY | 15 | 22 | 9 | 46 |
| 18 | shakalaka_team | 19 | 16 | 20 | 55 |
| 19 | Navigator | 22 | 15 | 19 | 56 |
| 20 | Bingo | 17 | 19 | 23 | 59 |
| 21 | NewTeamA | 20 | 21 | 24 | 65 |
| 22 | nknetlab | 18 | 26 | 21 | 65 |
| 23 | D404NotFound | 25 | 23 | 25 | 73 |
| 24 | Steins | 24 | 24 | 26 | 74 |
| 25 | Kangaroo | 26 | 25 | 27 | 78 |
| 26 | Hacker Alliance | 27 | 27 | 28 | 82 |
| 27 | the avengers | 28 | 28 | 29 | 85 |
For teams with the same rank sum, we rank them by the sum of their three Sum_normal_S values.