Transcript
Performance of P2P Live Video Streaming Systems on a Controlled Test-bed

Sachin Agarwal (1), Jatinder Pal Singh (1), Aditya Mavlankar (2), Pierpaolo Baccichet (2), and Bernd Girod (2)

(1) Deutsche Telekom A.G., Laboratories, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
(2) Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA

March 26, 2008
Outline

- Introduction to P2P live video streaming
- Testbed setup and specifications
- Parameters and measures of interest
- Using the testbed: experimental results
- Conclusions
P2P live video streaming

- P2P overlay network to stream video content
- Bandwidth comes from the end-users: a low-cost content delivery technology (?)
- Several commercial implementations
- Increasing content/channels available
Contributions

- Aim
  - Quantifying the performance of P2P video streaming systems in an Internet-like environment
- Challenges
  - Determining the parameters useful in measuring P2P system performance, and then logging the appropriate data to calculate these parameters
  - Repeating nearly identical network conditions for each tested system in order to be fair
  - Varying network conditions in order to test the systems' performance under different conditions
  - Deploying each P2P system on multiple hosts and controlling it from a centralized location
Setup Details

- 48 peer computers configured with heterogeneous up-link bandwidth, packet delay, delay jitter, and packet loss rate
- Emulated peer churn, i.e., peers joining and leaving the system, using a random on-off model (see the sketch after this list)
- A 30-minute H.264 test video, encoded at a constant bit rate of 400 kbps and 25 frames per second
- Results are presented for 3 P2P video streaming systems: System A, System B, and System C
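As a rough illustration of the churn emulation, the sketch below draws each peer's on and off durations from exponential distributions. The distribution family, the mean durations, and the per-peer seeding are assumptions made for illustration; the slides state only that a random on-off model was used.

```python
import random

def churn_schedule(session_length_s, mean_on_s=300.0, mean_off_s=120.0, seed=None):
    """Generate (join_time, leave_time) intervals for one peer over a session.

    On/off durations are drawn from exponential distributions; the actual
    test-bed's distributions and mean durations are not specified in the slides.
    """
    rng = random.Random(seed)
    intervals = []
    t = rng.expovariate(1.0 / mean_off_s)  # the peer may start in an "off" state
    while t < session_length_s:
        on = rng.expovariate(1.0 / mean_on_s)
        leave = min(t + on, session_length_s)
        intervals.append((t, leave))
        t = leave + rng.expovariate(1.0 / mean_off_s)
    return intervals

# Example: one churn schedule per peer for a 30-minute (1800 s) session.
schedules = {peer: churn_schedule(1800, seed=peer) for peer in range(48)}
print(schedules[0])  # [(t_join_1, t_leave_1), (t_join_2, t_leave_2), ...]
```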
Network Setup

[Figure: Network setup: the physical and "emulated" network layout of the controlled test-bed. Hosts are located in Berlin, Germany (servers 1 and 2 at the test center, 15 emulated high-speed-broadband peers, and 8 peers on real high-speed broadband), Stanford, CA (22 emulated high-speed-broadband peers), and TU Munich, Germany (3 emulated high-speed-broadband peers), interconnected over the Internet through an ISP datacenter in Erfurt, Germany, with a 52 Mbps link at the test center. Annotations such as "576 X 9, 1024 X 5, 2048 X 1" give the emulated up-link bandwidth distributions (kbps X number of peers). The clouds represent emulated clients while the colored enclosures are the physical locations of the hosts in the data centers.]
Network Emulation Parameters

Table: NISTnet network model: average delay (ms), jitter (ms), and packet loss rate (PLR) between hosts at different locations, measured using Abing and data from T-Statistics.

Link                                                            Delay (ms)   Jitter (ms)   PLR
server to Berlin                                                24.17        4.8           0.001
server to Stanford, Berlin to Stanford, & Munich to Stanford    109          24            0.001
server to Munich, & Berlin to Munich                            29           4             0.0001
DSL to DSL                                                      29           4             0.0005
within Munich                                                   0.4          0.1           0.0001
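The test-bed applied these impairments with NISTnet. Since NISTnet's command syntax varies by version, the sketch below reproduces one row of the table with the standard Linux tc/netem tooling instead, as a stand-in; the interface name and the subprocess wrapper are assumptions for illustration.

```python
import subprocess

def shape(iface, delay_ms, jitter_ms, loss_rate, rate_kbit=None):
    """Apply delay, jitter, and loss (and optionally an up-link rate cap)
    on a network interface using Linux tc/netem -- a stand-in for the
    NISTnet shaping used on the test-bed. loss_rate is a fraction,
    e.g. 0.001 -> 0.1 %.
    """
    cmd = ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
           "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
           "loss", f"{loss_rate * 100}%"]
    if rate_kbit is not None:
        cmd += ["rate", f"{rate_kbit}kbit"]  # netem's built-in rate limiter (recent kernels)
    subprocess.run(cmd, check=True)

# Example: emulate the server-to-Stanford path on a 576 kbps up-link peer.
# "eth0" is a placeholder interface name.
shape("eth0", delay_ms=109, jitter_ms=24, loss_rate=0.001, rate_kbit=576)
```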
Peer computer setup

- Peer computers were controlled through a central "Hobbit" interface
- The P2P client software was installed on a virtual machine on each computer
- The NISTnet traffic shaper was installed on another virtual machine, and traffic from the P2P VM was routed through the NISTnet VM
Comparison basis: interest to the ISP

Parameters of interest to the ISP:

Efficiency                  Protocol overhead due to duplicates and control traffic
Server vs. P2P bandwidth    Comparison of bytes received from the server to bytes received from other peers
Overlay protocol effects    Traffic pattern of the P2P video streaming system
Comparison basis: interest to the end-user

Parameters of interest to the end-user:

Startup delay    Time taken from executing the P2P client's startup command till the first video stream bytes are received
Effectiveness    Received video as a percentage of the minimum video stream required for maximum-quality playback
PSNR drop        Drop in video quality, measured as the reduction in average peak signal-to-noise ratio
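A minimal sketch of how these per-peer measures could be computed from logged byte counts and per-frame quality values. The function and field names are assumptions, since the slides do not describe the logging format.

```python
def peer_metrics(video_bytes, total_bytes, server_bytes, peer_bytes,
                 psnr_ref_db, psnr_recv_db, ideal_stream_bytes):
    """Per-peer measures used on the test-bed (field names are illustrative).

    ideal_stream_bytes: minimum video stream required for maximum-quality
    playback over the peer's on-periods (400 kbps x time online).
    psnr_ref_db / psnr_recv_db: per-frame PSNR of the source encoding and of
    the frames actually displayed at the peer, in dB.
    """
    return {
        # Effectiveness: received video as % of the minimum required stream.
        "effectiveness_pct": 100.0 * video_bytes / ideal_stream_bytes,
        # Efficiency overhead: duplicates + control traffic as % of the same.
        "overhead_pct": 100.0 * (total_bytes - video_bytes) / ideal_stream_bytes,
        # Server vs. P2P bandwidth contribution.
        "server_fraction": server_bytes / (server_bytes + peer_bytes),
        # PSNR drop: average quality loss over displayed frames, in dB.
        "psnr_drop_db": (sum(r - d for r, d in zip(psnr_ref_db, psnr_recv_db))
                         / len(psnr_ref_db)),
    }
```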
Efficiency and effectiveness, traffic shaping enabled

- Blue bars indicate the video stream downloaded as a percentage of the minimum video stream required for maximum-quality playback
- Empty white bars indicate protocol overhead due to duplicates and control traffic as a percentage of the minimum video stream required for maximum-quality playback
[Figure: Three bar charts, one each for System A, System B, and System C. x-axis: Client ID (0-50); y-axis: percent, as a percentage of ideal video stream bytes required (0-200). Legend: "Total" and "Served to media decoder".]
Efficiency and effectiveness, traffic shaping disabled

- Blue bars indicate the video stream downloaded as a percentage of the minimum video stream required for maximum-quality playback
- Empty white bars indicate protocol overhead due to duplicates and control traffic as a percentage of the minimum video stream required for maximum-quality playback
[Figure: Three bar charts, one each for System A, System B, and System C. x-axis: Client ID (0-50); y-axis: percent, as a percentage of ideal video stream bytes required (0-200). Legend: "Total" and "Served to media decoder".]
Startup delay, traffic shaping enabled

- The time taken from executing the P2P client's startup command till the first video stream bytes are received. Since the on-off model is employed, note that some clients switch on and off multiple times.
[Figure: Three plots, one each for System A, System B, and System C. x-axis: Client ID (0-50); y-axis: data reception time in seconds (0-40). Legend: first, second, and third startup. Average first startup time: System A = 9.3775 s, System B = 7.2815 s, System C = 1.8367 s.]
Startup delay, traffic shaping disabled

- The time taken from executing the P2P client's startup command till the first video stream bytes are received. Since the on-off model is employed, note that some clients switch on and off multiple times.
- A pre-roll delay is introduced for buffering after the first bytes are received.
[Figure: Three plots, one each for System A, System B, and System C. x-axis: Client ID (0-50); y-axis: data reception time in seconds (0-40). Legend: first, second, and third startup. Average first startup time: System A = 10.124 s, System B = 7.7283 s, System C = 1.8323 s.]
Server vs. P2P bandwidth contribution, traffic shaping enabled

- Comparison of bytes received from the server to bytes received from other peers. Since the on-off model is employed, a peer might not need to download the entire video file. Some bars exceed the scale employed.
[Figure: Three bar charts, one each for System A, System B, and System C. x-axis: Client ID (0-50); y-axis: MBytes (0-200). Legend: "Downloaded from servers" and "Downloaded from other peers".]
Server vs. P2P bandwidth contribution, traffic shaping disabled

- Comparison of bytes received from the server to bytes received from other peers. Since the on-off model is employed, a peer might not need to download the entire video file. Some bars exceed the scale employed.
[Figure: Three bar charts, one each for System A, System B, and System C. x-axis: Client ID (0-50); y-axis: MBytes (0-200). Legend: "Downloaded from servers" and "Downloaded from other peers".]
PSNR drop, traffic shaping enabled

- Average drop in video quality for all tested peers. Some bars exceed the scale employed.
[Figure: Three bar charts, one each for System A, System B, and System C. x-axis: Client ID (0-50); y-axis: average drop in PSNR for displayed frames, plotted on a 0-6 dB scale for Systems A and B and a 0-35 dB scale for System C.]
PSNR drop, traffic shaping disabled

- Average drop in video quality over all tested peers. Some bars exceed the scale employed.
[Figure: Three bar charts, one each for System A, System B, and System C. x-axis: Client ID (0-50); y-axis: average drop in PSNR for displayed frames, plotted on a 0-6 dB scale for Systems A and B and a 0-35 dB scale for System C.]
Tree vs. mesh overlay, traffic shaping enabled

- Total bytes received at each peer from every other peer during the P2P streaming session with Systems A and B

[Figure: System A (tree overlay)]  [Figure: System B (mesh overlay)]
Conclusions

- Successfully deployed a "repeatable" Internet testbed for P2P testing
- Large variation in the video quality delivered, depending on the P2P video streaming system
- Large variation in the network resource usage, depending on the P2P video streaming system
- P2P video streaming is a very network-resource-intensive service
- Careful selection of P2P video streaming systems and the underlying protocols can make P2P video streaming more scalable and increase the quality delivered to end-users