TCP On-Off Traffic Experiment

Congestion control algorithms (CCAs) are challenged by competing on-off traffic patterns. When competitors join or leave, the offered load of competing flows changes abruptly. A CCA should yield bandwidth to newly arriving competitors and reclaim the freed bandwidth once a competitor leaves.

In the TCP on-off traffic experiment, TCP flows start and finish throughout the duration of the experiment. All TCP flows use the CCA under test. CCAs should adapt gracefully to changes in the level of competition; maintaining a fair distribution of bandwidth among flows in such a scenario is challenging.

Scenario

In the TCP on-off traffic experiment, multiple flows operate in a static dumbbell network. Each flow generates greedy source traffic and uses either TCP with the CCA under test or UDP. The number of flows is set with the parameter k. The start and stop times can be set for each flow individually with the parameters start_times and stop_times, respectively.

To summarize the experiment setup:

  • Topology: Dumbbell topology with static network parameters

  • Flows: Multiple TCP flows (\(k>1\)) that use the CCA under test, with per-flow start and stop times

  • Traffic Generation Model: Greedy source traffic
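With the staggered schedule used in this experiment (flow i active from i s to i+5 s), the level of competition ramps up and back down over time. The following sketch computes the number of active flows and the idealized equal split of the 96 Mbps bottleneck at each second; it is an illustration of the schedule, not of what a CCA actually achieves.

```python
# Sketch: idealized fair share over time for the staggered on-off schedule.
# Assumes the 96 Mbps bottleneck and the start/stop times from the
# experiment command; the fair share is the equal split of the bottleneck.

BW_MBPS = 96.0
start_times = [0, 1, 2, 3, 4, 5]   # seconds, flow_1 .. flow_6
stop_times  = [5, 6, 7, 8, 9, 10]

def active_flows(t):
    """Number of flows whose [start, stop) interval contains time t."""
    return sum(1 for s, e in zip(start_times, stop_times) if s <= t < e)

def fair_share(t):
    """Idealized per-flow share of the bottleneck at time t (Mbps)."""
    n = active_flows(t)
    return BW_MBPS / n if n else 0.0

for t in range(0, 11):
    print(f"t={t:2d}s  active={active_flows(t)}  fair_share={fair_share(t):5.1f} Mbps")
```

Note that competition peaks at five concurrent flows (e.g., during 4 s to 6 s), so the idealized per-flow share drops from 96 Mbps to 19.2 Mbps before recovering as flows leave.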

Experiment Results

Experiment #94

Parameters

Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=tcp_on_off --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k:6,start_times:[0s,1s,2s,3s,4s,5s],stop_times:[5s,6s,7s,8s,9s,10s]}' --aut=TcpNewReno --stop-time=15s --seed=42 --start-times=0s,1s,2s,3s,4s,5s --stop-times=5s,6s,7s,8s,9s,10s --bw=96Mbps --loss=0.0 --qlen=120p --qdisc=FifoQueueDisc --rtts=15ms,15ms,15ms,15ms,15ms,15ms --sources=src_0,src_1,src_2,src_3,src_4,src_5 --destinations=dst_0,dst_1,dst_2,dst_3,dst_4,dst_5 --protocols=TCP,TCP,TCP,TCP,TCP,TCP --algs=TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno --recoveries=TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery '--traffic-models=Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0)'

Flows

src    dst    transport_protocol  cca         cc_recovery_alg  traffic_model    start_time [s]  stop_time [s]
src_0  dst_0  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)            0.00           5.00
src_1  dst_1  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)            1.00           6.00
src_2  dst_2  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)            2.00           7.00
src_3  dst_3  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)            3.00           8.00
src_4  dst_4  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)            4.00           9.00
src_5  dst_5  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)            5.00          10.00

Metrics

The following tables list the flow, link, and network metrics of experiment #94. Refer to the metrics page for definitions of the listed metrics.

Flow Metrics

Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path: at the source, at the receiver, or at both. Bold values indicate which flow achieved the best performance.

Metric                     flow_1   flow_2   flow_3   flow_4   flow_5   flow_6
cov_in_flight_l4             1.41     1.24     1.05     1.28     1.08     1.18
cov_throughput_l4            1.41     1.23     1.07     1.31     1.14     1.41
flow_completion_time_l4      7.29     6.82     6.77     6.09     5.72     5.36
mean_cwnd_l4                48.01    49.64    41.41    54.78    71.81    91.55
mean_delivery_rate_l4       18.64    12.34     7.90     8.12    12.33    16.03
mean_est_qdelay_l4           5.54    11.71    10.35     9.87     8.56     4.88
mean_idt_ewma_l4             0.67     0.45     0.40     0.55     0.23     0.23
mean_in_flight_l4           37.03    25.82    16.37    16.80    24.77    28.49
mean_network_power_l4      858.71   529.75   340.72   350.31   552.30   828.56
mean_one_way_delay_l7     1659.99  1539.75  1783.22  1431.88  1019.41   827.76
mean_recovery_time_l4       32.97    32.04    32.70    36.88    32.18    35.59
mean_sending_rate_l4        18.83    12.41     7.93     8.13    12.39    16.09
mean_sending_rate_l7        18.64    10.12     7.90     5.48    12.33    16.03
mean_srtt_l4                20.54    26.71    25.35    24.87    23.56    19.88
mean_throughput_l4          18.64    12.34     7.90     8.12    12.33    16.03
mean_throughput_l7          18.64    12.34     7.90     8.12    12.33    16.03
mean_utility_mpdf_l4        -0.04    -0.05    -0.07    -0.12    -0.05    -0.05
mean_utility_pf_l4           3.41     3.14     2.67     2.55     3.08     3.18
mean_utilization_bdp_l4      0.32     0.22     0.14     0.15     0.21     0.25
mean_utilization_bw_l4       0.19     0.13     0.08     0.08     0.13     0.17
total_retransmissions_l4   253.00    91.00    25.00    17.00    53.00    47.00
total_rtos_l4                0.00     0.00     0.00     0.00     0.00     0.00
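The cov_* rows above report a coefficient of variation. As a hedged sketch (assuming these metrics are the standard deviation of a sampled time series divided by its mean; the sample values below are made up for illustration):

```python
# Sketch of the coefficient of variation (CoV): population standard
# deviation of a sampled time series divided by its mean. The throughput
# samples below are hypothetical, purely for illustration.
import statistics

def cov(samples):
    """Coefficient of variation: population std dev over mean."""
    return statistics.pstdev(samples) / statistics.mean(samples)

throughput_samples = [12.0, 18.0, 6.0, 24.0, 15.0]  # hypothetical Mbps samples
print(round(cov(throughput_samples), 2))  # → 0.4
```

A CoV above 1 (as for several flows in the table) indicates that the standard deviation of the time series exceeds its mean, i.e., highly bursty in-flight data and throughput, which is expected when flows repeatedly join and leave.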

Network Metrics

Network metrics assess the network as a whole by aggregating other metrics, e.g., the aggregate throughput of all flows. Hence, the network metrics table has only a single column, named net.

Metric                                       net
mean_agg_in_flight_l4                     149.29
mean_agg_throughput_l4                     75.37
mean_agg_utility_mpdf_l4                   -0.38
mean_agg_utility_pf_l4                     18.03
mean_agg_utilization_bdp_l4                 1.29
mean_agg_utilization_bw_l4                  0.79
mean_entropy_fairness_throughput_l4         1.38
mean_jains_fairness_throughput_l4           0.84
mean_product_fairness_throughput_l4   1680671.47
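The fairness rows are aggregates of the per-flow throughputs. As a sketch, Jain's fairness index can be computed from the per-flow mean throughputs in the flow-metrics table; note that mean_jains_fairness_throughput_l4 (0.84) is presumably a time average of the instantaneous index, so it differs from the index over the flow means computed here.

```python
# Sketch: Jain's fairness index over per-flow mean throughputs (Mbps).
# 1.0 means a perfectly equal allocation; 1/n means one flow takes all.

def jains_index(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2)."""
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

# Mean throughputs of flow_1 .. flow_6 from the flow-metrics table.
throughputs = [18.64, 12.34, 7.90, 8.12, 12.33, 16.03]
print(round(jains_index(throughputs), 3))  # → 0.913
```

The index over the flow means is higher than the reported time average, which is plausible: averaging throughput over each flow's lifetime smooths out the moments when newly started or stopping flows held very unequal shares.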

Figures

The following figures show the results of experiment #94.

Time Series Plot of the Operating Point

Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.

In Flight vs Mean Operating Point

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a magenta star. The joint operating point is given by the aggregate throughput and the mean sRTT over all flows.

Distribution of the Operating Point

The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.

Mean Operating Point Plane

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow.