Steady-state Synchronous Fairness Experiment

In this experiment, multiple flows compete for the bandwidth of a shared bottleneck. The sending rates of all flows are controlled by the same congestion control algorithm (CCA), i.e., it is an intra-protocol competition scenario. All flows start simultaneously and have the same two-way propagation delay. Hence, this experiment evaluates an idealized scenario in which concurrent flows face identical network conditions.

The goal of the flows is to reach the steady-state behavior of the CCA. As in the steady-state experiment with a single flow, the flows should jointly exhaust the bandwidth (bottleneck rate) of the shared network path while avoiding self-inflicted queueing delay. The CCA should let the sending rates of the flows converge to a steady-state equilibrium. Ideally, the flows share the bandwidth resources fairly. Network metrics quantify whether a CCA achieves efficiency (bandwidth utilization) and fairness.
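These two criteria can be made precise. A minimal sketch (Python; the helper names are illustrative, not part of the benchmark tooling):

    def utilization(throughputs_mbps, bottleneck_mbps):
        """Efficiency: fraction of the bottleneck rate used by all flows together."""
        return sum(throughputs_mbps) / bottleneck_mbps

    def jains_index(throughputs_mbps):
        """Fairness: ranges from 1/k (one flow takes everything) to 1.0 (equal shares)."""
        n = len(throughputs_mbps)
        s = sum(throughputs_mbps)
        return s * s / (n * sum(x * x for x in throughputs_mbps))

Both quantities appear in the network metrics below as mean_agg_utilization_bw_l4 and mean_jains_fairness_throughput_l4, averaged over time.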

Scenario

Multiple flows are set up to compete against each other in a static dumbbell network. Greedy source traffic ensures that the flows are network-limited. The start times and two-way propagation delays are the same for all flows. The number of flows can be varied with the experiment parameter \(k\).

To summarize the setup:

  • Topology: Dumbbell topology with static network parameters

  • Flows: Multiple flows (\(k>1\), here \(k=5\)) starting simultaneously, all using the same CCA (intra-protocol competition)

  • Traffic Generation Model: Greedy source traffic

Experiment Results

Experiment #3

Parameters

Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=steady_state_synchronous_fairness --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k:5,path:static.winds_satellite}' --aut=TcpNewReno --stop-time=15s --seed=42 --bw=42Mbps --loss=0.0074 --qlen=2.8kp --qdisc=FifoQueueDisc --rtts=800ms,800ms,800ms,800ms,800ms --sources=src_0,src_1,src_2,src_3,src_4 --destinations=dst_0,dst_1,dst_2,dst_3,dst_4 --protocols=TCP,TCP,TCP,TCP,TCP --algs=TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno --recoveries=TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery --start-times=0s,0s,0s,0s,0s --stop-times=15s,15s,15s,15s,15s '--traffic-models=Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0)'
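Note that the queue length of 2.8kp matches one bandwidth-delay product (BDP) of the emulated satellite path, assuming MTU-sized packets of 1500 B (the packet size is an assumption, not stated in the parameters above):

    # BDP of the path in packets (1500 B per packet is an assumption).
    bw_bps = 42e6        # --bw=42Mbps
    rtt_s = 0.8          # --rtts=800ms (two-way propagation delay)
    pkt_bytes = 1500     # assumed MTU-sized packets

    print(bw_bps * rtt_s / (8 * pkt_bytes))  # 2800.0 packets, i.e., --qlen=2.8kp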

Flows

src    dst    transport_protocol  cca         cc_recovery_alg  traffic_model    start_time [s]  stop_time [s]
src_0  dst_0  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)  0.00            15.00
src_1  dst_1  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)  0.00            15.00
src_2  dst_2  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)  0.00            15.00
src_3  dst_3  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)  0.00            15.00
src_4  dst_4  TCP                 TcpNewReno  TcpPrrRecovery   Greedy(bytes=0)  0.00            15.00

Metrics

The following tables list the flow and network metrics of experiment #3. Refer to the metrics page for definitions of the listed metrics.

Flow Metrics

Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path, at the source, the receiver, or both. Bold values indicate which flow achieved the best performance.

Metric flow_1 flow_2 flow_3 flow_4 flow_5
cov_in_flight_l4 1.08 0.64 0.39 0.41 0.57
cov_throughput_l4 1.03 0.59 0.45 0.36 0.52
flow_completion_time_l4 14.85 14.89 14.90 14.91 14.91
mean_cwnd_l4 48.92 67.73 11.36 21.20 21.01
mean_delivery_rate_l4 0.64 0.90 0.13 0.27 0.32
mean_est_qdelay_l4 3.55 6.45 3.79 3.20 3.77
mean_idt_ewma_l4 15.94 2.48 51.95 20.05 19.62
mean_in_flight_l4 48.78 67.44 11.11 20.89 20.89
mean_network_power_l4 0.80 1.17 0.18 0.35 0.41
mean_one_way_delay_l7 6827.94 6867.32 6878.93 6885.19 6893.19
mean_recovery_time_l4 1004.38 1206.89 806.29 813.17 1077.08
mean_sending_rate_l4 0.66 0.95 0.15 0.28 0.33
mean_sending_rate_l7 2.86 3.12 2.36 2.49 2.53
mean_srtt_l4 803.55 806.45 803.79 803.20 803.77
mean_throughput_l4 0.65 0.95 0.15 0.28 0.33
mean_throughput_l7 0.65 0.95 0.15 0.28 0.33
mean_utility_mpdf_l4 -3.96 -2.55 -8.16 -4.59 -4.31
mean_utility_pf_l4 -0.91 -0.32 -2.00 -1.37 -1.25
mean_utilization_bdp_l4 0.02 0.03 0.00 0.01 0.01
mean_utilization_bw_l4 0.02 0.02 0.00 0.01 0.01
total_retransmissions_l4 7.00 5.00 1.00 4.00 5.00
total_rtos_l4 0.00 0.00 0.00 0.00 1.00
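The low per-flow throughputs are plausible for NewReno on this path: with the configured random loss rate of 0.0074 and an 800 ms RTT, the Mathis et al. steady-state approximation \(B \approx \frac{MSS}{RTT}\sqrt{\frac{3}{2p}}\) bounds a Reno-style flow to a few hundred kbit/s, regardless of the 42 Mbps bottleneck rate. A back-of-the-envelope check (Python; the 1448 B MSS is an assumption):

    import math

    mss_bits = 1448 * 8  # assumed MSS (1500 B MTU minus typical TCP/IP headers)
    rtt_s = 0.8          # --rtts=800ms
    p = 0.0074           # --loss=0.0074

    print((mss_bits / rtt_s) * math.sqrt(3 / (2 * p)) / 1e6)  # ~0.21 Mbps

This is the same order of magnitude as the measured mean_throughput_l4 values of 0.15 to 0.95 Mbps.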

Network Metrics

Network metrics assess the network as a whole by aggregating other metrics, e.g., the aggregated throughput of all flows. Hence, the network metrics table has a single column named net.

Metric net
mean_agg_in_flight_l4 169.11
mean_agg_throughput_l4 2.36
mean_agg_utility_mpdf_l4 -23.56
mean_agg_utility_pf_l4 -5.86
mean_agg_utilization_bdp_l4 0.06
mean_agg_utilization_bw_l4 0.06
mean_entropy_fairness_throughput_l4 1.60
mean_jains_fairness_throughput_l4 0.73
mean_product_fairness_throughput_l4 0.01
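As a plausibility check, Jain's fairness index can be recomputed from the mean per-flow throughputs in the flow metrics table (a sketch; the reported metric is a time average, so recomputing it from mean values is only an approximation):

    tputs = [0.65, 0.95, 0.15, 0.28, 0.33]  # mean_throughput_l4 per flow (Mbps)

    jain = sum(tputs) ** 2 / (len(tputs) * sum(x * x for x in tputs))
    print(round(jain, 2))                   # 0.73 = mean_jains_fairness_throughput_l4
    print(round(sum(tputs), 2))             # 2.36 = mean_agg_throughput_l4
    print(round(sum(tputs) / 42, 2))        # 0.06 = mean_agg_utilization_bw_l4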

Figures

The following figures show the results of experiment #3.

Time Series Plot of the Operating Point

Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.

Distribution of the Operating Point

The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.
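For reference, an eCDF as shown in these figures can be computed in a few lines (a generic sketch, not the benchmark's own plotting code):

    import numpy as np

    def ecdf(samples):
        """Sorted sample values and the cumulative fraction of samples up to each."""
        x = np.sort(np.asarray(samples))
        return x, np.arange(1, len(x) + 1) / len(x)

    # e.g., ecdf(srtt_samples_of_flow_1) yields one sRTT curve of the plot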

Mean Operating Point Plane

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow.

In Flight vs Mean Operating Point

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a star (magenta). The joint operating point is given by the aggregated throughput and the mean sRTT over all flows; here, roughly 2.36 Mbps at a mean sRTT of about 804 ms, far below the 42 Mbps bottleneck rate.
