Steady-state RTT Fairness Experiment
This experiment evaluates how the two-way propagation delay influences the steady-state operating point of a congestion control algorithm (CCA). The sending rates of all flows are controlled by the same CCA, i.e., it is an intra-protocol competition scenario.
The CCA should be efficient (high bandwidth utilization) and reasonably fair despite the flows' different two-way propagation delays. A larger two-way propagation delay lengthens the CCA's feedback cycle, i.e., the time between a packet's transmission and the reception of its acknowledgement. Typically, when flows compete for bandwidth, flows with larger two-way propagation delays obtain a smaller share of the bandwidth. The experiment tests whether that is the case for a CCA and therefore evaluates its RTT fairness.
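The feedback-cycle argument can be made concrete for a loss-based CCA such as NewReno: in congestion avoidance the window grows by roughly one segment per RTT, so the speed at which a flow's rate increases scales with \(1/\mathrm{RTT}^2\). A minimal sketch, using the two delays from this experiment and the classic synchronized-loss approximation (an assumption, not something the report states):

```python
# Hedged sketch: why a longer two-way propagation delay lowers a
# NewReno-style flow's bandwidth share. In congestion avoidance the
# window grows ~1 MSS per RTT, so rate growth scales with 1/RTT^2.
rtt_long = 0.097   # s, first flow's two-way propagation delay
rtt_short = 0.076  # s, second flow's two-way propagation delay

# Synchronized-loss model: throughput ratio ~ (RTT_long / RTT_short)^2.
predicted_ratio = (rtt_long / rtt_short) ** 2
print(f"predicted short-RTT/long-RTT throughput ratio: {predicted_ratio:.2f}")
```

This predicts a ratio of about 1.63; the measured mean throughputs below (15.08 vs. 9.95) give roughly 1.52, i.e., the same direction of bias but somewhat milder than the idealized model.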
Scenario
Multiple flows are set up to compete against each other in a static dumbbell network. Greedy source traffic ensures that the flows are network-limited. The flows start simultaneously but have different two-way propagation delays. The number of flows can be varied with the experiment parameter k, and the two-way propagation delays are set with the parameter rtts.
To summarize the setup:
- Topology: Dumbbell topology with static network parameters defined by the path parameter
- Flows: Multiple flows (\(K>1\)) with different two-way propagation delays using the same CCA (intra-protocol competition)
- Traffic Generation Model: Greedy source traffic
Experiment Results
Experiment #4
Parameters
Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=steady_state_rtt_fairness --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k:2,path:static.default,rtts:[97ms,76ms]}' --aut=TcpNewReno --stop-time=15s --seed=42 --rtts=97ms,76ms --bw=32Mbps --loss=0.0 --qlen=40p --qdisc=FifoQueueDisc --sources=src_0,src_1 --destinations=dst_0,dst_1 --protocols=TCP,TCP --algs=TcpNewReno,TcpNewReno --recoveries=TcpPrrRecovery,TcpPrrRecovery --start-times=0s,0s --stop-times=15s,15s '--traffic-models=Greedy(bytes=0),Greedy(bytes=0)'
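A quick sanity check of these parameters: with a 32 Mbps bottleneck, the per-flow bandwidth-delay product (BDP) and the 40-packet queue bound the amount of data that can be in flight. A back-of-the-envelope computation, assuming an MSS of 1500 bytes (the segment size is not stated in the report):

```python
# Hedged back-of-the-envelope check of the experiment parameters.
# Assumes MSS = 1500 bytes, which the report does not state.
bw_bps = 32e6    # --bw=32Mbps
mss = 1500       # assumed segment size in bytes
qlen_pkts = 40   # --qlen=40p

for name, rtt_s in [("97ms flow", 0.097), ("76ms flow", 0.076)]:
    bdp_pkts = bw_bps * rtt_s / 8 / mss
    print(f"{name}: BDP ≈ {bdp_pkts:.0f} packets "
          f"(+{qlen_pkts}p queue ≈ {bdp_pkts + qlen_pkts:.0f} packets)")
```

Under these assumptions the BDPs are roughly 259 and 203 packets, so the mean in-flight values reported below (≈88 and ≈105 segments) correspond to the fairly low mean_utilization_bdp_l4 figures in the flow metrics table.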
Flows
src | dst | transport_protocol | cca | cc_recovery_alg | traffic_model | start_time | stop_time |
---|---|---|---|---|---|---|---|
src_0 | dst_0 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
src_1 | dst_1 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
Metrics
The following tables list the flow, link, and network metrics of experiment #4. Refer to the metrics page for definitions of the listed metrics.
Flow Metrics
Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path at either the source, the receiver, or both. Bold values indicate which flow achieved the best performance.
Metric | flow_1 | flow_2 |
---|---|---|
cov_in_flight_l4 | 0.46 | 0.28 |
cov_throughput_l4 | 0.47 | 0.27 |
flow_completion_time_l4 | 15.00 | 15.00 |
mean_cwnd_l4 | 88.12 | 105.05 |
mean_delivery_rate_l4 | 9.89 | 15.05 |
mean_est_qdelay_l4 | 3.39 | 3.34 |
mean_idt_ewma_l4 | 1.28 | 0.77 |
mean_in_flight_l4 | 87.68 | 104.59 |
mean_network_power_l4 | 98.89 | 189.85 |
mean_one_way_delay_l7 | 2958.00 | 1860.22 |
mean_recovery_time_l4 | 390.39 | 120.02 |
mean_sending_rate_l4 | 10.06 | 15.15 |
mean_sending_rate_l7 | 12.04 | 17.20 |
mean_srtt_l4 | 100.39 | 79.34 |
mean_throughput_l4 | 9.95 | 15.08 |
mean_throughput_l7 | 9.95 | 15.08 |
mean_utility_mpdf_l4 | -0.14 | -0.07 |
mean_utility_pf_l4 | 2.17 | 2.68 |
mean_utilization_bdp_l4 | 0.35 | 0.54 |
mean_utilization_bw_l4 | 0.31 | 0.47 |
total_retransmissions_l4 | 58.00 | 42.00 |
total_rtos_l4 | 0.00 | 0.00 |
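The flow metrics can be cross-checked against each other. For instance, network power is commonly defined as throughput divided by delay; dividing each flow's mean throughput (Mbit/s) by its mean sRTT (s) approximately reproduces the reported mean_network_power_l4 values. The small gaps are expected, since the table reports a time-average of the instantaneous power rather than a ratio of means (an assumption about how the metric is computed):

```python
# Hedged cross-check: network power ≈ throughput / sRTT.
# Values taken from the flow metrics table above.
flows = {"flow_1": (9.95, 100.39), "flow_2": (15.08, 79.34)}
for name, (thr_mbps, srtt_ms) in flows.items():
    power = thr_mbps / (srtt_ms / 1000.0)
    print(f"{name}: power ≈ {power:.1f}")  # table reports 98.89 and 189.85
```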
Link Metrics
Link metrics are recorded at the network links of interest, typically at bottlenecks. They include metrics that measure queue states. Bold values indicate which link achieved the best performance.
Metric | btl_forward |
---|---|
mean_qdisc_delay_l2 | 2.54 |
mean_qdisc_length_l2 | 6.61 |
mean_sending_rate_l1 | 26.05 |
total_qdisc_drops_l2 | 100.00 |
Network Metrics
Network metrics assess the network as a whole by aggregating other metrics, e.g., the aggregated throughput of all flows. Hence, the network metrics table has only one column, named net.
Metric | net |
---|---|
mean_agg_in_flight_l4 | 192.28 |
mean_agg_throughput_l4 | 25.03 |
mean_agg_utility_mpdf_l4 | -0.21 |
mean_agg_utility_pf_l4 | 4.85 |
mean_agg_utilization_bdp_l4 | 0.89 |
mean_agg_utilization_bw_l4 | 0.78 |
mean_entropy_fairness_throughput_l4 | 0.68 |
mean_jains_fairness_throughput_l4 | 0.90 |
mean_product_fairness_throughput_l4 | 153.17 |
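To illustrate the fairness metrics, Jain's index can be recomputed from the per-flow mean throughputs in the flow metrics table. The reported mean_jains_fairness_throughput_l4 (0.90) is presumably the time-average of the instantaneous index (an assumption), so this point estimate from the mean throughputs comes out slightly higher:

```python
# Illustrative computation of Jain's fairness index,
# J = (sum x_i)^2 / (n * sum x_i^2), from the mean throughputs.
def jains_index(rates):
    return sum(rates) ** 2 / (len(rates) * sum(r * r for r in rates))

print(f"Jain's index of mean throughputs: {jains_index([9.95, 15.08]):.2f}")
```

With perfectly equal shares the index would be 1.0; with two flows it is bounded below by 0.5.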
Figures
The following figures show the results of experiment #4.
Time Series Plot of the Operating Point
Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.
In Flight vs Mean Operating Point
The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a star (magenta). The joint operating point is given by the aggregated throughput and the mean sRTT over all flows.
Mean Operating Point Plane
The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow.
Distribution of the Operating Point
The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.