Reno-Friendliness Experiment
To be deployable, new congestion control algorithms (CCAs) have to be able to compete against established CCAs. For a long time, Reno was the most widely used CCA, and fairness towards Reno was referred to as TCP-friendliness. Nowadays, Reno is rarely used anymore and can be regarded as a legacy CCA. The Reno-friendliness experiment evaluates whether a CCA is fair towards Reno. It is debatable whether fairness towards Reno remains a desirable property of a CCA, since Reno is no longer widely deployed. Regardless, ccperf includes this experiment for the sake of comprehensiveness.
Many newer CCAs that aim to be more efficient are, as a consequence, unfair towards Reno. That is, these CCAs grab a significantly larger share of the bandwidth when competing against Reno. Conversely, CCAs that try to avoid queueing delay may yield their bandwidth to Reno, which is a buffer-filling algorithm: in steady state, Reno keeps increasing its congestion window (cwnd) even when the bandwidth is already exhausted.
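Reno's steady-state behavior described above can be sketched as the classic AIMD rule. The following is a minimal, per-round-trip model (in segments) for illustration only; it omits slow start, fast recovery, and per-ACK granularity of the actual TCP implementation.

```python
def reno_cwnd_next(cwnd: float, loss: bool) -> float:
    """One round trip of Reno congestion avoidance (cwnd in segments).

    AIMD: additive increase of one segment per RTT when no loss is
    observed, multiplicative decrease (halving) on a loss event.
    """
    if loss:
        return max(cwnd / 2.0, 2.0)  # multiplicative decrease, floor at 2 segments
    return cwnd + 1.0                # additive increase, even if the link is saturated
```

Because the increase step never stops while losses are absent, Reno keeps filling the bottleneck buffer, which is why delay-avoiding CCAs competing with it tend to back off and lose throughput.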
Scenario
In the Reno-friendliness experiment, multiple flows operate in a static dumbbell network. Each flow generates greedy source traffic and uses either the CCA under test or Reno. The experiment has one parameter \(k\), which sets the number of flows. Half of the flows (rounded down) use the CCA under test, whereas the other half (rounded up) use Reno.
To summarize the experiment setup:
Topology: Dumbbell topology with static network parameters
Flows: Multiple flows (\(k>1\)) that use either the CCA under test or Reno
Traffic Generation Model: Greedy source traffic
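The floor/ceiling split of the \(k\) flows described above can be written down in a couple of lines. This is an illustrative sketch of the assignment rule, not code from ccperf itself:

```python
import math

def flow_split(k: int) -> tuple[int, int]:
    """Split k flows between the CCA under test and Reno.

    floor(k/2) flows use the CCA under test; ceil(k/2) flows use Reno,
    so for odd k the extra flow runs Reno.
    """
    return k // 2, math.ceil(k / 2)
```

For the experiment below with `k=2`, this yields one flow for the CCA under test and one Reno flow.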
Experiment Results
Experiment #85
Parameters
Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=reno_fairness --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k:2}' --aut=TcpNewReno --stop-time=15s --seed=42 --bw=32Mbps --loss=0.0 --qlen=40p --qdisc=FifoQueueDisc --rtts=15ms,15ms --sources=src_0,src_1 --destinations=dst_0,dst_1 --protocols=TCP,TCP --algs=TcpNewReno,TcpNewReno --recoveries=TcpPrrRecovery,TcpPrrRecovery --start-times=0s,0s --stop-times=15s,15s '--traffic-models=Greedy(bytes=0),Greedy(bytes=0)'
Flows
src | dst | transport_protocol | cca | cc_recovery_alg | traffic_model | start_time | stop_time |
---|---|---|---|---|---|---|---|
src_0 | dst_0 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
src_1 | dst_1 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
Metrics
The following tables list the flow, link, and network metrics of experiment #85. Refer to the metrics page for definitions of the listed metrics.
Flow Metrics
Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path, at the source, the receiver, or both. Bold values indicate which flow achieved the best performance.
Metric | flow_1 | flow_2 |
---|---|---|
cov_in_flight_l4 | 0.23 | 0.23 |
cov_throughput_l4 | 0.15 | 0.15 |
flow_completion_time_l4 | 14.99 | 15.00 |
mean_cwnd_l4 | 31.66 | 33.14 |
mean_delivery_rate_l4 | 15.02 | 15.71 |
mean_est_qdelay_l4 | 8.95 | 8.98 |
mean_idt_ewma_l4 | 0.67 | 0.63 |
mean_in_flight_l4 | 31.20 | 32.66 |
mean_network_power_l4 | 653.60 | 682.50 |
mean_one_way_delay_l7 | 1946.30 | 1900.46 |
mean_recovery_time_l4 | 30.02 | 29.16 |
mean_sending_rate_l4 | 15.09 | 15.78 |
mean_sending_rate_l7 | 17.15 | 17.85 |
mean_srtt_l4 | 23.95 | 23.98 |
mean_throughput_l4 | 15.03 | 15.72 |
mean_throughput_l7 | 15.03 | 15.72 |
mean_utility_mpdf_l4 | -0.07 | -0.07 |
mean_utility_pf_l4 | 2.70 | 2.74 |
mean_utilization_bdp_l4 | 0.81 | 0.85 |
mean_utilization_bw_l4 | 0.47 | 0.49 |
total_retransmissions_l4 | 69.00 | 66.00 |
total_rtos_l4 | 0.00 | 0.00 |
Link Metrics
Link metrics are recorded at the network links of interest, typically at bottlenecks. They include metrics that measure queue states. Bold values indicate which link achieved the best performance.
Metric | btl_forward |
---|---|
mean_qdisc_delay_l2 | 7.69 |
mean_qdisc_length_l2 | 21.68 |
mean_sending_rate_l1 | 31.90 |
total_qdisc_drops_l2 | 135.00 |
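The queue metrics above are related by Little's law: the mean queuing delay is roughly the mean queue length times the serialization time of one packet at the bottleneck rate. A small sketch, assuming full-sized 1500-byte packets (an assumption; the actual experiment's packet sizes and time averaging make the tabulated value differ somewhat):

```python
def qdisc_delay_ms(qlen_pkts: float, bw_mbps: float, pkt_bytes: int = 1500) -> float:
    """Estimate mean queuing delay (ms) from mean queue length.

    serialization time per packet = pkt_bytes * 8 / bandwidth;
    delay ~= mean queue length * serialization time (Little's law).
    """
    serialization_ms = pkt_bytes * 8 / (bw_mbps * 1e6) * 1e3
    return qlen_pkts * serialization_ms
```

With the values from the table (queue length 21.68 packets, bottleneck 32 Mbps) this gives roughly 8.1 ms, in the same ballpark as the measured `mean_qdisc_delay_l2` of 7.69 ms.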
Network Metrics
Network metrics assess the entire network as a whole by aggregating other metrics, e.g., the aggregated throughput of all flows. Hence, the network metrics table has only one column, named net.
Metric | net |
---|---|
mean_agg_in_flight_l4 | 63.85 |
mean_agg_throughput_l4 | 30.74 |
mean_agg_utility_mpdf_l4 | -0.13 |
mean_agg_utility_pf_l4 | 5.44 |
mean_agg_utilization_bdp_l4 | 1.66 |
mean_agg_utilization_bw_l4 | 0.96 |
mean_entropy_fairness_throughput_l4 | 0.69 |
mean_jains_fairness_throughput_l4 | 0.98 |
mean_product_fairness_throughput_l4 | 231.33 |
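The fairness and utility metrics above follow standard definitions: Jain's fairness index is \((\sum_i x_i)^2 / (n \sum_i x_i^2)\), the proportional-fairness (PF) utility of a flow is \(\log x_i\), and the minimum-potential-delay-fairness (MPDF) utility is \(-1/x_i\). A sketch computing them from the mean flow throughputs in the table; note the tabulated values are time averages of per-sample metrics, so recomputing from the means lands close but not exactly on them:

```python
import math

def jains_fairness(rates: list[float]) -> float:
    """Jain's fairness index: 1.0 means a perfectly even allocation."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

def pf_utility(rate: float) -> float:
    """Proportional-fairness utility of one flow: log of its rate."""
    return math.log(rate)

def mpdf_utility(rate: float) -> float:
    """Minimum-potential-delay-fairness utility: -1/rate."""
    return -1.0 / rate

# Mean flow throughputs (Mbit/s) from the flow metrics table above.
throughputs = [15.03, 15.72]
```

For these two throughputs, Jain's index comes out near 1.0, consistent with the tabulated `mean_jains_fairness_throughput_l4` of 0.98, and the summed PF and MPDF utilities are close to the aggregated values in the network metrics table.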
Figures
The following figures show the results of experiment #85.
Time Series Plot of the Operating Point
Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.
Distribution of the Operating Point
The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.
In Flight vs Mean Operating Point
The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a star (magenta). The joint operating point is given by the aggregated throughput and the mean sRTT over all flows.