Active Queue Management Experiment
Active queue management (AQM) algorithms are packet schedulers that define how packets are enqueued and dequeued at a queue. Congestion control algorithms (CCAs) may benefit from AQM because it can generate congestion signals earlier than First In, First Out (FIFO) taildrop queues, which drop arriving packets only when the queue is full. A queue's AQM algorithm is selected by setting its queueing discipline (qdisc).
In ccperf, the interactions between CCAs and the following qdiscs are evaluated (a configuration sketch follows the list):
FIFO taildrop
FQ-CoDel (FqCoDelQueueDisc)
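In ns-3, on which ccperf builds, a qdisc is attached to network devices via the traffic-control module. A minimal sketch of this step, mirroring what the --qdisc=FqCoDelQueueDisc option selects; `bottleneckDevices` is a hypothetical NetDeviceContainer holding the bottleneck link's devices:

```cpp
#include "ns3/network-module.h"
#include "ns3/traffic-control-module.h"

using namespace ns3;

// Attach FQ-CoDel to the bottleneck devices so that congestion signals
// are generated before the queue is completely full. `bottleneckDevices`
// is an assumed, already-populated NetDeviceContainer.
void
InstallAqm (NetDeviceContainer &bottleneckDevices)
{
  TrafficControlHelper tch;
  tch.SetRootQueueDisc ("ns3::FqCoDelQueueDisc");
  tch.Install (bottleneckDevices);
}
```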
Scenario
In the AQM experiment, one or more flows operate in a static dumbbell network. The flows generate greedy source traffic and use CCAs. The experiment has two parameters: the number of flows \(k\) and the qdisc type \(qdisc\) of the bottleneck queue.
To summarize the experiment setup (a topology sketch follows the list):
Topology: Dumbbell topology (\(K=1\)) with static network parameters and a specified qdisc
Flows: \(k\) flows that use a CCA
Traffic Generation Model: Greedy source traffic
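For illustration, such a dumbbell can be expressed with ns-3's point-to-point-layout module. This is only a sketch of the general pattern, not ccperf's actual setup code; the bottleneck rate and queue length match experiment #5 below (--bw=80Mbps, --qlen=100p), while the access-link parameters and the delay split are assumptions:

```cpp
#include "ns3/core-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/point-to-point-layout-module.h"

using namespace ns3;

// Build a dumbbell with k leaf pairs sharing one bottleneck link.
// Access links are assumed fast enough that the bottleneck dominates.
PointToPointDumbbellHelper
BuildDumbbell (uint32_t k)
{
  PointToPointHelper leaf;
  leaf.SetDeviceAttribute ("DataRate", StringValue ("1Gbps"));  // assumption
  leaf.SetChannelAttribute ("Delay", StringValue ("1ms"));      // assumption

  PointToPointHelper bottleneck;
  bottleneck.SetDeviceAttribute ("DataRate", StringValue ("80Mbps")); // --bw=80Mbps
  bottleneck.SetChannelAttribute ("Delay", StringValue ("5ms"));      // assumption
  bottleneck.SetQueue ("ns3::DropTailQueue", "MaxSize", StringValue ("100p")); // --qlen=100p

  return PointToPointDumbbellHelper (k, leaf, k, leaf, bottleneck);
}
```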
Experiment Results
Experiment #5
Parameters
Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=aqm --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k:5,qdisc:FqCoDelQueueDisc}' --aut=TcpNewReno --stop-time=15s --seed=42 --qdisc=FqCoDelQueueDisc --bw=80Mbps --loss=0.0 --qlen=100p --rtts=15ms,15ms,15ms,15ms,15ms --sources=src_0,src_1,src_2,src_3,src_4 --destinations=dst_0,dst_1,dst_2,dst_3,dst_4 --protocols=TCP,TCP,TCP,TCP,TCP --algs=TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno,TcpNewReno --recoveries=TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery,TcpPrrRecovery --start-times=0s,0s,0s,0s,0s --stop-times=15s,15s,15s,15s,15s '--traffic-models=Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0),Greedy(bytes=0)'
Flows
src | dst | transport_protocol | cca | cc_recovery_alg | traffic_model | start_time | stop_time |
---|---|---|---|---|---|---|---|
src_0 | dst_0 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
src_1 | dst_1 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
src_2 | dst_2 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
src_3 | dst_3 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
src_4 | dst_4 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
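All five flows run TcpNewReno with PRR loss recovery. In plain ns-3, these defaults can be set as sketched below; the function name is hypothetical, but the two attributes are standard ns-3 attributes corresponding to the --algs and --recoveries options above:

```cpp
#include "ns3/core-module.h"
#include "ns3/internet-module.h"

using namespace ns3;

// Select the CCA and loss-recovery algorithm for all TCP sockets created
// afterwards (cf. --algs=TcpNewReno,... and --recoveries=TcpPrrRecovery,...).
void
ConfigureTcpDefaults ()
{
  Config::SetDefault ("ns3::TcpL4Protocol::SocketType",
                      TypeIdValue (TcpNewReno::GetTypeId ()));
  Config::SetDefault ("ns3::TcpL4Protocol::RecoveryType",
                      TypeIdValue (TcpPrrRecovery::GetTypeId ()));
}
```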
Metrics
The following tables list the flow, link, and network metrics of experiment #5. Refer to the metrics page for definitions of the listed metrics.
Flow Metrics
Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path: at the source, the receiver, or both. Bold values indicate which flow achieved the best performance.
Metric | flow_1 | flow_2 | flow_3 | flow_4 | flow_5 |
---|---|---|---|---|---|
cov_in_flight_l4 | 0.21 | 0.21 | 0.22 | 0.22 | 0.22 |
cov_throughput_l4 | 0.07 | 0.08 | 0.08 | 0.08 | 0.08 |
flow_completion_time_l4 | 15.00 | 15.00 | 15.00 | 15.00 | 15.00 |
mean_cwnd_l4 | 25.08 | 25.09 | 25.13 | 25.12 | 25.11 |
mean_delivery_rate_l4 | 14.90 | 14.90 | 14.90 | 14.90 | 14.90 |
mean_est_qdelay_l4 | 3.94 | 3.91 | 3.94 | 3.97 | 3.97 |
mean_idt_ewma_l4 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 |
mean_in_flight_l4 | 24.61 | 24.60 | 24.64 | 24.63 | 24.64 |
mean_network_power_l4 | 802.47 | 801.29 | 802.00 | 802.29 | 802.18 |
mean_one_way_delay_l7 | 1981.64 | 1981.75 | 1981.64 | 1981.57 | 1981.45 |
mean_recovery_time_l4 | 24.15 | 21.13 | 23.71 | 24.67 | 24.62 |
mean_sending_rate_l4 | 14.97 | 14.97 | 14.97 | 14.97 | 14.97 |
mean_sending_rate_l7 | 17.04 | 17.04 | 17.04 | 17.04 | 17.04 |
mean_srtt_l4 | 18.94 | 18.91 | 18.94 | 18.97 | 18.97 |
mean_throughput_l4 | 14.91 | 14.91 | 14.91 | 14.91 | 14.91 |
mean_throughput_l7 | 14.91 | 14.91 | 14.91 | 14.91 | 14.91 |
mean_utility_mpdf_l4 | -0.07 | -0.07 | -0.07 | -0.07 | -0.07 |
mean_utility_pf_l4 | 2.70 | 2.70 | 2.70 | 2.70 | 2.70 |
mean_utilization_bdp_l4 | 0.26 | 0.26 | 0.26 | 0.26 | 0.26 |
mean_utilization_bw_l4 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 |
total_retransmissions_l4 | 65.00 | 64.00 | 66.00 | 69.00 | 69.00 |
total_rtos_l4 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
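As a sanity check on the rate metrics (assuming rates are reported in Mbit/s): five greedy flows share the 80 Mbps bottleneck, so the fair share is \(80/5 = 16\) Mbit/s per flow, and the reported per-flow bandwidth utilization follows directly as mean_utilization_bw_l4 \(= 14.91 / 80 \approx 0.19\).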
Link Metrics
Link metrics are recorded at the network links of interest, typically at bottlenecks. They include metrics that measure queue states. Bold values indicate which link achieved the best performance.
Metric | btl_forward |
---|---|
mean_qdisc_delay_l2 | 3.23 |
mean_qdisc_length_l2 | 24.35 |
mean_sending_rate_l1 | 77.35 |
total_qdisc_drops_l2 | 333.00 |
Network Metrics
Network metrics assess the network as a whole by aggregating other metrics, e.g., the aggregated throughput of all flows. Hence, the network metrics table has only a single column, named net.
Metric | net |
---|---|
mean_agg_in_flight_l4 | 123.12 |
mean_agg_throughput_l4 | 74.54 |
mean_agg_utility_mpdf_l4 | -0.34 |
mean_agg_utility_pf_l4 | 13.49 |
mean_agg_utilization_bdp_l4 | 1.28 |
mean_agg_utilization_bw_l4 | 0.93 |
mean_entropy_fairness_throughput_l4 | 1.61 |
mean_jains_fairness_throughput_l4 | 1.00 |
mean_product_fairness_throughput_l4 | 764657.69 |
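The fairness metrics aggregate the per-flow throughputs. For example, Jain's fairness index over the \(n = 5\) flow throughputs \(x_1, \dots, x_n\) is

\[ J = \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n \sum_{i=1}^{n} x_i^2}, \]

which is 1.00 here because all five flows achieve the same 14.91 Mbit/s. Likewise, the entropy fairness of 1.61 matches \(\ln 5 \approx 1.61\), the maximum for five flows with equal throughput shares.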
Figures
The following figures show the results of experiment #5.
Time Series Plot of the Operating Point
Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.
In Flight vs Mean Operating Point
The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a magenta star. The joint operating point is given by the aggregated throughput and the mean sRTT over all flows.
Distribution of the Operating Point
The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.
Mean Operating Point Plane
The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow.