Deep Buffers Experiment
Buffer-filling congestion control algorithms (CCAs) continuously probe for spare bandwidth. In doing so, they may build up a standing queue at the bottleneck, causing self-inflicted queueing delay. When the bottleneck buffer is deep, i.e., when the queue size is large, this delay can become substantial. The prevalence of deep buffers in many of today's networks, and the delays they cause, is known as bufferbloat. A CCA that is resilient against bufferbloat should refrain from inflicting queueing delays that grow in proportion to the queue size.
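To make the scale of the problem concrete, the worst-case self-inflicted delay is simply the time a full buffer takes to drain at the bottleneck rate. The sketch below illustrates this; the buffer size, packet size, and link rate are hypothetical example values, not taken from the experiment.

```python
def max_queueing_delay_ms(qlen_pkts: int, pkt_size_bytes: int, bw_mbps: float) -> float:
    """Time to drain a completely full FIFO queue at the bottleneck rate, in ms."""
    queue_bits = qlen_pkts * pkt_size_bytes * 8
    return queue_bits / (bw_mbps * 1e6) * 1e3

# A deep 1000-packet buffer of 1500-byte packets on a 16 Mbps link
# adds up to 750 ms of queueing delay when a buffer-filling CCA keeps it full.
print(max_queueing_delay_ms(1000, 1500, 16.0))  # 750.0
```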
Scenario
In the deep buffers experiment, a single flow operates in a static dumbbell network with a queue size that is larger than the bandwidth-delay product (BDP). The flow generates greedy source traffic and uses a CCA. The experiment has one parameter, qlen, that sets the size of the bottleneck queue. It can be repeated for different values of qlen to evaluate the influence of the queue size on the operating point.
To summarize the experiment setup:
Topology: Dumbbell topology (\(K=1\)) with static network parameters
Flows: A single flow (\(K=1\)) that uses a CCA
Traffic Generation Model: Greedy source traffic
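For reference, the BDP of the configured bottleneck (bw=16 Mbps, RTT=15 ms) can be computed as follows. The 1500-byte packet size is an assumption for illustration, not a value reported by the experiment.

```python
# BDP of the bottleneck path: bandwidth times round-trip time.
bw_bps = 16e6        # --bw=16Mbps
rtt_s = 15e-3        # --rtts=15ms
bdp_bytes = bw_bps * rtt_s / 8   # 30000 bytes
bdp_pkts = bdp_bytes / 1500      # assuming 1500-byte packets
print(bdp_pkts)  # 20.0
```

Under this packet-size assumption, a qlen of 20 packets (as in experiment #18 below) corresponds to roughly one BDP of buffering.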
Experiment Results
Experiment #18
Command: ns3-dev-ccperf-static-dumbbell-default --experiment-name=deep_buffers --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,k:1,qlen:20p}' --aut=TcpNewReno --stop-time=15s --seed=42 --qlen=20p --bw=16Mbps --loss=0.0 --qdisc=FifoQueueDisc --rtts=15ms --sources=src_0 --destinations=dst_0 --protocols=TCP --algs=TcpNewReno --recoveries=TcpPrrRecovery --start-times=0s --stop-times=15s '--traffic-models=Greedy(bytes=0)'
Flows
src | dst | transport_protocol | cca | cc_recovery_alg | traffic_model | start_time | stop_time |
---|---|---|---|---|---|---|---|
src_0 | dst_0 | TCP | TcpNewReno | TcpPrrRecovery | Greedy(bytes=0) | 0.00 | 15.00 |
Metrics
The following tables list the flow, link, and network metrics of experiment #18. Refer to the metrics page for definitions of the listed metrics.
Flow Metrics
Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path at either the source, the receiver, or both. Bold values indicate which flow achieved the best performance.
Metric | flow_1 |
---|---|
cov_in_flight_l4 | 0.21 |
cov_throughput_l4 | 0.04 |
flow_completion_time_l4 | 15.00 |
mean_cwnd_l4 | 33.60 |
mean_delivery_rate_l4 | 15.36 |
mean_est_qdelay_l4 | 9.76 |
mean_idt_ewma_l4 | 0.76 |
mean_in_flight_l4 | 33.08 |
mean_network_power_l4 | 647.17 |
mean_one_way_delay_l7 | 1925.55 |
mean_recovery_time_l4 | 33.91 |
mean_sending_rate_l4 | 15.44 |
mean_sending_rate_l7 | 17.50 |
mean_srtt_l4 | 24.76 |
mean_throughput_l4 | 15.37 |
mean_throughput_l7 | 15.37 |
mean_utility_mpdf_l4 | -0.07 |
mean_utility_pf_l4 | 2.73 |
mean_utilization_bdp_l4 | 1.72 |
mean_utilization_bw_l4 | 0.96 |
total_retransmissions_l4 | 68.00 |
total_rtos_l4 | 0.00 |
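The estimated queueing delay in the table can be cross-checked against the sRTT: subtracting the configured base RTT (15 ms) from the mean sRTT yields the standard queueing-delay decomposition. Values are taken from the table above; this is a sanity check, not part of the measurement pipeline.

```python
# Queueing delay estimate: sRTT minus the base (propagation) RTT.
mean_srtt_ms = 24.76   # mean_srtt_l4 from the table
base_rtt_ms = 15.0     # configured path RTT (--rtts=15ms)
est_qdelay_ms = mean_srtt_ms - base_rtt_ms
print(round(est_qdelay_ms, 2))  # 9.76, matching mean_est_qdelay_l4
```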
Link Metrics
Link metrics are recorded at the network links of interest, typically at bottlenecks. They include metrics that measure queue states. Bold values indicate which link achieved the best performance.
Metric | link_5 |
---|---|
mean_qdisc_delay_l2 | 7.90 |
mean_qdisc_length_l2 | 11.03 |
mean_sending_rate_l1 | 15.95 |
total_qdisc_drops_l2 | 68.00 |
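A rough consistency check of the link metrics: by a Little's-law style argument, the mean qdisc delay should be close to the mean qdisc length divided by the bottleneck drain rate. The 1500-byte packet size is again an assumption, which is why the estimate only approximates the measured 7.90 ms.

```python
# Expected queue delay from the mean qdisc length at the bottleneck rate.
mean_qdisc_len_pkts = 11.03   # mean_qdisc_length_l2 from the table
bw_bps = 16e6                 # --bw=16Mbps
delay_ms = mean_qdisc_len_pkts * 1500 * 8 / bw_bps * 1e3
print(round(delay_ms, 2))  # 8.27, in the ballpark of the measured 7.90 ms
```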
Figures
The following figures show the results of experiment #18.
Time Series Plot of the Operating Point
Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.
Comparison of Congestion Control Algorithms (CCAs)
Figures
Mean Bandwidth Utilization vs Buffer Size
The mean bandwidth utilization vs. the buffer size for large buffers (larger than or equal to the BDP). High utilization indicates good performance. CCAs that suffer from bufferbloat inflict increasingly high queueing delays as the buffer size grows.