Traffic Policing Experiment

The traffic control layer (TCL) implements packet schedulers with various functionalities. Rate limiting with Token Bucket Filters (TBFs) is used by Internet Service Providers (ISPs) to enforce user plans. Such rate limiters are common in today's networks and are challenging for congestion control algorithms (CCAs) for a variety of reasons.

Scenario

In the traffic policing experiment, the bottleneck uses a TBF queueing discipline (qdisc) to limit its transmission rate. The transmission rate of the Network Interface Controller (NIC) is set by the parameter peak_rate. The peak_rate is a hard limit on the bottleneck rate that cannot be exceeded. In contrast, the policing rate of the TBF, set by the parameter policing_rate, is an elastic soft limit: the transmission rate of the bottleneck may exceed the policing rate, but only for a limited time (specifically, until the bucket's tokens run out). Once the TBF enforces its policing rate, the transmission rate of the bottleneck is abruptly throttled to the value of policing_rate.
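The token-bucket mechanics described above can be sketched as follows. This is a minimal illustrative model, not the ns-3 implementation; the parameter names mirror the experiment's policing_rate and burst size, and the class and method names are hypothetical:

```python
class TokenBucket:
    """Illustrative token-bucket policer: tokens accrue at the policing
    rate up to the burst size; a packet conforms only if enough tokens
    remain, which is why the policing rate can be exceeded only until
    the bucket empties."""

    def __init__(self, policing_rate_bps, burst_bytes):
        self.rate = policing_rate_bps / 8.0   # token refill rate in bytes/s
        self.burst = burst_bytes              # bucket capacity in bytes
        self.tokens = burst_bytes             # bucket starts full
        self.last = 0.0                       # time of last update (s)

    def conforms(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True   # packet may be transmitted (still bounded by peak_rate)
        return False      # bucket empty: packet is queued or dropped
```

For example, with a 4 Mbit/s policing rate and a 15 kB burst, ten back-to-back 1500-byte packets drain the bucket; the eleventh must wait until tokens refill at the policing rate.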

To summarize the experiment setup:

  • Topology: Dumbbell topology with static network parameters and a TBF qdisc at the bottleneck

  • Flows: A single flow (\(K=1\)) that uses a CCA

  • Traffic Generation Model: Greedy source traffic
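Outside the simulator, a comparable policer can be configured on a Linux interface with the tc tbf qdisc. The following is an illustrative config fragment mirroring the experiment's policing_rate and peak_rate; the interface name eth0 and the burst and latency values are placeholders, not taken from the experiment:

```shell
# Long-term rate of 4 Mbit/s with a 1 Gbit/s hard peak.
# burst: bucket size, i.e. how much traffic may exceed the policing rate;
# latency: maximum queueing delay before packets are dropped;
# mtu: required by tbf when peakrate is set.
tc qdisc add dev eth0 root tbf rate 4mbit burst 16kb latency 50ms peakrate 1gbit mtu 1540
```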

Experiment Results

Experiment #56

Parameters

Command: ns3-dev-ccperf-traffic-policing-default --experiment-name=traffic_policing --db-path=benchmark_TcpNewReno.db '--parameters={aut:TcpNewReno,policing_rate:4Mbps,peak_rate:1Gbps}' --aut=TcpNewReno --stop-time=15s --seed=42 --policing-rate=4Mbps --peak-rate=1Gbps --bw=16Mbps --loss=0.0 --qlen=20p --qdisc=FifoQueueDisc --rtts=15ms --sources=src_0 --destinations=dst_0 --protocols=TCP --algs=TcpNewReno --recoveries=TcpPrrRecovery --start-times=0s --stop-times=15s '--traffic-models=Greedy(bytes=0)'

Flows

src dst transport_protocol cca cc_recovery_alg traffic_model start_time stop_time
src_0 dst_0 TCP TcpNewReno TcpPrrRecovery Greedy(bytes=0) 0.00 15.00

Metrics

The following tables list the flow, link, and network metrics of experiment #56. Refer to the metrics page for definitions of the listed metrics.

Flow Metrics

Flow metrics capture the performance of an individual flow. They are measured at the endpoints of a network path, at the source, the receiver, or both. Bold values indicate which flow achieved the best performance.

Metric flow_1
cov_in_flight_l4 0.55
cov_throughput_l4 0.47
flow_completion_time_l4 15.00
mean_cwnd_l4 23.23
mean_delivery_rate_l4 3.78
mean_est_qdelay_l4 43.48
mean_idt_ewma_l4 3.12
mean_in_flight_l4 22.74
mean_network_power_l4 74.09
mean_one_way_delay_l7 5963.57
mean_recovery_time_l4 142.57
mean_sending_rate_l4 3.88
mean_sending_rate_l7 5.91
mean_srtt_l4 58.48
mean_throughput_l4 3.78
mean_throughput_l7 3.78
mean_utility_mpdf_l4 -0.26
mean_utility_pf_l4 1.36
mean_utilization_bdp_l4 0.02
mean_utilization_bw_l4 0.00
total_retransmissions_l4 120.00
total_rtos_l4 1.00

Figures

The following figures show the results of experiment #56.

Time Series Plot of the Operating Point

Time series plot of the number of segments in flight, the smoothed round-trip time (sRTT), and the throughput at the transport layer.

In Flight vs Mean Operating Point

The mean throughput and mean smoothed round-trip time (sRTT) at the transport layer of each flow. The optimal operating point is highlighted with a star (magenta). The joint operating point is given by the aggregated throughput and the mean sRTT over all flows.

Distribution of the Operating Point

The empirical cumulative distribution function (eCDF) of the throughput and smoothed round-trip time (sRTT) at the transport layer of each flow.
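An eCDF of this kind can be computed directly from per-flow samples. A minimal sketch (the function name and the sample values are illustrative, not from the experiment):

```python
def ecdf(samples):
    """Empirical CDF: returns sorted sample values x and cumulative
    probabilities y, where y[i] is the fraction of samples <= x[i].
    Plotting (x, y) as a step function yields the eCDF curve."""
    xs = sorted(samples)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]

# e.g. throughput samples in Mbit/s taken over the flow's lifetime
x, y = ecdf([3.1, 4.0, 3.8, 3.9, 2.5])
```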

Comparison of Congestion Control Algorithms (CCAs)

Figures