Results of Experiments and Benchmarks

While experiments are running, performance metrics are saved to a database. These metrics are the raw output data of experiments. The raw data is post-processed to generate results in the form of tables and figures. Which tables and figures are generated depends on the experiment type. The generation is implemented in report.py. Visit ccperf.net to have a look at the results of some benchmark runs.

We differentiate three types of results:

  • experiment results: Metrics obtained from a single experiment run.
  • benchmark results: Metrics obtained from multiple experiments that belong to the same benchmark run. Benchmark results show how the performance of the Congestion Control Algorithm (CCA) under test is affected by different experiment parameters (see the sketch after this list).
  • comparative results: Metrics that compare experiments from different benchmark runs. Comparative results are useful for comparing different CCAs.
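
As an illustration, the following sketch derives a benchmark-style result from the raw data: it averages one flow metric per experiment of a benchmark run and groups the averages by an experiment parameter. The table names follow the schema described below, but the metric column cwnd and the parameter column bottleneck_bandwidth are assumptions made for illustration; the actual post-processing lives in report.py.

```python
import sqlite3
from contextlib import closing

import pandas as pd

def benchmark_result(db_file, parameter="bottleneck_bandwidth"):
    """Average a flow metric per parameter value (hypothetical sketch).

    The cwnd metric column and the parameter column are assumed names;
    consult the schema figure for the real column set.
    """
    query = f"""
        SELECT e.{parameter} AS parameter, AVG(m.cwnd) AS avg_cwnd
        FROM flow_metrics AS m
        JOIN experiments AS e ON e.id = m.experiment_id
        GROUP BY e.{parameter}
    """
    with closing(sqlite3.connect(db_file)) as conn:
        return pd.read_sql_query(query, conn)

# One database per benchmark run and CCA under test (see below):
print(benchmark_result("results/ns3/benchmark_cubic.db"))
```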

Database Schema

The metrics captured in experiments are saved locally to SQLite databases. The databases contain the raw output data that has not yet been post-processed. Depending on whether an experiment belongs to a benchmark, different database files are used to store the raw data. Benchmark data is saved to results/<backend>/benchmark_<CCA>.db, where <backend> is the selected backend (as of now, ns3 is the only option) and <CCA> is the CCA under test. The data of single experiment runs is saved to results/<backend>/experiments.db. The following figure depicts the schema of the database (created with this schema diagram generator, thanks!):
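
As a minimal sketch of this layout, the helper below resolves the database file for a run; the function name db_path is hypothetical and not part of the tool.

```python
from pathlib import Path

def db_path(backend, cca=None):
    """Resolve the SQLite file for a run (hypothetical helper).

    Benchmark runs get one database per CCA under test; single
    experiment runs share one experiments.db per backend.
    """
    base = Path("results") / backend          # e.g. results/ns3
    if cca is not None:
        return base / f"benchmark_{cca}.db"   # e.g. results/ns3/benchmark_cubic.db
    return base / "experiments.db"
```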

The database consists of the following tables:

  • experiments: Tracks experiment runs. Each experiment has a unique id and some meta information. Other tables that contain experiment data have a foreign key named experiment_id that references experiments.id.
  • flows: Meta information about flows that transmitted data throughout an experiment.
  • links: Meta information about selected links present in an experiment.
  • flow_metrics: Flow metric samples recorded while experiments are running. For brevity, only two metric types are shown in the figure even though many more exist.
  • link_metrics: Link metric samples recorded while experiments are running. For brevity, only two metric types are shown in the figure even though many more exist.
  • metric_classes: Meta information about which types of metrics are available.
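
To illustrate how these tables relate, here is a hedged sketch that lists the metric samples of all flows of one experiment. It joins along the experiment_id foreign key described above; the flow_id join column and the time and cwnd columns are assumptions, since the full column set is only shown in the schema figure.

```python
import sqlite3
from contextlib import closing

# Hypothetical sketch: metric samples of every flow of experiment 1.
# flows carries the experiment_id foreign key; the flow_id, time, and
# cwnd columns are assumed for illustration.
QUERY = """
    SELECT f.id AS flow_id, m.time, m.cwnd
    FROM flows AS f
    JOIN flow_metrics AS m ON m.flow_id = f.id
    WHERE f.experiment_id = ?
    ORDER BY m.time
"""

with closing(sqlite3.connect("results/ns3/experiments.db")) as conn:
    for flow_id, time, cwnd in conn.execute(QUERY, (1,)):
        print(flow_id, time, cwnd)
```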