How To Use ECN-Spider

In this section, I illustrate a typical use case for ECN-Spider and highlight how the various scripts that make up ECN-Spider work together.

Getting The Input Ready

First, I obtain a CSV data file listing the domain names I would like to test, along with traffic rank information. To use Alexa’s list of the top 1 million domains, I did this:

ecn$ wget
ecn$ unzip ./

The list has the following format:


A record consists of a rank and a domain name. The rank is only used in the analysis at the very end, but it is carried along with the domain name through every processing script.
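As a sketch of how such a record can be read (the sample data below is hypothetical, and the parsing code is mine, not part of ECN-Spider):

```python
import csv
import io

# Hypothetical sample in the rank-and-domain format described above.
sample = "1,example.com\n2,example.org\n3,example.net\n"

# Each record is a (rank, domain) pair; the rank is kept alongside
# the domain throughout processing.
records = [(int(rank), domain) for rank, domain in csv.reader(io.StringIO(sample))]
print(records[0])  # (1, 'example.com')
```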

For most tests, I choose not to use the entire domain name list. Using the script, I can extract a shorter list consisting of two parts: the first n unique domains, and m unique domains selected at random from the remainder:

ecn$ python ./ 50000 50000 ./top-1m.csv ./subset.csv

Note that this script should always be used, even when processing the complete input list rather than a subset: besides subset selection, it also performs clean-up and other minor manipulation of the list. If it is skipped, the analysis at the end may produce incorrect results.
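The selection step can be approximated as follows; `select_subset` is my own name for illustration, and the clean-up the real script performs is not shown:

```python
import random

def select_subset(rows, n, m, seed=None):
    """Pick the first n unique domains plus m unique domains sampled
    at random from the remainder. rows is a list of (rank, domain)
    pairs assumed sorted by rank."""
    seen = set()
    unique = []
    for rank, domain in rows:
        if domain not in seen:
            seen.add(domain)
            unique.append((rank, domain))
    head, tail = unique[:n], unique[n:]
    rng = random.Random(seed)
    return head + rng.sample(tail, min(m, len(tail)))

rows = [(1, "a.com"), (2, "b.com"), (3, "a.com"), (4, "c.com"), (5, "d.com")]
subset = select_subset(rows, 2, 1)
```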

The main testing script expects an input file containing domain names and the IP addresses they resolve to. The script takes an input file and performs address resolution on the domain names therein:

ecn$ python ./ --workers 10 --www preferred ./subset.csv ./resolved.csv
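A simplified picture of what this resolution step does, using only Python’s standard library (the function names and record layout are my assumptions, not the script’s actual interface):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve(entry):
    """Resolve one (rank, domain) pair to (rank, domain, IPv4, IPv6),
    using None where no address of that family is found."""
    rank, domain = entry
    v4 = v6 = None
    try:
        for family, _, _, _, sockaddr in socket.getaddrinfo(domain, 80):
            if family == socket.AF_INET and v4 is None:
                v4 = sockaddr[0]
            elif family == socket.AF_INET6 and v6 is None:
                v6 = sockaddr[0]
    except socket.gaierror:
        pass  # unresolvable domains keep (None, None)
    return rank, domain, v4, v6

def resolve_all(entries, workers=10):
    # A pool of worker threads issues lookups concurrently, as the
    # --workers option suggests the real script does.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(resolve, entries))

resolved = resolve_all([(1, "localhost")], workers=2)
```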

With input files like Alexa’s top 1M list, resolved.csv will now contain many duplicate IP addresses, due to many popular websites being hosted on CDNs that share an IP address between multiple sites. The script ensures that both the IPv4 and IPv6 addresses of the resolved domain names are unique. Non-unique IP addresses may lead to erroneous results in the analysis.

ecn$ python ./ ./resolved.csv ./input.csv
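One plausible way to enforce that uniqueness is first-occurrence-wins: repeated addresses are blanked out, and rows left with no new address are dropped. A sketch under those assumptions (`dedup` is a hypothetical name, not the script’s):

```python
def dedup(rows):
    """Keep each IPv4/IPv6 address only the first time it appears.
    rows: (rank, domain, v4, v6) tuples; repeated addresses are
    blanked out, and rows with no remaining address are dropped."""
    seen = set()
    out = []
    for rank, domain, v4, v6 in rows:
        v4 = v4 if v4 and v4 not in seen else None
        v6 = v6 if v6 and v6 not in seen else None
        seen.update(a for a in (v4, v6) if a)
        if v4 or v6:
            out.append((rank, domain, v4, v6))
    return out

rows = [
    (1, "a.com", "192.0.2.1", None),
    (2, "b.com", "192.0.2.1", "2001:db8::1"),  # shares a.com's IPv4 (CDN)
    (3, "c.com", "192.0.2.1", None),           # contributes nothing new
]
unique_rows = dedup(rows)
```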

The list now has the following format:


Note that in this particular example, the option --www preferred for the resolution script has led to most domains in input.csv having www. prepended.
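My reading of --www preferred is that the www.-prefixed name is used whenever it resolves, with the bare name as a fallback; a sketch of that rule (the function name and the `resolves` callback are mine, for illustration):

```python
def preferred_name(domain, resolves):
    """Return the www.-prefixed name if it resolves, else the bare
    domain. `resolves` is a callable: name -> bool."""
    www = domain if domain.startswith("www.") else "www." + domain
    return www if resolves(www) else domain

# With a resolver that accepts everything, the www. form wins;
# otherwise the bare domain is kept.
chosen = preferred_name("example.com", lambda name: True)
fallback = preferred_name("example.com", lambda name: False)
```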

Running The Test

Now that the input file has been prepared, I can run ecn_spider. Before I start ECN-Spider, I run tcpdump as root in a separate shell, to capture all TCP packet headers for later analysis:

root$ tcpdump -ni eth0 -w ./ecn_spider.pcap -s 128

And now:

ecn$ python ./ --verbosity INFO --workers 64 --timeout 4 ./input.csv ./retry.csv ./ecn-spider.csv ./ecn-spider.log

This run creates three output files:

retry.csv: This file is used as the input file for later runs of ecn_spider; it contains only the IP addresses that had problems during this test run.

ecn-spider.csv: This file contains the collected test data used for further analysis.

ecn-spider.log: This file contains human-readable log data useful for debugging. It is not needed for normal use of the tools of ECN-Spider.

Benchmarking the --workers parameter

The rate at which ECN-Spider tests domains varies greatly with the number of worker threads used for testing. This number can be adjusted with the command line option --workers. Of course, the rate also depends on the round-trip time to the tested domains and the value of the --timeout option.

To find the optimal number of workers, the script can be used.
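The trade-off can be demonstrated with a toy benchmark: each simulated connection blocks for a fixed round-trip time, so adding workers increases the testing rate until some other limit is hit. This is an illustration only, not the benchmarking script itself:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_connection(_):
    time.sleep(0.01)  # stand-in for one connection's round-trip time

def benchmark(task, items, worker_counts):
    """Return {worker_count: elapsed_seconds} for running `task`
    over `items` with each pool size."""
    timings = {}
    for w in worker_counts:
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=w) as pool:
            list(pool.map(task, items))
        timings[w] = time.perf_counter() - start
    return timings

# With blocking I/O, 50 workers finish far sooner than 1 worker.
timings = benchmark(fake_connection, range(50), [1, 10, 50])
```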
