AFSA

International Competition on Graph Counting Algorithms 2023

Overview

The Algorithmic Foundations for Social Advancement (AFSA), one of the most significant research projects in computer science in Japan, is pleased to announce the international competition for graph counting algorithms.

The graph counting problem, i.e., counting all subgraphs that satisfy specified constraints in a given graph, is a key technique for addressing various social issues that can be abstracted as graphs. This problem has attracted attention in many applications, e.g., reliability evaluation and optimization of networked infrastructure, and structural analysis of human society such as epidemiological studies and electoral districting. You will find a useful and entertaining example on YouTube, created by a predecessor project of AFSA. As the video shows, the graph counting problem is extremely challenging; even the small 10x10 grid graph has 1,568,758,030,464,750,013,214 paths!
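To get a feel for this growth, the following brute-force sketch (our own illustration, not competition code) counts simple paths between opposite corners of an n-by-n vertex grid in plain Python. It already becomes hopeless well before a 10x10 grid, which is why dedicated counting algorithms are needed:

```python
# Brute-force count of simple paths between opposite corners of an
# n-by-n vertex grid graph. Illustrative only: the running time grows
# explosively with n, so this is unusable for competition-size graphs.

def grid_neighbors(v, n):
    """Yield the grid neighbors of vertex v = (x, y) in an n-by-n grid."""
    x, y = v
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < n and 0 <= ny < n:
            yield (nx, ny)

def count_paths(n):
    """Count simple paths from corner (0, 0) to corner (n-1, n-1)."""
    start, goal = (0, 0), (n - 1, n - 1)

    def dfs(v, visited):
        if v == goal:
            return 1
        total = 0
        for w in grid_neighbors(v, n):
            if w not in visited:
                visited.add(w)
                total += dfs(w, visited)
                visited.remove(w)
        return total

    return dfs(start, {start})

print(count_paths(3))  # 12
print(count_paths(4))  # 184
```

These small values (12 paths on a 3x3 vertex grid, 184 on a 4x4 one) hint at the combinatorial explosion behind the 21-digit count quoted above.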

Contestants are encouraged to submit algorithms that count paths on a given graph. The algorithms will be ranked by accuracy, computation time, and/or ideas. The competition is held online according to the following schedule.

A current overview of the competition is shown below. Note that the following are still under consideration and may be changed. We would appreciate your feedback to finalize the competition’s details.

Problem and benchmarks

The problem is to count all paths that satisfy given constraints on a given graph (the above video shows some instances of this problem). This problem is known to be #P-complete, a computationally challenging class.

Input

Each benchmark (an instance) consists of the following input. The input format is described below. If no terminals are specified, count the total number of paths between every vertex pair.

Output

Number of benchmarks

In the competition, 150 benchmarks are given. Since graph counting algorithms are expected to be applied to networked infrastructures and geographical systems, the benchmarks mainly consist of sparse graphs mimicking such systems.

Example benchmarks

Example codes

Although most contestants are likely to use low-level programming languages like C/C++, for clarity we show a Python example using Graphillion, a graph library featuring graph enumeration algorithms.

# Input
E = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)]  # An undirected graph
L = 2  # The maximum path length
T = (1, 3)  # The path terminals

# Initialization
from graphillion import GraphSet
GraphSet.set_universe(E)

# Count the paths
n = len(GraphSet.paths(*T).smaller(L + 1))

# Output
print(n)  # 2 paths

# Input and initialization are the same as above and omitted.

# Count the paths between every vertex pair
from itertools import combinations
V = [1, 2, 3, 4]
n = sum([len(GraphSet.paths(s, t).smaller(L + 1)) for s, t in combinations(V, 2)])

# Output the count to stdout
print(n)  # 13 paths
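For readers without Graphillion installed, the same two counts can be cross-checked with a plain-Python DFS over bounded-length simple paths (a standalone sketch of ours, not part of the official examples):

```python
# Cross-check of the Graphillion example via depth-first enumeration of
# simple paths with at most L edges. No external libraries required.

from collections import defaultdict
from itertools import combinations

E = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)]  # An undirected graph
L = 2                                          # The maximum path length

adj = defaultdict(set)
for u, v in E:
    adj[u].add(v)
    adj[v].add(u)

def count_paths(s, t, max_len):
    """Count simple s-t paths with at most max_len edges."""
    def dfs(v, visited, length):
        if v == t:
            return 1
        if length == max_len:
            return 0
        return sum(dfs(w, visited | {w}, length + 1)
                   for w in adj[v] if w not in visited)
    return dfs(s, {s}, 0)

print(count_paths(1, 3, L))  # 2 paths between terminals 1 and 3
print(sum(count_paths(s, t, L)
          for s, t in combinations(sorted(adj), 2)))  # 13 paths over all pairs
```

The outputs agree with the Graphillion results above; of course, explicit enumeration like this does not scale to the competition benchmarks.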

Rules

An overview of the competition rules is given below; details will be added later.

Contestants

Contestants can be individuals or teams. Note that a person can belong to at most two teams.

Evaluation metrics

  1. The number of benchmarks correctly solved in a time budget.
    • The time budget is 600 seconds in wall-clock time for each benchmark.
    • Submitted programs will be executed on the organizers' computer described in the evaluation environment.
  2. An idea inspiring algorithms.
    • Algorithmic ideas of the program are evaluated by contestants' votes. Contestants will be personally notified of how to vote.

Input format

A benchmark is given in an extended DIMACS file format. Each line begins with a letter that defines the rest of the line. The legal lines are:

  • c <comment>: a comment line, ignored by solvers.
  • p edge <n> <m>: the problem line, giving the number of vertices n and the number of edges m.
  • e <u> <v>: an undirected edge between vertices u and v.
  • l <k>: the maximum path length (number of edges).
  • t <s> <t>: the path terminals; if omitted, count paths between every vertex pair.

The following is the input file of the above example benchmark.

c This is an example.
p edge 4 5
e 1 2
e 1 4
e 2 3
e 2 4
e 3 4
l 2
t 1 3
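As a starting point, the file above can be parsed with a few lines of Python. This is a sketch based on the example; the field meanings are inferred from it, so verify them against the official format description:

```python
def parse_benchmark(lines):
    """Parse the extended DIMACS format shown above (inferred fields)."""
    edges, terminals, max_len = [], None, None
    n_vertices = n_edges = 0
    for line in lines:
        fields = line.split()
        if not fields or fields[0] == 'c':   # skip blanks and comments
            continue
        if fields[0] == 'p':                 # problem line: p edge <#vertices> <#edges>
            n_vertices, n_edges = int(fields[2]), int(fields[3])
        elif fields[0] == 'e':               # undirected edge: e <u> <v>
            edges.append((int(fields[1]), int(fields[2])))
        elif fields[0] == 'l':               # maximum path length: l <k>
            max_len = int(fields[1])
        elif fields[0] == 't':               # path terminals: t <s> <t>
            terminals = (int(fields[1]), int(fields[2]))
    return n_vertices, edges, max_len, terminals

example = """\
c This is an example.
p edge 4 5
e 1 2
e 1 4
e 2 3
e 2 4
e 3 4
l 2
t 1 3
"""
n, E, L, T = parse_benchmark(example.splitlines())
print(n, E, L, T)  # 4 [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)] 2 (1, 3)
```

In a real submission the lines would come from sys.stdin instead of a string, matching the ./a.out < <benchmark-file> convention below.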

Output format

Path-counting programs must output a single integer, the path count, to stdout.

Specifications of the submitted program

Path-counting programs must be compiled with make, and the executable file must be named a.out. The organizers will run the program with ./a.out < <benchmark-file> and interpret the output number as the path count. To recap, what the organizers will do is as follows.

$ make
$ ./a.out < <benchmark-file>  # Run for each benchmark
???  # Interpret the output number as the path count

Evaluation environment

Submitted programs are supposed to be executed on the following environment:

If you have any requests on the evaluation environment, e.g., installation of other packages, please send the requests to the organizers.

Evaluation script

Your program will be evaluated by this Python script. You can use it to pre-check your program if you wish. To run the script, follow the instructions below:

python icgca-evaluation.py -u {user} -o {output} -bms {one_pair} {all_pairs}

Please ensure that the Makefile and a.out files are in the directory /home/icgca/{user}/submission/. Additionally, store the ICGCA benchmarks (*.col) in two directories: {one_pair} and {all_pairs}. The script will save the results to {output}/{user}-results.csv.

Kindly note that this script is designed for our evaluation purposes and may not be compatible with other environments. You can freely modify the script to make it work in your environment. Keep in mind that the execution of this script is solely at your own risk and responsibility.

If you encounter any critical issues that may affect the evaluation's accuracy, please inform us promptly.

Registration for downloading benchmarks

Contestants need to register for the competition to download the benchmarks. Team composition at registration is tentative; it will be finalized at submission.

The registration page is here.

Code submission

To proceed to the submission process, contestants need to register for the competition. The registration page is here.

Registered contestants will be notified of a personalized submission page via email. Please follow the specific instructions provided in the email to complete your submission.

The notification emails will be sent out starting from approximately June 19th. Don't hesitate to contact the organizers if you have not received the notification email within two business days after registration.

Submission files

Contestants are supposed to submit their program with source code and a short paper.
  1. Program
    • The program is submitted in either of the following formats:
      1. Makefile with source code, or
      2. Statically linked binary with the source code used to compile it.
    • The program must satisfy the specifications of the submitted program, i.e., it can be built with the make command and executed with ./a.out < <benchmark-file>.
      • To follow the specification, refer to the following samples:
      • Normal Makefile. Makefile is used to build a.out.
      • CMake. Makefile is used to trigger cmake and move build/a.out to the current directory.
      • Docker. Makefile is used to trigger docker build, and a.out is a shell script that calls docker container run.
    • All the submitted source code will be made available online on the competition site after the submission deadline.
    • Submitted programs must follow an open source license (e.g. Apache2, MIT, or public domain) and must contain a LICENSE.md or LICENSE.txt file at the root.
    • Do not hard-code the benchmark solutions in your program!
  2. Short paper
    • The paper briefly describes your program's algorithmic ideas.
    • The format is free, e.g., a one-page extended abstract.
    • The paper must include the team name and team members.

Event

Although the competition is held online, the results will be revealed at a hybrid event, the annual Forum on Information Technology (FIT), in Osaka, Japan, on September 7, 2023. Everyone, not only contestants, is welcome to join the event.

Organizers

Members:
Support:
  • KAKENHI Grant-in-Aid for Transformative Research Areas (A) “Algorithmic Foundations for Social Advancement”
Contact:
  • icgca-contact [at] afsa.jp

FAQ

Q. Are contestants allowed to use the 12 CPUs in parallel?
A. Yes.
Q. How to measure the computation time in parallel computing?
A. In wall-clock time.
Q. Can a contestant solve multiple benchmarks within the time limit?
A. No. The time limit (600 s) is applied to each benchmark.
Q. Will other processes run in the evaluation environment?
A. Evaluation will be performed on the Ubuntu server in its initial state, with apt-get packages as described above. Each submitted program will run exclusively.
Q. Under what temperature will the evaluation be performed?
A. Air-conditioning will be appropriately used to avoid unfairness among contestants.
Q. What is the maximum benchmark size in the evaluation?
A. It is the maximum size among the 150 benchmarks downloaded after registration, because those 150 benchmarks will be used as is.
Q. Can the number of pre-counted paths be hard-coded or downloaded from the network?
A. No. Submitted programs must not include the number of paths or other per-benchmark information. Such programs will be disqualified.
Q. Can contestants check the compilation result in the evaluation environment beforehand?
A. Yes. Feel free to get in touch with the organizers.
Q. Would contestants be disqualified if their programs gave some benchmarks wrong answers?
A. No. The evaluation is based on the number of benchmarks answered correctly within the time limit.
Q. How will it be dealt with if some benchmarks are invalid (e.g., a format violation)?
A. Invalid benchmarks are excluded from the evaluation. Please let us know if you find invalid benchmarks.
Q. Does "open source" require the disclaimer of intellectual property rights?
A. No, contestants are not required to give up their intellectual property rights. However, we would appreciate it if academic use is free of charge.
Q. What if a contestant wants to use code that would violate a nondisclosure agreement?
A. Submitted programs must be free of any rights issues. Please get in touch with the organizers if you have complicated matters.