AFSA

International Competition on Graph Counting Algorithms 2024

What's new

March 25, 2024

Overview

After the resounding success of ICGCA 2023, we are excited to announce the planning of ICGCA 2024.

The graph counting problem, i.e., counting all subgraphs that satisfy specified constraints in a given graph, is a key technique for addressing various social issues that can be abstracted as graphs. The problem has attracted attention in many applications, e.g., reliability evaluation and optimization of networked infrastructure, and structural analysis of human society such as epidemiological studies and electoral districting. You will find a useful and fun example on YouTube, created by a predecessor project of AFSA. As the video shows, the graph counting problem is extremely challenging; e.g., even a small 10x10 grid graph has 1,568,758,030,464,750,013,214 paths!

In ICGCA 2023, contestants submitted solvers that count paths on a given graph. Building on the strong foundation set by the previous year, ICGCA 2024 aims to explore new frontiers in graph counting algorithms. Key changes and additions for the 2024 competition are summarized in the FAQ below.

ICGCA is held by the Algorithmic Foundations for Social Advancement (AFSA) project, one of the most significant research projects in computer science in Japan, to foster the foundations of computer science. The competition is held online on the following schedule.

The solvers will be ranked by accuracy, computation time, and/or ideas. Details of the competition are given below. Note that these details are still under consideration and may change until April 2024. We would appreciate your feedback in finalizing the competition's details; see the Organizers section for contact information.

Problem and benchmarks

The problem is to count all paths that satisfy given constraints on a given graph (the above video shows some instances of this problem). This problem is known to be #P-complete, a computationally challenging class.

Input

Each benchmark (an instance) consists of the following input. The input format is described below.

Output

The number of paths between the terminals under the path length constraint.

Example benchmarks

Example input
Example output

Example code for undirected graphs

Although most contestants are likely to use low-level programming languages like C/C++, for clarity we show a Python example using Graphillion, a graph library featuring graph enumeration algorithms.

# Input
E = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)]  # An undirected graph
L = 2  # The maximum path length
T = (1, 3)  # The path terminals

# Initialization
from graphillion import GraphSet
GraphSet.set_universe(E)

# Count the paths
n = len(GraphSet.paths(*T).smaller(L + 1))

# Output
print(n)  # 2 paths
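For a cross-check without Graphillion, the same count can be computed by a brute-force depth-first search. This is only a correctness sketch; exhaustive enumeration does not scale to competition-sized benchmarks.

```python
# Brute-force DFS counter for the same instance, shown only as a
# cross-check; real solvers will need far more scalable algorithms.
from collections import defaultdict

E = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)]  # An undirected graph
L = 2                                          # The maximum path length
T = (1, 3)                                     # The path terminals

adj = defaultdict(set)
for u, v in E:
    adj[u].add(v)
    adj[v].add(u)

def count_paths(u, goal, budget, visited):
    """Count simple paths from u to goal using at most `budget` more edges."""
    if u == goal:
        return 1
    if budget == 0:
        return 0
    total = 0
    for w in adj[u]:
        if w not in visited:
            total += count_paths(w, goal, budget - 1, visited | {w})
    return total

print(count_paths(T[0], T[1], L, {T[0]}))  # 2, matching the Graphillion result
```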

Number of benchmarks

In the competition, 100 benchmarks are given for each category: 50 public instances and 50 private instances. The benchmarks are prepared separately for undirected and directed graphs, but shared among single-thread and multi-thread categories.

How to download public benchmarks

Contestants are supposed to register to download public benchmarks.

Categories

ICGCA 2024 holds the following four categories. Solvers are separately submitted and ranked for the categories.

Rules

Contestants

Contestants can be individuals or teams. A team can submit one solver for each category. Note that a person can belong to at most two teams.

Evaluation metrics

  1. The number of benchmarks correctly solved in a time budget.
    • The time budget is 600 seconds in wall-clock time for each benchmark.
    • Submitted solvers are supposed to be executed on the organizers' computer described in the evaluation environment.
    • Solvers that give wrong answers to any benchmark are disqualified. Note that, since the correctness of answers to path-counting problems is difficult to verify, this rule will in practice only apply to relatively small benchmarks that are easily solvable by the organizers.
  2. An idea inspiring algorithms.
    • Algorithmic ideas of the solver are evaluated by contestants' votes. Contestants will be personally notified of how to vote.
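To see how the 600-second wall-clock budget behaves in practice, contestants can time their solver locally. The following is a minimal sketch using Python's subprocess module; it mirrors, but does not reproduce, the official evaluation, and the solver command and benchmark path are placeholders for your own files.

```python
# Local timing harness sketch: runs a solver on one benchmark with the
# 600-second wall-clock budget from the rules. The solver command and
# benchmark file are yours; this is not the official evaluation script.
import subprocess
import time

TIME_BUDGET = 600  # seconds of wall-clock time per benchmark

def run_solver(solver_cmd, benchmark):
    """Return (path count, elapsed seconds), or (None, TIME_BUDGET) on timeout."""
    start = time.monotonic()
    try:
        with open(benchmark) as f:
            result = subprocess.run(
                solver_cmd, stdin=f, capture_output=True,
                text=True, timeout=TIME_BUDGET,
            )
    except subprocess.TimeoutExpired:
        return None, TIME_BUDGET  # over budget: the benchmark counts as unsolved
    return int(result.stdout.strip()), time.monotonic() - start
```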

Input format

A benchmark is given in an extended DIMACS file format. Each line begins with a letter that defines the rest of the line. The legal lines are:

The following is the input file of the above example benchmark.

c This is an example.
p edge 4 5
e 1 2
e 1 4
e 2 3
e 2 4
e 3 4
l 2
t 1 3
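A minimal reader for this format might look like the following Python sketch. It handles only the line types visible in the example above (`c`, `p`, `e`, `l`, `t`); directed benchmarks may use a different edge letter, which is not covered here.

```python
# Sketch of a reader for the extended DIMACS format, based only on
# the line types appearing in the example above (c, p, e, l, t).
def parse_benchmark(lines):
    n_vertices, edges = 0, []
    max_len, terminals = None, None
    for line in lines:
        fields = line.split()
        if not fields or fields[0] == "c":   # blank or comment line
            continue
        if fields[0] == "p":                 # problem line: p edge <#vertices> <#edges>
            n_vertices = int(fields[2])
        elif fields[0] == "e":               # edge line: e <u> <v>
            edges.append((int(fields[1]), int(fields[2])))
        elif fields[0] == "l":               # maximum path length
            max_len = int(fields[1])
        elif fields[0] == "t":               # path terminals
            terminals = (int(fields[1]), int(fields[2]))
    return n_vertices, edges, max_len, terminals
```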

Output format

Path-counting solvers are supposed to output a single integer, the path count, to stdout.

Specifications of the submitted solver

Path-counting solvers must be compiled with make, and the executable file must be named a.out. The organizers will run the solver by ./a.out < <benchmark-file> and interpret the output number as the path count. To recap, what the organizers will do is as follows.

$ make
$ ./a.out < <benchmark-file>  # Run for each benchmark
???  # Interpret the output number as the path count
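As an illustration, a minimal Makefile satisfying this specification might look as follows; solver.cpp is a placeholder for your own sources, and the compiler flags are only an example.

```make
# Minimal sketch of a Makefile matching the specification above.
# "solver.cpp" is a placeholder for your own source files.
a.out: solver.cpp
	g++ -O2 -std=c++17 -o a.out solver.cpp
```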

Evaluation environment

Submitted solvers are supposed to be executed on the following environment:

If you have any requests on the evaluation environment, e.g., installation of other packages, please send the requests to the organizers.

Evaluation script

Your solver will be evaluated by this Python script. You can use it to pre-check your solver if you wish. To run the script, follow the instructions below:

python icgca-evaluation.py -u {user} -o {output} -bms {benchmark_dir}

Please ensure that the Makefile and a.out files are in the directory /home/icgca/{user}/submission/. Additionally, store the ICGCA benchmarks (*.col) in the {benchmark_dir} directory (undirected or directed). The script will save the results to {output}/{user}-results.csv.

Kindly note that this script is designed for our evaluation purposes and may not be compatible with other environments. You can freely modify the script to make it work in your environment. Keep in mind that the execution of this script is solely at your own risk and responsibility.

If you encounter any critical issues that may affect the evaluation's accuracy, please inform us promptly.

Registration

Contestants need to register for the competition to download the benchmarks. The team composition given at registration is tentative; it will be finalized at submission.

The registration page will be open in April, 2024.

Submission

To proceed to the submission process, contestants need to register in advance. Registered contestants will be notified of a personalized submission page via email around July. Don't hesitate to contact the organizers if you have not received the notification email. Please follow the specific instructions provided in the email to complete your submission.

Submission files

Contestants are supposed to submit their solver with source code and a short paper. Contestants can also submit instances, which may be selected as benchmarks for the evaluation.

  1. Solver
    • Solvers should be submitted separately for each category; if a team wants to participate in all four categories, they need to submit four solvers (their implementation can be different).
    • The solver is submitted in either of the following formats:
      1. Makefile with source codes, or
      2. Statically linked binary with source codes used to compile.
    • The solver must satisfy the specifications of the submitted solver, i.e., it can be built with make command and executed with ./a.out < <benchmark-file>.
      • To follow the specification, refer to the following samples:
        • Normal Makefile: Makefile is used to build a.out.
        • CMake: Makefile is used to trigger cmake and move build/a.out to the current directory.
        • Docker: Makefile is used to trigger docker build, whereas a.out is a shell script that calls docker container run.
    • All the submitted source codes will be made available online on the competition site after the submission deadline.
    • Submitted solvers must follow an open-source license (e.g., Apache-2.0, MIT, or public domain) and must contain a LICENSE.md or LICENSE.txt file at the root.
    • Do not hard-code the benchmark solutions in your solver!
  2. Short paper
    • A single paper is submitted from each team, regardless of participation categories.
    • The paper should briefly describe your solver's algorithmic ideas.
    • The paper can be in any format, e.g., a one-page extended abstract.
    • The paper must include the team name and team members.
  3. Instances (optional)
    • Each team can submit at most ten instances of undirected graphs and/or ten instances of directed graphs.
    • The instances must follow the specified format. It is strongly recommended to write the origin of the graph in the comment line.
    • The organizers will select instances for the evaluation based on their hardness. The selected instances will be revealed after the competition, when all the benchmarks will be made available.

Event

Although the competition is held online, the results will be revealed at an event of the Japanese Society for Artificial Intelligence in October 2024. Anyone, including but not limited to contestants, is welcome to join the event.

Organizers

Members:

Support: KAKENHI Grant-in-Aid for Transformative Research Areas (A) "Algorithmic Foundations for Social Advancement"

Contact: icgca-contact [at] afsa.jp

FAQ

Q. Summarize the differences from ICGCA2023.
A. The following are the differences:
  • Tracks: Digraph and single-thread categories are added.
  • Benchmarks: Private ones are introduced. Contestants can submit benchmarks. Benchmarks for all-pairs (PCA) have been discontinued.
  • Solvers: Solvers that output even one wrong solution will be disqualified. Submitted codes cannot include operations that require root privileges.
Q. Is the PCA (path counting for all vertex pairs) problem dropped in ICGCA 2024?
A. Yes. ICGCA 2024 only provides the PCS (path counting for a single vertex pair) problem.
Q. Will other processes run in the evaluation environment?
A. Evaluation will be performed on the Ubuntu server in its initial state, with packages installed via apt-get as described above. Each submitted solver will run exclusively.
Q. Under what temperature will the evaluation be performed?
A. Air-conditioning will be appropriately used to avoid unfairness among contestants.
Q. What is the maximum benchmark size in the evaluation?
A. Although it depends on instances to be submitted from contestants, we assume that the maximum number of edges is less than 10,000.
Q. Can the number of pre-counted paths be hard-coded or downloaded from the network?
A. No. Submitted solvers cannot include the number of paths or other precomputed information for each benchmark. Such solvers will be disqualified.
Q. Can contestants check the compilation result in the evaluation environment beforehand?
A. Yes. Feel free to get in touch with the organizers.
Q. Does "open source" require the disclaimer of intellectual property rights?
A. No, contestants are not required to give up their intellectual property rights. However, we would appreciate it if academic use is free of charge.
Q. A contestant wants to use code violating nondisclosure agreements.
A. Submitted solvers must be free of any rights issues. Please get in touch with the organizers if your case is complicated.