What's new
Last updated: January 15, 2024
- September 7, 2023: The results are unveiled here! Congratulations to the winners!
- The program for the ICGCA symposium, where the results will be unveiled, is available here.
- The GNU multi-precision library (libgmp-dev) and Xlib (libx11-dev) will be available on the evaluation environment at the request of some contestants.
- An error in the description of the evaluation script has been fixed; the directory path was written as /home/{user}/submission/, but the correct one is /home/icgca/{user}/submission/.
- The evaluation script had a multi-threading bug, which has been fixed; please re-download it from here.
- The Python script to be used in the evaluation is now available. You can pre-check your program with it if you wish.
- Updated some information about Docker usage. A sample code using Docker is now available. Also, pip install docker will be added to the evaluation environment, mainly for evaluating programs that use Docker.
- OpenBLAS will be available on the evaluation environment at the request of some contestants.
- The submission deadline is extended from July 10 to July 17, 23:59 (UTC-11).
- The evaluation environment is updated with installation information for CMake and Docker. In addition, OpenMPI is installed on the evaluation computers.
- A sample code using CMake is now available; one using Docker will follow soon.
Overview
The Algorithmic Foundations for Social Advancement (AFSA), one of the most significant research projects in computer science in Japan, is pleased to announce the international competition for graph counting algorithms.
The graph counting problem, i.e., counting all subgraphs that satisfy specified constraints in a given graph, is a key technique for addressing various social issues that can be abstracted as graphs. The problem has attracted attention in many applications, e.g., reliability evaluation and optimization of networked infrastructure, and structural analysis of human society such as epidemiological studies and electoral districting. You will find a useful and entertaining example on YouTube, created by a predecessor project of AFSA. As the video shows, the graph counting problem is extremely challenging; even the small 10x10 grid graph has 1,568,758,030,464,750,013,214 paths!
Contestants are encouraged to submit algorithms that count paths in a given graph. The algorithms will be ranked by accuracy, computation time, and/or ideas. The competition is held online on the following schedule.
- Benchmarks will be available on April 5, 2023.
- Submission is due on July 17, 23:59 (UTC-11), 2023 (extended from July 10).
- Results will be revealed on September 7, 2023.
A current overview of the competition is shown below. Note that the following are still under consideration and may be changed. We would appreciate your feedback to finalize the competition’s details.
Problem and benchmarks
The problem is counting all paths that satisfy given constraints in a given graph (the above video shows some instances of this problem). This problem is known to be #P-complete, a computationally challenging class.
Input
Each benchmark (an instance) consists of the following input:
- An undirected graph,
- Path terminals, and
- The maximum path length (the maximum number of edges on the path).
Output
- The number of paths between the terminals under the path length constraint.
Number of benchmarks
In the competition, 150 benchmarks are given. Since graph counting algorithms are expected to evaluate networked infrastructures and geographical systems, the benchmarks mainly include sparse graphs mimicking them.
Example benchmarks
- The input includes a graph with a terminal pair (vertices 1 and 3) and the maximum path length of two.
- The output should be "2"; there are two paths between the terminal pair with at most two edges, as follows.
- If no terminal pair is specified in the above example, the output should be "13", because there are 13 paths with at most two edges in total for every vertex pair.
Example codes
Although most contestants are likely to use low-level programming languages like C/C++, for clarity we show a Python example using Graphillion, a graph library featuring graph enumeration algorithms.
- A case of single terminal pair.
# Input
E = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)] # An undirected graph
L = 2 # The maximum path length
T = (1, 3) # The path terminals
# Initialization
from graphillion import GraphSet
GraphSet.set_universe(E)
# Count the paths
n = len(GraphSet.paths(*T).smaller(L + 1))
# Output
print(n) # 2 paths
- The case of every vertex pair.
# Input and initialization are the same and omitted.
# Count the paths between every vertex pair
from itertools import combinations
V = [1, 2, 3, 4]
n = sum([len(GraphSet.paths(s, t).smaller(L + 1)) for s, t in combinations(V, 2)])
# Output the count to stdout
print(n) # 13 paths
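The same counts can be cross-checked without Graphillion by a brute-force depth-first search over simple paths. This is a minimal sketch of ours, not contest code; it is fine for this small example but far too slow for hard benchmarks:

```python
from collections import defaultdict
from itertools import combinations

def count_paths(edges, s, t, max_len):
    """Count simple paths from s to t with at most max_len edges by DFS."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def dfs(u, visited, depth):
        if u == t:
            return 1  # a path ends as soon as it reaches the terminal t
        if depth == max_len:
            return 0
        return sum(dfs(w, visited | {w}, depth + 1)
                   for w in adj[u] if w not in visited)

    return dfs(s, {s}, 0)

E = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 4)]
L = 2
print(count_paths(E, 1, 3, L))  # 2 paths
V = [1, 2, 3, 4]
print(sum(count_paths(E, s, t, L) for s, t in combinations(V, 2)))  # 13 paths
```

The DFS enumerates every simple path explicitly, so its running time grows with the path count itself, which is exactly why the competition exists.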
Rules
An overview of the competition rules is described below; details will be added later.
Contestants
Contestants can be individuals or teams. Note that a person can belong to at most two teams.
Evaluation metrics
- The number of benchmarks correctly solved in a time budget.
- The time budget is 600 seconds in wall-clock time for each benchmark.
- Submitted programs are supposed to be executed on an organizers' computer described in the evaluation environment.
- The ideas behind the algorithm.
- The algorithmic ideas of the program are evaluated by contestants' votes. Contestants will be personally notified of how to vote.
Input format
A benchmark is given in an extended DIMACS file format. Each line begins with a letter that defines the rest of the line. The legal lines are:
- c Comment: the remainder of the line is ignored.
- p Problem: must be of the form p edge n m, where n is the number of nodes (numbered 1..n) and m is the number of edges.
- e Edge: must be of the form e v1 v2, where v1 and v2 are the endpoints of the edge.
- l Length: must be of the form l len, where len is the maximum path length (the number of edges on the path).
- t Terminals: must be of the form t v1 v2, where v1 and v2 are the path terminals. When this line is not given, count the total number of paths between every vertex pair.
The following is the input file of the above example benchmark.
c This is an example.
p edge 4 5
e 1 2
e 1 4
e 2 3
e 2 4
e 3 4
l 2
t 1 3
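As an illustration, this format can be parsed with a few lines of Python. This is a sketch of ours, not the official evaluation code; a real submission would read the benchmark from stdin rather than a string:

```python
def parse_benchmark(text):
    """Parse a benchmark in the extended DIMACS format described above."""
    edges, max_len, terminals, n, m = [], None, None, None, None
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0] == "c":   # blank line or comment
            continue
        if fields[0] == "p":                 # p edge n m
            n, m = int(fields[2]), int(fields[3])
        elif fields[0] == "e":               # e v1 v2
            edges.append((int(fields[1]), int(fields[2])))
        elif fields[0] == "l":               # l len
            max_len = int(fields[1])
        elif fields[0] == "t":               # t v1 v2 (optional)
            terminals = (int(fields[1]), int(fields[2]))
    return n, edges, max_len, terminals

example = """c This is an example.
p edge 4 5
e 1 2
e 1 4
e 2 3
e 2 4
e 3 4
l 2
t 1 3"""
n, edges, max_len, terminals = parse_benchmark(example)
print(n, len(edges), max_len, terminals)  # 4 5 2 (1, 3)
```

Remember that when `terminals` comes back as None, the program must count paths between every vertex pair.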
Output format
Path-counting programs are supposed to output a single integer, the path count, to stdout.
Specifications of the submitted program
Path-counting programs must be compiled with make, and the executable file must be named a.out. The organizers will run the program with ./a.out < <benchmark-file> and interpret the output number as the path count. To recap, the organizers will do the following.
$ make
$ ./a.out < <benchmark-file> # Run for each benchmark
# The output number is interpreted as the path count
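In Python terms, each per-benchmark run amounts to something like the following sketch. Only the ./a.out < <benchmark-file> protocol and the 600-second wall-clock budget come from the rules; the helper name run_benchmark and the error handling are our assumptions, not the organizers' actual script:

```python
import subprocess

TIME_BUDGET = 600  # wall-clock seconds per benchmark (see evaluation metrics)

def run_benchmark(executable, benchmark_path, budget=TIME_BUDGET):
    """Run a submitted program on one benchmark, mimicking
    `./a.out < <benchmark-file>`. Returns the path count as an int,
    or None on timeout or non-integer output. (Hypothetical helper.)"""
    try:
        with open(benchmark_path) as f:
            result = subprocess.run([executable], stdin=f,
                                    capture_output=True, text=True,
                                    timeout=budget)
        return int(result.stdout.strip())
    except (subprocess.TimeoutExpired, ValueError):
        return None
```

Anything other than a single integer on stdout would fail the `int()` conversion here, which is one more reason to print nothing but the count.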
Evaluation environment
Submitted programs are supposed to be executed on the following environment:
- Hardware: Intel NUC 12 Pro Kit NUC12WSHi7
- 12 CPU cores, no GPU, 64 GB main memory, and 2 TB SSD.
- Software: Ubuntu server 22.04 LTS with the following development packages
- apt-get install build-essential rust-all libboost-dev python3-pip cmake openmpi-bin libopenblas-dev libx11-dev
- pip install docker psutil
- Docker installed following the official docker page.
If you have any requests regarding the evaluation environment, e.g., installation of other packages, please send them to the organizers.
Evaluation script
Your program will be evaluated by this Python script. You can use it to pre-check your program if you wish. To run the script, follow the instructions below:
python icgca-evaluation.py -u {user} -o {output} -bms {one_pair} {all_pairs}
Please ensure that the Makefile and a.out files are in the directory /home/icgca/{user}/submission/. Additionally, store the ICGCA benchmarks (*.col) in two directories: {one_pair} and {all_pairs}. The script will save the results to {output}/{user}-results.csv.
Kindly note that this script is designed for our evaluation purposes and may not be compatible with other environments. You can freely modify the script to make it work in your environment. Keep in mind that the execution of this script is solely at your own risk and responsibility.
If you encounter any critical issues that may affect the evaluation's accuracy, please inform us promptly.
Registration for downloading benchmarks
Contestants need to register for the competition to download the benchmarks. Team composition at registration is tentative; it will be finalized at submission.
The registration page is here.
Code submission
To proceed to the submission process, contestants need to register for the competition. The registration page is here.
Registered contestants will be notified of a personalized submission page via email. Please follow the specific instructions provided in the email to complete your submission.
The notification emails will be sent out starting from approximately June 19th. Don't hesitate to contact the organizers if you have not received the notification email within two business days after registration.
Submission files
Contestants are supposed to submit their program with source code and a short paper.
- Program
- The program is submitted in either of the following formats:
- Makefile with source codes, or
- A statically linked binary, together with the source code used to compile it.
- The program must satisfy the specifications of the submitted program, i.e., it can be built with the make command and executed with ./a.out < <benchmark-file>.
- To follow the specification, refer to the following samples:
- Normal Makefile. Makefile is used to build a.out.
- CMake. Makefile is used to trigger cmake and move build/a.out to the current directory.
- Docker. Makefile is used to trigger docker build, while a.out is a shell script that calls docker container run.
- All the submitted source code will be made available online on the competition site after the submission deadline.
- Submitted programs must be released under an open-source license (e.g., Apache-2.0 or MIT) or placed in the public domain, and must contain a LICENSE.md or LICENSE.txt file at the root.
- Do not hard-code the benchmark solutions in your program!
- Short paper
- The paper briefly describes the algorithmic ideas of your program. The format is free, e.g., a one-page extended abstract.
- The paper must include the team name and team members.
Event
Although the competition is held online, the results will be revealed at a hybrid event, the annual Forum on Information Technology (FIT), in Osaka, Japan, on September 7, 2023. Anyone, not just contestants, is welcome to join the event.
Organizers
Members:
- Takeru Inoue (NTT, Japan)
- Norihito Yasuda (NTT, Japan)
- Hidetomo Nabeshima (University of Yamanashi, Japan)
- Masaaki Nishino (NTT, Japan)
- Shuhei Denzumi (NTT, Japan)
- Shin-ichi Minato (Kyoto University, Japan)
- KAKENHI Grant-in-Aid for Transformative Research Areas (A) “Algorithmic Foundations for Social Advancement”
- icgca-contact [at] afsa.jp
FAQ
- Q. Are contestants allowed to use the 12 CPUs in parallel?
- A. Yes.
- Q. How is the computation time measured in parallel computing?
- A. In wall-clock time.
- Q. Is the time limit shared across multiple benchmarks?
- A. No. The time limit (600 s) is applied to each benchmark.
- Q. Will other processes run in the evaluation environment?
- A. Evaluation will be performed on the Ubuntu server in its initial state, with the apt-get packages described above. Each submitted program will run exclusively.
- Q. Under what temperature will the evaluation be performed?
- A. Air-conditioning will be appropriately used to avoid unfairness among contestants.
- Q. What is the maximum benchmark size in the evaluation?
- A. It is the maximum size among the 150 benchmarks downloadable after registration, because those 150 benchmarks will be used as is.
- Q. Can pre-counted numbers of paths be hard-coded or downloaded from the network?
- A. No. Submitted programs cannot include the number of paths or other information prepared for each benchmark. Such programs will be disqualified.
- Q. Can contestants check the compilation result in the evaluation environment beforehand?
- A. Yes. Feel free to get in touch with the organizers.
- Q. Would contestants be disqualified if their programs gave wrong answers to some benchmarks?
- A. No. The evaluation is based on the number of benchmarks answered correctly within the time limit.
- Q. What happens if some benchmarks turn out to be invalid (e.g., a format violation)?
- A. Invalid benchmarks are excluded from the evaluation. Please let us know if you find invalid benchmarks.
- Q. Does "open source" require waiving intellectual property rights?
- A. No, contestants are not required to give up their intellectual property rights. However, we would appreciate it if academic use is free of charge.
- Q. Can a contestant use code that is subject to a nondisclosure agreement?
- A. Submitted programs must be free of rights issues. Please get in touch with the organizers if your situation is complicated.