Participation Details

Participation Timeline

For the exact dates and deadlines, please see the challenge schedule.

Training Phase. Register (see FAQ) to download the co-registered, skull-stripped, and annotated training data.

Validation Phase. An independent set of validation scans will be made available to the participants in June, via CBICA’s Image Processing Portal (IPP), to allow them to assess the generalizability of their methods on unseen data. The FeTS Challenge leaderboard will be available through a link from this page. Note that this may not reflect the out-of-distribution generalization targeted in task 2. For task 1, validation-data model outputs submitted for placement on the leaderboard must come from a model trained using the run_challenge_experiment function, as shown in the Jupyter notebook Challenge/Task_1/FeTS_Challenge.ipynb of the FeTS Competition Supporting Code Repository. In addition, the model must be trained within the maximum simulated time of one week, and must be trained using the split from partitioning_2.csv (see the README under Challenge/Task_1 for further details on simulated time and the partitioning csvs).
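For orientation, the sketch below outlines roughly what such a training run could look like. It is a minimal illustration only: the import path and the keyword-argument names are assumptions on our part, so please treat the notebook Challenge/Task_1/FeTS_Challenge.ipynb as the authoritative reference.

    # Minimal sketch of a Task 1 training run -- the import path and keyword
    # arguments are assumptions for illustration; the notebook
    # Challenge/Task_1/FeTS_Challenge.ipynb defines the actual interface.
    from fets_challenge import run_challenge_experiment  # assumed import path

    # Participants plug their custom federation logic (aggregation, collaborator
    # selection, per-round hyper-parameters) into this call; the two hard
    # constraints stated above are the partitioning_2.csv split and the
    # one-week simulated time budget.
    scores = run_challenge_experiment(
        aggregation_function=my_aggregation_function,             # participant-defined
        choose_training_collaborators=my_collaborator_selector,   # participant-defined
        training_hyper_parameters_for_round=my_hyper_parameters,  # participant-defined
        institution_split_csv_filename="partitioning_2.csv",      # required split
        rounds_to_train=10,  # keep the total simulated time under one week
    )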

Short Paper submission deadline. Participants will have to evaluate their methods on the training and validation datasets, and submit a short paper (8-10 LNCS pages, together with the “LNCS Consent to Publish” form) describing their method and results to the BrainLes CMT submission system, making sure to choose FeTS as the “Track”. Please ensure that you include the appropriate citations mentioned at the bottom of the “Data” section. This unified scheme should allow for appropriate preliminary comparisons and the creation of the pre- and post-conference proceedings. Participants are allowed to submit longer papers to the MICCAI 2021 BrainLes Workshop by choosing “BrainLes” as the “Track”. FeTS papers will be part of the BrainLes workshop proceedings distributed by Springer LNCS. All paper submissions should use the LNCS template, available in both LaTeX and MS Word format directly from Springer (link here).

Testing Phase. The test scans are not made available to the participating teams. Instead, the organizers will evaluate the submitted contributions of all participants that submitted a short paper and an appropriate version of their algorithm, as described for each task (see the evaluation section). Participants that have not submitted a short paper and the copyright form will not be evaluated.

Oral Presentations. The top-ranked participants will be contacted in September to prepare slides for orally presenting their method during the FeTS satellite event at MICCAI 2021, on Oct. 1.

Announcement of Final Results (Oct 1). The final rankings will be reported during the FeTS 2021 challenge, which will run in conjunction with MICCAI 2021.

Post-conference LNCS paper (Oct 10). All participating teams are invited to extend their papers to 11-14 pages for inclusion in the LNCS proceedings of the BrainLes Workshop.

Joint post-conference journal paper. All participating teams have the chance to be involved in the joint manuscript summarizing the results of FeTS 2021, which will be submitted to a high-impact journal in the field. To be involved in this manuscript, a participating team needs to take part in all phases of at least one of the FeTS tasks.

Participation policies

  • By participating and submitting your contribution to the FeTS 2021 challenge for review and evaluation during the testing/ranking phase, you confirm that your code is released under a license conforming to one of the following standards: Apache 2.0, BSD-style, or MIT.
  • Participants are NOT allowed to use additional public and/or private data (from their own institutions) to extend the provided data. Similarly, using models that were pretrained on such datasets is NOT allowed. This is due to our intention to provide a fair comparison among the participating methods.
  • The top 3 performing methods for each task will be announced publicly at the conference and the participants will be invited to present their method.
  • Inclusion criteria for the test phase of task 2: As we are going to perform a real-world federated evaluation in task 2, the available computation capabilities are heterogeneous and restricted. Therefore, we reserve the right to limit the number of task-2 submissions included in the final ranking. Details are given below.
  • We reserve the right to exclude teams and team members if they do not adhere to the challenge rules.

Registration and Data Access

To register and request the training and the validation data of the FeTS 2021 challenge, please follow the steps below. Please note that i) the training data include ground truth annotations, ii) the validation data do not include annotations, and iii) the testing data are not available to either the challenge participants or the public.

  1. Create an account in CBICA’s Image Processing Portal (IPP) and wait for its approval. Note that a confirmation email will be sent, so make sure that you also check your spam folder. The approval process requires a manual review of the account details and may take 3-4 days to complete.
  2. Once your IPP account is approved, log in to IPP and click on the application FeTS 2021: Registration, under the MICCAI FeTS 2021 group.
  3. Fill in the requested details and press “Submit Job”.
  4. Once your request is recorded, you will receive an email pointing to the “results” of your submitted job. Log in to IPP and access the “Results.zip” file, in which you will find the file REGISTRATION_STATUS.txt containing the links to download the FeTS 2021 data. For each subject, the training data include the 4 structural modalities, the ground truth segmentation labels, and accompanying text information on the source institution, whereas the validation data include only the 4 modalities.

Submission Process

Task 1 Submission

As of 2021/July/19, each participant that has opened an initial entry in the CMT submission system should have received a unique upload link. Using that link, each participant needs to upload the following items:

  • The edited challenge notebook with as many comments as possible.
  • An accompanying README providing any additional details that participants would like to convey to the organizers.
  • [OPTIONAL] The scores reported by the notebook on your own runs. These will help the organizers double-check the re-training process.

To ensure that all participants work from a common baseline, we will re-train each submitted method on our hardware.

Task 2 Submission

To provide high implementation flexibility to the participants while also facilitating the federated evaluation on different computation infrastructures, algorithm submissions for this task have to be Singularity containers. The container application should be able to produce segmentations for a list of test cases. Details on the interface and examples of how to build such a container are given in the challenge repository.
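As a rough illustration of what such a container application has to do (iterate over the provided test cases and write one segmentation per case), a hypothetical entry-point script is sketched below. The directory layout, file names, and command-line arguments are assumptions on our part; the official interface and build examples are those in the challenge repository.

    # Hypothetical Task 2 entry-point sketch; the directory layout and file
    # naming are assumptions -- the official container interface is defined
    # in the challenge repository.
    import sys
    from pathlib import Path

    import nibabel as nib
    import numpy as np


    def segment_case(case_dir: Path) -> nib.Nifti1Image:
        """Placeholder inference: load a modality and return a segmentation."""
        ref = nib.load(str(next(case_dir.glob("*t1.nii.gz"))))  # assumed naming
        # Replace this dummy all-background mask with the team's actual model.
        seg = np.zeros(ref.shape, dtype=np.uint8)
        return nib.Nifti1Image(seg, ref.affine)


    def main(input_dir: str, output_dir: str) -> None:
        out_path = Path(output_dir)
        out_path.mkdir(parents=True, exist_ok=True)
        for case_dir in sorted(p for p in Path(input_dir).iterdir() if p.is_dir()):
            seg_img = segment_case(case_dir)
            nib.save(seg_img, str(out_path / f"{case_dir.name}_seg.nii.gz"))


    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])

A script along these lines would then be set as the run command of the Singularity container.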

Each participating team will be provided a gitlab project where they can upload their submission. To make a submission to task 2:

  1. Register for the challenge as described above (if not already done).
  2. Sign up at https://gitlab.hzdr.de/ using the same email address as in step 1, either by clicking Helmholtz AAI (login via your institutional email) or via your GitHub login. Both buttons are in the lower box on the right.
  3. Send an email to challenge@fets.ai asking for a Task 2 gitlab project, stating your gitlab handle (@your-handle) and your team name. We will create a project for you and invite you to it within a day.
  4. Follow the instructions in the newly created project to make a submission.

To make sure that the containers submitted by the participants also run successfully on the remote institutions in the FeTS federation, we offer functionality tests on toy cases. Details are provided in the gitlab project.

Evaluation

Participants are called to produce segmentation labels of the different glioma sub-regions:

  1. the “enhancing tumor” (ET), equivalent to label 4
  2. the “tumor core” (TC), comprising labels 1 and 4
  3. the “whole tumor” (WT), comprising labels 1, 2 and 4

For each region, the predicted segmentation is compared with the ground truth segmentation using the following metrics:

  • Dice Similarity Coefficient
  • Hausdorff Distance - 95th percentile
  • Sensitivity (this will not be used for ranking purposes)
  • Specificity (this will not be used for ranking purposes)
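For reference, the sketch below shows how the per-region binary masks and the overlap-based metrics can be computed from a label map (HD95 is typically taken from a dedicated library and is omitted here); it is an illustration, not the official evaluation code.

    # Illustrative computation of the overlap-based metrics; not the official
    # evaluation pipeline.
    import numpy as np

    # Sub-regions as defined above: ET = {4}, TC = {1, 4}, WT = {1, 2, 4}.
    REGIONS = {"ET": [4], "TC": [1, 4], "WT": [1, 2, 4]}


    def region_mask(label_map, labels):
        """Binary mask of a sub-region, given its constituent labels."""
        return np.isin(label_map, labels)


    def dice(pred, gt):
        """Dice Similarity Coefficient: 2|P∩G| / (|P| + |G|)."""
        denom = pred.sum() + gt.sum()
        return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0


    def sensitivity(pred, gt):
        """True-positive rate: TP / (TP + FN)."""
        return np.logical_and(pred, gt).sum() / gt.sum() if gt.sum() else 1.0


    def specificity(pred, gt):
        """True-negative rate: TN / (TN + FP)."""
        return np.logical_and(~pred, ~gt).sum() / (~gt).sum() if (~gt).sum() else 1.0


    # Example: per-region Dice for predicted/ground-truth label maps pred_map, gt_map:
    # scores = {name: dice(region_mask(pred_map, labs), region_mask(gt_map, labs))
    #           for name, labs in REGIONS.items()}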

Task 1 Evaluation Details

Apart from the segmentation metrics above, the communication cost during model training, i.e. the budget time (the product of the bytes sent/received and the number of federated rounds), will be included as an additional metric for performance evaluation.
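Read literally, that definition amounts to the simple product sketched below; the numbers are hypothetical and the organizers' exact accounting may differ.

    # Toy illustration of the communication-cost metric described above;
    # the figures are hypothetical.
    model_bytes_per_round = 4 * 30_000_000  # e.g. ~30M float32 parameters exchanged per round
    federated_rounds = 20                   # hypothetical number of rounds

    communication_cost = model_bytes_per_round * federated_rounds
    print(f"Communication cost: {communication_cost:.3e} byte-rounds")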

Task 2 Evaluation Details

Code Review

To make sure that the submitted containers are functional and to prevent misconduct, we are going to review each submission manually before the actual federated evaluation. Regarding functionality, we intend to check the validity of the algorithm output and measure the execution time of the container on a small dataset using a pre-defined hardware setup. Regarding security, we will inspect the code being executed by the container and discuss any unclear points with the participants.

Federated Evaluation Process

Participants have to adhere to the challenge rules described above to be eligible for evaluation on the test set. Furthermore, the following rules apply to the submissions:

  • Only submissions that include a complete short paper will be considered for evaluation.
  • Only submissions that pass the code review will be considered for evaluation.
  • Each submitted container is given 180 seconds per case in the code review phase (though only the total runtime over all cases will be checked). Submissions that fail to predict all cases within this time budget will not be included in the federated evaluation.
  • If the number of participants is extremely high, we reserve the right to limit the number of participants in the final MICCAI ranking in the following way: Algorithms will be evaluated on the federated test set in the chronological order in which they were submitted (the last submission of each team counts). This means that the later an algorithm is submitted, the higher the risk that it cannot be evaluated on all federated test sets before the end of the testing phase. Note that this is a worst-case rule, and we will work hard to include every single valid submission in the ranking.

Ranking

Only the external FeTS testing institutions (that are not part of the training data) are used for the ranking. First, on institution k, algorithms are ranked on all N_k test cases, three regions and two metrics, yielding N_k * 3 * 2 ranks for each algorithm. Averaging these produces a score equivalent to a per-institution rank for each algorithm (rank-then-aggregate approach). The final rank of an algorithm is computed from the average of its per-institution ranks. Ties are resolved by assigning the minimum rank.
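A compact sketch of this rank-then-aggregate scheme is given below, using scipy's rankdata; it illustrates the procedure as described and is not the organizers' implementation.

    # Rank-then-aggregate sketch of the ranking scheme described above;
    # an illustration, not the official implementation.
    import numpy as np
    from scipy.stats import rankdata


    def final_ranks(scores_per_institution):
        """
        scores_per_institution: one array per testing institution k, of shape
        (n_algorithms, N_k * 3 * 2) -- one score per test case, region and
        metric, oriented so that HIGHER is better (e.g. Dice, negated HD95).
        Returns the final rank per algorithm (ties get the minimum rank).
        """
        per_institution = []
        for scores in scores_per_institution:
            # Rank the algorithms on every (case, region, metric) column.
            ranks = np.apply_along_axis(lambda c: rankdata(-c, method="min"), 0, scores)
            # Average the N_k * 3 * 2 ranks -> per-institution score per algorithm.
            per_institution.append(ranks.mean(axis=1))
        # Average the per-institution ranks, then rank again (minimum rank for ties).
        return rankdata(np.mean(per_institution, axis=0), method="min")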

Awards

The top-ranked participating teams will receive monetary prizes with a total value of $5,000, sponsored by Intel.
