Rules

Goal: The development of tissue trackers for stereo endoscopic videos.

The SurgT challenge has been created to explore tracking methods for surgical scenes, where limited annotated data is available. The participants of this challenge are expected to develop tracking methods that can learn to perform accurate keypoint tracking on surgical tissue without the use of human-annotated endoscopic videos. This challenge encourages the development of self-supervised tracking methods, as well as generalised tracking methods trained on annotated non-surgical scenes. Traditional computer vision methods are also welcome.

For this challenge, the participants are asked to develop tracking methods for stereo endoscopic videos. More information about the data can be found in SurgT Data.

The participants are expected to: (1) create a docker image; (2) download the SurgT_benchmarking GitHub repository; and (3) replace sample_tracker.py with their developed self-supervised tracker.
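
To illustrate, below is a minimal sketch of what a drop-in replacement might look like. The class and method names (NaiveTemplateTracker, initialise, track) are assumptions for illustration only; consult sample_tracker.py for the actual interface expected by the benchmark.

```python
# Hypothetical tracker skeleton; the class and method names are
# assumptions for illustration, NOT the interface defined by
# SurgT_benchmarking. It tracks by matching a fixed template taken
# from the first frame.
import cv2


class NaiveTemplateTracker:

    def initialise(self, frame, bbox):
        # `bbox` is an (x, y, width, height) tuple in pixel coordinates.
        x, y, w, h = bbox
        self.size = (w, h)
        # Store the initial appearance of the target as the template.
        self.template = frame[y:y + h, x:x + w].copy()

    def track(self, frame):
        # Slide the template over the new frame; the best match gives
        # the predicted top-left corner of the bounding box.
        scores = cv2.matchTemplate(frame, self.template,
                                   cv2.TM_CCOEFF_NORMED)
        _, _, _, top_left = cv2.minMaxLoc(scores)
        return (top_left[0], top_left[1], self.size[0], self.size[1])
```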

The benchmarking tool is used both to assess the tracker during development and to compute the final results. The final results will be calculated on the test set, which will be kept hidden until the end of the challenge. The trackers are ranked according to their ability to track a bounding box throughout each video sequence. Specifically, the submissions will be ranked by their EAO (Expected Average Overlap) score, similar to the one used in the VOT Challenge.
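
For intuition, the overlap between a predicted and a ground-truth bounding box is measured as their intersection-over-union (IoU), and the EAO is, loosely speaking, this per-frame overlap averaged over frames and over sequences. The sketch below is a simplification for intuition only; the exact metric used for ranking is the one implemented in the benchmarking tool.

```python
# Simplified illustration of overlap-based ranking. This is NOT the
# exact EAO computation used by the benchmark (see SurgT_benchmarking);
# it only conveys the idea: per-frame IoU, averaged per sequence, then
# averaged over sequences.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) bounding boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0


def expected_average_overlap(sequences):
    """`sequences`: a list of sequences, each a list of
    (predicted_bbox, ground_truth_bbox) pairs."""
    per_seq = [sum(iou(p, g) for p, g in s) / len(s) for s in sequences]
    return sum(per_seq) / len(per_seq)
```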

At the end of the challenge, the results will be published in a joint publication. In this publication, the developed methods will be described and all team members will be listed as authors. The participating teams may publish their own results separately only after the joint publication has been published. The developed methods, including code and models, are to be made publicly available online at the end of the challenge.

Submission

Submissions are open from now until the end of the submission period (September 1st at 14:00 ET).

How to create a docker image?

  1. Download the docker_submission/ folder from the GitHub link: SurgT_challenge
  2. Create an environment requirements file docker_submission/surg_t_requirements.txt to specify the libraries you used:
    For conda: conda env export > surg_t_requirements.txt
    For virtualenv: pip freeze > surg_t_requirements.txt
  3. Edit line 60 in the Dockerfile to upload your code to the docker image:
    Option 1: copy the SurgT_benchmarking folder (containing your code) from your local computer to the docker image using: COPY [path/to/your/SurgT_benchmarking] /home/newuser/surg_t/
    Option 2: upload the SurgT_benchmarking folder (containing your code) to GitHub and use: RUN git clone https://github.com/YOUR_GIT_USERNAME/SurgT_benchmarking.git /home/newuser/surg_t/
  4. Run the following commands to build the docker image and test that it is working: sh build.sh, then sh start.sh
  5. Compress the docker image for uploading: docker save -o surgt.tar surgt:latest
  6. Upload the docker image .tar file to any remote file storage of your choice (Google Drive, Box, Dropbox, ...).

Please send an email to jmc19@ic.ac.uk containing the following information:

  1. A link to your docker image containing the latest version of [SurgT_benchmarking](https://github.com/Cartucho/SurgT_benchmarking) configured to run your algorithm. Make sure that the file src/evaluate.py is updated. Instructions on how to create the docker image are given above;
  2. Team members (names + emails) - one submission per team;
  3. A summary to be used in the final joint publication:
    1. One paragraph describing the method, with references (example from the VOT challenge);
    2. One paragraph describing the datasets used for training.

At the end of the submission period, the most recent email received from each team will be used to download the docker image. The winning team and the runner-up will be announced at the challenge's MICCAI event.

FAQ

  1. Can I manually label the training data?

    No, you **cannot label** and **cannot use any annotated surgical
    data**. The only labelled data that can be used are the non-medical
    datasets that are publicly available online, e.g. the
    [Visual Tracker Benchmark Dataset](http://cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html)
    and the [VOT 2021 Dataset](https://www.votchallenge.net/vot2021/dataset.html).
    
  2. Can I train on my own data?

    Private datasets cannot be used. Public datasets can be used,
    except for data from other
    [EndoVis challenges](https://endovis.grand-challenge.org/) and
    endoscopic data from the Hamlyn Centre for Robotic Surgery.
    Participants will be required to declare, at submission, all
    datasets that have been used.
    
  3. Can I train on the validation ground truth?

    No, the validation ground truth can only be used to assess your
    method during development.
    
  4. Can I develop a monocular tracking method, instead of stereo?

    Yes. As exemplified in
    [sample_tracker.py](https://github.com/Cartucho/SurgT_benchmarking/blob/main/src/sample_tracker.py),
    a monocular tracker can be used to track the bounding box on the
    left and right images separately, e.g. as sketched below.
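
    For instance, assuming an off-the-shelf monocular tracker from
    opencv-contrib-python, the two views could be handled independently
    along these lines (all names here are illustrative, not the
    benchmark's interface):

```python
# Illustrative only: one independent monocular tracker per camera.
# Requires opencv-contrib-python for cv2.TrackerCSRT_create; all
# names are hypothetical, not the benchmark's interface.
import cv2


def track_stereo(first_left, first_right, bbox_left, bbox_right,
                 frame_pairs):
    """Yield (left_bbox, right_bbox) for each remaining stereo pair.

    Boxes are (x, y, w, h) tuples; a None entry means the tracker
    failed on that view for that frame.
    """
    left_tracker = cv2.TrackerCSRT_create()
    right_tracker = cv2.TrackerCSRT_create()
    left_tracker.init(first_left, bbox_left)
    right_tracker.init(first_right, bbox_right)

    for left_frame, right_frame in frame_pairs:
        ok_l, bbox_left = left_tracker.update(left_frame)
        ok_r, bbox_right = right_tracker.update(right_frame)
        yield (bbox_left if ok_l else None,
               bbox_right if ok_r else None)
```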
    
  5. Are the images in the test set rectified?

    Yes. Similarly to the validation set, the images in the test set
    are rectified, so the left bounding box will always be centred
    along the same row as the right bounding box. For the training
    set, we also provide the stereo calibration parameters, which can
    be used to rectify the provided training videos, as sketched
    below.
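
    As a sketch of the latter, assuming the calibration parameters are
    available as OpenCV-style intrinsics (K1, K2), distortion
    coefficients (D1, D2), and the rotation R and translation T
    between the two cameras (these names are assumptions about how the
    calibration is stored):

```python
# Sketch of stereo rectification with OpenCV. The calibration names
# (K1, D1, K2, D2, R, T) are assumptions; map them to however the
# parameters are stored in the training data.
import cv2


def rectify_pair(left, right, K1, D1, K2, D2, R, T):
    h, w = left.shape[:2]
    # Rectifying rotations (R1, R2) and projection matrices (P1, P2).
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2,
                                                (w, h), R, T)
    # Per-camera remapping tables, then warp both images.
    mx1, my1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h),
                                           cv2.CV_32FC1)
    mx2, my2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h),
                                           cv2.CV_32FC1)
    left_rect = cv2.remap(left, mx1, my1, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right, mx2, my2, cv2.INTER_LINEAR)
    return left_rect, right_rect
```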
    
  6. How will I submit my method at the end of the challenge?

    You will send the docker image to the organizers. In your docker
    image, [SurgT_benchmarking](https://github.com/Cartucho/SurgT_benchmarking)
    should already be downloaded and set up to run your method. Simply
    replace
    [sample_tracker.py](https://github.com/Cartucho/SurgT_benchmarking/blob/main/src/sample_tracker.py)
    with your own method. At the end of the challenge, we will load
    your docker image, update the
    [config.yaml](https://github.com/Cartucho/SurgT_benchmarking/blob/main/config.yaml)
    file with the links to the test data, and assess your method.
    
  7. Can I use pre-trained networks?

    Yes, pre-trained networks are allowed in this challenge.
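
    For example, assuming a PyTorch setup, an ImageNet-pretrained
    backbone (a publicly available, non-surgical source) could serve
    as a feature extractor; the names and choices below are
    illustrative only:

```python
# Illustrative only: an ImageNet-pretrained backbone as a feature
# extractor for a tracker. Requires torch and torchvision.
import torch
import torchvision

# Keep only the convolutional layers; drop the pooling and classifier.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
features = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

with torch.no_grad():
    patch = torch.rand(1, 3, 224, 224)  # a dummy image patch
    embedding = features(patch)         # -> (1, 512, 7, 7) feature map
```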

Summary

  • Manual annotation of the training/development data is not allowed;
  • Validation ground truth cannot be used for training;
  • Publicly available non-surgical datasets are allowed, even with ground truth labels;
  • Docker image with SurgT_benchmarking is required;
  • Ranking by EAO score;
  • Results & Methods will be published in a joint paper;
  • Developed methods are to be made publicly available online.