How to submit your models to the benchmark
==========================================


Submission to the benchmark is simple and framework-agnostic:

  • Select one or more datasets from :doc:

  • Train a pose estimation model. The benchmark is framework-agnostic, and you can use any codebase of your choice.

  • Your model should export predictions for all images in the test datasets. Package these predictions in a format of your choice.

  • Implement a loader function using the benchmark API, and contribute your code along with your predictions as a pull request to the benchmark.
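To make the export step concrete, here is one way it could look. This is a sketch only: the helper name ``export_predictions``, the JSON format, and the keypoint names are illustrative, since the benchmark accepts any packaging format.

```python
import json

def export_predictions(predictions, out_path):
    """Write a {image path: per-animal keypoint coordinates} mapping
    to disk as JSON. Any format your loader can read back works; JSON
    is used here purely for illustration."""
    with open(out_path, "w") as f:
        json.dump(predictions, f, indent=2)

# Example payload: one image, two animals, (x, y) per keypoint.
predictions = {
    "path/to/image.png": [
        {"snout": [17.2, 93.1], "leftear": [24.6, 88.0]},   # animal 1
        {"snout": [141.9, 40.3], "leftear": [150.2, 35.7]},  # animal 2
    ]
}
export_predictions(predictions, "predictions.json")
```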

Preparing your submission
-------------------------

Your submission will be a pull request directly to the ``benchmark`` package. The PR should have the following structure::

   <your-package-name>/
      __init__.py    # Empty file
      README.md      # Describe your contribution here
      LICENSE        # The license that applies to your contribution
      <file 1>.py    # Python modules needed for processing your
      ...            #   submission data
      data/
         <data>      # data files containing the predictions of your
         ...         #   model. You can use any format.
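As an illustration, this skeleton could be created with a few shell commands. The package name ``mysubmission`` and the file contents are placeholders; substitute your own.

```shell
# Create the package skeleton for a submission (names are placeholders)
mkdir -p mysubmission/data
touch mysubmission/__init__.py                        # empty file
echo "Describe your contribution" > mysubmission/README.md
echo "<your license text>" > mysubmission/LICENSE
```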

The ``data`` sub-folder can contain the raw outputs from the pose estimation framework of your choice. During evaluation, we crawl all ``*.py`` files in your submitted package for benchmark submission definitions.

These look as follows::

   import benchmark
   from benchmark.benchmarks import TriMouseBenchmark

   @benchmark.register
   class YourSubmissionToTriMouse(DLCBenchMixin, TriMouseBenchmark):

      code = "link/to/your/code.git"

      def names(self):
         """An iterable of model names to evaluate."""
         return "FooNet-42", "BarNet-73"

      def get_predictions(self, name):
         """This function will receive one of the names returned by the
         names() function, and should return a dictionary containing your
         model's predictions, keyed by image path."""
         return {
            "path/to/image.png": (
               {  # animal 1
                  "snout": (0, 1),
                  "leftear": (2, 3),
               },
               {  # animal 2
                  "snout": (0, 1),
                  "leftear": (2, 3),
               },
            ),
         }
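In practice, ``get_predictions`` will typically read the files packaged in the ``data`` sub-folder and convert them into this nested structure. A minimal sketch, assuming (hypothetically) that predictions were exported as JSON with per-animal keypoint lists; the helper names below are not part of the benchmark API:

```python
import json

def to_benchmark_format(raw):
    """Convert JSON-decoded predictions (lists of per-animal dicts with
    [x, y] lists) into a nested dict/tuple structure: image path ->
    tuple of per-animal {keypoint: (x, y)} dicts."""
    return {
        image: tuple(
            {keypoint: tuple(xy) for keypoint, xy in animal.items()}
            for animal in animals
        )
        for image, animals in raw.items()
    }

def load_predictions(path):
    """Load one model's exported predictions from a JSON file."""
    with open(path) as f:
        return to_benchmark_format(json.load(f))
```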
For more advanced use cases, please refer to our API documentation.

Testing your submission
-----------------------

You can test your submission by running::

   $ python -m benchmark

from the repository root directory, which will generate a table with all available results. If your own submission does not appear, make sure that you added your evaluation class to the benchmark with the ``@benchmark.register`` decorator.


To submit, open a pull request directly in the benchmark repository:


If you encounter difficulties while preparing your submission that are not covered in this tutorial, please open an issue in the benchmark repository: