Postprocessors

A postprocessor is a program that computes the resulting score for the solution submitted by the participant.

Unlike the checker, this program runs once per submission, after all tests have completed. The postprocessor doesn't determine the submission verdict and can't affect it.

In the simplest case, the postprocessor sums up the checker's scores from all tests.

The postprocessor is used to:

  • Get the total score for a submission.
  • Implement a complex scoring logic for a submission, for example, by groups of tests or as a percentage of tests passed.
  • Change the logic of summing up the score for each test.

Uploading to the system

  1. Open the problem and select Settings in the menu on the left.
  2. Go to Additional files and processing at the bottom of the list.
  3. Under Postprocessing files, click Select file.
  4. In the system window, select the file and confirm the upload.

Alert

Upload the executable file first. When the postprocessor runs, all other uploaded files are placed next to the executable one.

Implementation

Postprocessor file requirements:

  • It must use Unix line breaks (\n). If any other kind of line break is used (for example, Windows line breaks, \r\n), the system won't be able to run the file.
  • It must be executable. For non-executable files of non-compiled programming languages, specify a shebang (like in the checker).

The postprocessor receives its data on stdin in JSON format, so attach the libraries needed to parse JSON.

For example, in Python 3 the data can be read like this:

#!/usr/local/bin/python3.7

import json

data = json.loads(input())

The postprocessor's output determines the submission score: the first line it prints must contain a number (long or double).

Example: print(10).
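Putting these pieces together, a minimal postprocessor for the simplest case described above, summing the checker scores of all tests, could look like the following sketch (the helper name total_score is ours, not part of the system):

```python
#!/usr/bin/python3

def total_score(data):
    # Sum each test's checker score. Depending on the test's
    # scoreType, the number is in "longScore" (integer scores)
    # or "doubleScore" (fractional scores).
    total = 0
    for test in data["tests"]:
        score = test.get("score", {})
        total += score.get("longScore", score.get("doubleScore", 0))
    return total

# In the system the data arrives on stdin, so the entry point
# would be: print(total_score(json.loads(input())))
```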

JSON structure

A data structure with test results that are available to the postprocessor.

{
  "tests": [
    {
      "testName": "tests/01",
      "sequenceNumber": 1,
      "testsetIdx": 0,
      "testsetName": "All tests",
      "verdict": "ok",
      "score": {
        "scoreType": "long",
        "longScore": 10
      },
      "runningTime": 39,
      "memoryUsed": 4648960,
      "startTimeMillis": 1626862111771
    },
    {
      "testName": "tests/02",
      "sequenceNumber": 2,
      "testsetIdx": 0,
      "testsetName": "All tests",
      "verdict": "ok",
      "score": {
        "scoreType": "double",
        "doubleScore": 9.27
      },
      "runningTime": 52,
      "memoryUsed": 162270,
      "startTimeMillis": 1626862113546
    }
  ]
}

Parameter descriptions:

  • tests (array of objects): The root array with data from each test.
  • testName (string): The path to the test file in the problem's file system.
  • sequenceNumber (integer): The test's sequence number (starting from 1).
  • testsetIdx (integer): The test set's unique ID.
  • testsetName (string): The test set's name.
  • verdict (string): The test's verdict (for example, "ok").
  • score (object): The test scores.
  • score.scoreType (string): The type of the field with the score. Acceptable values: long (scores as integers) and double (scores as fractional numbers).
  • score.longScore (integer): The score awarded as an integer. Available if "scoreType": "long".
  • score.doubleScore (double): The score awarded as a fractional number. Available if "scoreType": "double".
  • runningTime (integer): The execution time of the participant's code, in milliseconds.
  • memoryUsed (integer): Memory usage, in bytes.
  • startTimeMillis (integer): Submission testing start timestamp (Unix timestamp, in milliseconds).
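Because the numeric field depends on score.scoreType, it can be convenient to normalize the score once before any calculations. A small sketch (the helper name extract_score is ours, not part of the system):

```python
def extract_score(score):
    # Return the numeric value from a "score" object, reading
    # "doubleScore" or "longScore" depending on "scoreType".
    # A missing score counts as 0.
    if not score:
        return 0
    if score.get("scoreType") == "double":
        return score.get("doubleScore", 0)
    return score.get("longScore", 0)
```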

Correct launch

Running the postprocessor correctly depends on correct contest settings. For example, if the postprocessor awards points based on how many tests passed out of the total number, the system must run all tests instead of interrupting testing at the first error.

These settings also affect how fast a participant's submission is checked: the more tests are run, the longer testing takes.

To set up postprocessors, make sure the contest settings are correct. Pay special attention to the following parameters:

  • Abort testing when first error occurs: Disable if the postprocessor awards points based on the number of tests passed.
  • Stop testing at first error in test set: Enable to save checking time when each test group checks a different property of the solution: once a group fails, the rest of that group doesn't need to run, and testing can move on to the next group.
  • For failed samples: Enable Stop testing to save time and return the verdict to the participant faster.

Examples of postprocessors

As a percentage of solved tests: 0-100

In the contest settings, set the following values:

  • Abort testing when first error occurs: Off.
  • Stop testing at first error in test set: Off.

#!/usr/bin/python3

import json

maxValue = 100

testData = json.loads(input())
testsCount = len(testData["tests"])
acceptedTestsCount = len(list(filter(lambda n: n["verdict"] == "ok", testData["tests"])))

print(int(acceptedTestsCount / testsCount * maxValue))

Different scores for different test groups

In the contest settings, set the following values:

  • Abort testing when first error occurs: Off.
  • Stop testing at first error in test set: On/Off, depending on the situation.

We recommend that you don't use the test set All tests (in problem settings); use only test groups with clear names.

#!/usr/bin/python3

import json

config = [
  { "testsetName": "samples", "scoreByTest": 0 },
  { "testsetName": "All tests", "scoreByTest": 0 },
  { "testsetName": "1", "scoreByTest": 1 },
  { "testsetName": "2", "scoreByTest": 10 },
]

testData = json.loads(input())

finalScore = 0

for elem in testData["tests"]:
  if elem["verdict"] == "ok":
    scoreByCurrentTest = list(filter(lambda x: elem["testsetName"] == x["testsetName"], config))[0]["scoreByTest"]
    finalScore += scoreByCurrentTest

print(finalScore)

Overriding checker scores with the calculation log output

You can configure any contest settings.

We recommend that you don't use the test set All tests (in problem settings); use only test groups with clear names.

This postprocessor overrides the test value by using an appropriate multiplier depending on the test group.

Information about the final value of each test is logged.

#!/usr/bin/python3

import json

config = {
  "All tests": 0,
  "samples": 0,
  "1": 1,
  "2": 2
}

testData = json.loads(input())["tests"]

finalScore = 0

log = []

for elem in testData:
  if "score" in elem:
    currentScore = 0
    try:
      currentScore = elem["score"]["doubleScore"]
    except KeyError:
      currentScore = elem["score"]["longScore"]

    newScore = currentScore * config[elem["testsetName"]]

    log.append("Test: {}-{}. Score: {} * {} = {} ".format(
        elem["testsetName"],
        elem["testName"],
        currentScore,
        config[elem["testsetName"]],
        newScore
      ))
    finalScore += newScore

print(finalScore)
print("\n".join(log))

Postprocessor with flexible configuration

The postprocessor with flexible configuration was developed by a Yandex Contest user.

Features:

  • It can be adjusted for different scoring systems.
  • It counts a test and group score.
  • It creates custom reports.
  • It generates various types of feedback for participants.

You can learn more about using a postprocessor with flexible configuration at github.com.
