Frequently asked questions
Q: If a solution violates a time window, is it infeasible?
A: Yes, our interpretation is that violating any of the time windows above makes the solution 'infeasible' (you can still arrive early without any penalty). The same holds when you exceed maxT. Note that if you exceed the time windows only a couple of times, there is still a chance you can get a 'good' tour out of it, even though it is 'infeasible'.
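The rule above can be sketched as a simple check. This is an illustrative sketch, not the official checker: the function name, the `(arrival, tw_open, tw_close)` tuple layout, and the assumption that an early vehicle simply waits until the window opens are all ours.

```python
# Hypothetical feasibility check for the rule described above.
# visits: list of (arrival_time, tw_open, tw_close) per visited node.

def is_feasible(visits, max_t):
    """A tour is infeasible if any time window is violated or maxT is exceeded.

    Arriving early carries no penalty; we assume the vehicle waits
    until the window opens.
    """
    finish = 0.0
    for arrival, tw_open, tw_close in visits:
        if arrival > tw_close:           # late arrival: time-window violation
            return False
        finish = max(arrival, tw_open)   # early arrival: wait for free
    return finish <= max_t               # exceeding maxT is also infeasible

# Second visit arrives at t=9 but its window closes at t=8:
print(is_feasible([(2.0, 0.0, 5.0), (9.0, 3.0, 8.0)], max_t=20.0))  # False
```

Note that a tour failing this check may still contain a 'good' ordering of nodes, as the answer above points out; only its official score is voided.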
Q: How many instances do we need to solve?
A: In the first track, you are expected to solve a single instance of the problem. In the second track, you are expected to learn a policy and solve 1,000 instances sampled 100 times.
Q: Are there limits on the hardware we can use?
A: We won't enforce any limits there. While using more hardware certainly gives an advantage, we believe there is more to be gained from developing smart and efficient solutions, especially considering the enormous size of the search space. Using a big cluster that might have downtime during the test phase is at your own risk, and might give someone with smaller but more reliable hardware an advantage.

We do encourage all participants, when submitting their code, to give us an indication of what kind of hardware they used. This will not determine the winner, but it will indicate whether the winning team won mainly due to its hardware capabilities or its inventive solution. Besides this, the winning teams and the teams who develop inventive solutions will have the opportunity to publish their results via the DSO workshop at IJCAI 2021. This way, participating in the competition should be rewarding no matter the hardware capabilities.
Q: Is there a limit on the number of objective function evaluations or on computation time?
A: We don't impose a limit on the number of objective function evaluations, since participants are allowed to shape the objective function in any way they like, for example to use a multi-fidelity approach or continuous solvers. There is also no limit on computation time, except for the 1-week time limit of the final test phase.
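As one illustration of "shaping" the objective, a participant could replace the hard infeasibility rule with a soft penalty so that a solver can still rank tours that slightly violate the constraints. Everything here is an assumption on our part (the function name, the reward scale, the penalty weight); it is a sketch of the idea, not the competition's scoring.

```python
# Hypothetical shaped objective: penalize time-window violations
# instead of discarding the tour outright, so continuous or
# multi-fidelity solvers get a useful gradient of quality.

def shaped_objective(reward, tw_violations, penalty=10.0):
    # reward: raw tour reward; tw_violations: number of violated windows.
    return reward - penalty * tw_violations

print(shaped_objective(50.0, 0))  # 50.0 -> feasible tour keeps its raw reward
print(shaped_objective(50.0, 3))  # 20.0 -> three violations, heavily penalized
```

Since evaluations of such a surrogate are unlimited, the penalty weight itself can be tuned, for instance annealed upward so that early search explores infeasible tours and later search converges to feasible ones.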
Q: Do we need to submit our code?
A: Yes. Please submit your code, in whatever language it is written, as a zip file when submitting your solution.
Q: Will the quality of our code affect the ranking?
A: No. We (the organizers) use the submitted code to check how you solved the problem, and whether you used surrogate models in Track 1 and RL methods in Track 2. Ideally, your code is readable and includes relevant comments that help us determine the methods you employed. We might ask for clarification if the code is not clear to us.
Q: Will the instances differ between the validation and test phases?
A: Yes, the instances for both tracks will be different between the validation and test phases. This is to ensure that participants do not overfit to the validation instances. For Track 1, the test instance will be larger than the validation instance, but it will have no more than 70 nodes. More details will come soon!