Neural Latents Benchmark

A Benchmark for Models of Neural Data

Challenge Guidelines

As part of the benchmarking effort, we are hosting a competition and offering prizes for the best-performing submissions on the EvalAI leaderboard. Phase 1 of the challenge ended on January 7, 2022, and was won by AE Studio. Phase 2 is ongoing and will end on April 3, 2022, anywhere on Earth (i.e., 23:59 April 3 UTC-12). Phase 2 is graciously funded by AE Studio.

Ranking and Prizes

The challenge evaluates co-smoothing performance on 5 ms bins, both on individual datasets and across all datasets; a sketch of the co-smoothing computation appears at the end of these guidelines. There are four primary datasets (MC_Maze, MC_RTT, Area2_Bump, and DMFC_RSG) and three scaling datasets (MC_Maze-Large, MC_Maze-Medium, and MC_Maze-Small). The prizes are allotted as follows:

Eligibility

To be considered for challenge prizes, teams must submit their methods to the EvalAI challenge before the deadline and commit to releasing code for reproducibility. This means that, after the deadline, top contenders will be asked to share their code with the NLB team so that we can validate their submissions, and that this code must be open-sourced before we transfer funds. Note that NLB organizers are ineligible for prize money.

We recommend that all participants publicly share their code to help promote progress in the field.
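
For reference, co-smoothing is scored in bits per spike: the Poisson log-likelihood of the held-out neurons' spike counts under the submitted rates, relative to a baseline that predicts each neuron's mean firing rate in every bin. The sketch below illustrates that computation in NumPy. It is not the official scorer (the leaderboard is scored with the evaluation code in the NLB'21 codepack), and the function names, array shapes, and clipping epsilon here are assumptions made for illustration.

```python
import numpy as np
from scipy.special import gammaln


def poisson_log_likelihood(spikes, rates):
    """Total Poisson log-likelihood of spike counts under predicted rates."""
    rates = np.clip(rates, 1e-9, None)  # avoid log(0); epsilon is an assumption
    return np.sum(spikes * np.log(rates) - rates - gammaln(spikes + 1.0))


def cosmoothing_bits_per_spike(heldout_spikes, pred_rates):
    """Bits per spike of predicted rates for held-out neurons, relative to
    a flat baseline that predicts each neuron's mean rate in every bin.

    Both arrays are assumed to have shape (trials, time, heldout_neurons),
    with rates given as expected spike counts per 5 ms bin.
    """
    # Null model: each held-out neuron's mean count, repeated across all bins.
    null_rates = np.broadcast_to(
        heldout_spikes.mean(axis=(0, 1), keepdims=True), heldout_spikes.shape
    )
    ll_model = poisson_log_likelihood(heldout_spikes, pred_rates)
    ll_null = poisson_log_likelihood(heldout_spikes, null_rates)
    # Log-likelihood improvement per spike, converted from nats to bits.
    return (ll_model - ll_null) / (heldout_spikes.sum() * np.log(2))
```

A score of 0 bits per spike means the submitted rates are no more predictive than each neuron's mean firing rate; higher is better.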