A Benchmark for Models of Neural Data
As part of the benchmarking effort, we are hosting a competition and offering prizes for the best-performing submissions on the EvalAI leaderboard. Phase 1 of the challenge ended on January 7, 2022, and was won by AE Studio. Phase 2 of the challenge is ongoing and will end on April 3rd, 2022, Anywhere on Earth (i.e., 23:59 April 3rd, UTC-12). Phase 2 is graciously funded by AE Studio.
The challenge evaluates co-smoothing performance on 5 ms bins, both on individual datasets and across all datasets; a sketch of the co-smoothing metric follows below. There are four primary datasets (MC_Maze, MC_RTT, Area2_Bump, and DMFC_RSG) and three scaling datasets (MC_Maze-Large, MC_Maze-Medium, and MC_Maze-Small).
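For concreteness, here is a minimal sketch of how a co-smoothing score (bits per spike) can be computed; the authoritative evaluation ships with the nlb_tools package, so treat the array shapes, the `cosmoothing_bits_per_spike` helper, and the mean-rate null model below as illustrative assumptions rather than the official implementation.

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(spikes, rates):
    """Total Poisson log-likelihood of observed spike counts under predicted rates."""
    rates = np.clip(rates, 1e-9, None)  # guard against log(0)
    return np.sum(spikes * np.log(rates) - rates - gammaln(spikes + 1.0))

def cosmoothing_bits_per_spike(heldout_spikes, predicted_rates):
    """Co-smoothing score in bits per spike (hypothetical helper).

    Both arrays are assumed to have shape (trials, time_bins, heldout_neurons),
    with 5 ms bins as in the challenge. The model's log-likelihood on held-out
    neurons is compared against a null model that predicts each neuron's mean
    firing rate, then normalized by the total spike count.
    """
    # Null model: each held-out neuron's mean rate, broadcast over trials and bins
    mean_rates = heldout_spikes.mean(axis=(0, 1))
    null_rates = np.broadcast_to(mean_rates, heldout_spikes.shape)

    ll_model = poisson_log_likelihood(heldout_spikes, predicted_rates)
    ll_null = poisson_log_likelihood(heldout_spikes, null_rates)

    # Convert nats to bits and normalize per spike
    return (ll_model - ll_null) / (heldout_spikes.sum() * np.log(2))
```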
The prizes are allotted as follows:
- Prizes for the best performance on the four primary datasets: MC_Maze, MC_RTT, Area2_Bump, and DMFC_RSG.
- Prizes for the best performance on the three scaling datasets: MC_Maze-Large, MC_Maze-Medium, and MC_Maze-Small.
Prizes will be provided in the form of Visa gift cards. To be considered for challenge prizes, teams must submit their methods to the EvalAI challenge before the deadline and commit to releasing their code for reproducibility. After the deadline, top contenders will be asked to share their code with the NLB team so that we can validate their submissions, and this code must be open-sourced before we transfer funds. Note that NLB organizers are ineligible for prize money.
We encourage all participants to share their code publicly to help promote progress in the field.