Neural Latents Benchmark


A Benchmark for Models of Neural Data

Overview

The Neural Latents Benchmark (NLB) aims to evaluate models of neural population state. In the first benchmark suite, participating models should take multi-channel spiking activity as input and produce firing rate estimates as output. This first benchmark will be released in August 2021.
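
As a rough illustration (not the official submission interface), a participating model can be thought of as a function that maps binned spike counts to firing rate estimates of the same shape. The array dimensions, function name, and the smoothing baseline in the sketch below are assumptions for illustration only; a real submission would replace the smoothing with a latent variable model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_rates(spikes: np.ndarray, sigma_bins: float = 2.0) -> np.ndarray:
    """Toy rate estimator: Gaussian-smooth binned spike counts.

    spikes: array of shape (trials, time_bins, neurons) holding spike counts
    (shape is a hypothetical convention, not the benchmark's specification).
    Returns firing rate estimates with the same shape.
    """
    # Smooth along the time axis; this is only a placeholder baseline.
    return gaussian_filter1d(spikes.astype(float), sigma=sigma_bins, axis=1)

# Example: 10 trials, 100 time bins, 50 neurons of simulated spiking activity.
spikes = np.random.poisson(0.1, size=(10, 100, 50))
rates = estimate_rates(spikes)
assert rates.shape == spikes.shape
```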

The first benchmark suite will consist of a collection of 3-4 datasets of neural spiking activity and a public challenge hosted on EvalAI. While the benchmark will be available indefinitely, the challenge will close and announce winners in Fall 2021.


FAQ

How do I submit a model to the benchmark?

Please note that the first benchmark will open in August 2021. We are hosting our challenge on EvalAI, a platform for evaluating AI models. On the platform, you can make private or public submissions to any or all of the individual datasets.

Can I view the leaderboard without submitting?

Yes, the full leaderboard will be available on this website indefinitely (courtesy of EvalAI), and EvalAI is also synced with Papers With Code.

Is there a deadline?

The benchmark and its leaderboard can be submitted to indefinitely on EvalAI as a resource for the community. However, the winners of the challenge will be determined from the leaderboard in Fall 2021, and prizes will be awarded to the winners at that time.

Is NLB one benchmark or many benchmarks?

NLB aims to regularly organize benchmark suites, each a collection of tasks, datasets, and metrics built around a theme in neural latent variable modeling. For example, the first benchmark suite will emphasize general population modeling.

Contact

The Neural Latents Benchmark is led by the Systems Neural Engineering Lab in collaboration with labs across several universities. General inquiries should be directed to Dr. Pandarinath at chethan [at] gatech [dot] edu.