Neural Latents Benchmark


A Benchmark for Models of Neural Data


About
Datasets
NLB'21 Challenge Info
NLB'21 Codepack
NLB'21 EvalAI

Overview

Advances in neural recording present increasing opportunities to study neural activity in unprecedented detail. Latent variable models (LVMs) are promising tools for analyzing this rich activity across diverse neural systems and behaviors, as LVMs do not depend on known relationships between the activity and external experimental variables. To coordinate LVM modeling efforts, we introduce the Neural Latents Benchmark (NLB). The first benchmark suite, NLB'21, evaluates models on 7 datasets of neural spiking activity spanning four tasks and a range of brain areas.

While the benchmark will be available indefinitely, the challenge (Phase 2) will close on April 3, 2022. To get started with the challenge, follow the links on the left.

NLB Virtual Workshop

We hosted a virtual workshop on February 27, and all materials from the workshop are available here. The workshop featured several presentations on the benchmark and on developing models of neural data.

NLB 2021, Phase 2: Calling all models!

Phase 1 of our challenge was won by AE Studio; their write-up will be online shortly, and their code is available here. The team comprised Darin Erat Sleiter, Joshua Schoenfield, and Mike Vaiana; they additionally thank Sumner L. Norman for his guidance and advice on neuroscience and neural decoding.

The challenge continues into Phase 2 with updated prizes.

Motivation: The Neural Latents effort is unique among machine learning benchmarks in that the downstream use of these models clearly depends on more than the measured metrics. We therefore wish to allow for a more qualitative evaluation of each model's relative merits, and we encourage the submission of all models. For example, there is no consensus on how to measure model interpretability, yet interpretability is key when using LVMs to infer the computational function of the modeled activity. We want to discourage cases where developers keep potentially valuable models private because they do not recapitulate the data as well as a powerful black-box approach. One of our primary goals as a benchmark is to populate an "accuracy-interpretability" Pareto front. Highlighting such a set of models would provide downstream users with the model best matched to their requirements, at all levels of analysis. The judge's choice prize is a first effort toward this concept.


FAQ

How do I submit a model to the benchmark?

We are hosting our challenge on EvalAI, a platform for evaluating machine learning models. On the platform, you can choose to make private or public submissions to any or all of the individual datasets.
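
For concreteness, below is a minimal Python sketch of how a submission file might be assembled before upload. It assumes the HDF5 submission format described in the NLB'21 codepack: a dictionary keyed by dataset name, containing the model's inferred firing rates for held-in and held-out neurons on the training and evaluation trials. The dataset name and field names shown here are illustrative and should be checked against the codepack documentation.

import numpy as np
import h5py

# Placeholder dimensions: trials x time bins x neurons.
n_trials, n_bins, n_heldin, n_heldout = 100, 120, 100, 30

# A model's inferred firing rates would go here; constant arrays are used
# as placeholders. Key names follow the format described in the codepack
# (illustrative -- verify against the codepack before submitting).
submission = {
    'mc_maze': {
        'train_rates_heldin': np.full((n_trials, n_bins, n_heldin), 0.1),
        'train_rates_heldout': np.full((n_trials, n_bins, n_heldout), 0.1),
        'eval_rates_heldin': np.full((n_trials, n_bins, n_heldin), 0.1),
        'eval_rates_heldout': np.full((n_trials, n_bins, n_heldout), 0.1),
    }
}

# Write the nested dictionary to an HDF5 file for upload.
with h5py.File('submission.h5', 'w') as f:
    for dataset_name, rates in submission.items():
        group = f.create_group(dataset_name)
        for key, arr in rates.items():
            group.create_dataset(key, data=arr)

The resulting file can then be uploaded through the EvalAI web interface, or with the EvalAI command-line client (for example, evalai challenge <challenge_id> phase <phase_id> submit --file submission.h5, with the IDs taken from the challenge page).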

Can I view the leaderboard without submitting?

Yes, the full leaderboard is available on EvalAI, which is also synced with Papers With Code. Model open-sourcing is encouraged, so code for many entries may be linked from the leaderboard.

Is there a deadline?

The benchmark and its leaderboard will remain open to submissions on EvalAI indefinitely as a resource for the community. However, Phase 2 of the challenge, which rewards top entries with prizes, will end on April 3, 2022.

Is NLB one benchmark or many benchmarks?

NLB aims to regularly organize benchmark suites, each a collection of tasks, datasets, and metrics centered on a theme in neural latent variable modeling. For example, NLB'21 emphasizes general population modeling.

Citation

If you use the Neural Latents Benchmark in your work, please cite our NeurIPS paper:

@inproceedings{PeiYe2021NeuralLatents,
  title={Neural Latents Benchmark '21: Evaluating latent variable models of neural population activity},
  author={Felix Pei and Joel Ye and David M. Zoltowski and Anqi Wu and Raeed H. Chowdhury and Hansem Sohn and Joseph E. O'Doherty and Krishna V. Shenoy and Matthew T. Kaufman and Mark Churchland and Mehrdad Jazayeri and Lee E. Miller and Jonathan Pillow and Il Memming Park and Eva L. Dyer and Chethan Pandarinath},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS), Track on Datasets and Benchmarks},
  year={2021},
  url={https://arxiv.org/abs/2109.04463}
}

Contact

The Neural Latents Benchmark is led by the Systems Neural Engineering Lab in collaboration with labs across several universities. General inquiries should be directed to Dr. Pandarinath at chethan [at] gatech [dot] edu.