Hello ELEVATER! A Platform for Computer Vision in the Wild

Evaluation of Language-augmented Visual Task-level Transfer.

Why ELEVATER?



Various Datasets over Representative Tasks

20 image classification datasets / 35 object detection datasets.



Toolkit

Automatic hyper-parameter tuning; strong, efficient language-augmented adaptation methods




Diverse Knowledge Source

Each dataset's concepts are augmented with diverse knowledge sources, including WordNet, Wiktionary, and GPT-3.



Leaderboard!

To track the research advances in language-image models.



What is ELEVATER?


The ELEVATER benchmark is a collection of resources for training, evaluating, and analyzing language-image models on image classification and object detection. ELEVATER consists of:

  • Benchmark: A benchmark suite of 20 image classification datasets and 35 object detection datasets, augmented with external knowledge.
  • Toolkit: An automatic hyper-parameter tuning toolkit and strong, efficient language-augmented model adaptation methods.
  • Baseline: Pre-trained language-free and language-augmented visual models.
  • Knowledge: A platform to study the benefit of external knowledge for vision problems.
  • Evaluation Metrics: Sample efficiency (zero-, few-, and full-shot) and parameter efficiency.
  • Leaderboard: A public leaderboard to track performance on the benchmark.
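The knowledge augmentation in the list above can be sketched roughly as follows. This is a hypothetical illustration only: the concept definitions below are stand-ins, not the benchmark's actual data (ELEVATER draws such definitions from WordNet, Wiktionary, and GPT-3), and `knowledge_augmented_prompt` is an invented helper, not part of the toolkit's API.

```python
# Hypothetical sketch: augmenting a class name with an external definition
# before it is used as a text prompt for a language-image model.
KNOWLEDGE = {
    # stand-in definitions for illustration; not the benchmark's actual data
    "hummingbird": "a tiny bird that hovers by beating its wings rapidly",
    "sea lion": "a marine mammal with external ear flaps and long flippers",
}

def knowledge_augmented_prompt(concept: str) -> str:
    """Build a text prompt that appends external knowledge to the class name."""
    definition = KNOWLEDGE.get(concept)
    if definition is None:
        # fall back to a plain, language-only prompt
        return f"a photo of a {concept}."
    return f"a photo of a {concept}, which is {definition}."

prompt = knowledge_augmented_prompt("hummingbird")
```

The intuition is that a richer textual description of each concept gives the model's text encoder more to match against than the bare class name.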

The ultimate goal of ELEVATER is to drive research in the development of language-image models to tackle core computer vision problems in the wild.
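As a minimal sketch of the zero-shot setting evaluated by the benchmark, a language-image model classifies an image by comparing its embedding against the text embeddings of the candidate class prompts, with no task-specific training. The code below is a generic illustration with toy embeddings, not the ELEVATER toolkit's actual evaluation code.

```python
import numpy as np

def zero_shot_classify(image_emb: np.ndarray, text_embs: np.ndarray) -> int:
    """Return the index of the class whose text embedding has the highest
    cosine similarity with the image embedding (zero-shot prediction)."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = text_embs @ image_emb  # cosine similarity per class
    return int(np.argmax(scores))

# Toy, hand-built embeddings for 3 classes in a 3-d space; a real model
# would produce these with its image and text encoders.
text_embs = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
image_emb = np.array([0.1, 0.9, 0.2])  # closest to class 1

pred = zero_shot_classify(image_emb, text_embs)  # → 1
```

Few- and full-shot evaluation then measure how this accuracy improves as the model is adapted on a handful, or all, of the training examples.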

[Quick introduction with slides]


News


  • [Summer 2022] Interested in learning what "Computer Vision in the Wild" is?
    • Talks: Please check out an overview of our team's effort, "A Vision-and-Language Approach to Computer Vision in the Wild: Modeling and Benchmark". Talks given at Apple AI/ML, NIST, Xiaoice, and The AI Talks. [YouTube]
    • Demos: Vision systems equipped with the mechanism to recognize any concept in any given image. Check out the demos on image classification with UniCL and object detection with RegionCLIP and GLIP.
    • Challenge: Have a better idea? Join the community.

A more diverse set of CV tasks





Paper


Please cite our paper as follows if you use the ELEVATER benchmark or our toolkit.

@article{li2022elevater,
    title={ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models},
    author={Li, Chunyuan and Liu, Haotian and Li, Liunian Harold and Zhang, Pengchuan and Aneja, Jyoti and Yang, Jianwei and Jin, Ping and Hu, Houdong and Liu, Zicheng and Lee, Yong Jae and Gao, Jianfeng},
    journal={Neural Information Processing Systems},
    year={2022}
}

Contact



Have any questions or suggestions? Feel free to reach us by opening a GitHub issue!