I disagree very strongly. Many fields, over long stretches of the history of science, have oriented themselves around benchmark problems.
Some things which come to mind are:
- C. elegans for connectomics
- Drosophila experiments for a wide range of biology benchmarks
- even before ImageNet, computer vision had the so-called "chair challenge" [0], along with dozens of canonical face detection, object detection, and segmentation data sets used as benchmarks across many papers
- in Bayesian statistics there are various canonical data sets for evaluating theoretical improvements in hierarchical models and general regression
- in finance there is CRSP and the Kenneth French Data Library
It's very common across many fields to orient around benchmark problems and data sets, and it has been for a really long time. This is not at all new with ImageNet; it wasn't even new within the small world of computer vision.
[0]: <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226...>