But some labels from different datasets are normalized to one label. Why? For example, nightstand appears twice in the labels file learned_mAP.csv.
Is it due to the statement "When there are different label granularities, we keep them all in our label-space, and expect to predict all of them" in the paper?
Hi,
This is a great question! This phenomenon is discussed in Fig. 3 of our paper: some classes with the same name can correspond to different definitions. We use oven as an example in the paper, where the definition of oven differs across the three datasets. We believe the nightstand labels in the two datasets are likewise different. Note this is NOT what we mean by label granularities: granularity refers to cases like OpenImages Boy vs. COCO person, which we deliberately did not merge.
If you prefer a tighter label space, we recently observed that simply increasing \tau in the label space learning algorithm does the trick. I uploaded a new label space with \tau=0.3, which gives a label-space size of 668 classes. This yields performance close to the paper's 701-class label space: 41.6/20.7/62.9 mAP on COCO/O365/OID (vs. 41.9/20.9/63.0).
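If you want to check which class names appear more than once in the released label file, a quick sketch (this assumes learned_mAP.csv holds one label per row with the class name in the first column; the actual file layout may differ):

```python
import csv
from collections import Counter

def duplicated_names(path="learned_mAP.csv"):
    """Return {name: count} for class names that occur more than once."""
    # Assumed format: one label per row, class name in the first column.
    with open(path, newline="") as f:
        names = [row[0].strip() for row in csv.reader(f) if row]
    counts = Counter(names)
    # Names appearing multiple times, e.g. nightstand from two datasets.
    return {n: c for n, c in counts.items() if c > 1}
```

Running this over the label space should surface duplicated entries such as nightstand, so you can see exactly which labels were kept separate rather than merged.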