Is it possible to reach mAP 1.0 if not all of the objects could be detected?


#1

As the title mentions, could this happen?

I trained a person and face detector with YOLOv3 and applied it to the FDDB dataset, with an IoU threshold of 0.5 and an input size of 320. The program tells me the mAP is 1.0, which sounds weird to me, since not every face can be detected when the input shape is 320.

So either:

  1. VOC07MApMetric has a bug
  2. This is actually possible
  3. My code has a bug
  4. The rec file I generated has bugs
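For what it's worth, here is a minimal sketch of the VOC07 11-point interpolated AP (the scheme VOC07MApMetric is named after; the helper function and toy numbers are my own, not from your setup). It suggests that a missed ground-truth box should cap AP well below 1.0, since recall can never reach the higher interpolation points:

```python
# Sketch of VOC07 11-point interpolated AP: the mean, over recall levels
# r in {0.0, 0.1, ..., 1.0}, of the max precision achieved at recall >= r.
def voc07_ap(recalls, precisions):
    ap = 0.0
    for i in range(11):
        r = i / 10.0
        # Max precision among PR points with recall >= r; 0 if none exist.
        p = max([prec for rec, prec in zip(recalls, precisions) if rec >= r],
                default=0.0)
        ap += p / 11.0
    return ap

# Toy case: 2 ground-truth faces, 1 perfect detection, 1 face missed.
# The single true positive gives one PR point: recall 0.5 at precision 1.0.
print(voc07_ap([0.5], [1.0]))  # ~0.545, i.e. 6/11: AP cannot reach 1.0
```

So if some faces really go undetected, an exact 1.0 from this metric would point at option 1, 3, or 4 rather than 2.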

PS: I've put the details, model, and data on my blog.


#2

Hi @stereomatchingkiss,

I’d also be suspicious of an mAP of exactly 1.0, unless the objects in your dataset are large and could therefore be detected easily from a 320x320 input image. Could it be that you’re calculating mAP over very few test samples? Could rounding have been applied to the mAP? You could try plotting the predicted bounding boxes and comparing them to the ground truth. You could also try increasing the IoU threshold (e.g. to 0.99) and see if mAP < 1.0 then. See if these provide any clues.
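To make the IoU-threshold suggestion concrete, here is a small sketch (my own helper, assuming `(xmin, ymin, xmax, ymax)` boxes and made-up coordinates): even a prediction that looks nearly perfect when plotted will usually fail a 0.99 threshold, so a stricter IoU quickly reveals whether matches are real or just loose:

```python
# Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax).
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

truth = (50, 50, 150, 150)
pred  = (55, 55, 155, 155)   # shifted by only 5 px: looks fine when plotted
print(iou(truth, pred))       # ~0.82: a match at IoU 0.5, a miss at 0.99
```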


#3

The targets are faces, so I can’t say they are big. The bounding boxes of FDDB (the test set) are much larger than those of the training set (data from Kaggle; I adjusted the bounding boxes), because the FDDB boxes enclose the whole head, while the training-set boxes enclose only the facial features.

> You could try plotting the predicted bounding boxes and comparing them to the ground truth.

I tried that; the boxes look fine and the accuracy is close to perfect. I did not find any misclassified examples, but there are a few faces that cannot be detected.

> You could also try increasing the IoU threshold (e.g. to 0.99) and see if mAP < 1.0 then. See if these provide any clues.

I tried that too. If I increase the IoU threshold to 0.6 (or maybe it was 0.7), mAP becomes zero. Maybe this is because the training-set boxes enclose the facial features, while the test-set boxes enclose the whole head.
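A rough sketch of why that mismatch could do this (box sizes are hypothetical, just for the arithmetic): a predicted box that hugs the facial features sits entirely inside FDDB's head-sized truth box, so the IoU collapses to area(face) / area(head), which can easily land between 0.5 and 0.6:

```python
# Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax).
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

head = (0, 0, 100, 120)    # hypothetical FDDB truth: whole head
face = (15, 25, 90, 115)   # hypothetical prediction: facial features only
# Prediction nested in truth, so IoU = face area / head area = 6750 / 12000.
print(iou(face, head))      # 0.5625: matches at IoU 0.5, misses at 0.6
```

That would be consistent with mAP surviving at 0.5 but dropping to zero at 0.6.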

There are 2845 images for testing, which may contain more than 6000 faces (I haven’t counted). I think this is big enough for testing (the training set only has 406 images).

I wonder too.