HW5 Q2 and Q3


#1

After creating the different datasets, I am still getting >99% test accuracy for all the different lambda values. The following question says that internal covariate shift is harmful, but the accuracies I am getting don't really reflect this. Am I doing something incorrectly?


#2

Two thoughts:

  1. Did you use batch normalization? Since it reduces shifts in hidden-unit values, you won't see much change if you did.
  2. If your positive class is "t-shirt + shirt", try a different pairing such as "trouser + shirt". I suspect "t-shirt" and "shirt" are too similar to each other, so there won't be much distribution shift between them.

#3

I have the same issue. I didn't use batch normalization and tried "trouser + shirt", but I still get >99% accuracy…


#4

Any updates on this? I am not using batch normalization, just a regular fully connected NN. Is there a typo, and are we supposed to reweight the classes themselves? Otherwise, shirts/t-shirts/trousers and shoes/sandals are pretty easy to distinguish.


#5

I got an accuracy of 0.499 for all of the lambda values. I feel something is wrong here. Is it reasonable to get only ~50% accuracy?


#6

Let's make a small change to Q2:

For this, compose a dataset of 12,000 observations, given by a mixture of shirt + t-shirt and of sandal + sneaker, where you use a fraction λ ∈ {0.05, 0.1, 0.2, …, 0.8, 0.9, 0.95} of one (shirt and t-shirt) and a fraction 1 − λ of the other (sandal and sneaker).

For instance, for λ = 0.1 you might pick a total of 600 shirt and 600 t-shirt images and likewise 5,400 sandal and 5,400 sneaker photos, yielding a total of 12,000 images for training. Note that the test set remains unbiased, composed of 2,000 photos each for the shirt + t-shirt category and the sandal + sneaker category.
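A minimal sketch of how this sampling could be done, assuming the torchvision Fashion-MNIST label convention (0 = t-shirt, 5 = sandal, 6 = shirt, 7 = sneaker) and using stand-in labels instead of the real dataset; `compose_biased_train_set` is a hypothetical helper name:

```python
import numpy as np

# Fashion-MNIST class indices (torchvision convention, assumed):
TSHIRT, SANDAL, SHIRT, SNEAKER = 0, 5, 6, 7

def compose_biased_train_set(labels, lam, total=12_000, seed=0):
    """Return indices of a training set where a fraction `lam` comes from
    shirt + t-shirt and a fraction 1 - lam from sandal + sneaker."""
    rng = np.random.default_rng(seed)
    n_pos = int(round(lam * total / 2))   # per positive class (shirt, t-shirt)
    n_neg = total // 2 - n_pos            # per negative class (sandal, sneaker)
    chosen = []
    for cls, n in [(SHIRT, n_pos), (TSHIRT, n_pos),
                   (SANDAL, n_neg), (SNEAKER, n_neg)]:
        cls_idx = np.flatnonzero(labels == cls)
        chosen.append(rng.choice(cls_idx, size=n, replace=False))
    return np.concatenate(chosen)

# Demo with stand-in labels: 6,000 images per class, like Fashion-MNIST's train split.
labels = np.repeat(np.arange(10), 6000)
idx = compose_biased_train_set(labels, lam=0.1)
print(len(idx))                                        # 12000
print(np.isin(labels[idx], [SHIRT, TSHIRT]).mean())    # 0.1
```

For λ = 0.1 this selects 600 shirt + 600 t-shirt and 5,400 sandal + 5,400 sneaker indices, matching the example above; with the real dataset you would index the image tensor with `idx`.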


#7

I made the change you suggested, and I'm still seeing a very high (>99%) accuracy rate even when the shift is pretty extreme.


#8

Same here. No big change after adjusting the dataset.