Biased prediction in image recognition in R 3.5.0

Hi All,

I am getting biased predictions from a model trained with the mxnet package in R 3.5.0.

I am posting the sample code used for model training.

Thank you in advance for looking into this query.


library(caret)   # createDataPartition comes from caret

# 90/10 train/test split on the label column
training_index <- createDataPartition(complete_set$label, p = .9, times = 1)
training_index <- unlist(training_index)
train_set <- complete_set[training_index, ]
dim(train_set)
test_set <- complete_set[-training_index, ]
dim(test_set)

# Fix train and test datasets

train_data <- data.matrix(train_set)
train_x <- t(train_data[, -1])   # pixel columns, transposed to (pixels x samples)
train_y <- train_data[, 1]       # first column holds the class label
train_array <- train_x
dim(train_array) <- c(72, 72, 1, ncol(train_x))   # 72x72 single-channel images


test_data <- data.matrix(test_set)
test_x <- t(test_data[,-1])
test_y <- test_data[,1]
test_array <- test_x
dim(test_array) <- c(72, 72, 1, ncol(test_x))
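
(Optional) A quick sanity check of the reshaped array dimensions and the label counts in each split, using the objects defined above:

# Sanity check: array shapes and per-class counts in each split
dim(train_array)   # expect 72 72 1 <number of training samples>
dim(test_array)    # expect 72 72 1 <number of test samples>
table(train_y)     # both classes should appear in similar numbers
table(test_y)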

library(mxnet)

# Model

mx_data <- mx.symbol.Variable("data")

# 1st convolutional layer: 6x6 kernel and 20 filters.

conv_1 <- mx.symbol.Convolution(data = mx_data, kernel = c(6, 6), num_filter = 20)
tanh_1 <- mx.symbol.Activation(data = conv_1, act_type = "tanh")
pool_1 <- mx.symbol.Pooling(data = tanh_1, pool_type = "max", kernel = c(3, 3), stride = c(3, 3))

# 2nd convolutional layer: 6x6 kernel and 30 filters.

conv_2 <- mx.symbol.Convolution(data = pool_1, kernel = c(6,6), num_filter = 30)
tanh_2 <- mx.symbol.Activation(data = conv_2, act_type = "tanh")
pool_2 <- mx.symbol.Pooling(data = tanh_2, pool_type = "max", kernel = c(3, 3), stride = c(3, 3))

# 1st fully connected layer

flat <- mx.symbol.Flatten(data = pool_2)
fcl_1 <- mx.symbol.FullyConnected(data = flat, num_hidden = 500)
tanh_3 <- mx.symbol.Activation(data = fcl_1, act_type = "tanh")

# 2nd fully connected layer

fcl_2 <- mx.symbol.FullyConnected(data = tanh_3, num_hidden = 2) # 2 classes

# Output

NN_model <- mx.symbol.SoftmaxOutput(data = fcl_2)

# Set seed for reproducibility

mx.set.seed(100)

# Device used. Sadly not the GPU :frowning:

device <- mx.cpu()

gc()

# Train on 1200 samples

model <- mx.model.FeedForward.create(NN_model, X = train_array, y = train_y,
ctx = device,
num.round = 30,
# array.batch.size = 20,
learning.rate = 0.01,
# momentum = 0.9,
# wd = 0.00001,
eval.metric = mx.metric.accuracy,
epoch.end.callback = mx.callback.log.train.metric(100))

predict_probs <- predict(model, test_array)

# predict() returns a (num_classes x num_samples) matrix; pick the most
# probable class per sample and shift to 0-based labels to match the data.
predicted_labels <- max.col(t(predict_probs)) - 1

table(test_data[, 1], predicted_labels)
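
As a quick diagnostic (a sketch using the objects created above), tabulating the predicted labels on their own shows immediately whether every test sample is being assigned the same class, and row means of the probability matrix show how strongly the model favours each class overall:

# If only one value appears here, all predictions collapsed to a single class
table(predicted_labels)
# Average predicted probability per class across the test set
# (predict() returns classes as rows and samples as columns)
rowMeans(predict_probs)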


Here, I am trying to solve a classification problem with 2 labelled classes.
The model posted above predicts only one of the labels for every test sample.
Your help would be really appreciated.

Thanks.

Hi @apshreyans,

Could you clarify what you mean by “only 1 side of the labels”? An example output would be helpful, thanks!

If the model is always predicting the same class for all samples, you may have an unbalanced dataset, where there are many more samples from one class than from the others. If so, a simple technique is to “undersample” the majority classes so that all of the classes have an equal number of samples in your training set. See this article. Otherwise, you may want to look at weighting samples inversely to the frequency of their class.
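
For example, a minimal undersampling sketch with caret (already used above for createDataPartition) could look like the following; this assumes the class label is the first column of complete_set, and you would redo the train/test split and array reshaping on the balanced data afterwards:

# Hypothetical sketch: downsample the majority class so that both classes
# end up with the same number of rows (assumes the label is in column 1).
balanced_set <- downSample(x = complete_set[, -1],
                           y = as.factor(complete_set[, 1]),
                           yname = "label")
table(balanced_set$label)   # both classes should now have equal counts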

Thanks for your time, @thomelane.

I have 2 classes of target variable values.
On the test data, the predictions are biased towards a single value only, instead of covering both values.

My dataset is a balanced one.

Could you please help me with this?

Thank you in advance.

-Shreyans