Assessment of CNN+XGBoost Performance for Image Classification
Abstract
In recent years, convolutional neural network (CNN) models have provided state-of-the-art results in visual recognition tasks. Similarly, tree-based methods, in particular gradient tree boosting (XGBoost), have provided superior results in many applications. Given the strengths of both methods, the goal of this work is to implement a combined CNN+XGBoost model in which learned representations extracted from the CNN part are used as input features for the XGBoost part. It is of particular interest to investigate whether the XGBoost part improves the classification accuracy of the CNN part. In this work, we use existing architectures, namely AlexNet, AllConvolutionalNet, WideResNet, DenseNet and CaffeNet (in transfer learning mode), to extract features of different quality from the CNN part, where quality is defined by the classification accuracy of the corresponding CNN model. XGBoost is then trained on the extracted features, and the final accuracies of the AlexNet+XGBoost, AllConvolutionalNet+XGBoost, WideResNet+XGBoost, DenseNet+XGBoost and CaffeNet+XGBoost models are assessed. All experiments are performed on the CIFAR-10 image dataset. Our results show that features extracted by CNNs that already achieve more than 85–88% classification accuracy do not allow XGBoost to improve the final CNN+XGBoost classification performance.
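The pipeline described in the abstract amounts to two stages: the trained CNN is used as a fixed feature extractor, and XGBoost is then fit on the extracted representations. Below is a minimal sketch of that idea, assuming a PyTorch/torchvision setup with an ImageNet-pretrained DenseNet as a stand-in feature extractor; the actual CNN architectures, training procedures, and XGBoost hyperparameters used in the work are not reproduced here.

```python
# Sketch of a CNN+XGBoost pipeline on CIFAR-10 (assumed PyTorch/torchvision + xgboost setup;
# the stand-in DenseNet and all hyperparameters below are illustrative, not the thesis settings).
import numpy as np
import torch
import torchvision
import xgboost as xgb
from torch.utils.data import DataLoader

# CNN part: an ImageNet-pretrained DenseNet whose classification head is replaced by an
# identity layer, so the forward pass returns penultimate-layer features.
cnn = torchvision.models.densenet121(weights=torchvision.models.DenseNet121_Weights.DEFAULT)
cnn.classifier = torch.nn.Identity()
cnn.eval()

transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize(224),  # match the pretrained input resolution
    torchvision.transforms.ToTensor(),
])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=transform)

def extract_features(dataset):
    """Run the frozen CNN over a dataset and collect learned representations and labels."""
    feats, labels = [], []
    with torch.no_grad():
        for images, targets in DataLoader(dataset, batch_size=256):
            feats.append(cnn(images).numpy())
            labels.append(targets.numpy())
    return np.concatenate(feats), np.concatenate(labels)

X_train, y_train = extract_features(train_set)
X_test, y_test = extract_features(test_set)

# XGBoost part: gradient tree boosting trained on the extracted features.
booster = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
booster.fit(X_train, y_train)
print("CNN+XGBoost test accuracy:", booster.score(X_test, y_test))
```

In this setup the CNN weights stay frozen, so any accuracy gain over the CNN alone would have to come from the boosted trees re-combining the learned features, which is exactly the effect the abstract reports as absent once the CNN itself exceeds roughly 85–88% accuracy.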