A Pipeline for Optimizing F1-Measure in Multi-Label Text Classification

Published in the 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 2018

Bingyu Wang, Cheng Li, Virgil Pavlu, Javed Aslam

Abstract

Multi-label text classification is the machine learning task in which each document is tagged with multiple labels, and it is uniquely challenging due to high-dimensional features and correlated labels. Such text classifiers need to be regularized to prevent severe overfitting in the high-dimensional space, and they also need to account for label dependencies in order to make accurate predictions under uncertainty. Many classic multi-label learning algorithms focus on incorporating label dependencies in the model training phase and optimize for the strict set-accuracy measure. We propose a new pipeline that takes such algorithms and improves their F1 performance through careful training regularization and a new prediction strategy based on support inference, calibration, and the General F-Measure Maximizer (GFM), to the point that classic multi-label models outperform recent sophisticated methods (PDsparse, SPEN) as well as models designed specifically to be multi-label F-optimal (LSF, CFT, CLEMS). Beyond these performance and practical contributions, we further demonstrate that support inference acts as a strong regularizer on the label prediction structure.
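To make the target measure and the prediction idea concrete, the Python sketch below illustrates instance-level (example-based) F1 and a brute-force version of support inference: restrict candidate predictions to the label combinations observed in training (the "support") and pick the one maximizing expected F1 under calibrated probabilities. The function names, the toy data, and the exhaustive search are illustrative assumptions, not the paper's implementation; the paper's GFM procedure computes the F-optimal prediction far more efficiently from plug-in probability estimates.

    def instance_f1(y_true, y_pred):
        # Example-based F1: 2|Y ∩ Ŷ| / (|Y| + |Ŷ|); conventionally 1 when
        # both the true and predicted label sets are empty.
        if not y_true and not y_pred:
            return 1.0
        return 2 * len(y_true & y_pred) / (len(y_true) + len(y_pred))

    def predict_f1_optimal(support, probs):
        # Brute-force expected-F1 maximization over the support: the label
        # combinations seen in training. probs[i] is the calibrated
        # probability that the true label set equals support[i].
        best_pred, best_score = None, -1.0
        for pred in support:
            expected_f1 = sum(p * instance_f1(true, pred)
                              for true, p in zip(support, probs))
            if expected_f1 > best_score:
                best_pred, best_score = pred, expected_f1
        return best_pred

    # Hypothetical example: three candidate label sets with calibrated probabilities.
    support = [frozenset(), frozenset({"sports"}), frozenset({"sports", "soccer"})]
    probs = [0.1, 0.5, 0.4]
    print(predict_f1_optimal(support, probs))  # -> frozenset({'sports'})

Note that {"sports"} wins here (expected F1 ≈ 0.77) even though {"sports", "soccer"} is nearly as probable: hedging toward the smaller set is exactly the kind of prediction-time trade-off that F-measure optimization makes and strict set-accuracy does not.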

Advisor

Professor Javed Aslam

Advisor’s Statement

Bingyu was the first and primary author on this paper: he conceived many of the ideas, conducted all of the experiments, drew most of the conclusions and insights, and did most of the writing. The initial idea behind the paper was a mix of research and engineering: how could one build an efficient and effective machine learning pipeline for a particular objective (multi-label F1 optimization)? Beyond building such a pipeline, this work also led to novel and surprising insights into why other machine learning methods for this problem work (or fail).

Acceptance Rate:

  • Regular Papers: 31%
  • Short Papers: 14%

Accepted as a short paper (6 pages).

Download

[Paper] [Poster]