Research Article: Subspace structural constraint-based discriminative feature learning via nonnegative low rank representation

Date Published: May 7, 2019

Publisher: Public Library of Science

Author(s): Ao Li, Xin Liu, Yanbing Wang, Deyun Chen, Kezheng Lin, Guanglu Sun, Hailong Jiang, Kim Han Thung.

http://doi.org/10.1371/journal.pone.0215450

Abstract

Feature subspace learning plays a significant role in pattern recognition, and many efforts have been made to build increasingly discriminative learning models. Recently, several discriminative feature learning methods based on representation models have been proposed; they have not only attracted considerable attention but also achieved success in practical applications. Nevertheless, these methods construct the learning model by relying solely on the class labels of the training instances and fail to exploit the essential subspace structural information hidden in them. In this paper, we propose a robust feature subspace learning approach based on a low-rank representation. In our approach, the low-rank representation coefficients are used as weights to construct the constraint term for feature learning, which introduces a subspace structural similarity constraint into the learning model and thereby improves data adaptation and robustness. Moreover, by placing subspace learning and low-rank representation into a unified framework, the two can benefit each other during the iterations and reach an overall optimum. To gain additional discriminative power, linear regression is also incorporated into our model to pull the projected features close to their label-based class centers. Furthermore, an iterative numerical scheme is designed to solve the proposed objective function with guaranteed convergence. Extensive experimental results on several public image datasets demonstrate the advantages and effectiveness of our approach over existing methods.
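To make the ingredients above concrete, the following is a plausible sketch of the kind of objective the abstract describes; the exact formulation is given in the full paper, and the notation used here (projection P, nonnegative low-rank coefficients Z, sparse error E, regression matrix W, label-indicator matrix H, and trade-off weights λ, α, β) is an assumption for illustration only:

```latex
\min_{P,\,Z,\,E,\,W}\;
  \underbrace{\|Z\|_{*} + \lambda \|E\|_{2,1}}_{\text{nonnegative low-rank representation}}
  + \alpha \underbrace{\sum_{i,j} Z_{ij}\,\bigl\|P^{\top}x_i - P^{\top}x_j\bigr\|_2^2}_{\text{representation-weighted subspace constraint}}
  + \beta \underbrace{\bigl\|H - W^{\top}P^{\top}X\bigr\|_F^2}_{\text{label-based regression}}
\quad \text{s.t.}\quad X = XZ + E,\;\; Z \ge 0
```

In a formulation of this shape, the coefficient Z_{ij} acts as a data-driven weight: samples that strongly reconstruct each other in the low-rank representation are pulled close together in the projected subspace, which is how the subspace structural similarity constraint described in the abstract would operate.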

Partial Text

Feature subspace learning is a critical technique for feature extraction and has been widely studied in computer vision, data mining, and pattern recognition [1, 2, 3]. Many representative methods have been proposed for feature subspace learning. For example, principal component analysis (PCA) [4] is a classical unsupervised feature learning method that projects high-dimensional data onto a lower-dimensional subspace in which the variance of the projected samples is maximized. Aiming to preserve the local neighborhood structure of the data manifold, He et al. proposed neighborhood-preserving embedding (NPE) [5], which showed advantages over PCA in robustness to noise and reduced sensitivity to outliers. Locality-preserving projection (LPP) [6] is another effective feature projection method, which attempts to preserve more of the local structure of the original image space. Although both NPE and LPP are unsupervised feature learning methods, they can be extended to supervised scenarios to achieve improved performance. To improve the robustness and discriminative ability of preserving-projection methods, structurally incoherent low-rank 2DLPP (SILR-2DLPP) [7] was proposed, which realizes discriminability in preserving-projection learning by recovering structurally incoherent samples from different classes. Linear discriminant analysis (LDA) [8] is a well-known supervised subspace learning method, which obtains a projection via Fisher's criterion and produces well-separated classes in a low-dimensional subspace with discriminative information. As a further improvement, locality-sensitive discriminant analysis (LSDA) [9] was presented, which learns the projection from the local manifold structure so as to maximize the margin between data points of different classes in each local area.
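As a concrete illustration of the classical baseline mentioned above, here is a minimal PCA sketch in NumPy (not taken from the paper): it centers the data and projects onto the top-k eigenvectors of the covariance matrix, i.e., the maximum-variance directions.

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X (n_samples x n_features) onto the top-k
    principal components, i.e., the maximum-variance subspace."""
    X_centered = X - X.mean(axis=0)                     # center the data
    cov = np.cov(X_centered, rowvar=False)              # feature covariance
    eigvals, eigvecs = np.linalg.eigh(cov)              # eigh: ascending order
    top_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # top-k directions
    return X_centered @ top_k                           # low-dimensional features

# Example: reduce 100 samples of 50-D data to 10 dimensions
X = np.random.randn(100, 50)
features = pca_project(X, 10)   # shape (100, 10)
```

Supervised methods such as LDA replace the maximum-variance criterion with a class-separation criterion, which is the distinction the paragraph above draws between unsupervised and supervised subspace learning.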

In this section, we briefly review related work on LRR and on discriminative subspace learning.

In this section, we present our discriminative feature learning model and detail and analyze its novel objective function. To solve the objective function efficiently, we also develop a numerical scheme that yields an approximate solution.

In this paper, a robust and discriminative feature subspace learning method is proposed for feature extraction and classification tasks. Our approach iteratively learns a subspace under two types of constraints, derived from a low-rank representation and from class labels, respectively. An augmented Lagrange multiplier (ALM) method with block coordinate descent (BCD) is developed to solve the resulting optimization problem with guaranteed convergence. The proposed approach is evaluated on several public datasets, and the experimental results demonstrate its competitive and often superior performance compared with conventional methods. In addition, when the data are corrupted by noise, our approach is more robust than the competing methods. In future work, we may extend our approach to a semi-supervised feature learning scenario and design new regularization constraints to further improve classification performance.
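For readers who want a feel for the optimization, below is a schematic sketch (not the authors' exact algorithm) of an inexact-ALM loop with block coordinate descent for the nonnegative low-rank representation core, X = XZ + E with Z ≥ 0. The subspace (P) and regression (W) blocks from the paper's full model are indicated only as commented placeholders, and all step sizes, parameter names, and helper functions here are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nonneg_lrr_alm(X, lam=0.1, n_iters=100, mu=1e-2, rho=1.1, mu_max=1e6):
    """Schematic inexact-ALM / block-coordinate-descent loop for
    min ||Z||_* + lam * ||E||_{2,1}  s.t.  X = X @ Z + E,  Z >= 0,
    with X of shape (d, n) holding one sample per column."""
    n = X.shape[1]
    Z = np.zeros((n, n))
    E = np.zeros_like(X)
    Y = np.zeros_like(X)                  # Lagrange multiplier for X = XZ + E
    eta = np.linalg.norm(X, 2) ** 2       # Lipschitz constant for the linearized Z-step
    for _ in range(n_iters):
        # Z-block: linearized proximal step, then projection onto Z >= 0
        G = Z + X.T @ (X - X @ Z - E + Y / mu) / eta
        Z = np.maximum(svt(G, 1.0 / (mu * eta)), 0.0)
        # E-block: column-wise shrinkage (proximal operator of the l2,1 norm)
        R = X - X @ Z + Y / mu
        col_norms = np.maximum(np.linalg.norm(R, axis=0, keepdims=True), 1e-12)
        E = R * np.maximum(1.0 - (lam / mu) / col_norms, 0.0)
        # P-block / W-block of the paper's full model would go here, e.g.
        # P = update_subspace(X, Z, ...); W = update_regression(P, X, H, ...)
        # Dual ascent on the multiplier, then increase the penalty
        Y = Y + mu * (X - X @ Z - E)
        mu = min(rho * mu, mu_max)
    return Z, E
```

Each pass updates one block of variables while the others are held fixed, which is the BCD structure; in practice convergence is monitored through the constraint residual ||X - XZ - E||.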

 


 
