Research Article: Improved support vector machine classification algorithm based on adaptive feature weight updating in the Hadoop cluster environment

Date Published: April 10, 2019

Publisher: Public Library of Science

Author(s): Jianfang Cao, Min Wang, Yanfei Li, Qi Zhang, Ulas Bagci.


An image classification algorithm based on adaptive feature weight updating is proposed to address the low classification accuracy of current single-feature classification algorithms and simple multifeature fusion algorithms. The MapReduce parallel programming model on the Hadoop platform is used to perform an adaptive fusion of the hue, local binary pattern (LBP) and scale-invariant feature transform (SIFT) features extracted from images and to derive optimal weight combinations. A support vector machine (SVM) classifier is then trained in parallel to obtain the optimal SVM classification model, which is subsequently tested. The Pascal VOC 2012, Caltech 256 and SUN databases are combined to build a massive image library. The speedup, classification accuracy and training time are evaluated experimentally, and the results show that the system's speedup grows approximately linearly in the cluster environment. Considering hardware cost, time performance and accuracy, the algorithm outperforms mainstream classification algorithms such as the power mean SVM and the convolutional neural network (CNN). As the number and variety of images increase, the classification accuracy exceeds 95%. When the number of images reaches 80,000, the training time of the proposed algorithm is only one-fifth that of traditional single-node architecture algorithms. These results demonstrate the effectiveness of the algorithm, which provides a basis for the effective analysis and processing of big image data.

Partial Text

The selection of image features and classifiers has long been a primary challenge in image classification [1]. Traditional image classification algorithms can be roughly divided into two categories: one comprises algorithms that rely on manual labels, and the other comprises algorithms that use keywords and text to describe and classify images. These methods are simple and easy to understand, but they are also time consuming, cumbersome and easily affected by subjectivity, and they yield inaccurate classification results [2]. Classification methods based on image content [3] were subsequently proposed; these methods avoid the effects of subjectivity and achieve high classification performance. Currently, image classification methods based on low-level visual features are commonly used for feature extraction and offer advantages in both accuracy and time complexity. However, the single-feature descriptions and single-node architectures these methods rely on have considerable defects, such as low classification accuracy and poor time performance when applied to big data [4].

In image classification, image characteristics are key factors that determine classification performance. Low-level visual feature algorithms are frequently used for image feature extraction, and image classification methods based on single low-level visual features (such as color, texture, or shape) are the most common [5]. Golpour et al. [6] classified 5 rice cultivars according to 36 color features extracted from the RGB, HSI, and HSV spaces of images. In the field of medicine, Sezer et al. [7] extracted the texture features of humeral heads via the Hermite transform and developed a computer-aided diagnosis system to identify images of normal and edematous humeral heads. Kothari et al. [8] proposed a shape-based image analysis method for renal tumors that used Fourier shape descriptors to extract shape features from images and captured the distribution of stain-enhanced cellular and tissue structures in each image to distinguish disease subtypes. Other algorithms based on single image features are also commonly used in image classification. Fidalgo et al. [9] classified images in the Caltech101 dataset by extracting scale-invariant feature transform (SIFT) features with the Edge-SIFT descriptor. Guccione et al. [10] proposed an iterative hyperspectral image classification algorithm based on spectral-spatial relational features by defining the spatial features of remote sensing images. Although researchers have selected different features for image classification, a single feature often fails to accurately describe the content of images in many application fields, so the performance of these single-feature-based algorithms can still be improved.

As an alternative, many scholars have recently studied the deeper features of images, that is, high-level semantic features. For example, Lei et al. [11] applied rough set theory to construct low-level feature decision tables for images and proposed a rough-set-based feature extraction algorithm to identify the low-level semantic features of images. Han et al. [12] proposed an algorithm to extract semantic features from images based on canonical correlation analysis and feature fusion, which effectively ensured consistency between low-level features and high-level semantics. For remote sensing images, Wang et al. [13] designed an image retrieval scheme suitable for scene matching that combined the visual features of images with the semantic features of object and spatial relationships. Although semantic features can effectively express the content of an image, they are complex and relatively difficult to extract; at present, most semantic feature extraction algorithms are still based on the low-level visual features of images [14].

Therefore, researchers have recently begun to consider image classification methods that integrate a variety of features. Zakeri et al. [15] extracted the texture and shape features of ultrasound images to differentiate benign and malignant breast masses. Lee et al. [16] proposed an automated detection and segmentation method for computed tomography (CT) images using texture and context feature classification, intended for use in the computer-aided diagnosis of renal cancer. Dhara et al. [17] described pulmonary nodules in lung CT images using shape-based, margin-based and texture-based features to classify benign and malignant nodules and assist in treatment. In the field of agriculture, Dubey et al. [18] studied an apple disease classification method based on color, texture and shape features and experimentally verified it against individual features. Liu et al.
[19] used the local binary pattern (LBP) operator to extract textural features from images and integrated them with color features, improving the efficiency of image classification and retrieval to a certain degree. Mirzapour et al. [20] fused the spectral information contained in hyperspectral images with their texture and shape features and performed repeated experiments to determine the optimal feature combination, improving the efficiency of hyperspectral image classification. When applying the multifeature image classification methods described above, determining the optimal combination of weights for the different features is of great importance. However, researchers generally determine these weights through repeated experiments or manual tuning. Such approaches are prone to subjectivity and improve classification only for specific kinds of images; the classification effectiveness often decreases when the same features and weights are applied to other image types. Therefore, identifying a multifeature weighting algorithm that is not strongly affected by human factors, has low complexity and fuses features effectively has become a key research topic. Among the various low-level features of different image types, hue features can effectively describe the most common color information in images using histograms [21]. LBP features offer significant advantages, including gray-level invariance, and are simple to calculate [22]. SIFT features are invariant to image scaling, rotation and affine transformation and exhibit very strong discriminative power [23]. These three features are therefore widely used in computer vision and are suitable for many types of image classification. Accordingly, this paper selects them for the fusion and classification of images.
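As an illustration of how such a fusion might look in code, the sketch below extracts a hue histogram and a basic 8-neighbor LBP histogram and combines them by weighted concatenation of L2-normalized vectors. This is a minimal sketch, not the paper's implementation: the SIFT component is omitted (in practice, SIFT descriptors would first be quantized into a bag-of-visual-words histogram before fusion), and the weights are fixed by hand here rather than learned adaptively.

```python
import numpy as np

def hue_histogram(rgb, bins=16):
    """Hue histogram (color feature). rgb: HxWx3 float array with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = np.where(mx - mn == 0, 1.0, mx - mn)  # avoid division by zero for gray pixels
    hue = np.select(
        [mx == r, mx == g],
        [(g - b) / delta % 6.0, (b - r) / delta + 2.0],
        default=(r - g) / delta + 4.0,
    ) / 6.0  # hue scaled to [0, 1)
    hist, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    return hist.astype(float)

def lbp_histogram(gray, bins=256):
    """Basic 8-neighbor LBP histogram (texture feature). gray: HxW array."""
    c = gray[1:-1, 1:-1]  # interior pixels, each compared against its 8 neighbors
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        codes |= (neighbour >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, 256))
    return hist.astype(float)

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def fuse_features(feature_vectors, weights):
    """Weighted concatenation of L2-normalized feature vectors."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # weights form a convex combination
    return np.concatenate([wi * l2_normalize(f) for wi, f in zip(w, feature_vectors)])
```

In the paper's setting, the weight vector passed to `fuse_features` would come from the adaptive weight-updating step rather than being a hand-picked constant.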

This study conducts a comparison experiment that considers three aspects, namely, the system speedup, classification accuracy and training time.
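For reference, the speedup reported in such experiments is conventionally the single-node running time divided by the n-node running time; a toy computation with purely hypothetical timings (the numbers below are illustrative, not the paper's measurements):

```python
def speedup(t_single: float, t_cluster: float) -> float:
    """Speedup S_n = T_1 / T_n: single-node training time over n-node training time."""
    return t_single / t_cluster

# Hypothetical training times in seconds for 1, 2, 4 and 8 nodes (illustrative only).
timings = {1: 800.0, 2: 420.0, 4: 230.0, 8: 130.0}
speedups = {n: speedup(timings[1], t) for n, t in timings.items()}
```

A "linear growth tendency", as reported in the abstract, means the computed speedup scales roughly in proportion to the node count.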

This study conducts a detailed investigation of the multifeature adaptive fusion method and the SVM classification algorithm on the Hadoop platform, applying the MapReduce parallel programming model to address the high computational burden, poor time performance and low classification accuracy of existing approaches. The proposed algorithm effectively improves classification accuracy while maintaining good time performance.
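To make the map/reduce division of labor concrete, here is a single-process sketch: each "map" task trains a linear SVM on its own data partition (via Pegasos-style subgradient descent), and the "reduce" step merges the per-partition models by simple averaging. Both the solver and the averaging merge are illustrative assumptions, not the paper's exact Hadoop implementation, which distributes this work across cluster nodes.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos-style subgradient descent for a linear SVM; labels y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # standard Pegasos step size
            if y[i] * X[i].dot(w) < 1.0:   # margin violated: hinge-loss subgradient
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                          # only the regularizer contributes
                w = (1.0 - eta * lam) * w
    return w

def map_phase(partitions, lam=0.01):
    """'Map': every node trains an SVM on its own split of the training data."""
    return [train_linear_svm(X, y, lam) for X, y in partitions]

def reduce_phase(models):
    """'Reduce': merge the per-node models by averaging their weight vectors."""
    return np.mean(models, axis=0)
```

Model averaging is only one possible merge strategy; cascade SVMs, which pass support vectors up a reduction tree, are a common alternative in distributed settings.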



