Research Article: Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields

Date Published: April 18, 2019

Publisher: Public Library of Science

Author(s): Xu Ma, Xiangwu Deng, Long Qi, Yu Jiang, Hongwei Li, Yuwei Wang, Xupo Xing, Jie Zhang.

http://doi.org/10.1371/journal.pone.0215676

Abstract

To reduce production costs and the environmental pollution caused by the overapplication of herbicide in paddy fields, the positions of rice seedlings and weeds must be detected for site-specific weed management (SSWM). With the development of deep learning, a semantic segmentation method based on the SegNet model, a fully convolutional network (FCN), was proposed. In this paper, RGB color images of seedling-stage rice were captured in a paddy field, and ground truth (GT) images were obtained by manually labeling the pixels in the RGB images with three categories, namely, rice seedlings, background, and weeds. Class weight coefficients were calculated to address the imbalance in the numbers of pixels among the categories. The GT and RGB images were used for training and testing: 80% of the samples were randomly selected as the training dataset, and the remaining 20% were used as the test dataset. The proposed method was compared with two classical semantic segmentation models, namely, FCN and U-Net. The average accuracy rate of the SegNet method was 92.7%, whereas the average accuracy rates of the FCN and U-Net methods were 89.5% and 70.8%, respectively. The proposed SegNet method realized higher classification accuracy; it effectively classified the pixels of rice seedlings, background, and weeds in paddy field images and located their regions.
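The abstract names two preprocessing steps: class weight coefficients to counter class imbalance and a random 80/20 train/test split. The sketch below illustrates both under stated assumptions; it is not the authors' code, the median-frequency balancing formula is an assumed weighting scheme (the abstract does not specify one), and the label order is hypothetical.

```python
# Minimal sketch (not the authors' code) of the two preprocessing steps the
# abstract describes. Median-frequency balancing is an assumed weighting
# scheme; the class/label order below is likewise an assumption.
import numpy as np

NUM_CLASSES = 3  # 0 = background, 1 = rice seedling, 2 = weed (order assumed)

def class_weights(label_images):
    """Median-frequency balancing over a list of integer label maps."""
    pixel_counts = np.zeros(NUM_CLASSES)
    image_counts = np.zeros(NUM_CLASSES)  # pixels in images where class appears
    for labels in label_images:
        for c in range(NUM_CLASSES):
            n = np.sum(labels == c)
            if n > 0:
                pixel_counts[c] += n
                image_counts[c] += labels.size
    freq = pixel_counts / image_counts  # per-class pixel frequency
    return np.median(freq) / freq      # rarer classes receive larger weights

def train_test_split(samples, train_fraction=0.8, seed=0):
    """Randomly assign 80% of samples to training and 20% to testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(train_fraction * len(samples))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```

With such weights, the per-pixel loss for a rare class (here, weeds occupying few pixels) counts more than that of the dominant background class, which is one standard way to keep the network from collapsing onto the majority label.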

Partial Text

Rice is one of the major global food crops and feeds over 65% of the Chinese population [1]; however, weeds in farmland impede the growth of crops. Weeds decrease rice production by competing for moisture, nutrients, and light in paddy fields [2]. In traditional agriculture, the dominant weeding method, spraying chemical herbicides broadly without distinguishing between crops and weeds, not only wastes herbicide and labor but also causes environmental pollution and health hazards for humans [3]. Precise pesticide spraying via site-specific weed management (SSWM) in smart farming can reduce pesticide consumption by approximately 40–60%, thereby reducing environmental pollution and increasing economic profits [4]. To realize these benefits, accurately and automatically identifying weeds and their positions is the foundation of site-specific spraying.

This paper proposes a semantic segmentation method based on a fully convolutional network with the SegNet model, which extracts features directly from raw RGB images and classifies the pixels corresponding to rice seedlings, background, and weeds in paddy field images. The proposed method is compared with the classical semantic segmentation models FCN and U-Net in terms of performance. SegNet's symmetric encoder-decoder structure (sketched below) extracts multiscale features and improves the accuracy of feature extraction, making it well suited to the pixel classification of the small, irregularly shaped rice seedlings and weeds in paddy fields. The proposed SegNet method realized higher classification accuracy: its average accuracy rate was 92.7%, whereas the average accuracy rates of the FCN and U-Net methods were 89.5% and 70.8%, respectively. The proposed pixelwise classification method, based on fully convolutional neural networks, effectively classified rice seedlings, background, and weeds in paddy field images, and it could perform pixel classification of RGB images in real time, meeting application requirements.
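To make the encoder-decoder idea concrete, here is a minimal PyTorch sketch of a SegNet-style network: the decoder upsamples with the max-pooling indices saved by the encoder, which is SegNet's distinguishing mechanism. The layer widths and depth are illustrative assumptions, not the paper's architecture.

```python
# Minimal SegNet-style encoder-decoder sketch (illustrative, not the paper's
# exact architecture). The decoder unpools with the indices recorded by the
# encoder's max-pooling layers, then convolves to refine the upsampled maps.
import torch
import torch.nn as nn

class MiniSegNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                                  nn.BatchNorm2d(64), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1),
                                  nn.BatchNorm2d(128), nn.ReLU())
        self.pool = nn.MaxPool2d(2, 2, return_indices=True)  # keep indices
        self.unpool = nn.MaxUnpool2d(2, 2)                   # reuse them
        self.dec2 = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1),
                                  nn.BatchNorm2d(64), nn.ReLU())
        self.dec1 = nn.Conv2d(64, num_classes, 3, padding=1)  # per-pixel scores

    def forward(self, x):
        x, idx1 = self.pool(self.enc1(x))    # encoder stage 1
        x, idx2 = self.pool(self.enc2(x))    # encoder stage 2
        x = self.dec2(self.unpool(x, idx2))  # decoder mirrors the encoder
        x = self.dec1(self.unpool(x, idx1))
        return x                             # (N, num_classes, H, W) logits

logits = MiniSegNet()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 3, 224, 224])
```

Reusing the pooling indices restores fine spatial detail without learned upsampling weights, which is why this style of decoder handles small, thin structures such as seedling leaves comparatively well.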

 

