Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/22544
Title: Tender Tea Shoots Recognition and Positioning for Picking Robot Using Improved YOLO-V3 Model
Authors: Yang, H
Chen, L
Chen, M
Ma, Z
Deng, F
Li, M
Li, X
Keywords: image recognition;YOLO-v3;convolutional neural network;image pyramid;tea shoot
Issue Date: 11-Dec-2019
Publisher: IEEE
Citation: Yang, H., Chen, L., Chen, M., Ma, Z., Deng, F., Li, M. and Li, X. (2019) 'Tender Tea Shoots Recognition and Positioning for Picking Robot Using Improved YOLO-V3 Model', IEEE Access, 7, pp. 180998-181011. doi: 10.1109/ACCESS.2019.2958614.
Abstract: To recognize the tender shoots of high-quality tea and to determine the picking points accurately and quickly, this paper proposes a method for recognizing the picking points of tender tea shoots with an improved YOLO-v3 deep convolutional neural network algorithm. The method achieves end-to-end target detection and recognition of the different postures of high-quality tea shoots, balancing efficiency and accuracy. First, to predict the category and position of tender tea shoots, an image pyramid structure is used to obtain feature maps of the tea shoots at different scales. Residual network blocks are added to the downsampling part, and the fully connected layers at the end are replaced by a 1 × 1 convolution operation, preserving identification accuracy while simplifying the network structure. The K-means method is used to cluster the dimensions of the target boxes. Finally, an image data set of picking points for high-quality tea shoots is built. The accuracy of the trained model on the validation set exceeds 90%, much higher than the detection accuracy of existing methods.
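The abstract's K-means step refers to clustering the width/height dimensions of the labeled target boxes to pick anchor sizes, a standard preprocessing step for YOLO-style detectors. Below is a minimal sketch of that idea using the 1 − IoU distance commonly used for anchor clustering; the function names and the use of NumPy are assumptions, not the paper's actual code.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between each (w, h) box and each (w, h) anchor,
    assuming all boxes share a common top-left corner."""
    inter_w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (w, h) box dimensions with the 1 - IoU distance
    to obtain k anchor sizes for a YOLO-style detector."""
    rng = np.random.default_rng(seed)
    # Initialize anchors from k randomly chosen ground-truth boxes.
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor with highest IoU (lowest 1 - IoU).
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        # Recompute each anchor as the mean of its assigned boxes;
        # keep the old anchor if a cluster is empty.
        new = np.array([boxes[assign == j].mean(axis=0)
                        if np.any(assign == j) else anchors[j]
                        for j in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors
```

With the tea-shoot data set, `boxes` would hold the annotated bounding-box widths and heights, and the returned anchors would replace YOLO-v3's default priors.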
URI: https://bura.brunel.ac.uk/handle/2438/22544
DOI: https://doi.org/10.1109/ACCESS.2019.2958614
Appears in Collections:Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf (4.29 MB, Adobe PDF)


This item is licensed under a Creative Commons License.