Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/22544
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, H | - |
dc.contributor.author | Chen, L | - |
dc.contributor.author | Chen, M | - |
dc.contributor.author | Ma, Z | - |
dc.contributor.author | Deng, F | - |
dc.contributor.author | Li, M | - |
dc.contributor.author | Li, X | - |
dc.date.accessioned | 2021-04-16T09:54:08Z | - |
dc.date.available | 2019-01-01 | - |
dc.date.available | 2021-04-16T09:54:08Z | - |
dc.date.issued | 2019-12-11 | - |
dc.identifier.citation | Yang, H., Chen, L., Chen, M., Ma, Z., Deng, F., Li, M. and Li, X. (2019) 'Tender Tea Shoots Recognition and Positioning for Picking Robot Using Improved YOLO-V3 Model', IEEE Access, 2019, 7 pp. 180998 - 181011. doi: 10.1109/ACCESS.2019.2958614. | en_US |
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/22544 | - |
dc.description.abstract | To recognize the tender shoots of high-quality tea and to determine the picking points accurately and quickly, this paper proposes a method for recognizing the picking points of tender tea shoots using an improved YOLO-v3 deep convolutional neural network. The method achieves end-to-end target detection and recognition of the different postures of high-quality tea shoots, balancing efficiency and accuracy. First, to predict the category and position of tender tea shoots, an image pyramid structure is used to obtain feature maps of tea shoots at different scales. A residual network block structure is added to the downsampling part, and the fully connected layers at the end are replaced by a $1 \times 1$ convolution operation, ensuring accurate recognition results while simplifying the network structure. The K-means method is used to cluster the dimensions of the target boxes. Finally, an image data set of picking points for high-quality tea shoots is built. The accuracy of the trained model on the verification set is over 90%, much higher than the detection accuracy of previous research methods. | en_US |
dc.description.sponsorship | Natural Science Foundation of Shandong Province under Grant ZR2019MEE102; Key Research and Development Program of Shandong Province under Grant 2018GNC112007; Project of Shandong Province Higher Educational Science and Technology Program under Grant J18KA015. | en_US |
dc.format.extent | 180998 - 181011 | - |
dc.format.medium | Electronic | - |
dc.language.iso | en_US | en_US |
dc.publisher | IEEE | en_US |
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | - |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | - |
dc.subject | image recognition | en_US |
dc.subject | YOLO-v3 | en_US |
dc.subject | convolutional neural network | en_US |
dc.subject | image pyramid | en_US |
dc.subject | tea shoot | en_US |
dc.title | Tender Tea Shoots Recognition and Positioning for Picking Robot Using Improved YOLO-V3 Model | en_US |
dc.type | Article | en_US |
dc.identifier.doi | https://doi.org/10.1109/ACCESS.2019.2958614 | - |
dc.relation.isPartOf | IEEE Access | - |
pubs.publication-status | Published | - |
pubs.volume | 7 | - |
dc.identifier.eissn | 2169-3536 | - |
Appears in Collections: | Dept of Electronic and Electrical Engineering Research Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FullText.pdf | | 4.29 MB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License
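The K-means anchor clustering step mentioned in the abstract can be sketched as follows. This is a minimal illustration of the YOLO-style technique (clustering box widths and heights with 1 − IoU as the distance), not the authors' code; the function names and the synthetic box sizes are invented for the example.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors, comparing widths/heights only
    (boxes are treated as if they share a corner, as in YOLO clustering).
    boxes: (N, 2) array of [w, h]; anchors: (K, 2) array of [w, h]."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, None, 0] * boxes[:, None, 1] +
             anchors[None, :, 0] * anchors[None, :, 1] - inter)
    return inter / union  # shape (N, K)

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster box dimensions with distance = 1 - IoU (YOLO-style K-means)."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor with the highest IoU (lowest 1 - IoU).
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        # Recompute each anchor as the mean of its assigned boxes.
        new = np.array([boxes[assign == j].mean(axis=0)
                        if np.any(assign == j) else anchors[j]
                        for j in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors

# Synthetic example: two size clusters of ground-truth boxes.
rng = np.random.default_rng(1)
boxes = np.vstack([np.full((20, 2), 10.0), np.full((20, 2), 40.0)])
boxes += rng.normal(0.0, 1.0, boxes.shape)
anchors = kmeans_anchors(boxes, k=2)
```

On this toy data the two recovered anchors land near 10×10 and 40×40, matching the two box populations; on a real data set the resulting anchors would replace YOLO-v3's default priors.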