Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/23405
Title: Self-supervised Representation Learning for Videos by Segmenting via Sampling Rate Order Prediction
Authors: Meng, H
Huang, J
Huang, Y
Wang, Q
Yang, W
Keywords: task analysis;semantics;streaming media;feature extraction;data mining;motion segmentation;image segmentation
Issue Date: 20-Sep-2021
Publisher: IEEE
Citation: Huang, J. et al. 'Self-supervised Representation Learning for Videos by Segmenting via Sampling Rate Order Prediction,' IEEE Transactions on Circuits and Systems for Video Technology, 32 (6), pp. 3475 - 3489. doi: 10.1109/TCSVT.2021.3114209.
Abstract: Self-supervised representation learning for videos has attracted considerable attention recently because these methods exploit information inherently available in the video itself rather than annotated labels, which are time-consuming to obtain. However, existing methods ignore the importance of global observation while performing spatio-temporal transformation perception, which severely limits the expressive capability of the learned video representation. This paper proposes a novel pretext task that combines temporal information perception of the video with motion amplitude perception of moving objects to learn the spatio-temporal representation of the video. Specifically, given a video clip containing several video segments, each segment is sampled at a different sampling rate and the order of the segments is disrupted. The network is then used to regress the sampling rate of each segment and classify the order of the input segments. In the pre-training stage, the network can learn rich spatio-temporal semantic information, and content-related contrastive learning is introduced to make the learned video representation more discriminative. To alleviate the appearance dependency caused by contrastive learning, we design a novel and robust vector similarity measurement approach that takes feature alignment into consideration. Moreover, a view synthesis framework is proposed to further improve the performance of contrastive learning by automatically generating reasonable transformed views. We conduct benchmark experiments with three 3D backbone networks on two datasets. The results show that our proposed method outperforms existing state-of-the-art methods across the three backbones on two downstream tasks: human action recognition and video retrieval.
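As an illustrative aid only (this is not the authors' released code), the following Python/NumPy sketch shows one way the pretext-task inputs described in the abstract could be constructed: a clip is split into segments, each segment is sampled at a different temporal stride (sampling rate), and the segment order is shuffled; the sampling rates and the permutation then serve as self-supervised targets for the regression and order-classification heads. All names, segment counts, and rate values here are hypothetical.

import numpy as np

def build_pretext_sample(frames, num_segments=3, rates=(1, 2, 4), seg_len=16, seed=None):
    # frames: array of shape (T, H, W, C); T should be at least seg_len * sum(rates).
    rng = np.random.default_rng(seed)
    chosen_rates = rng.permutation(np.array(rates))[:num_segments]
    segments = []
    start = 0
    for rate in chosen_rates:
        # Sample seg_len frames at temporal stride `rate` (the sampling rate to be regressed).
        idx = start + np.arange(seg_len) * rate
        segments.append(frames[idx])
        start = idx[-1] + 1  # the next segment begins after the current one ends
    # Disrupt the temporal order of the segments; the permutation is the order label to classify.
    order = rng.permutation(num_segments)
    clips = np.stack([segments[i] for i in order])        # (num_segments, seg_len, H, W, C)
    rate_labels = chosen_rates[order].astype(np.float32)  # per-segment sampling-rate regression targets
    return clips, rate_labels, order

# Example usage with dummy data (hypothetical clip shape):
# video = np.zeros((200, 112, 112, 3), dtype=np.float32)
# clips, rate_labels, order_label = build_pretext_sample(video, seed=0)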
URI: https://bura.brunel.ac.uk/handle/2438/23405
DOI: https://doi.org/10.1109/TCSVT.2021.3114209
ISSN: 1051-8215
Other Identifiers: ORCiD: Jing Huang https://orcid.org/0000-0003-3445-6164
ORCiD: Yan Huang https://orcid.org/0000-0001-7868-093X
ORCiD: Qicong Wang https://orcid.org/0000-0001-7324-0433
ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382
Appears in Collections:Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf (2.97 MB, Adobe PDF)
Description: Copyright © 2021 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. See: https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelinesand-policies/post-publication-policies/


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.