Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31391
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cheng, K | - |
dc.contributor.author | Tang, J | - |
dc.contributor.author | Gu, H | - |
dc.contributor.author | Wan, H | - |
dc.contributor.author | Li, M | - |
dc.date.accessioned | 2025-06-04T11:33:18Z | - |
dc.date.available | 2025-06-04T11:33:18Z | - |
dc.date.issued | 2024-08-12 | - |
dc.identifier | ORCiD: Keyang Cheng https://orcid.org/0000-0001-5240-1605 | - |
dc.identifier | ORCiD: Jingfeng Tang https://orcid.org/0009-0001-0291-4047 | - |
dc.identifier | ORCiD: Mazhen Li https://orcid.org/0000-0002-0820-5487 | - |
dc.identifier.citation | Cheng, K. et al. (2024) 'Cross-Block Sparse Class Token Contrast for Weakly Supervised Semantic Segmentation', IEEE Transactions on Circuits and Systems for Video Technology, 34 (12), pp. 13004 - 13015. doi: 10.1109/TCSVT.2024.3442310. | en_US |
dc.identifier.issn | 1051-8215 | - |
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/31391 | - |
dc.description.abstract | Most existing Vision Transformer-based frameworks for weakly supervised semantic segmentation utilize class activation maps to generate pseudo masks. Although this approach mitigates the class-agnostic issue, it still suffers from misclassification and noise in the segmentation results. To overcome these limitations, we propose an attention-based framework named Cross-block Sparse Class Token Contrast (CB-SCTC), which incorporates a Dynamic Sparse Attention (DSA) module and a Cross-block Class Token Contrast (CB-CTC) scheme. Specifically, the Cross-block Class Token Contrast scheme enforces diversity among the final class tokens by learning from the lower similarity of the class tokens in the shallower blocks. Moreover, the Dynamic Sparse Attention module post-processes the output of the softmax function in the attention mechanism to reduce noise. Extensive experiments show that the proposed framework is a valid alternative to class activation maps, achieving competitive mIoU scores on PASCAL VOC 2012 (val: 75.5%, test: 75.2%) and MS COCO 2014 (val: 46.9%). Our code is available at https://github.com/Jingfeng-Tang/CB-SCTC. (Illustrative sketches of the two modules appear after this metadata table.) | en_US |
dc.description.sponsorship | 10.13039/501100001809-National Natural Science Foundation of China (Grant Numbers: 62372215 and 61972183); 10.13039/501100008668-Special Fund Project of Jiangsu Science and Technology Plan (Grant Number: BE2022781). | en_US |
dc.format.extent | 13004 - 13015 | - |
dc.format.medium | Print-Electronic | - |
dc.language | English | - |
dc.language.iso | en_US | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.rights | Copyright © 2024 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. See: https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ | - |
dc.rights.uri | https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ | - |
dc.subject | weakly supervised | en_US |
dc.subject | semantic segmentation | en_US |
dc.subject | token contrast | en_US |
dc.subject | dynamic sparse | en_US |
dc.title | Cross-Block Sparse Class Token Contrast for Weakly Supervised Semantic Segmentation | en_US |
dc.type | Article | en_US |
dc.date.dateAccepted | 2024-08-08 | - |
dc.identifier.doi | https://doi.org/10.1109/TCSVT.2024.3442310 | - |
dc.relation.isPartOf | IEEE Transactions on Circuits and Systems for Video Technology | - |
pubs.issue | 12 | - |
pubs.publication-status | Published | - |
pubs.volume | 34 | - |
dc.identifier.eissn | 1558-2205 | - |
dcterms.dateAccepted | 2024-08-08 | - |
dc.rights.holder | Institute of Electrical and Electronics Engineers (IEEE) | - |
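The abstract above states only that the Dynamic Sparse Attention (DSA) module post-processes the softmax output of the attention mechanism to reduce noise. The paper's exact rule is not reproduced in this record; the PyTorch sketch below illustrates one plausible reading, where the weakest weights in each attention row are zeroed and the row is renormalized. The function name `sparse_attention_postprocess`, the `keep_ratio` parameter, and the top-k rule are assumptions for illustration, not details from the paper.

```python
import torch

def sparse_attention_postprocess(attn: torch.Tensor,
                                 keep_ratio: float = 0.5) -> torch.Tensor:
    # `attn`: softmax attention map of shape (batch, heads, tokens, tokens),
    # each row summing to 1. The shape and the top-k rule are assumptions;
    # the record only says DSA suppresses noise after the softmax.
    k = max(1, int(attn.size(-1) * keep_ratio))
    # Keep the k largest weights in each row, zero out the rest.
    topk_vals, topk_idx = attn.topk(k, dim=-1)
    sparse = torch.zeros_like(attn).scatter_(-1, topk_idx, topk_vals)
    # Renormalize so every row is again a probability distribution.
    return sparse / sparse.sum(dim=-1, keepdim=True).clamp_min(1e-8)
```

A hard top-k cut is only one way to sparsify; a learned or input-dependent ("dynamic") threshold would fit the module's name equally well.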
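Likewise, the Cross-block Class Token Contrast (CB-CTC) scheme is described only as enforcing diversity among the final class tokens by learning from the lower similarity of class tokens in shallower blocks. The sketch below is a hedged reconstruction under that description: it penalizes final-block class tokens only where their pairwise cosine similarity exceeds the (detached) similarity measured at a shallower block. The function name, the hinge-style penalty, and the (batch, num_classes, dim) token layout are all assumptions.

```python
import torch
import torch.nn.functional as F

def cross_block_token_contrast(cls_shallow: torch.Tensor,
                               cls_final: torch.Tensor) -> torch.Tensor:
    # Both inputs: per-class tokens of shape (batch, num_classes, dim),
    # taken from a shallow block and the final block (assumed layout).
    def pairwise_cos(tokens: torch.Tensor) -> torch.Tensor:
        tokens = F.normalize(tokens, dim=-1)
        return tokens @ tokens.transpose(-1, -2)      # (batch, C, C)

    sim_shallow = pairwise_cos(cls_shallow).detach()  # reference only, no gradient
    sim_final = pairwise_cos(cls_final)
    num_classes = cls_final.size(1)
    # Constrain only inter-class (off-diagonal) similarities.
    off_diag = ~torch.eye(num_classes, dtype=torch.bool,
                          device=cls_final.device)
    # Penalize final tokens only where they are *more* similar to each
    # other than the shallower block's tokens were.
    excess = (sim_final - sim_shallow).clamp_min(0.0)
    return excess[:, off_diag].mean()
```

A loss of this shape would presumably be added to the usual classification objective during training; the weighting between the two terms is another detail the record does not specify.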
Appears in Collections: | Dept of Electronic and Electrical Engineering Research Papers |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
FullText.pdf | Copyright © 2024 Institute of Electrical and Electronics Engineers (IEEE); see the full rights statement above. | 36.69 MB | Adobe PDF |
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.