Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/30984
Title: Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning
Authors: Wen, Y; Zhang, G; Wang, K; Yang, K
Keywords: federated learning; split learning; edge computing; computing task offloading; resource allocation
Issue Date: 20-Feb-2025
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: Wen, Y. et al. (2025) 'Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning', IEEE Transactions on Network Science and Engineering, 0 (early access), pp. 1-12. doi: 10.1109/TNSE.2025.3544313.
Abstract: To alleviate the shortage of computing power faced by clients in training deep neural networks (DNNs) using federated learning (FL), we leverage edge computing and split learning to propose a model-splitting allowed FL (SFL) framework, with the aim of minimizing the training latency without loss of test accuracy. Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency among the clients to complete a local training session. Therefore, the training latency minimization problem (TLMP) is modelled as a min-max problem. To solve this mixed-integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut layer and the other parameters of an AI model, thereby transforming the TLMP into a continuous problem. Since the two subproblems involved in the TLMP, namely the cut-layer selection problem for the clients and the computing resource allocation problem for the parameter server, are relatively independent, an alternate-optimization-based algorithm with polynomial time complexity is developed to obtain a high-quality solution to the TLMP. Extensive experiments are performed on the popular DNN model EfficientNetV2 using the MNIST dataset, and the results verify the validity and improved performance of the proposed SFL framework.
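The alternate-optimization idea summarized in the abstract can be sketched in a few lines of Python. This is an illustrative toy, not the paper's algorithm: the layer profiles (`CLIENT_FLOPS`, `SERVER_FLOPS`, `ACT_SIZE`), the latency model, and all numbers are hypothetical. It alternates between (1) each client greedily choosing the cut layer that minimizes its own latency and (2) the server splitting its compute budget `F` to equalize, and thereby minimize, the maximum latency (solved by bisection on the common target latency `T`, since for fixed cut layers the latency is `a_i + b_i/f_i` with `sum f_i = F`).

```python
# Hypothetical per-layer profile of a small DNN: client-side FLOPs kept locally,
# server-side FLOPs offloaded, and smashed-data size at each candidate cut layer.
# All numbers are illustrative, not measured from EfficientNetV2.
LAYERS = list(range(1, 6))                          # candidate cut layers 1..5
CLIENT_FLOPS = {s: 2.0 * s for s in LAYERS}         # work kept on the client
SERVER_FLOPS = {s: 2.0 * (6 - s) for s in LAYERS}   # work offloaded to the server
ACT_SIZE = {s: 1.0 / s for s in LAYERS}             # activation size at the cut

def client_latency(s, f_client, rate, f_server):
    """Latency of one client for cut layer s and server compute share f_server."""
    return (CLIENT_FLOPS[s] / f_client              # local computation
            + ACT_SIZE[s] / rate                    # uplink of the smashed data
            + SERVER_FLOPS[s] / f_server)           # server-side computation

def allocate_server(cuts, clients, F, tol=1e-9):
    """For fixed cut layers, minimise max_i (a_i + b_i/f_i) s.t. sum_i f_i = F.
    Bisect on the common target latency T, using f_i(T) = b_i / (T - a_i)."""
    a = [CLIENT_FLOPS[s] / fc + ACT_SIZE[s] / r
         for s, (fc, r) in zip(cuts, clients)]      # client-side (fixed) part
    b = [SERVER_FLOPS[s] for s in cuts]             # server-side workload
    lo = max(a) + tol                               # demand -> infinity here
    hi = max(a) + sum(b) / F + 1.0                  # demand <= F here
    while hi - lo > tol:
        T = (lo + hi) / 2
        need = sum(bi / (T - ai) for ai, bi in zip(a, b))
        lo, hi = (T, hi) if need > F else (lo, T)
    return [bi / (hi - ai) for ai, bi in zip(a, b)]

def alternate_optimize(clients, F, rounds=10):
    """Alternate cut-layer selection (clients) and compute allocation (server)."""
    cuts = [LAYERS[0]] * len(clients)
    f = allocate_server(cuts, clients, F)
    for _ in range(rounds):
        # (1) each client picks the cut layer minimising its own latency
        cuts = [min(LAYERS, key=lambda s: client_latency(s, fc, r, fi))
                for (fc, r), fi in zip(clients, f)]
        # (2) the server re-allocates compute for the new cut layers
        f = allocate_server(cuts, clients, F)
    lat = [client_latency(s, fc, r, fi)
           for s, (fc, r), fi in zip(cuts, clients, f)]
    return cuts, f, max(lat)

# three heterogeneous clients: (local compute speed, uplink rate)
clients = [(1.0, 2.0), (0.5, 1.0), (2.0, 0.5)]
cuts, f, worst = alternate_optimize(clients, F=10.0)
```

Each half-step can only lower (never raise) the maximum latency, so the per-round latency is monotonically non-increasing; in particular `worst` is no larger than the naive baseline of cutting every client at the first layer and splitting the server budget equally.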
URI: https://bura.brunel.ac.uk/handle/2438/30984
DOI: https://doi.org/10.1109/TNSE.2025.3544313
Other Identifiers: ORCiD: Yao Wen https://orcid.org/0000-0002-5182-5999; ORCiD: Guopeng Zhang https://orcid.org/0000-0001-7524-3144; ORCiD: Kezhi Wang https://orcid.org/0000-0001-8602-0800; ORCiD: Kun Yang https://orcid.org/0000-0002-6782-6689
Appears in Collections: Dept of Computer Science Research Papers
Files in This Item:

File | Description | Size | Format
---|---|---|---
FullText.pdf | Copyright © 2025 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works ( https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ ). | 2.4 MB | Adobe PDF
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.