Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/32298

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ahmed, A | - |
| dc.coverage.spatial | London, UK | - |
| dc.date.accessioned | 2025-11-06T13:56:40Z | - |
| dc.date.available | 2025-11-06T13:56:40Z | - |
| dc.date.issued | 2025-10-01 | - |
| dc.identifier | Article number: 07008 | - |
| dc.identifier.citation | Ahmed, A. (2025) 'Hybrid and deep learning architectures for predictive maintenance: Evaluating LSTM, and attention-based LSTM-XGBoost on turbofan engine RUL', MATEC Web of Conferences, 413, 07008, pp. 1 - 8. doi: 10.1051/matecconf/202541307008. | en_US |
| dc.identifier.issn | 2274-7214 | - |
| dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/32298 | - |
| dc.description.abstract | Accurate prediction of a machine's Remaining Useful Life (RUL) underpins modern, cost-effective predictive-maintenance programmes. This paper proposes a two-stage hybrid pipeline that couples sequence learning with tree-based residual modelling. In stage 1, 50-cycle windows of NASA C-MAPSS sensor data (FD001 and FD004 subsets) are processed by a bi-layer Long Short-Term Memory (LSTM) network equipped with an attention mechanism; attention weights highlight degradation-relevant time steps and yield a compact, interpretable context vector. In stage 2, this vector is concatenated with four statistical descriptors (mean, standard deviation, minimum, maximum) of each window and passed to an extreme gradient-boosted decision-tree regressor (XGBoost) tuned via grid search. Identical preprocessing and early-stopping schedules are applied to a baseline LSTM for fair comparison. The attention-LSTM–XGBoost model lowers Mean Absolute Error (MAE) by 9.8 % on FD001 and 7.4 % on the more challenging FD004, and reduces Root Mean Squared Error (RMSE) by 8.1 % and 5.6 %, respectively, relative to the baseline. Gains on FD004 demonstrate robustness to multiple fault modes and six operating regimes. By combining temporal attention with gradient-boosted residual fitting, the proposed architecture delivers state-of-the-art accuracy while retaining feature-level interpretability, an asset for safety-critical maintenance planning. | en_US |
| dc.format.extent | 1 - 8 | - |
| dc.format.medium | Print-Electronic | - |
| dc.language.iso | en | en_US |
| dc.publisher | EDP Sciences | en_US |
| dc.rights | Creative Commons Attribution 4.0 International | - |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | - |
| dc.source | International Conference on Measurement, AI, Quality and Sustainability (MAIQS 2025) | - |
| dc.title | Hybrid and deep learning architectures for predictive maintenance: Evaluating LSTM, and attention-based LSTM-XGBoost on turbofan engine RUL | en_US |
| dc.type | Conference Paper | en_US |
| dc.date.dateAccepted | 2025-06-08 | - |
| pubs.finish-date | 2025-08-28 | - |
| pubs.start-date | 2025-08-26 | - |
| pubs.volume | 413 | - |
| dc.identifier.eissn | 2261-236X | - |
| dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | - |
| dcterms.dateAccepted | 2025-06-08 | - |
| dc.rights.holder | The Authors | - |
| Appears in Collections: | Mechanical and Aerospace Engineering; Dept of Mechanical and Aerospace Engineering Research Papers | |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| FiullText.pdf | Copyright © The Authors, published by EDP Sciences, 2025. Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0 (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | 569.39 kB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License
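The two-stage feature construction described in the abstract (an attention-pooled context vector concatenated with four per-window statistical descriptors, then fed to a regressor) can be sketched in outline. This is a minimal illustration, not the paper's implementation: the window size (50) and sensor count match C-MAPSS FD001, but the hidden-state dimension, the attention scores, and all data here are placeholder values, and the LSTM and XGBoost stages are stubbed out with random arrays.

```python
import numpy as np

def window_statistics(window):
    """Stage-2 descriptors named in the abstract: per-sensor
    mean, standard deviation, minimum, and maximum of a window."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
    ])

def attention_context(hidden_states, scores=None):
    """Toy attention pooling: softmax over per-time-step scores,
    then a weighted sum of hidden states -> one context vector."""
    n_steps = hidden_states.shape[0]
    if scores is None:
        scores = np.zeros(n_steps)          # uniform attention fallback
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ hidden_states

rng = np.random.default_rng(0)
window = rng.normal(size=(50, 14))  # one 50-cycle window, 14 sensor channels
hidden = rng.normal(size=(50, 32))  # hypothetical LSTM hidden states, dim 32

# Concatenated stage-2 input: 32 context dims + 4 stats x 14 sensors = 88
features = np.concatenate([attention_context(hidden),
                           window_statistics(window)])
print(features.shape)  # -> (88,)
```

In the paper's pipeline this feature vector would be passed to a grid-search-tuned XGBoost regressor; here only the feature assembly is shown.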