Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/32906

| Title: | Fair Benchmarking in Short‐Term Load Forecasting |
| Authors: | Xing, L.; Kaheh, Z. |
| Keywords: | deep learning;electricity demand;energy analytics;forecasting;machine learning benchmarking;reproducibility;temporal fusion transformer |
| Issue Date: | 24-Feb-2026 |
| Publisher: | Wiley on behalf of Institution of Engineering and Technology |
| Citation: | Xing, L. and Kaheh, Z. (2026) 'Fair Benchmarking in Short‐Term Load Forecasting', Artificial Intelligence for Engineering, 0 (ahead of print), pp. 1 - 14. doi: 10.1049/aie2.70011. |
| Abstract: | Performance comparisons in short-term load forecasting are often confounded by differences in preprocessing pipelines rather than reflecting intrinsic architectural capability. Variations in feature engineering, scaling, temporal windowing and data partitioning can dominate reported accuracy and obscure the actual behaviour of forecasting models. This study examines preprocessing–architecture interaction by benchmarking random forest, LightGBM, long short-term memory (LSTM), transformer and Temporal Fusion Transformer (TFT) under a shared tabular preprocessing pipeline, ensuring strict control over data handling and evaluation conditions. Under this controlled setting, tree-based models exhibit strong predictive performance, whereas deep sequence models experience substantial degradation when temporal continuity is not explicitly represented. To isolate architectural sensitivity from preprocessing effects, we further conduct a within-architecture analysis by retraining an identical LSTM under a sequence-aware pipeline aligned with its temporal inductive bias. This realignment yields an order-of-magnitude reduction in RMSE, demonstrating that preprocessing design is a first-order determinant of deep sequence model performance. The results establish a transparent and reproducible benchmarking framework and highlight the importance of aligning data representation with model assumptions when interpreting comparative performance in time series forecasting. |
| Description: | Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request. |
| URI: | https://bura.brunel.ac.uk/handle/2438/32906 |
| DOI: | https://doi.org/10.1049/aie2.70011 |
| ISSN: | 3067-249X |
| Other Identifiers: | ORCiD: Zohreh Kaheh https://orcid.org/0000-0002-8518-8545 |
| Appears in Collections: | Department of Mathematics Research Papers |
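The abstract's central finding is that deep sequence models degrade sharply when temporal continuity is not explicitly represented in the input. The contrast between the shared tabular pipeline and a sequence-aware pipeline can be sketched as follows. This is an illustrative sketch only: the function names, lag choices, and window length are assumptions for demonstration, not details taken from the paper.

```python
import numpy as np

def tabular_features(load, lags=(1, 2, 24)):
    """Flattened lag features, as a tree-based model would consume them.
    Each row is an independent sample; temporal order between rows is
    discarded, which an LSTM cannot exploit. (Illustrative lags.)"""
    n = len(load)
    start = max(lags)
    X = np.column_stack([load[start - lag:n - lag] for lag in lags])
    y = load[start:]
    return X, y

def sequence_windows(load, window=24):
    """Sequence-aware windowing: each sample keeps a contiguous slice of
    history, so a recurrent model sees temporal continuity directly."""
    X = np.stack([load[i:i + window] for i in range(len(load) - window)])
    y = load[window:]
    return X[..., None], y  # shape (samples, timesteps, features)

# Synthetic hourly demand series, 30 days (stand-in for real load data).
hourly_load = np.sin(np.linspace(0, 20 * np.pi, 24 * 30))
Xt, yt = tabular_features(hourly_load)   # (696, 3): rows are independent
Xs, ys = sequence_windows(hourly_load)   # (696, 24, 1): rows carry history
```

The two representations encode the same series, but only the second preserves the ordering that an LSTM's inductive bias assumes, which is the kind of pipeline realignment the abstract credits with the order-of-magnitude RMSE reduction.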
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| FullText.pdf | Copyright © 2026 The Author(s). Artificial Intelligence for Engineering published by John Wiley & Sons Ltd on behalf of Institution of Engineering and Technology. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made. | 1.54 MB | Adobe PDF |
This item is licensed under a Creative Commons License