<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://bura.brunel.ac.uk/handle/2438/8627">
    <title>BURA Community:</title>
    <link>http://bura.brunel.ac.uk/handle/2438/8627</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://bura.brunel.ac.uk/handle/2438/33237" />
        <rdf:li rdf:resource="http://bura.brunel.ac.uk/handle/2438/33226" />
        <rdf:li rdf:resource="http://bura.brunel.ac.uk/handle/2438/32954" />
        <rdf:li rdf:resource="http://bura.brunel.ac.uk/handle/2438/32906" />
      </rdf:Seq>
    </items>
    <dc:date>2026-05-11T18:44:02Z</dc:date>
  </channel>
  <item rdf:about="http://bura.brunel.ac.uk/handle/2438/33237">
    <title>Higher mode filtering: optimum attenuation in a continuum of exceptional points</title>
    <link>http://bura.brunel.ac.uk/handle/2438/33237</link>
    <description>Title: Higher mode filtering: optimum attenuation in a continuum of exceptional points
Authors: Lawrie, JB; Afzal, M
Abstract: ...
Description: ...</description>
    <dc:date>2026-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://bura.brunel.ac.uk/handle/2438/33226">
    <title>Mixed topics on geometry of varieties of Fano type</title>
    <link>http://bura.brunel.ac.uk/handle/2438/33226</link>
    <description>Title: Mixed topics on geometry of varieties of Fano type
Authors: Jiao, Dongchen
Abstract: In this thesis, we investigate the deformation properties of Fano threefolds and the birational geometry of foliations. First, we try to find compactifications of several families of Fano threefolds. Then we give a description of the connections between foliated minimal models. Finally, we will discuss geometric properties of Fano foliations. This thesis contains results of (1), (26), (19) and some recent independent work.
Description: This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London</description>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://bura.brunel.ac.uk/handle/2438/32954">
    <title>Jiangfeng Wang, Keming Yu and Rong Jiang's contribution to the Discussion of ‘Augmented balancing weights as linear regression’ by Bruns-Smith et al</title>
    <link>http://bura.brunel.ac.uk/handle/2438/32954</link>
    <description>Title: Jiangfeng Wang, Keming Yu and Rong Jiang's contribution to the Discussion of ‘Augmented balancing weights as linear regression’ by Bruns-Smith et al
Authors: Wang, J; Yu, K; Jiang, R
Abstract: This is an interesting and well-executed paper that makes a substantial contribution to the literature on semiparametric causal inference, elegantly bridging the seemingly distinct fields of balancing weights and regression adjustment. ...
Description: Discussion Paper Contribution.</description>
    <dc:date>2026-01-13T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://bura.brunel.ac.uk/handle/2438/32906">
    <title>Fair Benchmarking in Short‐Term Load Forecasting</title>
    <link>http://bura.brunel.ac.uk/handle/2438/32906</link>
    <description>Title: Fair Benchmarking in Short‐Term Load Forecasting
Authors: Xing, L; Kaheh, Z
Abstract: Performance comparisons in short-term load forecasting are often confounded by differences in preprocessing pipelines rather than reflecting intrinsic architectural capability. Variations in feature engineering, scaling, temporal windowing and data partitioning can dominate reported accuracy and obscure the actual behaviour of forecasting models. This study examines preprocessing–architecture interaction by benchmarking random forest, LightGBM, long short-term memory (LSTM), transformer and Temporal Fusion Transformer (TFT) under a shared tabular preprocessing pipeline, ensuring strict control over data handling and evaluation conditions. Under this controlled setting, tree-based models exhibit strong predictive performance, whereas deep sequence models experience substantial degradation when temporal continuity is not explicitly represented. To isolate architectural sensitivity from preprocessing effects, we further conduct a within-architecture analysis by retraining an identical LSTM under a sequence-aware pipeline aligned with its temporal inductive bias. This realignment yields an order-of-magnitude reduction in RMSE, demonstrating that preprocessing design is a first-order determinant of deep sequence model performance. The results establish a transparent and reproducible benchmarking framework and highlight the importance of aligning data representation with model assumptions when interpreting comparative performance in time series forecasting.
Description: Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.</description>
    <dc:date>2026-02-24T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

