Title: An architectural framework for assessing quality of experience of web applications
Authors: Radwan, Omar Amer
Advisors: Lycett, M
De Cesare, S
Keywords: Web architecture; Web quality modelling and monitoring; Machine learning; Agent-based modelling; Design science research
Issue Date: 2017
Publisher: Brunel University London
Abstract: Web-based service providers have long been required to deliver high-quality services in accordance with standards and customer requirements. Increasingly, however, providers are required to think beyond service quality and develop a deeper understanding of their customers’ Quality of Experience (QoE). Whilst models exist that assess the QoE of Web Applications, significant challenges remain in defining QoE factors from a Web engineering perspective, as well as in mapping between so-called ‘objective’ and ‘subjective’ factors of relevance. Specifically, the following challenges are considered fundamental problems for assessing QoE: (1) quantifying the relationships between QoE factors; (2) predicting QoE while dealing with the limited data available on subjective factors; (3) optimising and controlling QoE; and (4) perceiving QoE. In response, this research presents a novel model, called QoEWA (and an associated software instantiation), that integrates factors through Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The mapping is incorporated into a correlation model that assesses the QoE of Web Applications, defining the factors in terms of quality requirements derived from web architecture. The data resulting from the mapping serve as input for the proposed model to develop artefacts that quantify, predict, optimise and perceive QoE. The development of QoEWA is framed and guided by the Design Science Research (DSR) approach, with the purpose of enabling providers to make more informed decisions regarding QoE and/or to optimise resources accordingly. The evaluation of the designed artefacts is based on a build-and-evaluate cycle that provides feedback and a better understanding of the utilised solutions.
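The correlation model described above maps objective KPIs to subjective KQIs. As a minimal illustrative sketch of that idea (not the thesis's actual QoEWA model or data), the snippet below correlates a hypothetical KPI, page load time, with a hypothetical KQI, a mean opinion score; all names and values are invented for illustration:

```python
from statistics import mean

# Hypothetical sample data (NOT from the thesis): an objective KPI
# (page load time, seconds) and a subjective KQI (mean opinion score, 1-5).
load_time = [0.8, 1.2, 2.5, 3.1, 4.0, 5.2]
mos = [4.6, 4.3, 3.5, 3.0, 2.4, 1.8]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(load_time, mos)
print(f"KPI-KQI correlation: {r:.3f}")  # strongly negative: slower pages, lower QoE
```

A strong correlation like this is what would justify using the KPI as an objective proxy when subjective ratings are scarce.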
The key artefacts are developed and evaluated through four iterations. Iteration 1 utilises the Actual-Versus-Target approach to quantify QoE, and applies statistical analysis to evaluate the outputs. Iteration 2 utilises a Machine Learning (ML) approach to predict QoE, and applies statistical tests to compare the performance of ML algorithms. Iteration 3 utilises the Multi-Objective Optimisation (MOO) approach to optimise QoE and control the balance between resources and user experience. Iteration 4 utilises the Agent-Based Modelling approach to perceive and gain insights into QoE. The design of Iteration 4 is rigorously tested using verified and validated models.
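Iteration 2's core idea, predicting a QoE score from measured KPIs with an ML model, can be sketched with a hand-rolled k-nearest-neighbours regressor. The thesis benchmarks several ML algorithms; this toy model, its features, and its data are assumptions for illustration only, not the thesis's actual pipeline:

```python
# Sketch of "predict QoE from KPIs" (Iteration 2) with a tiny kNN regressor.
# Features and data are invented: (page load time in s, error rate in %),
# target is a QoE score on a 1-5 scale.

def knn_predict(train_X, train_y, query, k=3):
    """Predict a QoE score as the mean target of the k nearest training points."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_X, train_y)
    )
    return sum(y for _, y in dists[:k]) / k

train_X = [(0.9, 0.1), (1.4, 0.3), (2.6, 1.0), (3.3, 1.8), (4.1, 2.5), (5.0, 4.0)]
train_y = [4.7, 4.4, 3.4, 2.9, 2.3, 1.7]

# A fast, low-error page should be predicted to have high QoE.
print(knn_predict(train_X, train_y, (1.0, 0.2)))
```

In the same spirit, the limited-subjective-data problem noted in the abstract is why a model trained on a small labelled sample must generalise to unlabelled KPI measurements.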
Description: This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London
Appears in Collections: Computer Science
Dept of Computer Science Theses

Files in This Item:
File: FulltextThesis.pdf (4.01 MB, Adobe PDF)

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.