Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/6473
Full metadata record
dc.contributor.author: Shepperd, M
dc.contributor.author: MacDonell, S
dc.date.accessioned: 2012-06-15T08:46:40Z
dc.date.available: 2012-06-15T08:46:40Z
dc.date.issued: 2012
dc.identifier.citation: Information and Software Technology, 54(8): 820-827, Aug 2012
dc.identifier.issn: 0950-5849
dc.identifier.uri: http://www.sciencedirect.com/science/article/pii/S095058491200002X
dc.identifier.uri: http://bura.brunel.ac.uk/handle/2438/6473
dc.description: This is the pre-print version of the article - Copyright © 2012 Elsevier
dc.description.abstract:
Context: Software engineering has a problem in that when we empirically evaluate competing prediction systems we obtain conflicting results.
Objective: To reduce the inconsistency amongst validation study results and provide a more formal foundation for interpreting results, with a particular focus on continuous prediction systems.
Method: A new framework is proposed for evaluating competing prediction systems, based upon (1) an unbiased statistic, Standardised Accuracy, (2) testing the result likelihood relative to the baseline technique of random ‘predictions’, that is, guessing, and (3) calculation of effect sizes.
Results: Previously published empirical evaluations of prediction systems are re-examined and the original conclusions shown to be unsafe. Additionally, even the strongest results are shown to have no more than a medium effect size relative to random guessing.
Conclusions: Biased accuracy statistics such as MMRE are deprecated. By contrast, this new empirical validation framework leads to meaningful results. Such steps will assist in performing future meta-analyses and in providing more robust and usable recommendations to practitioners.
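The framework described in the abstract centres on the Standardised Accuracy (SA) statistic and on effect sizes measured against a random-guessing baseline. The sketch below shows one way such quantities are commonly computed: SA = (1 - MAR/MAR_P0) × 100, where MAR is the mean absolute residual of the predictor and MAR_P0 that of random guessing, with Glass's Delta as the effect size. This is a minimal illustrative sketch, not the paper's own code; the function names, run count, sampling scheme, and example data are assumptions.

```python
import random
import statistics

def mar(actuals, predictions):
    """Mean Absolute Residual: mean of |y_i - yhat_i|."""
    return sum(abs(y, ) if False else abs(y - yhat) for y, yhat in zip(actuals, predictions)) / len(actuals)

def random_guessing_mars(actuals, runs=1000, seed=1):
    """MARs from the baseline strategy P0: 'predict' each case with the
    actual value of a different, randomly chosen case (pure guessing)."""
    rng = random.Random(seed)
    n = len(actuals)
    results = []
    for _ in range(runs):
        preds = []
        for i in range(n):
            j = rng.randrange(n)
            while j == i:          # guess only from the *other* cases
                j = rng.randrange(n)
            preds.append(actuals[j])
        results.append(mar(actuals, preds))
    return results

def standardised_accuracy(actuals, predictions, runs=1000):
    """SA = (1 - MAR / mean(MAR_P0)) * 100.
    0 means no better than random guessing; 100 is a perfect predictor."""
    mar_p0 = statistics.mean(random_guessing_mars(actuals, runs))
    return (1 - mar(actuals, predictions) / mar_p0) * 100

def effect_size_vs_guessing(actuals, predictions, runs=1000):
    """Glass's Delta relative to guessing:
    (mean(MAR_P0) - MAR) / stdev(MAR_P0).
    Conventionally ~0.2 is small, ~0.5 medium, ~0.8 large."""
    baseline = random_guessing_mars(actuals, runs)
    return (statistics.mean(baseline) - mar(actuals, predictions)) / statistics.stdev(baseline)

# Hypothetical example: actual project efforts vs. a model's predictions.
if __name__ == "__main__":
    actual = [120.0, 80.0, 300.0, 45.0, 150.0, 95.0]
    predicted = [110.0, 90.0, 280.0, 60.0, 140.0, 100.0]
    print(f"SA = {standardised_accuracy(actual, predicted):.1f}%")
    print(f"Effect size = {effect_size_vs_guessing(actual, predicted):.2f}")
```

Unlike MMRE, the SA baseline does not systematically favour predictors that under-estimate, which is why the abstract deprecates MMRE in its favour.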
dc.description.sponsorship: Martin Shepperd was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/H050329.
dc.language.iso: en
dc.publisher: Elsevier
dc.subject: Software engineering
dc.subject: Prediction system
dc.subject: Empirical validation
dc.subject: Randomisation techniques
dc.title: Evaluating prediction systems in software project estimation
dc.type: Article
dc.identifier.doi: http://dx.doi.org/10.1016/j.infsof.2011.12.008
pubs.organisational-data: /Brunel/Brunel Active Staff/School of Info. Systems, Comp & Maths/IS and Computing
pubs.organisational-data: /Brunel/University Research Centres and Groups/School of Information Systems, Computing and Mathematics - URCs and Groups/Centre for Information and Knowledge Management
Appears in Collections:
Publications
Computer Science
Dept of Computer Science Research Papers

Files in This Item:
File: IST_Invited_2011_v7.pdf (334.15 kB, format unknown)


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.