Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/12596
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sigweni, B | -
dc.contributor.author | Shepperd, M | -
dc.contributor.author | Turchi, T | -
dc.contributor.editor | Kitchenham, B | -
dc.contributor.editor | MacDonell, S | -
dc.contributor.editor | Beecham, S | -
dc.coverage.spatial | Limerick, Ireland | -
dc.date.accessioned | 2016-05-09T10:32:09Z | -
dc.date.available | 2016-05-09T10:32:09Z | -
dc.date.issued | 2016 | -
dc.identifier.citation | 20th International Conference on Evaluation and Assessment in Software Engineering (EASE 2016), Limerick, Ireland, 1-3 June 2016 | en_US
dc.identifier.isbn | 978-1-4503-3691-8/16/06 | -
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/12596 | -
dc.description.abstract | Context: It is unclear whether current approaches to evaluating or comparing competing software cost or effort models give a realistic picture of how they would perform in actual use. Specifically, we are concerned that the usual practice of using all data with some holdout strategy is at variance with the reality of a data set that grows as projects complete. Objective: This study investigates the impact of using unrealistic, though possibly convenient for researchers, ways of comparing models on commercial data sets. Our questions are: does this lead to different conclusions in terms of the comparisons, and if so, are the results biased, e.g., more optimistic than those that might realistically be achieved in practice? Method: We compare a traditional approach based on leave-one-out cross-validation with growing the data set chronologically, using the Finnish and Desharnais data sets. Results: Our realistic, time-based approach to validation is significantly more conservative than leave-one-out cross-validation (LOOCV) for both data sets. Conclusion: If we want our research to lead to actionable findings, it is incumbent upon researchers to evaluate their models in realistic ways. This means a departure from LOOCV techniques, while further investigation is needed for other validation techniques, such as k-fold validation. | en_US
dc.language.iso | en | en_US
dc.publisher | ACM | en_US
dc.source | 20th International Conference on Evaluation and Assessment in Software Engineering (EASE) | -
dc.subject | Software engineering experimentation | en_US
dc.subject | Software effort estimation | en_US
dc.subject | Cross-validation approaches | en_US
dc.title | Realistic assessment of software effort estimation models | en_US
dc.type | Conference Paper | en_US
dc.identifier.doi | http://dx.doi.org/10.1145/2915970.2916005 | -
dc.relation.isPartOf | ACM | -
pubs.finish-date | 2016-06-03 | -
pubs.publication-status | Accepted | -
pubs.start-date | 2016-06-01 | -
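To make the comparison described in the abstract concrete, here is a minimal, illustrative Python sketch (not the authors' code) contrasting traditional leave-one-out cross-validation with a chronological, growing-data-set validation. The column names ("size", "effort", "completion_date"), the linear regression model, the min_train threshold, and the synthetic data are hypothetical assumptions; the paper's actual prediction models and accuracy measures are not specified in this record.

```python
# Illustrative sketch only: LOOCV vs. chronological (growing training set)
# validation for effort estimation. All names below are assumptions made
# for the example, not the paper's implementation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut


def loocv_absolute_errors(X, y):
    """Traditional LOOCV: each project is predicted from all other projects,
    including ones that in reality would not yet have completed."""
    errors = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        errors.append(abs(model.predict(X[test_idx])[0] - y[test_idx][0]))
    return np.array(errors)


def chronological_absolute_errors(df, features, target="effort",
                                  date_col="completion_date", min_train=5):
    """Time-based validation: the training set grows as projects complete,
    so each project is predicted using only projects finished before it."""
    df = df.sort_values(date_col).reset_index(drop=True)
    errors = []
    for i in range(min_train, len(df)):
        train = df.iloc[:i]        # projects completed so far
        test = df.iloc[[i]]        # the next project to estimate
        model = LinearRegression().fit(train[features], train[target])
        pred = model.predict(test[features])[0]
        errors.append(abs(pred - test[target].iloc[0]))
    return np.array(errors)


if __name__ == "__main__":
    # Synthetic stand-in data; the paper itself uses the Finnish and
    # Desharnais data sets, which are not reproduced here.
    rng = np.random.default_rng(0)
    size = rng.uniform(10, 500, 30)
    df = pd.DataFrame({
        "size": size,
        "effort": 10 * size + rng.normal(0, 200, 30),
        "completion_date": pd.date_range("2010-01-01", periods=30, freq="MS"),
    })
    loocv = loocv_absolute_errors(df[["size"]].to_numpy(), df["effort"].to_numpy())
    chrono = chronological_absolute_errors(df, ["size"])
    print("Mean absolute error, LOOCV:        ", loocv.mean())
    print("Mean absolute error, chronological:", chrono.mean())
```

Comparing the two error distributions returned by these functions mirrors the kind of LOOCV-versus-time-based comparison the abstract reports, where the chronological scheme tends to give more conservative accuracy estimates.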
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
Fulltext.pdf | File is embargoed until 06/06/2016 | 258.59 kB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.