Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/11593
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Sigweni, B | - |
dc.contributor.author | Shepperd, M | - |
dc.coverage.spatial | Nanjing | - |
dc.date.accessioned | 2015-11-13T12:40:58Z | - |
dc.date.available | 2015-11-13T12:40:58Z | - |
dc.date.issued | 2015 | - |
dc.identifier.citation | EASE '15 Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering, 32, Nanjing, China, (April 27 - 29, 2015) | en_US |
dc.identifier.isbn | 978-1-4503-3350-4 | - |
dc.identifier.uri | http://dl.acm.org/citation.cfm?id=2745832 | - |
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/11593 | - |
dc.description.abstract | Context: In recent years there has been growing concern about conflicting experimental results in empirical software engineering. This has been paralleled by awareness of how bias can impact research results. Objective: To explore the practicalities of blind analysis of experimental results to reduce bias. Method: We apply blind analysis to a real software engineering experiment that compares three feature weighting approaches with a naïve benchmark (sample mean) on the Finnish software effort data set. We use this experiment as an example to explore blind analysis as a method to reduce researcher bias. Results: Our experience shows that blinding can be a relatively straightforward procedure. We also highlight various statistical analysis decisions which ought not to be guided by the hunt for statistical significance, and show that results can be inverted merely through a seemingly inconsequential statistical nicety (i.e., the degree of trimming). Conclusion: Whilst there are minor challenges and some limits to the degree of blinding possible, blind analysis is a practical and easy-to-implement method that supports more objective analysis of experimental results. Therefore we argue that blind analysis should be the norm for analysing software engineering experiments. | en_US |
dc.language.iso | en | en_US |
dc.publisher | ACM | en_US |
dc.source | Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering | - |
dc.subject | Researcher Bias | en_US |
dc.subject | Blind analysis | en_US |
dc.subject | Software engineering experimentation | en_US |
dc.subject | Software effort estimation | en_US |
dc.title | Using blind analysis for software engineering experiments | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.doi | http://dx.doi.org/10.1145/2745802.2745832 | - |
pubs.publication-status | Published | - |
Appears in Collections: | Dept of Computer Science Research Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Fulltext.pdf | | 263.8 kB | Adobe PDF | View/Open |
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.