Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/28947
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Müller, TF
dc.contributor.author: Brinkmann, L
dc.contributor.author: Winters, J
dc.contributor.author: Pescetelli, N
dc.date.accessioned: 2024-05-07T14:49:29Z
dc.date.available: 2024-05-07T14:49:29Z
dc.date.issued: 2023-04-24
dc.identifier: ORCiD: James Winters https://orcid.org/0000-0003-2982-2991
dc.identifier: e13288
dc.identifier.citation: Müller, T.F. et al. (2023) 'Machine Impostors Can Avoid Human Detection and Interrupt the Formation of Stable Conventions by Imitating Past Interactions: A Minimal Turing Test', Cognitive Science, 47 (4), e13288, pp. 1 - 24. doi: 10.1111/cogs.13288. [en_US]
dc.identifier.issn: 0364-0213
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/28947
dc.description: Supporting Information is available online at: https://onlinelibrary.wiley.com/doi/10.1111/cogs.13288#support-information-section [en_US]
dc.description.abstract: Interactions between humans and bots are increasingly common online, prompting some legislators to pass laws that require bots to disclose their identity. The Turing test is a classic thought experiment testing humans' ability to distinguish a bot impostor from a real human by exchanging text messages. In the current study, we propose a minimal Turing test that avoids natural language, thus allowing us to study the foundations of human communication. In particular, we investigate the relative roles of conventions and reciprocal interaction in determining successful communication. Participants in our task could communicate only by moving an abstract shape in a 2D space. We asked participants to categorize their online social interaction as being with a human partner or a bot impostor. The main hypotheses were that access to the interaction history of a pair would make a bot impostor more deceptive and interrupt the formation of novel conventions between the human participants: by copying a pair's previous interactions, the bot prevents humans from communicating successfully through repeating what had already worked before. By comparing bots that imitate behavior from the same or a different dyad, we find that impostors are harder to detect when they copy the participants' own partners, leading to less conventional interactions. We also show that reciprocity is beneficial for communicative success when the bot impostor prevents conventionality. We conclude that machine impostors can avoid detection and interrupt the formation of stable conventions by imitating past interactions, and that both reciprocity and conventionality are adaptive strategies under the right circumstances. Our results provide new insights into the emergence of communication and suggest that online bots mining personal information, for example, on social media, might become indistinguishable from humans more easily. [en_US]
dc.description.sponsorship: Open access funding enabled and organized by Projekt DEAL. [en_US]
dc.format.extent: 1 - 24
dc.format.medium: Print-Electronic
dc.language: English
dc.language.iso: en_US [en_US]
dc.publisher: Wiley on behalf of Cognitive Science Society (CSS) [en_US]
dc.rights: Copyright © 2023 The Authors. Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS). This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.
dc.rights.uri: https://creativecommons.org/licenses/by-nc/4.0/
dc.subject: Turing test [en_US]
dc.subject: experimental semiotics [en_US]
dc.subject: convention [en_US]
dc.subject: reciprocity [en_US]
dc.subject: interaction history [en_US]
dc.subject: language evolution [en_US]
dc.subject: communication [en_US]
dc.subject: human–machine interaction [en_US]
dc.title: Machine Impostors Can Avoid Human Detection and Interrupt the Formation of Stable Conventions by Imitating Past Interactions: A Minimal Turing Test [en_US]
dc.type: Article [en_US]
dc.date.dateAccepted: 2023-03-27
dc.identifier.doi: https://doi.org/10.1111/cogs.13288
dc.relation.isPartOf: Cognitive Science
pubs.issue: 4
pubs.publication-status: Published
pubs.volume: 47
dc.identifier.eissn: 1551-6709
dc.rights.license: https://creativecommons.org/licenses/by-nc/4.0/legalcode.en
dc.rights.holder: The Authors
Appears in Collections: Dept of Life Sciences Research Papers

Files in This Item:
File: FullText.pdf | Size: 1.21 MB | Format: Adobe PDF | View/Open


This item is licensed under a Creative Commons License.