Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31699
Title: AI ethics in action: a circular model for transparency, accountability and inclusivity
Authors: Hosseini Tabaghdehi, SA
Ayaz, Ö
Keywords: artificial intelligence;digital ethics;social inclusivity;fairness;accountability;responsible digital transformation
Issue Date: 27-Jun-2025
Publisher: Emerald Publishing
Citation: Hosseini Tabaghdehi, S.A. and Ayaz, O. (2025) 'AI ethics in action: a circular model for transparency, accountability and inclusivity', Journal of Managerial Psychology, 0 (ahead of print), pp. 1 - xx, doi: 10.1108/jmp-03-2024-0177.
Abstract: Purpose: Drawing upon a circular model proposition, this study employs Kantian ethics to explore how ethical considerations within AI translate into concrete actions that prioritize transparency, privacy, inclusivity and equality. Additionally, agency theory is applied to understand the relevance of fairness in the interactions between agents, principals and algorithmic systems, particularly in the creation of value through digital platforms.
Design/methodology/approach: A review of the literature on ethical concerns within the AI ecosystem is conducted, proposing a unifying ethical principle and set of standards. The circular model for ethics in action is then developed, emphasizing the responsible use of AI and its role in capturing and creating social value, ultimately contributing to sustainable organizational outcomes. The model also highlights key drivers that shape the ethical framing of AI, as well as the influence of the institutional context on its adoption and effectiveness.
Findings: Responsible use of AI positively affects organizational performance and the digital ecosystem via the psychological mechanism of ethical identity. Ethical standards and regulations are global requirements for the AI ecosystem and are essential for achieving a sustainable digital society.
Research limitations/implications: This study contributes to a comprehensive understanding of the responsible use of AI and its practical and theoretical implications for organizations in the current digital ecosystem. Global understanding, awareness and implementation of ethical practice in the AI ecosystem remain underdeveloped. Future researchers can design a cross-border ethical framework to overcome these limitations. Organizations aiming to increase responsible digital interactions can benefit from maintaining ethical principles through responsible labourers, leaders and all stakeholders involved in the ecosystem.
Practical implications: This study offers practical guidance for businesses, policymakers and AI practitioners on the ethical use of AI. It emphasizes the need for robust data governance, a “human-first” approach focused on privacy and accountability, and alignment with ethical standards. Given AI’s global reach, international cooperation and standard-setting are essential to navigate diverse regulatory and cultural contexts. The paper also highlights the importance of ethics education for AI developers and practitioners. Investing in training that integrates technical skills with ethical awareness will help build a responsible AI workforce capable of addressing societal impacts and maintaining public trust.
Social implications: This study underscores the urgent need for responsible AI adoption, highlighting risks such as bias, lack of transparency and privacy concerns. As AI reshapes work, decision-making and governance, its social impact grows, potentially deepening inequalities if left unchecked. The study calls for explainable, fair and inclusive AI systems guided by ethical frameworks that respect human dignity. A “human-first” approach ensures that AI supports, rather than replaces, human agency. By fostering transparency, accountability and cultural sensitivity, organizations can build public trust, empower diverse communities and contribute to a more equitable digital future. Ethical leadership and inclusive design are essential to avoid reinforcing systemic harms.
Originality/value: This study presents an original approach to integrating ethical considerations into the development and deployment of artificial intelligence, by conceptualizing how transparency, accountability and inclusivity can be embedded throughout the AI ecosystem, fostering trust and responsible innovation. Through a comprehensive examination of ethical principles and requirements, we recommend a set of tools and strategies needed to promote ethical AI practices, mitigate risks and maximize societal benefit. Furthermore, this study serves as a roadmap for building AI systems that prioritize human collaboration and uphold fundamental values in the digital age.
URI: https://bura.brunel.ac.uk/handle/2438/31699
DOI: https://doi.org/10.1108/jmp-03-2024-0177
ISSN: 0268-3946
Other Identifiers: ORCiD: Seyedeh Asieh Hosseini Tabaghdehi https://orcid.org/0000-0002-6650-766X
ORCiD: Özlem Ayaz https://orcid.org/0000-0002-2836-6317
Appears in Collections: Brunel Business School Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2025 Emerald Publishing Limited. This author accepted manuscript is deposited under a Creative Commons Attribution Non-commercial 4.0 International (CC BY-NC) licence. This means that anyone may distribute, adapt, and build upon the work for non-commercial purposes, subject to full attribution. If you wish to use this manuscript for commercial purposes, please contact permissions@emerald.com (see: https://www.emeraldgrouppublishing.com/publish-with-us/author-policies/our-open-research-policies#green).
Size: 760.37 kB
Format: Adobe PDF


This item is licensed under a Creative Commons License.