Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32110
Title: Evaluating the normative implications of national and international artificial intelligence policies for Sustainable Development Goal 3: good health and well-being
Authors: Mazzi, F
Keywords: artificial intelligence;sustainability;SDG 3;AI in healthcare;AI governance;global health
Issue Date: 29-May-2025
Publisher: Oxford University Press on behalf of Project HOPE - The People-To-People Health Foundation
Citation: Mazzi, F. (2025) 'Evaluating the normative implications of national and international artificial intelligence policies for Sustainable Development Goal 3: good health and well-being', Health Affairs Scholar, 3 (6), qxaf108, pp. 1 - 14. doi: 10.1093/haschl/qxaf108.
Abstract: Introduction: Artificial intelligence (AI) has transformative potential in healthcare, promising advancements in diagnostics, treatment, and patient management, and attracting significant investments and policy efforts globally. Effective AI governance, comprising guidelines, policy papers, and regulations, is crucial for its successful integration. Methods: This study evaluates 10 AI policies to highlight the implications of AI governance for healthcare: 5 from international organizations (the United Nations, the Organisation for Economic Co-operation and Development (OECD), the Council of Europe, the G20, and UNESCO) and 5 from regional/national entities (Brazil, the United States, the European Union (EU), China, and the United Kingdom). Results: The EU AI Act focuses on risk management and individual protection while fostering innovation aligned with European values. The United Kingdom and the United States adopt a more flexible approach, offering guidelines to stimulate rapid AI integration and innovation without imposing strict regulations. Brazil shows a convergence toward the EU's risk-based approach. Conclusions: The study explores the normative implications of these varied approaches. The EU's stringent regulations may ensure higher safety and ethical standards, potentially setting a global benchmark, but they could also hinder innovation and pose compliance challenges. The United Kingdom's lenient approach may drive faster AI adoption and competitiveness but risks inconsistencies in safety and ethics. The study concludes by offering recommendations for future research.
Description: Supplementary material: Supplementary data are available online at: https://academic.oup.com/healthaffairsscholar/article/3/6/qxaf108/8152586#supplementary-data .
URI: https://bura.brunel.ac.uk/handle/2438/32110
DOI: https://doi.org/10.1093/haschl/qxaf108
Other Identifiers: ORCiD: Francesca Mazzi https://orcid.org/0000-0002-6423-9147
Article number: qxaf108
Appears in Collections:Brunel Law School Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © The Author(s) 2025. Published by Oxford University Press on behalf of Project HOPE - The People-To-People Health Foundation, Inc. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Size: 249.14 kB
Format: Adobe PDF


This item is licensed under a Creative Commons License.