
Artificial intelligence in healthcare: risk analysis and regulatory issues

Illustration of AI in healthcare (Photo: UNAIR NEWS)

This article examines the rapid development and application of artificial intelligence (AI) in healthcare, alongside an analysis of the associated risks and regulatory challenges. AI has significantly transformed healthcare services by improving diagnostic accuracy and enabling the use of software as a medical device (SaMD), surgical robots, and AI-driven applications such as machine learning, natural language processing, and clinical decision-support systems. These advancements not only enhance the efficiency and quality of healthcare services but also contribute substantially to economic growth in countries such as the United States, the United Kingdom, and Indonesia.

The article concludes that an adaptive, risk-based regulatory framework supported by clear institutional governance and harmonised international standards is crucial for the responsible use of AI in healthcare. Such regulation is necessary to ensure patient safety, data protection, transparency, and accountability, while simultaneously fostering innovation in healthcare technology.

Regulation of AI in healthcare in the United States, the United Kingdom and Indonesia

Legislative law-making usually addresses a specific issue that is significant enough to affect the welfare of many people and their organisations, or the government itself, or prominent enough to capture the attention of politicians. Laws can also arise from other forces, including fear, social unrest, violence, environmental degradation, and technological advancement. The use of AI in healthcare illustrates these dynamics, as it carries significant risks to patients and society. For instance, an AI algorithm can affect clinician productivity (e.g. if the AI tool inaccurately delineates the heart's boundaries in a cardiac image volume, requiring manual correction by the cardiologist), but it can also jeopardise patient health and significantly affect clinical outcomes (e.g. if the AI tool fails to identify a life-threatening condition). Therefore, to minimise the risks of AI and maximise its benefits in future healthcare, regulation is essential to identify, analyse, understand, and monitor potential AI risks. In this context, such regulation fulfils the rational requirements for proper legislation made by stakeholders.

Recent regulatory developments in the US and the UK demonstrate how regulators aim to minimise AI risks. Table 1 compares key points of the regulatory approaches to AI in the healthcare sector in the US, the UK, and Indonesia; however, their progress and approaches in establishing regulatory frameworks vary. The US uses a decentralised approach, placing the oversight and regulation of AI in healthcare under the responsibility of the Food and Drug Administration (FDA) and the states. The UK adopts a prescriptive and precautionary stance with comprehensive coverage, focusing on rigorous risk assessment and compliance measures. The Medicines and Healthcare products Regulatory Agency (MHRA) unveiled the Software and AI as a Medical Device Change Programme, an initiative aimed at clarifying regulatory requirements for software and AI while safeguarding patient welfare.

Risks and issues of AI in healthcare from a legal perspective

Over the past few years, experts have expressed concern about the possible negative effects of medical AI, including risks related to clinical, technical, and socio-ethical aspects. The first is the risk to patient safety. Errors in AI algorithms can lead to misdiagnoses that threaten patient safety, and incorrect algorithms can also result in unnecessary treatment when healthy people are classified as having certain diseases.

Patient safety is also at risk if doctors and health workers cannot use AI or use it incorrectly, because the performance of AI algorithms depends on how end users, including doctors and healthcare professionals, apply them in practice. As a result, incorrect medical judgement and decision-making can harm patients. It is therefore not enough for doctors and the public merely to have access to medical AI tools; they also need to understand how and when to use the technology.

AI regulatory issues in healthcare in the US, the UK, and Indonesia

Several national, regional, and international agencies have been implementing AI strategies, action plans, and policy documents since the beginning of 2016. In contrast to other countries, the US has adopted a piecemeal approach to regulating AI, with several regulations at the state, federal, and local levels focusing on AI applications in particular industries, such as healthcare.

Three proposals concerning AI in healthcare are currently being considered in the US. First, the Better Mental Health Care for Americans Act (S293) was submitted to the Senate on March 22, 2023; this bill amends programmes and payments under Medicare, Medicaid, and the Children's Health Insurance Program. Second, the Health Technology Act of 2023 (H.R.206), introduced on January 9, 2023, would establish that AI or machine-learning technologies may be qualified to prescribe medications. Third, the Pandemic and All-Hazards Preparedness and Response Act (S2333), presented in July 2023, would extend certain Public Health Service Act initiatives.

Regulatory framework for AI in healthcare

The development of complex societies, including the use of AI in healthcare, requires social control, which rests largely on the internalisation of shared norms. Law is considered a key form of formal social control because it establishes rules of conduct and sanctions for misconduct. According to Wolfgang Friedmann, law, through legislative or executive responses to new social conditions and ideas, not only articulates but also sets the direction for major social change; this function refers to purposeful, planned, and directed social change initiated, guided, and supported by law. Similarly, Roscoe Pound views law as a social institution that meets social needs and demands in societal existence.

Author: Agus Yudha Hernoko

Artificial intelligence in healthcare: risk analysis and regulatory issues
Dona Budi Kharisma, Agus Yudha Hernoko & Prawitra Thalib
Published online: 24 Oct 2025, https://doi.org/10.1080/20508840.2025.2576415
Link: https://www.tandfonline.com/doi/full/10.1080/20508840.2025.2576415