In a world where AI is reshaping the operation of critical sectors, including healthcare, the implications of AI-driven decision-making are arguably grave, especially where such decisions conflict with users’ fundamental rights. This entanglement introduces a new dimension of accountability surrounding AI in healthcare, one that forces legal educators to revisit how the assignment of liability is taught in the classroom. The self-learning capabilities of AI, coupled with the black-box nature of the technology, defy the fault-based mechanism underlying traditional liability rules, long considered the pillar of our legal system for addressing harm and assigning responsibility. Equipping future legal professionals with knowledge and insight into these developments prepares them for the dynamics of the legal market and keeps them relevant. In this way, legal professionals are positioned to foster an environment in which technological innovation can coexist with societal values. The call for agile legal professionals is all the more pressing now that AI is assuming increasingly critical roles in fields such as healthcare, where it catalyses innovation across the healthcare ecosystem.
In this context, AI represents a frontier whose capabilities and risks have yet to be comprehensively mapped, setting it apart from predecessor technologies. On the upside, the socioeconomic dynamics of AI in healthcare, compounded by proven profitable ventures, create an appetite for its adoption within the sector (Challen et al., 2019). AI’s adeptness at identifying meaningful correlations in data, and thus producing invaluable and groundbreaking health-related insights, positions it strategically at the heart of the data-intensive healthcare sector. It is progressively driving innovative solutions in clinical applications, healthcare management and administration, research and development, and public and global health (Bélisle-Pipon et al., 2021). In the context of this paper, AI is making strides in clinical decision-making: it supports medical decisions through real-time assistance (Secinaro et al., 2021), automates diagnostic processes, assesses risk profiles, optimises therapeutic decisions (Xu et al., 2023), and advances clinical research (Lekadir et al., 2022); its potential in clinical practice is monumental. Applications such as IBM Watson for cancer diagnosis (Jie et al., 2021; Taulli, 2021), IDx-DR for the autonomous detection of diabetic retinopathy, Google’s DeepMind Health for advanced eye screening and treatment, and the automated interpretation of cardiac imaging data and risk assessment in cardiology, to name a few, are making us rethink how healthcare is delivered.
Yet, despite AI’s capacity to perform near-miracles in healthcare settings, it walks a fine line between revolutionary benefits and risky pitfalls. In situations where AI potentially harms a patient, the question of responsibility is multifaceted. Should liability fall upon the designer of the initial algorithms, the technician who inputs the data, such as the echocardiograph operator, or the clinician who decides how much weight to give to the AI’s recommendations when they conflict with other visible but unrecorded clinical evidence? The legal community is currently confronting a bottleneck caused by an influx of theories that attempt to reconcile AI-related incidents with the conventional approach to imposing liability (Mohd Shith Putera & Saripan, 2019). This is because the nature of AI defies much of our conventional understanding of causal attribution and accountability. Applying conventional regulatory paradigms, which were designed for technologies devoid of autonomous learning capabilities, to AI proves ineffective. Because AI is a novel technology carrying unprecedented risks, the notion of attributing liability as taught to law students in the classroom has been profoundly affected. This shift forces future legal professionals to rethink how the imposition of liability can be approached when intelligent systems are introduced. Incorporating a renewed understanding of liability, along with leveraging high-tech tools such as AI and Augmented Reality to nurture students’ critical thinking skills, is crucial in legal education. These advanced technologies not only enhance graduates’ practical capabilities but also provide innovative platforms for developing the deeper analytical abilities essential for addressing complex legal issues (Sulistyanto et al., 2024). Recent research highlights that while AI tools like ChatGPT raise concerns over academic integrity, they also offer immense potential for enhancing critical thinking and intellectual integrity when used responsibly in educational contexts (Plata et al., 2023). By integrating technology-driven critical thinking into legal training, students are better equipped to handle multifaceted challenges, elevating the overall quality of education, ensuring high standards of practice in the legal profession, and ultimately benefiting society and upholding legal integrity.
Although the literature is replete with discussions of the challenges of resolving accountability issues in AI and of the relevant liability rules, notably, none of the proposed theories has been empirically evaluated for effectiveness, likely owing to the scarcity of AI-related cases brought before the courts. Likewise, the involvement of legal professionals in resolving these accountability issues in practice is rather nominal, despite their fundamental role in shaping liability rules. This research therefore aims to enhance legal frameworks for AI accountability in healthcare by empirically testing the effectiveness of existing theories among legal professionals and developing actionable steps, bridging the gap between theoretical discussion and practical legal application. Its practical implications lie in integrating empirical evidence into theoretical frameworks, thereby facilitating the development of responsive legal structures that stay ahead of the curve in AI advancements. Ultimately, the research contributes to the broader social goal of demarcating healthcare services while safeguarding patient rights in a technologically ingrained healthcare setting.
Author: Iman Prihandono, S.H., M.H., LL.M., Ph.D.
The full article may be accessed at https://ajue.uitm.edu.my/wp-content/uploads/2024/10/18-Sakinatul_Bridging-Legal-Education.pdf