The application of artificial intelligence (AI) in medical science has introduced significant innovations that have improved the diagnosis of various diseases, as well as surgery and patient treatment. AI technologies such as machine learning based diagnostic software, robotic surgery, and predictive analytics have transformed medical care, making it more accurate and efficient. However, these technologies also raise legal concerns, particularly in the sphere of medical malpractice.


Redefining Medical Negligence In The Era Of AI

In the ordinary course of events, medical professionals owe a duty of care to their patients; if they act negligently or cause injury to a patient through deficient service, it amounts to medical negligence. However, apportioning blame becomes challenging when AI has been used to make key medical decisions.

One of the primary challenges in AI-assisted medical malpractice is defining the standard of care. It is difficult to compare the conduct of AI-guided medical devices with that of a skilled human physician because much of their decision-making operates at a level that is not readily comprehensible to humans. For example, machine learning systems trained on large datasets can diagnose uncommon diseases with an accuracy that human radiologists may not match, and AI systems are now being used to detect cancer cells at a nascent stage that even a skilled doctor might not recognise.

Another considerable issue is that information stored in the databases of AI systems carries a high risk of being leaked or misused, making it potentially accessible to other users across the globe. The question then arises: who should be held liable for such a data breach? For instance, when an individual uploads a document into an AI system in order to generate information about it, the system may store that document in its database and later reproduce the confidential information it contains to another user who happens to search for keywords embedded in, or related to, that document, thereby breaching its confidentiality. In such cases, on whose shoulders does the accountability for the breach fall: the person who accessed the document, or the operator of the AI database that stored it?

Further, the case titled ANI Media Pvt. Ltd. v. OpenAI Inc & Anr., bearing case no. CS(COMM) 1028/2024, was filed before the Hon’ble High Court of Delhi in 2024, wherein ANI has alleged that ChatGPT is using its content and sharing it with users without ANI’s prior approval or any licensing agreement. To gain better clarity on the issues involved, the Hon’ble High Court also appointed experts, who gave opinions against such AI platforms and ChatGPT. Moreover, the Federation of Indian Publishers, the Indian Music Industry, IGAR Project LLP, and Flux Labs AI Private Ltd have intervened in the case and made submissions against ChatGPT for using their confidential information as well. However, the matter is still pending and is yet to be decided by the Hon’ble High Court of Delhi.

The maxim res ipsa loquitur, or “the thing speaks for itself,” is often applied in medical malpractice matters where it is apparent that the harm could only have resulted from a departure from normal treatment. The nature of AI complicates the application of this principle, as it makes certain decision-making processes difficult to understand. When an AI recommendation is called into question, it is difficult to determine the actual basis on which that particular advice was rendered, which increases the need for expert testimony to determine whether the AI’s output deviated from current medical standards.

In India, matters of medical negligence are governed by the Consumer Protection Act, 2019, which treats medical services as ‘services’ and patients as ‘consumers.’ The Act provides aggrieved parties with a forum to seek redressal against negligence in medical services. To establish a case of medical negligence, the complainant must prove that the healthcare provider owed a duty of care, that the duty was breached, and that the breach resulted in legal injury. Such legal injury, if proved, gives rise to damages. The landmark case of Indian Medical Association v. V.P. Shantha, AIR 1995 SC 550, held that medical professionals are subject to consumer protection laws and can accordingly be held liable for acting negligently where a certain standard of care was owed.

It is also worth considering a case reported by the Times of India in October 2024, in which a patient died of advanced liver disease after a doctor’s inappropriate referral. The Karnataka State Consumer Disputes Redressal Commission held the doctor liable for referring the patient to a dermatologist instead of a gastroenterologist, which delayed treatment; owing to such carelessness, the patient died. The case directly raises concerns about clinical judgment and timely intervention. Now consider a scenario in which an AI-based system suggests an opinion and the doctor follows it as it is, without applying his own mind or conducting any due diligence. In that case, who should be held accountable: the AI or the doctor himself?

The use of AI in the manufacture of medical products, as well as the manufacture of AI-based medical products and equipment, has grown tremendously in recent times. But are there any laws that hold the manufacturer liable if such AI-based machinery is defective? The answer is yes: there is a specific law that deals with the accountability of manufacturers.

Manufacturers of AI-based medical devices ought to be held strictly liable for defects that result in injury, regardless of negligence. The Consumer Protection Act, 2019 includes product liability provisions that allow patients to claim compensation or redressal from manufacturers of defective AI systems. However, determining whether a product or its user interface is defective, especially when AI systems are designed to evolve and learn from experience, is another considerable issue.

Regulatory bodies also play a crucial role in mitigating the risks of AI-driven medical malpractice. The Indian Council of Medical Research (ICMR) has published guidelines on the use of AI in healthcare that prioritise patient safety, algorithmic transparency, and accountability. Adherence to these guidelines can provide a sound basis for developing AI-driven medical solutions. Additionally, the Bureau of Indian Standards (BIS) is working on standards for AI in healthcare, which may further clarify such legal obligations.

The guidelines emphasise the significance of informed consent, requiring that patients be made aware of the extent of AI integration in their treatment. This is aligned with existing provisions of Indian law, ensuring that patients are informed of the risks, benefits, and alternatives (if any) involved in AI-assisted medical interventions. The guidelines also stress the necessity of algorithmic transparency, calling for AI models to disclose sufficient detail for healthcare providers and patients to understand how recommendations are reached.


Conclusion 

It can be concluded that AI has the transformative power to redefine India’s healthcare industry, but this will also require significant amendments to the current legal system so that it can address the legal challenges that have already arisen or will arise in the near future. These include, but are not limited to, defining rules on liability, informed consent, and standard of care. Moreover, the legal framework must be developed in a way that prioritises and protects patients’ rights, while ensuring that technological advancements remain within legal bounds and benefit patients. As the jurisprudence continues to develop, a cooperative effort between legal specialists, medical experts, and technologists is necessary to balance AI innovation with ethical and legal responsibility.


Author: Namanveer Singh Sodhi
Co-Author: Pritish Kumar Panda