Short Communication, J Pulm Med Vol: 5 Issue: 1
Artificial Intelligence in Health Care: Ethical Challenges
Pathology Department, Memorial Hospital and Research Center, Jaipur, India
Received Date: 11 January, 2021; Accepted Date: 25 January, 2021; Published Date: 01 February, 2021
Citation: Rateesh S. Artificial Intelligence in Health Care: Ethical Challenges (2021) J Pulm Med 5:1
Artificial intelligence (AI) has produced powerful models for diagnosis, drawing on huge volumes of patient data with increasing precision in a dynamic manner. It poses ethical challenges in terms of patient safety, privacy, responsibility for decisions taken and confidentiality. These machine-learning ‘black boxes’ may seriously jeopardize the traditional Hippocratic ethical principles of the doctor-patient relationship. It is essential for politicians, policy makers, legislators and regulatory bodies to meet the ethical challenges pertaining to the use of AI in healthcare if the opportunities offered by the new technology are to benefit patients without compromising ethical medical practice. This paper enumerates the ethical challenges of this emerging machine-based decision making.
Keywords: Artificial intelligence; Machine learning algorithms; Bias; Ethical issue
Artificial intelligence (AI) is a collection of technologies that use algorithms and computer software to emulate human cognition in the analysis of large, complicated data. Its application in health care aims at precise and accurate intervention for patient care [1,2].
The historical developments can be traced back to the 1960s, when ‘Dendral’, the earliest problem-solving program, was developed, followed by MYCIN, the first platform using AI in a health care setting. In the 1990s, ‘Bayesian networks’ and ‘artificial neural networks’ were applied to further AI in health care [5,6].
In brief, AI uses machine learning algorithms that recognize patterns in behavior and create their own logic. It is essential that these algorithms are tested repeatedly and validated before they are applied in any field. In the health care sector, they have been used in diagnostics, in the development of treatment protocols, in drug development and in the monitoring of patient care. AI brings medicine and information technology (IT) together in a complementary fashion. World-renowned health care services such as the Mayo Clinic in the US and the National Health Service (NHS) in the UK, along with technology giants like IBM and Google, have developed AI algorithms for health care [7-10].
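The validation step described above is usually implemented by holding out data the model never trains on. The following minimal sketch (standard-library Python only; the data, classifier and thresholds are purely illustrative, not from any real clinical system) shows a nearest-neighbour classifier evaluated on a held-out validation set:

```python
import random

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    neighbors = sorted(train, key=lambda xy: abs(xy[0] - query))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# Synthetic 1-D data: values below 5 belong to class 0, values above to class 1.
random.seed(42)
data = [(x, 0) for x in (random.uniform(0, 5) for _ in range(50))] + \
       [(x, 1) for x in (random.uniform(5, 10) for _ in range(50))]
random.shuffle(data)

# Hold out 30% of the data for validation -- the model never trains on it.
split = int(len(data) * 0.7)
train, validation = data[:split], data[split:]

correct = sum(knn_predict(train, x) == y for x, y in validation)
accuracy = correct / len(validation)
print(f"Held-out validation accuracy: {accuracy:.2f}")
```

Only a validation set kept apart from training can reveal whether an algorithm has learned a generalizable pattern or has merely memorized its training data; repeating this check before deployment is the essence of the testing the text calls for.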
The recent literature has been preoccupied with the utilitarian advantages of AI application in health care. These include the development of radiological tools and the use of AI for chest X-ray screening to address the shortage of skilled radiologists and ultrasound technicians. AI has been used in keeping electronic health records, in the analysis of pathological images, in antibiotic resistance, medical devices and immunotherapy in cancer patients, as a risk predictor, and in day-to-day monitoring of vital body parameters through wearable personal devices. It also finds application in genomic sequencing databases, faster data collection and processing, and the enhanced precision of robot-assisted surgery [11-18].
Much emphasis has been placed on the application of AI in developing countries where trained health care personnel or doctors are scarce. It is hypothesized that AI could reduce outsourcing and thereby improve health care facilities.
This technology poses ethical challenges in terms of patients’ privacy, safety and preferences which, in the absence of current regulations and policies, could jeopardize the pillars of medical ethics: autonomy, beneficence, non-maleficence, justice, confidentiality and the principles of informed consent. AI application in the health care setting cannot proceed in a hasty, haphazard manner; rather, it calls for reframing the basics of medical education. Legal issues such as malpractice and product liability also need to be settled. At best, AI can improve the delivery of care to patients, but the black box could eliminate the human element, the doctor, from the process [19,20].
The major limitations of AI are explainability, huge data requirements and transferability. The machine learning algorithms known as ‘black boxes’ make decisions on the basis of a huge number of connections, so it becomes difficult for the human mind to understand how and on what basis a decision was reached. This creates the problem of bias and calls reliability into question. Neural networks require large volumes of data for training, which at times can limit the application of AI.
The doctor-patient relationship is a bond that requires faith and interaction. Patient treatment follows a holistic approach, tailored to each individual and taking into account the patient’s wishes through shared decision making. An automated system undermines the basics of communication with the patient: one-to-one contact, facial expressions, voice and other non-verbal signs are not interpreted by a machine, making the process highly mechanical and emotionless, without human contact, adding to the patient’s loneliness and despair. Other ethical issues, such as doctors’ reliance on the decision-making ‘black box’ and the responsibility and stress of care, also mushroom with AI application. In cases where AI harms the patient, or where the doctor and the AI disagree, who will be perceived as right is again an ethical dilemma.
The day-to-day increase in wearable health devices promotes patients’ ownership of their own health, but could add to their anxiety as well.
Gaining trust in AI for healthcare applications is a daunting task. There are no nationally agreed guidelines on standards for the quality of AI devices. Once AI is incorporated into the system, patient, pricing and profit-sharing issues will crop up. The younger generation’s greater acceptance of AI may lead to a two-tiered health care system: a younger population relying on AI and an older population relying on traditional doctors. In such a scenario, will we be able to give patients the choice between a doctor and AI? These are contentious issues that have to be addressed before AI is applied at full throttle in healthcare.
Who shall be responsible for a wrong that occurs in the course of algorithmic decisions: the clinician, the health care organization, the policy maker or the AI developer? Accountability for decisions made by AI remains an ethical issue.
Machine-based algorithms outside health care have been criticized for discrimination based on race, gender, age and religion. Training data are obtained by voluntary consent from participants, who will at times be only those who have given informed consent for their data use; this might result in underrepresentation of low socioeconomic strata or of representative populations of developing countries. Can clinicians rely on the decision-making abilities of an AI that has emerged from data of developed nations, based on a subset of the population different from the population where the AI is applied? Will it not result in bias, inequality or unfairness?
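One concrete way to surface the underrepresentation problem described above is to audit a model’s accuracy separately for each demographic subgroup rather than reporting a single aggregate figure. The sketch below uses purely hypothetical group labels and predictions, invented for illustration:

```python
# Hypothetical audit: compare a model's accuracy across demographic subgroups.
# Each record is (group label, model prediction, true outcome); all names and
# numbers are illustrative, not taken from any real clinical system.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

def per_group_accuracy(records):
    """Return accuracy computed separately for each group label."""
    totals, hits = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

acc = per_group_accuracy(records)
# A large gap between the best- and worst-served groups signals that the
# training data may have underrepresented one of them.
gap = max(acc.values()) - min(acc.values())
print(acc, f"accuracy gap: {gap:.2f}")
```

An aggregate accuracy of 75% would hide the fact that, in this toy data, one group is classified perfectly while the other is served no better than chance; this is exactly the inequality the text warns about.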
The use of data has to abide by the European Union’s 2018 General Data Protection Regulation (GDPR), which raises concerns about maintaining the confidentiality of patient data. Does the value to wider society of data about a person’s health trump an individual’s right to withdraw consent for its use? And who owns the data: the patient (the source), the system (the collector) or the developer? The question remains unanswered.
The increased automation, and clinicians’ dependence on it, may skew doctors’ view of normality and hamper their cognitive pattern recognition. What will the scenario be if the technology fails? How machine mistakes could be detected also raises concerns. Protagonists of AI will cite the example of airline autopilot mode, which does not compromise the training of pilots; yet I would point to autopilot failures as causes of plane crashes.
AI enables researchers to analyze data quickly, thoroughly and inexpensively; this might shift research from ‘gold standard’ methods to the mere analysis of larger data sets. Machine-based learning focuses on finding patterns and correlations in data without knowledge of causation, ultimately defeating the whole purpose of research.
The next issue is the role of regulatory bodies in balancing the protection of the public and clinicians against promoting service growth and innovation. Psychiatric patients, for example, who are at risk from any bad advice from a digitized system, raise particular concerns.
The big business of health care, with its resources and expertise, attracts capital, time and knowledge. It is possible that health care providers will sell data for profit, which again raises the question of who owns the data. Financial interests arising from collaboration with technology companies may generate conflicts of interest, and the duration of intellectual property rights over the technology is another unanswered question for the companies holding such ownership.
A successful AI would improve clinical efficiency, as doctors could delegate routine tasks to machines and spare time for more meaningful work. Decision-support tools would enhance doctors’ confidence in managing clinical uncertainty. The caveat is the medico-legal position of such decisions. AI could also demean the profession and affect job satisfaction, as the social element of consultation is reduced. One of the major long-term effects on the health care delivery system could be a two-tiered system, with the wealthy having access to the best AI healthcare because their deep pockets enable them to afford it. Uncommon diseases could be underdiagnosed by AI. Whether Western countries will share the technology with developing countries, coupled with funding problems, might further curtail the benefits of AI.
AI has a long way to go; as of now it appears to be a double-edged sword. Policymakers, legislators, politicians, clinicians and ethicists need to formulate a feasible blueprint, considering each aspect and predicted outcome of AI in healthcare, so that future generations can reap the benefits of AI while eliminating the harm.
There is no doubt that potential obstacles await every road of innovation that joins man and machine; what needs to be guarded is the way professionals and industrial governance make it a success.
- Patel VL, Shortliffe EH, Stefanelli M (2009) The coming of age of artificial intelligence in medicine. Artif Intell Med 46:5-17
- Graham J (2006) Artificial intelligence, machine learning and the FDA.
- Lindsay RK, Buchanan BG, Feigenbaum EA, Lederberg J (1993) Dendral: A case study of the first expert system for scientific hypothesis formation. 61(2):209-261
- Clancey WJ, Shortliffe EH (1984) Readings in medical artificial intelligence: the first decade. Addison-Wesley Longman Publishing Co., Inc: 152
- Baxt WG (1991) Use of an artificial neural network for the diagnosis of myocardial infarction. Ann Intern Med 115(11):843-848
- Maclin PS, Dempsey J, Brooks J, Rand J (1991) Using neural networks to diagnose cancer. J Med Syst 15(1):11-19
- Coiera E (1997) Guide to medical informatics, the Internet and telemedicine. Chapman & Hall, Ltd.
- Power B (2015) Artificial Intelligence Is Almost Ready for Business. Massachusetts General Hospital.
- Bloch-Budzier S (2016) NHS using Google technology to treat patients
- Lorenzetti L (2016) Here's how IBM Watson Health is transforming the health care industry.
- Lee CS, Nagy PG, Weaver SJ (2013) Cognitive and system factors contributing to diagnostic errors in radiology. AJR Am J Roentgenol 201:611-7
- Bouton CE, Shaikhouni A, Annetta NV (2016) Restoring cortical control of functional movement in a human with quadriplegia. Nature 533:247-50
- Jha S, Topol EJ (2016) Adapting to artificial intelligence: Radiologists and pathologists as information specialists. JAMA 316:2353-4
- Fiszman M, Chapman WW, Aronsky D (2000) Automatic detection of acute bacterial pneumonia from chest X-ray reports. J Am Med Inform Assoc 7:593-604
- Darcy AM, Louie AK, Roberts LW (2016) Machine learning and the profession of medicine. JAMA 315:551-2
- Li CY, Liang GY, Yao WZ (2016) Integrated analysis of long noncoding RNA competing interactions reveals the potential role in progression of human gastric cancer. Int J Oncol 48:1965-76
- Barnes B, Dupre J (2009) Genomes and what to make of them. University of Chicago Press.
- Artificial Intelligence and Machine Learning for Healthcare. Sigmoidal.
- Luxton DD (2016) Artificial intelligence in behavioral and mental health care. San Diego, CA: Elsevier Academic Press
- Peek N, Combi C, Marin R, Bellazzi R (2015) Thirty years of artificial intelligence in medicine (AIME) conferences: A review of research themes. Artif Intell Med 65(1):61-73
- Amato F, Lopez A, Pena-Mendez EM, Vanhara P, Hampl A, et al. (2013) Artificial neural networks in medical diagnosis. J Appl Biomed 11(2):47-58
- Ramesh AN, Kambhampati C, Monson JRT, Drew PJ (2004) Artificial intelligence in medicine. Ann R Coll Surg Engl. 86(5):334-338
- Lupton D (2017) Self-tracking, health and medicine. Health Sociol Rev 26(1):1-5
- Schonberger D (2019) Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications. International Journal of Law and Information Technology 27(2)
- Sears M (2018) AI bias and the 'people factor' in AI development. Jersey City: Forbes Media
- Nebeker C, Harlow J, Giacinto-Espinoza R, Linares-Orozco R, Bloss C, et al. (2018) Ethical and regulatory challenges of research using pervasive sensing and other emerging technologies: IRB perspectives. AJOB Empir Bioeth 8(4):266-76
- Pimple KD (2013) Emerging pervasive information and communication technologies (PICT): Ethical challenges, opportunities and safeguards. Dordrecht: Springer Netherlands
- Nebeker C, Bartlett Ellis RJ, Torous J (2019) Development of a decision-making checklist tool to support technology selection in digital health research. Transl Behav Med
- US Department of Health, Education, and Welfare (1979) Washington, DC: US Department of Health and Human Services. 23192-7
- Belsher BE, Smolenski DJ, Pruitt LD, Bush NE, Beech EH, et al. (2019) Prediction models for suicide attempts and deaths: A systematic review and simulation. JAMA Psychiatry
- The Institute for Ethics in Artificial Intelligence. 2019.
- Van Velthoven MH, Smith J, Wells G, Brindley D (2018) Digital health app development standards: A systematic review protocol. BMJ Open 8(8):e022969
- Jiang F, Jiang Y, Zhi H, Dong Y, Li H, et al. (2017) Artificial intelligence in healthcare: Past, present and future. Stroke Vasc Neurol 2(4):230-43