Correspondence on Letter 2 regarding “Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma”

Article information

Clin Mol Hepatol. 2023;29(3):823-824
Publication date (electronic) : 2023 May 31
doi: https://doi.org/10.3350/cmh.2023.0182
1Karsh Division of Gastroenterology and Hepatology, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
2Bristol Medical School, University of Bristol, Bristol, UK
Corresponding author: Yee Hui Yeo, Karsh Division of Gastroenterology and Hepatology, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA. Tel: +1-310-423-1971, Fax: +1-310-423-2356, E-mail: Yeehui.yeo@cshs.org
Editor: Seung Up Kim, Yonsei University College of Medicine, Korea
Received 2023 May 27; Accepted 2023 May 30.

Dear Editor,

We appreciate the thoughtful comments from Dr. Kleebayoon and Dr. Wiwanitkit regarding our research [1]. We welcome the discussion around the ethical implications of artificial intelligence (AI) technology, particularly in the context of ChatGPT’s application within healthcare. In their letter, the authors underscored the importance of establishing ethically sound practices and safeguarding against misuse of such technology, emphasizing the dual responsibility that lies with AI developers and users.

We concur with the authors’ assertion regarding the need for professional healthcare provider involvement in verifying the information rendered by ChatGPT, despite the model’s high accuracy, as shown in our study [2]. The complexity of medical decision-making necessitates human oversight and professional judgment. While AI can provide extensive information, real-world applications in healthcare demand nuanced understanding, clinical acumen, and the human touch, especially in areas such as emotional support and interpretation of patient-specific contexts [3]. Therefore, while ChatGPT can serve as an informational adjunct, its responses should be validated by healthcare professionals to ensure safe and reliable patient care [4].

Furthermore, we are in alignment with the authors’ views on the ethical application of AI, specifically chatbots. AI, akin to societal frameworks, requires comprehensive and adaptive guidelines for its conduct. Currently, there are two fundamental controls over AI conduct: (A) the alignment of the AI models and (B) the preferences and moral values of the user.

For the former, developers of AI models like GPT-4 bear the responsibility of ensuring that these technologies respect ethical, equitable, and moral values [5]. This includes ensuring that the training data is diverse and representative and does not perpetuate harmful biases or contain contentious content. Strategies such as fine-tuning the training data, setting up rules to avoid the generation of illegal or harmful content, soliciting and integrating feedback on problematic model outputs, and developing techniques to make the model refuse certain types of requests are crucial to this endeavor [6]. Nevertheless, we must also acknowledge the inherent biases in the data the model is trained on and the technical limitations of current AI technology. On the user end, the ethical use of AI necessitates responsible conduct [7]. Users must be guided to use these tools in a manner that is mindful of their potential implications and respects the boundaries of lawful and ethical conduct.

Together, we must strive towards creating a cooperative and ethically sound environment for AI applications and foster trust and understanding in these emerging technologies, particularly in sensitive fields such as healthcare.

Notes

Authors’ contribution

YHY: drafting of the manuscript. JSS, WHN: critical review and final approval of the manuscript.

Conflicts of Interest

The authors have no conflicts to disclose.

Abbreviations

AI: artificial intelligence

References

1. Kleebayoon A, Wiwanitkit V. Assessing the performance of ChatGPT: Comment. Clin Mol Hepatol 2023 May 24. doi: 10.3350/cmh.2023.0170.
2. Yeo YH, Samaan JS, Ng WH, Ting PS, Trivedi H, Vipani A, et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clin Mol Hepatol 2023 Mar 22. doi: 10.3350/cmh.2023.0089.
3. Jeffrey D. Empathy, sympathy and compassion in healthcare: Is there a problem? Is there a difference? Does it matter? J R Soc Med 2016;109:446–452.
4. Char DS, Shah NH, Magnus D. Implementing machine learning in health care - Addressing ethical challenges. N Engl J Med 2018;378:981–983.
5. OpenAI. GPT-4 System Card. OpenAI web site, <https://openai.com/gpt-4>. Accessed 25 May 2023.
6. Blackman R. A practical guide to building ethical AI. Harvard Business Review 2020;15.
7. Marr B. How Do We Use Artificial Intelligence Ethically?. Forbes web site, <https://www.forbes.com/sites/bernardmarr/2021/09/10/how-do-we-use-artificial-intelligence-ethically/?sh=591f0bc279fd>. Accessed 25 May 2023.
