Accountability in AI medicine: A critical appraisal of ChatGPT in patient self-management and screening

Article information

Clin Mol Hepatol. 2025;31(1):e1-e2
Publication date (electronic): 2024 September 26
doi: https://doi.org/10.3350/cmh.2024.0769
1Department of Urology, Fujian Provincial Hospital, Fuzhou University Affiliated Provincial Hospital, Shengli Clinical Medical College of Fujian Medical University, Fuzhou, Fujian, China
2Department of Urology, Xuzhou Central Hospital, Xuzhou, Jiangsu, China
Corresponding author: Tian Xia, Department of Urology, Xuzhou Central Hospital, No. 199, Jiefang South Road, Quanshan District, Xuzhou, Jiangsu, 221009, China Tel: +86-0516-83956900, Fax: +86-010-83956365, E-mail: 542819434@qq.com
Jiawen Wang, Department of Urology, Fujian Provincial Hospital, Fuzhou University Affiliated Provincial Hospital, Shengli Clinical Medical College of Fujian Medical University, No. 134, Dongjie Street, Fuzhou, Fujian, 350001, China Tel: +86-0591-87557768, Fax: +86-0591-87532356, E-mail: 1811210684@pku.edu.cn
Editor: Gi-Ae Kim, Kyung Hee University, Korea
Received 2024 September 9; Accepted 2024 September 24.

Dear Editor,

We read with great interest the recent publication by Yeo et al. [1], which presents an optimistic view of the potential of ChatGPT in responding to patient queries about cirrhosis and hepatocellular carcinoma (HCC). The study suggests that ChatGPT exhibits promising capabilities, which could contribute to increased awareness of these conditions and more effective management. The authors propose that, with adequate training, language generation systems such as ChatGPT could be further optimized to improve their performance in patient self-management.

While we commend the authors for their insightful work, we feel compelled to raise some pertinent concerns. Specifically, we question who is accountable for inaccuracies in the responses provided by ChatGPT. A previous study found that, when compared against clinical guidelines, only 26% of ChatGPT's responses to clinical questions were fully accurate, while 48% contained errors or were misleading [2]. Such a discrepancy could leave a substantial number of patients with misleading information, with potentially grave consequences.

Despite significant advances in ChatGPT's conversational and interactive capabilities, the potential for factual inaccuracies remains [3]. It is therefore imperative that human stakeholders take responsibility for ensuring ChatGPT's effectiveness and appropriate use in healthcare settings.

When medical decisions influenced by ChatGPT result in harm, the question of accountability becomes particularly critical, and the principles of accountability need to be clearly defined. However, current legal frameworks in most jurisdictions offer little clarity on this issue, and it is uncertain whether traditional product liability theories apply to ChatGPT. Although courts in Europe and the United States have classified medical software as a medical device, developers may categorize ChatGPT as a service rather than a product to evade liability, complicating the process for patients seeking redress. It must therefore be clarified who bears responsibility when ChatGPT is involved in medical decision-making and causes harm: the service provider, the technology developer, or the user. This determination may require a joint assessment, based on the specific circumstances, of the roles that the algorithmic model, the training data, and the user's input played in the tortious act.

Moreover, a risk transfer mechanism is essential. Drawing on insurance systems already established in some countries, risk can be socialized through the purchase of commercial insurance, ensuring that victims receive timely compensation even when liability is difficult to assign.

Finally, ethical and legal training is indispensable. Medical professionals must be trained in the ethical and legal frameworks governing artificial intelligence so that they can make responsible decisions when using ChatGPT.

We support the potential application of ChatGPT in the medical field. Nevertheless, until ethical concerns such as accountability are adequately addressed, we believe clinicians should refrain from endorsing ChatGPT for patient self-management.

Notes

Authors’ contribution

Jiawen Wang: Writing - Original Draft. Tian Xia: Writing - Review & Editing.

Conflicts of Interest

The authors have no conflicts to disclose.

Abbreviations

HCC: hepatocellular carcinoma

References

1. Yeo YH, Samaan JS, Ng WH, Ting PS, Trivedi H, Vipani A, et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clin Mol Hepatol 2023;29:721–732.
2. Lombardo R, Gallo G, Stira J, Turchi B, Santoro G, Riolo S, et al. Quality of information and appropriateness of Open AI outputs for prostate cancer. Prostate Cancer Prostatic Dis 2024. doi: 10.1038/s41391-024-00789-0.
3. Whiles BB, Bird VG, Canales BK, DiBianco JM, Terry RS. Caution! AI bot has entered the patient chat: ChatGPT has limitations in providing accurate urologic healthcare advice. Urology 2023;180:278–284.