Wang and Xia: Accountability in AI medicine: A critical appraisal of ChatGPT in patient self-management and screening
Dear Editor,
We have read with great interest the recent publication by Yeo et al. [1], which presents an optimistic view of the potential for ChatGPT to respond to patient queries about cirrhosis and hepatocellular carcinoma (HCC). The study suggests that ChatGPT exhibits promising capabilities that could increase awareness of these conditions and improve the efficacy of their management. The authors propose that, with adequate training, language generation systems such as ChatGPT could be further optimized to improve their performance in the context of patient self-management.
While we commend the authors for their insightful work, we feel compelled to raise some pertinent concerns. Specifically, we question who is accountable for inaccuracies in the responses ChatGPT provides. A previous study found that, when compared with clinical guidelines, only 26% of ChatGPT's responses to clinical questions were fully accurate, while 48% contained errors or were misleading [2]. An error rate of this magnitude could expose a substantial number of patients to misleading information, with potentially grave consequences.
Despite significant advances in ChatGPT's conversational and interactive capabilities, the potential for factual inaccuracies remains [3]. It is therefore imperative that human stakeholders take responsibility for ensuring ChatGPT's effectiveness and appropriate use in healthcare settings.
When medical decisions influenced by ChatGPT result in harm, the question of accountability becomes particularly critical, and the principles of accountability need to be clearly defined. However, the current legal frameworks in most jurisdictions offer little clarity on this issue, and it is uncertain whether traditional product liability theories apply to ChatGPT. Although courts in Europe and the United States have classified medical software as a medical device, developers may categorize ChatGPT as a service rather than a product to evade liability, complicating the process for patients seeking redress. It must therefore be clarified who bears responsibility when ChatGPT is involved in medical decision-making and causes harm: the service provider, the technology developer, or the user. This determination may require a joint assessment, based on the specific circumstances, of the roles that the algorithmic model, the training data, and the user's input played in the tortious act.
Moreover, a risk transfer mechanism is essential. Drawing on the insurance systems established in some countries, risks could be transferred to society through the purchase of commercial insurance, ensuring that victims receive timely compensation when liability is difficult to assign.
Finally, ethical and legal training is indispensable. Medical professionals must be trained in the ethical and legal frameworks governing artificial intelligence so that they can make responsible decisions when using ChatGPT.
We are supportive of the potential application of ChatGPT in the medical field. Nevertheless, we believe that until ethical concerns, such as accountability, are adequately addressed, clinicians should refrain from endorsing ChatGPT for patient self-management.

FOOTNOTES

Authors’ contribution
Jiawen Wang: Writing - Original Draft. Tian Xia: Writing - Review & Editing.
Conflicts of Interest
The authors have no conflicts to disclose.

Abbreviations

HCC
hepatocellular carcinoma

REFERENCES

1. Yeo YH, Samaan JS, Ng WH, Ting PS, Trivedi H, Vipani A, et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clin Mol Hepatol 2023;29:721-732.
2. Lombardo R, Gallo G, Stira J, Turchi B, Santoro G, Riolo S, et al. Quality of information and appropriateness of Open AI outputs for prostate cancer. Prostate Cancer Prostatic Dis 2024. doi:10.1038/s41391-024-00789-0.
3. Whiles BB, Bird VG, Canales BK, DiBianco JM, Terry RS. Caution! AI bot has entered the patient chat: ChatGPT has limitations in providing accurate urologic healthcare advice. Urology 2023;180:278-284.
ORCID iDs

Jiawen Wang
https://orcid.org/0000-0003-0245-1449

Tian Xia
https://orcid.org/0009-0002-5739-5931
