Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94-98

Foresee Medical. Natural language processing in healthcare. 2023.

Nuffield Council on Bioethics. Artificial intelligence (AI) in healthcare and research. 2018.

Pamarthy K, Cheung D. Doctors' notes to data insights: how natural language processing helps decode healthcare. 2022.

Schulman J, Zoph B, Kim C. Introducing ChatGPT. 2022.

Artificial intelligence in healthcare

02 August 2023
Volume 31 · Issue 8

Artificial intelligence has been discussed increasingly in recent months, particularly in relation to the release of ChatGPT, a model for conversational interaction made available for public use in November 2022 (Schulman et al, 2022). ChatGPT, like other natural language processing tools, is designed to provide a written response to questions in a way that appears natural (Foresee Medical, 2023).

The field of artificial intelligence has multiple areas that have been identified as potentially useful in a healthcare setting, including natural language processing, machine learning and process automation (Nuffield Council on Bioethics, 2018; Davenport and Kalakota, 2019). It has the potential to allow for more efficient diagnosis and treatment of illness, improved administration, and the creation of clinical documentation (Davenport and Kalakota, 2019).

Just one example of the potential for natural language processing in healthcare is Google Cloud’s Healthcare Natural Language Application Programming Interface, which is designed to read unstructured medical text and generate a structured representation of knowledge from the data (Pamarthy and Cheung, 2022). This can potentially be used to extract relevant information from a large dataset and derive insights from medical records that can then be integrated into healthcare processes.
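The general idea of turning free-text notes into structured data can be illustrated with a toy sketch. The snippet below is purely illustrative and is not the Google Cloud API: real clinical NLP services use trained language models, whereas this sketch matches terms against a small hypothetical vocabulary simply to show the shape of the output, i.e. a list of typed entities extracted from an unstructured note.

```python
# Illustrative sketch only: shows the *kind* of structured output a clinical
# NLP service produces. The vocabularies below are hypothetical examples,
# not part of any real API.
MEDICATIONS = {"metformin", "lisinopril", "atorvastatin"}
CONDITIONS = {"hypertension", "type 2 diabetes", "asthma"}

def extract_entities(note: str) -> list[dict]:
    """Return a structured list of entities found in an unstructured note."""
    text = note.lower()
    entities = []
    for term in MEDICATIONS:
        if term in text:
            entities.append({"text": term, "type": "MEDICATION"})
    for term in CONDITIONS:
        if term in text:
            entities.append({"text": term, "type": "CONDITION"})
    return sorted(entities, key=lambda e: e["text"])

note = "Patient with type 2 diabetes and hypertension, continues metformin."
for entity in extract_entities(note):
    print(entity)
```

A production service would return far richer structure (relations, codes from licensed vocabularies, confidence scores), but the principle is the same: unstructured text in, machine-readable records out.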

However, healthcare applications of artificial intelligence also raise concerns, particularly for the ethics of patient care (Nuffield Council on Bioethics, 2018). Artificial intelligence can make mistakes, and is dependent on the data used to train and inform any model. This will have been evident to anyone who has used ChatGPT, as it is designed to produce an answer that sounds natural and plausible, not necessarily one that is strictly accurate. This poses the question of who is responsible if a patient is given inaccurate information by a tool that uses artificial intelligence. It also raises concerns about transparency and informed decision making, when the logic that drives a particular tool or model may be difficult to explain to someone who is not an expert.

Artificial intelligence is likely to vastly improve healthcare, if applied carefully, responsibly and ethically. It has the potential to reduce pressure on the workforce and streamline the provision of care. However, I think we are a long way from artificial intelligence being usefully implemented in everyday healthcare. Before this can happen, there are important ethical questions to be addressed and a lot of research to be done.