A Significant Number of Doctors Are Already Integrating AI Into Medical Care

A recent survey of approximately 1,000 UK general practitioners found that one in five doctors use generative AI tools—such as OpenAI’s ChatGPT or Google’s Gemini—to support clinical practice.

Doctors reported using generative AI for tasks like creating documentation after appointments, assisting in clinical decision-making, and delivering patient information—such as easy-to-understand discharge summaries and treatment plans.

The Role of AI in Transforming Healthcare

Given the excitement surrounding AI and the current pressures on health systems, it’s unsurprising that both doctors and policymakers view AI as essential to modernizing and transforming healthcare.

However, GenAI is a recent development that challenges our approach to patient safety, and much remains to be understood before it can be safely incorporated into routine clinical practice.

AI applications have traditionally been designed for specific tasks. For instance, deep learning neural networks are effective in classification tasks, such as analyzing mammograms to support breast cancer screening.
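To make "designed for a specific task" concrete, here is a minimal sketch of such a classifier in PyTorch. Everything in it is illustrative rather than a real screening model: the architecture, input size, and the two class labels are assumptions. The point is that the model's output space is fixed in advance; it can only ever answer the single question it was built for.

```python
# Minimal sketch of a task-specific classifier (assumed architecture, not a real screening model).
import torch
import torch.nn as nn

class MammogramClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel scan in, 16 feature maps out
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # exactly two predefined outputs, e.g. "benign" vs "suspicious"

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = MammogramClassifier()
scan = torch.randn(1, 1, 224, 224)          # placeholder image tensor standing in for a scan
probs = torch.softmax(model(scan), dim=-1)  # probabilities over the two fixed classes only
print(probs)
```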

The Versatility of Generative AI

However, GenAI is not limited to a single, defined task. Built on foundational models, these systems have broad capabilities, allowing them to generate text, images, audio, or a mix of these. These abilities can then be tailored for various uses, such as answering questions, coding, or image creation, with potential applications limited only by the user’s creativity.

A key challenge is that GenAI wasn’t designed with a specific purpose in mind, so safe applications in healthcare remain uncertain, making it unsuited for widespread clinical use at this time.

Another issue with GenAI in healthcare is the well-known phenomenon of “hallucinations”: outputs that are nonsensical or factually inaccurate given the input provided.

Hallucinations in GenAI have been studied when it’s used to summarize text. One study found that various GenAI tools sometimes made incorrect connections based on the text or included information not actually present in the original content.

These hallucinations happen because GenAI relies on probability—predicting the next likely word in a given context—rather than truly “understanding” as humans do. As a result, GenAI outputs are often plausible but not necessarily accurate.
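A minimal sketch shows this directly. It uses the small open GPT-2 model via the Hugging Face transformers library purely for illustration (the commercial tools doctors report using are different systems): given a prompt, the model ranks candidate next words by probability, with no step that checks any of them against reality.

```python
# Minimal sketch: a language model scores *plausible* next words; it does not verify facts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small open model, purely illustrative
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient reports that the chest pain started"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]          # scores for every candidate next token
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p:.3f}")  # ranked by plausibility, not by truth
```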

This reliance on plausibility over accuracy makes GenAI unsafe for regular use in medical practice right now.

For example, a GenAI tool that listens to patient consultations and generates summary notes could allow doctors and nurses to focus more on the patient. However, the tool might also create notes based on what it “thinks” could be true.
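The sketch below shows how such a note-taking workflow is commonly wired together. It is an illustration under assumptions, not a description of any specific product: the model name, prompt, and transcript are placeholders, and the OpenAI client is used only as a familiar example of a general-purpose GenAI API. The key observation is that nothing in the pipeline verifies the generated note against what was actually said.

```python
# Illustrative sketch of a consultation-summary pipeline; model, prompt and transcript are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

transcript = (
    "Doctor: How often do the headaches occur?\n"
    "Patient: Two or three times a week, usually in the evening."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name for illustration
    messages=[
        {"role": "system",
         "content": "Summarise this consultation as clinical notes: symptoms, frequency, plan."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
# Nothing here checks the note against the transcript; if the model drifts
# (e.g. "daily headaches", or an invented management plan), the error reads just as fluently.
```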

The Risks of Inaccurate GenAI Summaries in Healthcare

The GenAI-generated summary could incorrectly change the frequency or severity of symptoms, add symptoms the patient never mentioned, or include details not actually discussed. Healthcare professionals would then need to carefully review these notes and rely on memory to catch any plausible-sounding but inaccurate information.

In a traditional family doctor setting, where the GP knows the patient well, identifying inaccuracies may not be a major issue. However, in a fragmented healthcare system where patients are frequently seen by different providers, inaccuracies in patient records could lead to serious health risks, including delays, incorrect treatments, and misdiagnoses.

The risks tied to hallucinations are considerable. However, it’s important to note that researchers and developers are actively working to minimize these occurrences.

Another reason GenAI isn’t ready for healthcare is that patient safety depends on evaluating the technology in the specific contexts where it is used: how it performs alongside the people who rely on it, how it fits with regulations and workload pressures, and how it sits within the culture and priorities of the wider health system. This systems-based view is essential for determining whether GenAI can be used safely.

The Adaptive Nature of Generative AI

However, GenAI’s open-ended design makes it adaptable for uses that may be hard to anticipate. Additionally, developers continually update GenAI with new generic capabilities, which can change the tool’s behavior.

Moreover, harm could still occur even if GenAI functions as intended, depending on the usage context.

For instance, using GenAI chatbots for triage could affect patients’ willingness to engage with healthcare. Patients with lower digital literacy, people whose first language is not English, and non-verbal patients might all struggle to use GenAI, potentially leading to unequal outcomes. Thus, while the technology may “work,” it could inadvertently disadvantage certain users.

Such risks are difficult to anticipate through conventional safety analysis, which typically examines how failures may cause harm in specific situations. While GenAI and other AI tools hold promise for healthcare, widespread adoption will require more adaptable safety standards and regulatory oversight as these technologies evolve.

Developers and regulators must also collaborate with communities using these tools to ensure they can be safely integrated into routine clinical practice.


Read the original article on: Science Alert

Read more: AI Rapidly Evaluates Antidepressant Efficacy
