An AI assistant can explain those lab results for you


When Judith Miller had routine blood work in July, she received a phone alert the same day her lab results were posted online. So when her doctor messaged the next day that her overall results were good, Miller wrote back to ask about the high carbon dioxide and low anion gap flagged in the report.

While the 76-year-old waited to hear back, Miller did something patients are increasingly doing when they can't reach their health care team: She put her test results into Claude and asked the AI to evaluate the data.

“Claude helped give me a clear understanding of the abnormalities,” Miller said. The chatbot's explanation didn't flag anything alarming, she said, so she wasn't worried while waiting to hear back from her doctor.

Patients have unprecedented access to their medical records, often through online patient portals such as MyChart, because federal law requires health organizations to immediately release electronic health information, such as notes on doctor visits and test results. A study published in 2023 found that 96% of patients surveyed want immediate access to their records, even if their provider hasn't reviewed them.

Many patients use large language models, or LLMs, such as OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, to interpret their records. That help comes with some risk, though. Doctors and patient advocates warn that AI chatbots can produce wrong answers and that sensitive medical information may not stay private.

Still, most adults are cautious about AI and health. Fifty-five percent of those who use or interact with AI are not confident that the information provided by AI chatbots is accurate, according to a 2024 KFF poll. KFF is a health information nonprofit that includes KFF Health News.

“LLMs are very powerful in theory and can give great advice, but they can also give truly terrible advice depending on how they're prompted,” said Adam Rodman, an internist at Beth Israel Deaconess Medical Center in Massachusetts.

It can be very difficult for patients without medical training to tell whether AI chatbot tools are making mistakes, said Justin Honce, a neuroradiologist at UCHealth in Colorado.

“Ultimately, there's just a need for caution in general with LLMs. With the latest models, these concerns keep becoming less and less of an issue, but they have not been fully resolved,” Honce said.

Rodman has seen an uptick in AI use among his patients in the past six months. In one case, a patient took a screenshot of her lab results on MyChart, then uploaded it to ChatGPT to prepare questions ahead of her appointment. Rodman said he welcomes patients showing him how they use AI, and that their research creates an opportunity for discussion.

Nearly 1 in 7 adults over age 50 use AI to get health information, according to a recent University of Michigan poll, while 1 in 4 adults under age 30 do so, according to the KFF poll.

Using the internet to advocate for better care isn't new. Patients have traditionally turned to websites such as WebMD, PubMed, or Google to look up the latest research, and asked other patients for advice on social media platforms such as Facebook or Reddit. But the ability of AI chatbots to generate personalized recommendations or second opinions in seconds is new.

Liz Salmi, director of communications and patient initiatives at OpenNotes, an academic lab at Beth Israel Deaconess that advocates for transparency in health care, wondered how good AI is at interpretation, especially for patients.

In a proof-of-concept study published this year, Salmi and colleagues analyzed the accuracy of ChatGPT, Claude, and Gemini responses to patients' questions about a clinical note. All three AI models performed well, Salmi said, but how patients framed their questions mattered. For example, telling the AI chatbot to take on the persona of a clinician and asking it one question at a time improved the accuracy of its responses.
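The prompting pattern the study describes (a clinician persona plus one question per prompt) can be sketched in a few lines of Python. The helper function and the example lab values below are hypothetical illustrations, not part of the study itself.

```python
# Sketch of the prompting approach described above: assign the chatbot a
# clinician persona and ask one question at a time. The build_prompts
# helper and the sample lab findings are illustrative assumptions only.

def build_prompts(lab_findings, questions):
    """Return a persona instruction plus one prompt per question."""
    persona = (
        "You are an experienced internal-medicine physician. "
        "Explain lab results in plain language for a patient."
    )
    # One question per prompt, each paired with the relevant findings,
    # rather than a single prompt bundling every question together.
    return persona, [
        f"My lab report shows: {lab_findings}. Question: {q}" for q in questions
    ]

persona, prompts = build_prompts(
    "CO2 32 mmol/L (high), anion gap 4 mmol/L (low)",
    [
        "What does a high CO2 level usually mean?",
        "Is a low anion gap something to worry about?",
    ],
)
print(len(prompts))  # one prompt per question
```

Each prompt would then be sent to the chatbot separately, with the persona text supplied as the system or opening instruction.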

Privacy is a concern, Salmi said, so it's important to remove personal information, such as your name or Social Security number, from prompts. The data goes directly to the tech companies that developed the AI models, Rodman said, adding that he's not aware of any that comply with federal privacy law or consider patient safety. OpenAI CEO Sam Altman warned on a podcast last month about entering personal information into ChatGPT.
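A minimal sketch of that advice, stripping a name and an SSN-style number from text before pasting it into a chatbot, might look like the following. The sample text and patterns are illustrative only; real de-identification of medical records is much harder than this.

```python
import re

# Illustrative sketch: mask obvious identifiers (a known name and any
# US Social Security-style number) before sharing text with a chatbot.
# This does NOT substitute for proper de-identification.

def redact(text, name):
    text = text.replace(name, "[NAME]")
    # Mask numbers shaped like 123-45-6789.
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)

sample = "Patient: Jane Doe, SSN 123-45-6789. CO2: 32 mmol/L (high)."
print(redact(sample, "Jane Doe"))
# -> Patient: [NAME], SSN [SSN]. CO2: 32 mmol/L (high).
```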

“Many people who use large language models may not know about hallucinations,” Salmi said, referring to a response that can sound plausible but is inaccurate. For example, OpenAI's Whisper, an AI transcription tool used in hospitals, introduced a fabricated medical treatment into a transcript, according to a report from the Associated Press.

Salmi and Dave deBronkart, a cancer survivor and patient advocate, said that using generative AI requires a new kind of digital health literacy that includes asking questions in a specific way and verifying the responses. DeBronkart writes a blog devoted to patients' use of artificial intelligence.

Patients aren't the only ones using AI to explain test results. Stanford Health Care has launched an AI assistant that helps its physicians draft explanations of clinical tests and lab results to send to patients. Colorado researchers studied the accuracy of ChatGPT-generated summaries of 30 radiology reports, along with patients' satisfaction with them. Of the 118 valid patient responses, 108 indicated that the ChatGPT summaries clarified details about the original report.

But the small number of responses indicating that patients were at times more confused or worried is important, said Honce, who participated in the preprint study.

Meanwhile, after four weeks and two follow-up messages in MyChart, Miller's doctor ordered the additional test she had asked about. The results came back normal. Miller was relieved and said her AI queries had left her better informed.

“It's a very important tool in that way,” Miller said. “It helps me organize my questions, do my research, and level the playing field.”

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF, an independent source of health policy research, polling, and journalism. Learn more about KFF.
