A simple tweak fooled advanced AI – and revealed a dangerous flaw in medical ethics reasoning
A study conducted by researchers at the Icahn School of Medicine at Mount Sinai, in collaboration with colleagues from Rabin Medical Center in Israel and others, indicates that even the most advanced artificial intelligence (AI) models can make surprisingly simple mistakes when faced with complex medical ethics scenarios.
The findings, which raise important questions about how and when to rely on large language models (LLMs) such as ChatGPT in health care settings, were reported in the July 22 online issue of npj Digital Medicine (DOI: 10.1038/s41746-025-01792-y).
The research team drew inspiration from Daniel Kahneman’s book “Thinking, Fast and Slow,” which contrasts fast, intuitive reactions with slower, analytical reasoning. They had observed that LLMs stumble when classic lateral-thinking puzzles are given subtle tweaks. Building on that insight, the study tested how well AI systems shift between these two modes of thinking when confronted with well-known ethical dilemmas that had been deliberately modified.
“AI can be very powerful and efficient, but our study showed that it may default to the most familiar or intuitive answer, even when that response overlooks critical details,” says co-senior author Eyal Klang, MD, Chief of Generative AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. “In everyday situations, that kind of thinking might go unnoticed. But in health care, where decisions often carry serious ethical and clinical implications, missing those nuances can have real consequences for patients.”
To explore this tendency, the research team tested several commercially available LLMs using a combination of creative lateral-thinking puzzles and slightly modified, well-known medical ethics cases. In one example, they adapted the classic “Surgeon’s Dilemma,” a 1970s-era riddle that highlights implicit gender bias. In the original version, a boy injured in a car accident with his father is rushed to the hospital, where the surgeon exclaims, “I can’t operate on this boy – he’s my son!” The twist is that the surgeon is his mother, a possibility many people overlook because of gender bias. In the researchers’ modified version, they stated explicitly that the boy’s father was the surgeon, removing the ambiguity. Even so, some AI models still answered that the surgeon must be the boy’s mother. The error reveals how LLMs can cling to familiar patterns, even when those patterns are contradicted by new information.
In another example testing whether LLMs rely on familiar patterns, the researchers drew on a classic ethical dilemma in which religious parents refuse a life-saving blood transfusion for their child. Even when the researchers altered the scenario to state that the parents had already consented, many models still recommended overriding a refusal that no longer existed.
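As a rough illustration of this test design, here is a minimal sketch (not the authors’ code or prompts) of the pattern: pose a familiar dilemma and a subtly modified version, then check whether the model’s answer tracks the changed facts. The ask_model stub is a placeholder for whatever LLM client is actually used:

```python
# Minimal sketch of the core test idea (not the authors' code):
# pose a familiar ethics riddle and a subtly modified version, then
# check whether the model's answer tracks the changed facts.

def ask_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a canned reply here."""
    return "The surgeon is the boy's mother."  # the failure mode under study

CLASSIC = (
    "A boy injured in a car accident with his father is rushed to the "
    "hospital. The surgeon exclaims: 'I can't operate on this boy - he's "
    "my son!' How is this possible?"
)

# The tweak removes the riddle entirely: the father IS the surgeon.
MODIFIED = (
    "A boy is rushed to the hospital. His father, the surgeon, exclaims: "
    "'I can't operate on this boy - he's my son!' Who is the surgeon?"
)

for label, prompt in [("classic", CLASSIC), ("modified", MODIFIED)]:
    answer = ask_model(prompt)
    # Answering "mother" on the modified prompt means the model matched the
    # famous pattern instead of reading the stated facts.
    missed = label == "modified" and "mother" in answer.lower()
    print(f"{label}: {answer}" + ("  <-- ignored the tweak" if missed else ""))
```

The key design point is that the modified prompt has a trivially correct answer; a model that still gives the classic riddle’s answer is pattern-matching rather than reading.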
“These tools can be incredibly helpful, but they are not infallible,” says co-senior author Girish N. Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, the Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai, and Chief AI Officer of the Mount Sinai Health System. “Physicians and patients alike should understand that AI is best used as a complement to enhance clinical expertise, not a substitute for it, especially when navigating complex or high-stakes decisions.”
“Simple tweaks to familiar cases exposed blind spots that clinicians cannot afford,” says lead author Shelly Soffer, MD, a fellow at the Institute of Hematology, Davidoff Cancer Center, Rabin Medical Center. “It underscores why human oversight must remain central when we deploy AI in patient care.”
Next, the research team plans to expand this work by testing a wider range of clinical examples. They are also developing an “AI assurance lab” to systematically evaluate how different models handle real-world medical complexity.
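A systematic evaluation of that kind could be scripted along the following lines. This is a hypothetical sketch of what such a harness might look like, assuming placeholder model names, prompts, and a placeholder query function rather than anything described in the paper:

```python
# Hypothetical sketch of an "AI assurance lab" harness: run several models
# over modified scenarios and report how often each one answers the original,
# unmodified version instead. Model names, prompts, and query() are
# placeholders, not details taken from the study.

from collections import defaultdict

# (modified prompt, substring that signals the model ignored the tweak)
SCENARIOS = [
    ("The parents have ALREADY CONSENTED to the transfusion. "
     "What should the care team do?", "override"),
    ("His father, the surgeon, says: 'I can't operate on this boy - "
     "he's my son!' Who is the surgeon?", "mother"),
]

MODELS = ["model-a", "model-b"]  # hypothetical model identifiers


def query(model: str, prompt: str) -> str:
    """Placeholder for a real multi-model LLM client call."""
    return "The team should override the parents' refusal."  # canned reply


misses: dict[str, int] = defaultdict(int)
for model in MODELS:
    for prompt, bad_sign in SCENARIOS:
        if bad_sign in query(model, prompt).lower():
            misses[model] += 1

for model in MODELS:
    print(f"{model}: missed the modification in "
          f"{misses[model] / len(SCENARIOS):.0%} of scenarios")
```

Scoring by substring match is only a stand-in here; a real harness would need human or rubric-based grading of free-text answers.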
The paper is titled “Pitfalls of Large Language Models in Medical Ethics Reasoning.”
The study’s authors, as listed in the journal, are Shelly Soffer, MD; Vera Sorin, MD; Girish N. Nadkarni, MD, MPH; and Eyal Klang, MD.
About the Windreich Department of Artificial Intelligence and Human Health at Mount Sinai
Led by Girish N. Nadkarni, MD, MPH – an international authority on the safe, effective, and ethical use of AI in health care – the Windreich Department of Artificial Intelligence and Human Health at Mount Sinai is the first of its kind at a U.S. medical school, pioneering transformative advances at the intersection of artificial intelligence and human health.
The department is committed to harnessing AI responsibly, effectively, ethically, and safely to transform research, clinical care, education, and operations. By bringing together world-class AI expertise, cutting-edge infrastructure, and unparalleled computational power, the department is advancing breakthroughs in multimodal data integration while streamlining pathways for rapid testing and translation into practice.
The department benefits from dynamic collaborations across Mount Sinai, including with the Hasso Plattner Institute for Digital Health at Mount Sinai – a partnership between the Hasso Plattner Institute for Digital Engineering in Potsdam, Germany, and the Mount Sinai Health System – which complements the department’s mission by advancing data-driven approaches to improve patient health.
At the heart of this innovation is the Icahn School of Medicine at Mount Sinai, which serves as a central hub for learning and collaboration. This unique integration enables dynamic partnerships across institutes, academic departments, hospitals, and outpatient centers, driving progress in disease prevention, improving treatments for complex illnesses, and elevating quality of life on a global scale.
In 2024, the department’s innovative NutriScan AI application, developed by the Mount Sinai Health System Clinical Data Science team in partnership with faculty, won the Hearst Health Prize. NutriScan is designed to enable faster identification and treatment of malnutrition in hospitalized patients. The machine learning tool improves malnutrition diagnosis rates and resource utilization, demonstrating the impactful application of AI in health care.
* Mount Sinai Health System hospitals: The Mount Sinai Hospital; Mount Sinai Brooklyn; Mount Sinai Morningside; Mount Sinai Queens; Mount Sinai South Nassau; Mount Sinai West; and New York Eye and Ear Infirmary of Mount Sinai