The research, "Sociodemographic biases in medical decision making by large language models," was conducted by experts from several institutions and led by the Department of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai in New York.
A new study reveals that multiple AI models show racial and socioeconomic bias in medical recommendations. Researchers evaluated 9 LLMs on 1,000 cases that included racial and socioeconomic tags. The results showed that the models made unjustified clinical care recommendations when cases included tags such as "black" or "LGBTQIA+."
The researchers evaluated 9 Large Language Models (LLMs), both proprietary and open-source, and analyzed more than 1.7 million outputs generated from 1,000 emergency department cases, half of them real and the other half fictitious, each presented in 32 variations.

In these variations, the researchers included sociodemographic and racial identifiers and found that they strongly influenced the models' recommendations. For example, patients tagged as LGBTQIA+ or identified as black were more often directed toward mental health assessments, recommended more invasive treatment, and more frequently advised to visit urgent care. The researchers noted that this behavior was not supported by clinical guidelines or reasoning and warned that such bias could worsen health disparities. They argue that more strategies to mitigate the bias are needed and that LLMs should remain patient-centered and equitable.

Multiple institutions and organizations have raised concerns over AI use and data protection in the medical field in recent days. A few days ago, openSNP announced its shutdown due to data privacy concerns, and another study highlighted a lack of AI education among medical professionals.