In an Oxford study, LLMs correctly identified medical conditions 94.9% of the time when given test scenarios directly, vs. 34.5% when prompted by human subjects (Nick Mokey/VentureBeat)
Nick Mokey / VentureBeat:
In an Oxford study, LLMs correctly identified medical conditions 94.9% of the time when given test scenarios directly, vs. 34.5% when prompted by human subjects — Headlines have been blaring it for years: Large language models (LLMs) can not only pass medical licensing exams but also outperform humans.
from Techmeme https://ift.tt/bTJNeZz
Reviewed by swadu on June 14, 2025