AI-assisted health care draws concerns in U.S.

Xinhua, December 26, 2024

NEW YORK, Dec. 25 (Xinhua) -- Over the past year, millions of patients have begun to be treated by U.S. health providers who use artificial intelligence (AI) for repetitive clinical work. The hope is that the technology will make doctors less stressed, speed up treatment and possibly catch mistakes, The Washington Post reported on Wednesday.

"Medicine, traditionally a conservative, evidence-based profession, is adopting AI at the hyper speed of Silicon Valley," noted the report. "These AI tools are being widely adopted in clinics even as doctors are still testing when they're a good idea, a waste of time or even dangerous."

The harm from generative AI, which is notorious for "hallucinations" that produce bad information, is often difficult to see, but in medicine the danger is stark. One study found that ChatGPT gave an "inappropriate" answer to 20 percent of 382 test medical questions. A doctor using the AI to draft communications could inadvertently pass along bad advice, according to the report.

Another study found that chatbots can echo doctors' own biases, such as the racist assumption that Black people can tolerate more pain than white people. Transcription software, too, has been shown to invent things that no one ever said, it added.
