Amazon launches new tool to tackle AI hallucinations

Xinhua, December 4, 2024

SAN FRANCISCO, Dec. 3 (Xinhua) -- Amazon Web Services (AWS), Amazon's cloud computing division, on Tuesday launched a new tool to combat AI hallucinations, the instances in which an AI model generates inaccurate or fabricated information.

The service, Automated Reasoning checks, validates a model's responses by cross-referencing customer-supplied information for accuracy. AWS claimed in a press release that Automated Reasoning checks is the "first" and "only" safeguard against hallucinations.

Automated Reasoning checks, which is available through AWS' Bedrock model hosting service, attempts to figure out how a model arrived at an answer and discern whether the answer is correct.

Customers upload information to establish a ground truth of sorts, and Automated Reasoning checks creates rules that can then be refined and applied to a model, AWS said.

As a model generates responses, Automated Reasoning checks verifies them and, in the event of a probable hallucination, draws on the ground truth for the correct answer. It presents this answer alongside the likely falsehood so customers can see how far off-base the model might have been.
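The press release offers no code, but the workflow it describes, a customer-supplied ground truth, rules derived from it, and a check that surfaces the reference answer beside a suspect response, can be pictured with a short sketch. The Python below is purely illustrative: the data, function name, and exact-match check are hypothetical stand-ins, not the actual Bedrock API.

```python
# Illustrative toy verifier in the spirit of the workflow the article
# describes. NOT the AWS Bedrock API; every name here is hypothetical.

# Customer-supplied information acting as the "ground truth of sorts".
GROUND_TRUTH = {
    "refund window": "Refunds are accepted within 30 days of purchase.",
    "support hours": "Support is available 9 a.m. to 5 p.m. ET, Monday to Friday.",
}

def check_response(topic: str, model_answer: str) -> dict:
    """Compare a model's answer against the ground truth and return both,
    so a reviewer can see how far off-base the model may have been."""
    reference = GROUND_TRUTH.get(topic)
    # A real system would apply logical rules rather than string equality;
    # exact matching keeps this sketch minimal.
    verified = reference is not None and model_answer.strip() == reference
    return {
        "model_answer": model_answer,
        "reference_answer": reference,
        "likely_hallucination": not verified,
    }

result = check_response("refund window", "Refunds are accepted within 90 days.")
print(result["likely_hallucination"])  # True -> flagged as a probable hallucination
print(result["reference_answer"])      # the ground-truth answer shown alongside
```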

AWS said PwC is already using Automated Reasoning checks to design AI assistants for its clients.

"With the launch of these new capabilities, we are innovating on behalf of customers to solve some of the top challenges that the entire industry is facing when moving generative AI applications to production," Swami Sivasubramanian, VP of AI and data at AWS, said in a statement.

AWS claims that Automated Reasoning checks uses "logically accurate" and "verifiable reasoning" to arrive at its conclusions. But the company volunteered no data showing that the tool is reliable, according to a report by TechCrunch.

AI models hallucinate because they are statistical systems that identify patterns in a body of data and predict which data comes next based on previously seen examples. They do not provide answers, only predictions of how questions should be answered, within a margin of error, the report said.
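As a toy illustration of that point, the snippet below samples from an invented next-token distribution. The probabilities are made up for demonstration, but they show how a purely statistical predictor can emit a plausible-sounding wrong answer some fraction of the time.

```python
# Toy demonstration of statistical next-token prediction; the distribution
# is invented for illustration, not taken from any real model.
import random

# Hypothetical probabilities for the token following
# "The capital of Australia is":
next_token_probs = {
    "Canberra": 0.55,   # correct
    "Sydney": 0.35,     # plausible but wrong
    "Melbourne": 0.10,  # plausible but wrong
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# The predictor does not "know" the answer; it samples within a margin of
# error, so a wrong-but-likely token comes out roughly 45% of the time.
for _ in range(5):
    print(random.choices(tokens, weights=weights, k=1)[0])
```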

Microsoft rolled out a similar feature, Correction, this summer to flag AI-generated text that might be factually wrong. Google likewise offers a tool in Vertex AI, its AI development platform, that lets customers "ground" models with data from third-party providers, their own datasets, or Google Search.
