## Microsoft Unveils New AI Correction Tool to Combat Hallucinations, But Experts Remain Skeptical

*September 25, 2024*

**Redmond, WA** – Microsoft has launched a new tool called “Correction” designed to improve the accuracy of AI-generated text by detecting and correcting factual errors. This tool, part of Microsoft’s Azure AI Content Safety API, can be used across various text-generation models like Meta’s Llama and OpenAI’s GPT-4.

The Correction tool aims to address the ongoing issue of AI “hallucinations,” where AI models generate responses that are factually incorrect. Microsoft claims the tool will enhance the reliability and trustworthiness of AI-generated content, reducing user dissatisfaction and potential reputational risks.

However, AI experts remain cautious about the tool’s effectiveness. Os Keyes, a PhD candidate at the University of Washington, argues that eliminating hallucinations from AI is akin to removing hydrogen from water: the behavior is an inherent part of how the technology works.

While the tool may address some issues, it might also create new ones. There are concerns that it could create a false sense of security, leading users to misinterpret incorrect information as accurate.

This latest development comes after Microsoft’s earlier efforts to address AI hallucination issues in its Bing Chat (now Microsoft Copilot) and the recent incident where Google’s AI Overviews feature suggested harmful actions.

Despite these concerns, Microsoft remains optimistic about the tool’s potential, emphasizing its role in aligning AI output with grounding documents. The company is also launching Copilot Academy, a program designed to teach users how to use Copilot effectively.
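To make the grounding-document idea concrete, the sketch below shows how a request to a groundedness-checking service like this might be assembled: the AI-generated text is submitted alongside reference documents, with a flag asking for a corrected rewrite of any ungrounded claims. This is a minimal illustration only; the field names (`domain`, `task`, `groundingSources`, `correction`) and the overall shape are assumptions for this example, not Microsoft’s documented API surface.

```python
# Hypothetical sketch of a request payload for a groundedness check with
# correction, in the spirit of Azure AI Content Safety's Correction feature.
# All field names below are assumed for illustration, not the official API.

def build_correction_request(text: str, grounding_sources: list[str]) -> dict:
    """Assemble a request body asking the service to check `text` against
    `grounding_sources` and return a corrected version of any claims that
    the sources do not support."""
    return {
        "domain": "Generic",                    # assumed: content domain hint
        "task": "QnA",                          # assumed: task type hint
        "text": text,                           # AI-generated answer to verify
        "groundingSources": grounding_sources,  # reference documents to check against
        "correction": True,                     # assumed flag: request a rewritten answer
    }

payload = build_correction_request(
    "The Eiffel Tower is located in Berlin.",
    ["The Eiffel Tower is a wrought-iron landmark in Paris, France."],
)
```

The design point the payload illustrates is that the model’s output is never trusted on its own: every claim is scored against the supplied grounding sources, and only statements contradicted by those sources are rewritten.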

Ultimately, the success of the Correction tool and its impact on the reliability of AI-generated content remain to be seen. How users respond to the feature, and whether it meaningfully reduces the harm of AI hallucinations rather than merely masking them, will be the real test.
