New Azure tool

Microsoft presents “Correction” to combat AI hallucinations

Image source: WD Stock Photos/Shutterstock.com

Microsoft has introduced a tool called “Correction”. This innovation, part of Azure AI Studio, promises not only to detect errors in AI-generated content but also to correct them automatically.

Who hasn’t seen the curious slip-ups of ChatGPT and its peers? Microsoft now wants to put an end to these AI hallucinations with a tool that not only finds errors but also corrects them itself.


“Correction” is the name of the new beacon of hope in Azure AI Studio, Microsoft’s toolbox for AI developers. The idea sounds simple at first: before the user even gets to marvel at outlandish AI ideas, the function is supposed to put generated texts through their paces, detect inconsistencies and rewrite them on the spot. Companies that run their AI applications on Microsoft Azure are meant to be able to use it.

This is how it works:

  1. The system scans AI outputs for inaccuracies.
  2. It compares the content with the customer’s source material.
  3. Errors are marked and explained.
  4. The incorrect content is rewritten.

(Microsoft’s accompanying resource: “Correct hallucinations and ungrounded outputs using Azure AI Content Safety”.)
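
For developers, the four steps above correspond to the groundedness check that Azure AI Content Safety exposes as a REST API, which the correction feature builds on. The following Python sketch shows roughly what such a call might look like; the endpoint path, API version and field names (for example groundingSources and the correction flag) are assumptions based on Microsoft’s public documentation and may differ for your deployment.

```python
import requests

# Assumed values: replace with your own Content Safety resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"
API_VERSION = "2024-09-15-preview"  # assumed preview version; check the current docs

def check_and_correct(text: str, sources: list[str]) -> dict:
    """Ask the groundedness detection API whether `text` is backed by `sources`
    and request a corrected rewrite for any ungrounded passages."""
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version={API_VERSION}"
    payload = {
        "domain": "Generic",          # general-purpose domain
        "task": "Summarization",      # the kind of AI output being checked
        "text": text,                 # the AI-generated answer to verify (step 1)
        "groundingSources": sources,  # the customer's source material (step 2)
        "correction": True,           # assumed flag enabling the rewrite (step 4)
    }
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    response = requests.post(url, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    # Expected to flag ungrounded passages (step 3) and, if requested,
    # return a corrected version of the text (step 4).
    return response.json()

if __name__ == "__main__":
    result = check_and_correct(
        text="The company was founded in 1999 and has 12 offices.",
        sources=["The company was founded in 2001 and operates 12 offices worldwide."],
    )
    print(result)
```

Because the check itself is performed by AI models, as the article notes below, the returned corrections should be treated as advisory rather than as a guarantee of factual accuracy.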

But is this really the holy grail of AI accuracy? It’s not quite that simple. Microsoft admits that even its correction tool is not infallible: after all, it uses AI models itself to check and rewrite the output of other AI models.


What are AI hallucinations?

AI hallucinations are a phenomenon in which artificial intelligence systems, especially large language models, generate information or make statements that are false, misleading or completely fabricated. These “hallucinations” occur when the AI model produces content, based on its training data and learned patterns, that may seem plausible at first glance but does not correspond to reality. This can take various forms, from minor inaccuracies to completely fictitious stories or facts.
