
Correct Hallucinations API Definition

The Hallucination Correctors API enables users to automatically detect and correct factual inaccuracies, commonly referred to as hallucinations, in generated summaries or responses. By comparing a user-provided summary against one or more source documents, the API returns a corrected version of the summary with minimal necessary edits.

Use this API to validate and improve the factual accuracy of summaries generated by LLMs in Retrieval Augmented Generation (RAG) pipelines, ensuring that the output remains grounded in trusted source content. If the Vectara Hallucination Corrector (VHC) does not detect a hallucination, it preserves the original summary.

Hallucination Correctors Request and Response Details

To correct a potentially hallucinated summary, send a POST request to /v2/hallucination_correctors/correct_hallucinations. The request body includes the following fields:

  • generated_text: The generated text to evaluate and potentially correct.
  • documents: One or more source documents containing the factual information that the summary should be based on. Each document object includes:
      • text: The full content of the source document.
  • model_name: The name of the LLM to use for hallucination correction.
  • query: (Optional) The original user query that led to the generated text. This helps improve accuracy by giving the model additional context about the user's intent and expected output format. It works well with the vhc-large-1.0 model.

Example request

This example provides a summary about a historical event.

{
  "generated_text": "The Treaty of Versailles was signed in 1920, officially ending World War I. It was primarily negotiated by France, Britain, Italy, and Japan.",
  "documents": [
    {
      "text": "The Treaty of Versailles was signed on June 28, 1919. The United States played a major role, represented by President Woodrow Wilson."
    }
  ],
  "model_name": "vhc-small-1.0"
}
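
The sketch below shows one way to send this request from Python with the requests library. The x-api-key header is an assumption about how the API key is supplied and may differ from your authentication setup; substitute your own key and payload.

import requests

API_KEY = "your-api-key"  # placeholder; supply your own credentials

payload = {
    "generated_text": (
        "The Treaty of Versailles was signed in 1920, officially ending World War I. "
        "It was primarily negotiated by France, Britain, Italy, and Japan."
    ),
    "documents": [
        {
            "text": (
                "The Treaty of Versailles was signed on June 28, 1919. The United States "
                "played a major role, represented by President Woodrow Wilson."
            )
        }
    ],
    "model_name": "vhc-small-1.0",
}

# POST the summary and its source documents to the correction endpoint.
response = requests.post(
    "https://api.vectara.io/v2/hallucination_correctors/correct_hallucinations",
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
result = response.json()
print(result["corrected_text"])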

Example response

The response corrects the original summary.

{
  "original_text": "The Treaty of Versailles was signed in 1920, officially ending World War I. It was primarily negotiated by France, Britain, Italy, and Japan.",
  "corrected_text": "The Treaty of Versailles was signed in 1919, officially ending World War I. It was primarily negotiated by France, Britain, Italy, and the United States."
}

If the input summary is accurate, the corrected_text matches the original_text.

Interpreting empty corrections

In some cases, the corrected_text field in the response may be an empty string. This indicates VHC determined that the entire input text was hallucinated, and VHC recommends removing it completely.

This outcome is valid and typically occurs when none of the content in the generated_text is supported by the provided source documents or query. The response still includes an explanation of why VHC removed the text. For example:

{
  "original_text": "According to Martian Guide to Humanity, the Earth has three moons.",
  "corrected_text": "",
  "explanation": "There is no source found for Martian Guide to Humanity (hallucinated source), and there is no source for the Earth having three moons. The entire statement is factually incorrect."
}
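
The hypothetical helper below (not part of the API) sketches how a client might branch on the three possible outcomes, assuming result is the parsed JSON body from the earlier request example.

def interpret_correction(result: dict) -> str:
    corrected = result["corrected_text"]
    if corrected == "":
        # VHC found no support for any of the generated text and recommends removing it.
        return "Removed entirely: " + result.get("explanation", "no explanation provided")
    if corrected == result["original_text"]:
        # No hallucination detected; the original summary is preserved.
        return "No correction needed."
    # Otherwise VHC made minimal edits to ground the summary in the source documents.
    return "Corrected summary: " + corrected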

Error responses

  • 400 Bad Request – The request body was malformed or contained invalid parameters. This can occur if you use model instead of the required model_name parameter.
  • 403 Forbidden – The user does not have permission to perform factual consistency evaluation.
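
Assuming the payload and API_KEY from the earlier sketch, one way to surface these errors with the requests library is:

import requests

try:
    response = requests.post(
        "https://api.vectara.io/v2/hallucination_correctors/correct_hallucinations",
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json=payload,
    )
    response.raise_for_status()
except requests.HTTPError as err:
    status = err.response.status_code
    if status == 400:
        # Malformed body, for example sending "model" instead of "model_name".
        print("Bad request:", err.response.text)
    elif status == 403:
        print("Missing permission for hallucination correction:", err.response.text)
    else:
        raise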

REST 2.0 URL

Hallucination Correctors Endpoint Address

Vectara exposes an HTTP endpoint for the Hallucination Correctors: https://api.vectara.io/v2/hallucination_correctors