Gateway fact-checks LLM-generated text against your source documents. You provide a claim and sources, and Gateway tells you whether the claim is supported, refuted, or lacks sufficient evidence.
from gateway import TextSource
from gateway.aws import AwsClient

# Connect to your Gateway deployment via its Step Functions state machine ARN.
client = AwsClient.from_sfn_arn("arn:aws:states:us-east-1:123456789:stateMachine:gateway")

# Judge a claim against one or more source documents.
ruling = client.judge(
    claim="The Eiffel Tower is in Paris and was built in 1889.",
    sources=[
        TextSource(id="doc1", text="The Eiffel Tower is located in Paris, France."),
        TextSource(id="doc2", text="It was completed in 1889."),
    ],
)

print(ruling.verdict)  # Verdict.SUPPORTS
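
If you want to branch on the outcome in application code, a pattern like the one below should work. This is a minimal sketch: the Verdict import path and the member names other than SUPPORTS are assumptions, not taken from the snippet above, so verify them against the SDK you receive during onboarding.

from gateway import Verdict  # assumed import path; confirm in your SDK

if ruling.verdict == Verdict.SUPPORTS:
    print("Claim is backed by the sources.")
elif ruling.verdict == Verdict.REFUTES:
    print("Claim contradicts the sources.")
else:
    # e.g. an "insufficient evidence" verdict (assumed member name)
    print("Sources do not contain enough evidence to decide.")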

How it works

  1. Claim decomposition - Gateway breaks your claim into individual statements
  2. Evidence retrieval - Each statement is matched against relevant passages in your sources
  3. Verdict classification - Each statement is classified as supported, refuted, or lacking evidence (see the sketch below)
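
The sketch below walks through this three-step flow in plain Python. It is an illustration of the pipeline shape only, not Gateway's actual implementation: the naive sentence splitting, keyword matching, and classification stub stand in for the models Gateway runs at each step.

from dataclasses import dataclass

@dataclass
class StatementRuling:
    statement: str
    evidence: list[str]
    label: str  # "supported", "refuted", or "insufficient"

def decompose(claim: str) -> list[str]:
    # 1. Claim decomposition: split a compound claim into atomic statements.
    #    Naive split on sentences and "and"; Gateway uses a model for this.
    parts = []
    for sentence in claim.split("."):
        for piece in sentence.split(" and "):
            if piece.strip():
                parts.append(piece.strip())
    return parts

def retrieve(statement: str, sources: list[str]) -> list[str]:
    # 2. Evidence retrieval: keep passages that share vocabulary with the statement.
    words = set(statement.lower().split())
    return [s for s in sources if words & set(s.lower().split())]

def classify(statement: str, evidence: list[str]) -> str:
    # 3. Verdict classification: a real system scores entailment between the
    #    statement and its evidence; this stub only checks that evidence exists.
    return "supported" if evidence else "insufficient"

def judge(claim: str, sources: list[str]) -> list[StatementRuling]:
    rulings = []
    for stmt in decompose(claim):
        ev = retrieve(stmt, sources)
        rulings.append(StatementRuling(stmt, ev, classify(stmt, ev)))
    return rulings

Running judge("The Eiffel Tower is in Paris and was built in 1889.", [...]) with the two source texts from the earlier example yields one ruling per statement, which is the per-statement breakdown the three steps above describe.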

Supported clouds

Gateway deploys to your cloud account. We currently support AWS, with Azure coming soon. During onboarding, we provide the SDK and Terraform configuration for your cloud via SFTP.

Next steps