Once you have installed the SDK, you can start using it to check claims against sources. Here’s a code snippet to get you started:
import asyncio

from truthsys import AsyncRemoteClient, TextSource, Verdict

async def main() -> None:
    # "Remote" client: all execution happens on the Gateway server.
    client = AsyncRemoteClient.from_url("YOUR_GATEWAY_URL")

    # Check a claim against a list of text sources.
    ruling = await client.judge(
        claim="Sally is a cat",
        sources=[
            TextSource.from_text("I have a cat"),
            TextSource.from_text("I only have one pet"),
            TextSource.from_text("My pet is called Sally"),
        ],
    )
    # Taken together, the three sources imply that Sally is a cat.
    assert ruling.verdict == Verdict.SUPPORTS

asyncio.run(main())
Let’s see what happened here:
  1. We created an AsyncRemoteClient instance. It’s called “remote” because all execution happens on the server. We recommend this for most users, but if you want LLM calls to happen locally instead, refer to the section on the Local client. As you may have guessed, there’s also a synchronous version called RemoteClient whose API is the same.
  2. We called client.judge with a claim, i.e. a string whose veracity we want to check, and a list of sources. Currently, the only supported source type is TextSource - a wrapper around a string. The sources are the evidence we want to check the claim against.
  3. The server broke the claim down into individual statements, each of which was assigned a verdict (whether or not the sources support it). For a detailed explanation of the ruling object, see this page; a short sketch of inspecting per-statement verdicts follows this list.
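Continuing inside main() from the snippet above, you can inspect those per-statement verdicts. The sketch below rests on assumptions not confirmed on this page: that Ruling exposes a statements sequence and that each Statement carries text and verdict attributes.

for statement in ruling.statements:
    # `statements`, `text`, and `verdict` are assumed attribute names,
    # shown here for illustration only.
    print(statement.text, statement.verdict)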

Local client

While our core technology is based on proprietary machine learning models, we also use LLMs to help fill some gaps. If you don’t want the server to make LLM calls, you can opt to use the local client instead, like so:
from openai import OpenAI
from truthsys import AsyncLocalClient, QualityLevel

# LLM calls are made locally through the OpenAI client you supply,
# rather than by the Gateway server.
client = AsyncLocalClient.from_url(
    "YOUR_GATEWAY_URL",
    openai_client=OpenAI(),
    quality_level=QualityLevel.MEDIUM,
)
The difference is that the remote client makes no LLM calls itself and instead has the Gateway server make them, whereas the local client makes LLM calls directly using the OpenAI client you provide. We recommend the remote client unless you have a specific reason not to (e.g. you want to shut off all outbound traffic from the Gateway server, or you want to supply a custom OpenAI client).

Except for instantiation, the API of LocalClient follows that of RemoteClient. As with RemoteClient, we also provide a synchronous version - however, because this client needs to make numerous HTTP requests, we strongly recommend the async version instead. The quality level setting here has the same function as on the server - see this page for an explanation.
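One of the reasons mentioned above is supplying a custom OpenAI client, for example one pointed at a self-hosted, OpenAI-compatible endpoint. A minimal sketch: the endpoint URL below is hypothetical, while base_url and api_key are standard openai-python constructor parameters.

from openai import OpenAI
from truthsys import AsyncLocalClient, QualityLevel

# Hypothetical self-hosted, OpenAI-compatible endpoint; adjust to your setup.
custom_openai = OpenAI(
    base_url="https://llm.internal.example.com/v1",
    api_key="YOUR_API_KEY",
)

client = AsyncLocalClient.from_url(
    "YOUR_GATEWAY_URL",
    openai_client=custom_openai,
    quality_level=QualityLevel.MEDIUM,
)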

Types

All types are available to import from truthsys. These are:
  • RemoteClient, AsyncRemoteClient, LocalClient, and AsyncLocalClient
  • Influence and TextInfluence - a statement made in a source which influenced the verdict
  • Ruling - the top-level object returned by client.judge
  • Source and TextSource - a source that could support or refute a claim, e.g. a document
  • Statement - a single statement in a claim
  • Verdict - an enum representing the possible assessments of a claim
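For example, you can use these types to annotate your own helpers. The sketch below is purely illustrative; the statements and verdict attribute names it assumes are not confirmed on this page.

from truthsys import Ruling, Statement, Verdict

def unsupported_statements(ruling: Ruling) -> list[Statement]:
    # Illustrative only: assumes Ruling has a `statements` sequence
    # and each Statement has a `verdict` attribute.
    return [s for s in ruling.statements if s.verdict != Verdict.SUPPORTS]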

Errors

All error types raised by the SDK are available to import from truthsys.errors. If an error looks like a bug on our side, please report it directly to us.
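As a hedged sketch of catching SDK errors (the concrete error class names are not listed on this page; TruthsysError below is a hypothetical base class), using the synchronous client:

from truthsys import RemoteClient, TextSource
from truthsys import errors

client = RemoteClient.from_url("YOUR_GATEWAY_URL")
try:
    ruling = client.judge(
        claim="Sally is a cat",
        sources=[TextSource.from_text("I have a cat")],
    )
except errors.TruthsysError as exc:  # hypothetical base-class name
    # If this looks like a bug on the truthsys side, report it upstream.
    print(f"truthsys error: {exc}")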