- We created an `AsyncRemoteClient` instance. It’s called “remote” because all execution happens on the server. We recommend this for most users, but if you want LLM calls to happen locally instead, refer to the section on the Local client. As you may have guessed, there’s also a synchronous version called `RemoteClient` whose API is the same.
- We called `client.judge` with a claim, i.e. a string whose veracity we want to check, and a list of sources. Currently, the only supported source type is `TextSource` - a wrapper around a string. The sources are the evidence we want to check the claim against.
- The server broke the claim down into individual statements, each of which was assigned a verdict (whether or not the sources support it). For a detailed explanation of the ruling object, see this page. A code sketch of these steps appears after this list.
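A minimal sketch of the steps above might look like the following; the constructor argument (`api_key`) and the example strings are assumptions for illustration, while `AsyncRemoteClient`, `TextSource`, and `client.judge` come from the walkthrough itself.

```python
import asyncio

from truthsys import AsyncRemoteClient, TextSource

async def main() -> None:
    # The constructor argument here (api_key) is an assumption for illustration.
    client = AsyncRemoteClient(api_key="...")

    # The claim is a string whose veracity we want to check.
    claim = "The Eiffel Tower is located in Paris."

    # The sources are the evidence to check the claim against;
    # TextSource is a thin wrapper around a string.
    sources = [TextSource("The Eiffel Tower is a landmark on the Champ de Mars in Paris.")]

    # The server breaks the claim into statements and assigns each a verdict.
    ruling = await client.judge(claim, sources)
    print(ruling)

asyncio.run(main())
```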
Local client
While our core technology is based on proprietary machine learning models, we also use LLMs to help fill some gaps. If you don’t want the server to make LLM calls, you can opt to use the local client instead, as sketched below. The API of `LocalClient` follows that of `RemoteClient`. As with the remote client, we also provide a synchronous version; however, because this client needs to make numerous HTTP requests, we strongly recommend using the async version instead.
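A minimal sketch of local-client usage, assuming the constructor mirrors the remote client and that the quality level is passed as a keyword argument (the parameter name `quality` and its value are assumptions):

```python
import asyncio

from truthsys import AsyncLocalClient, TextSource

async def main() -> None:
    # quality is an assumed name for the quality level setting discussed
    # below; consult the client reference for the exact signature.
    client = AsyncLocalClient(quality="high")

    ruling = await client.judge(
        "The Eiffel Tower is located in Paris.",
        [TextSource("The Eiffel Tower stands on the Champ de Mars in Paris.")],
    )
    print(ruling)

asyncio.run(main())
```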
The quality level setting here has the same function as on the server - see this page for an explanation.
Types
All types are available to import from `truthsys`. These are:

- `RemoteClient`, `AsyncRemoteClient`, `LocalClient`, and `AsyncLocalClient`
- `Influence` and `TextInfluence` - a statement made in a source which influenced the verdict
- `Ruling` - the top-level object returned by `client.judge`
- `Source` and `TextSource` - a source that could support or refute a claim, e.g. a document
- `Statement` - a single statement in a claim
- `Verdict` - an enum representing the possible assessments of a claim
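As a rough illustration of how these types fit together, the sketch below walks a `Ruling`; the attribute names used (`statements`, `text`, `verdict`, `influences`) are assumptions rather than documented API, so treat this as an orientation aid only.

```python
from truthsys import Ruling

def summarize(ruling: Ruling) -> None:
    # Attribute names below are assumptions for illustration; see the
    # reference page on the ruling object for the real field names.
    for statement in ruling.statements:
        # Each statement carries a Verdict indicating whether the sources support it.
        print(f"{statement.text!r}: {statement.verdict}")
        for influence in statement.influences:
            # Influences are statements made in a source that drove the verdict.
            print(f"  influenced by: {influence.text!r}")
```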
Errors
All error types raised by the SDK are available to import from `truthsys.errors`. If an error looks like a bug on our side, please report it directly to us.
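As a hedged illustration, error handling could look like the sketch below; the base class name `TruthsysError` is hypothetical and stands in for whatever the SDK actually exposes in `truthsys.errors`.

```python
from truthsys import RemoteClient, TextSource
from truthsys.errors import TruthsysError  # hypothetical base class name

client = RemoteClient()  # constructor arguments are omitted here

try:
    ruling = client.judge(
        "Mount Everest is the tallest mountain on Earth.",
        [TextSource("Mount Everest has the highest elevation above sea level.")],
    )
except TruthsysError:
    # If the error looks like a bug in the SDK, report it to us.
    raise
```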
