Verifiable Inference

You can use the Python SDK to access our verifiable, decentralized inference infrastructure directly from traditional Python code and applications. This enables building end-user applications on decentralized AI inference that is end-to-end verified and secured by our blockchain network, rather than on centralized AI solutions.

Using our SDK, you can ensure the full integrity and security of AI models and inferences at a competitive price. Behind the scenes, our inference methods trigger an on-chain inference transaction on OpenGradient, meaning it is fully verified and secured by the entire value of our network.

Library Client Initialization

To use verifiable inference on OpenGradient, you must provide your credentials and configuration for connecting to the network by passing the following parameters to og.init():

Please refer to the SDK docs for more details.

  • Private Key: your private key; see SDK Overview for more details.
  • RPC URL: the network address, currently http://18.218.115.248:8545
  • Inference Address: should be hard-coded to 0x350E0A430b2B1563481833a99523Cfd17a530e4e
python
import opengradient as og

og.init(private_key="<private_key>", email="<email>", password="<password>")

Model Inference

Inference is exposed through the infer method:

python
def infer(model_cid, inference_mode, model_input)

Arguments

  • model_cid: the CID of the model file you want to execute.
  • inference_mode: the security method used to verify and secure the inference. For more details, see Inference Verification. The supported options are:
    • VANILLA
    • ZKML
    • TEE
  • model_input: a dictionary that defines the model input. The keys must match the input names expected by the model's ONNX file, and the values must be arrays of numbers or strings; native Python lists and numpy arrays are both supported. See the sketch after this list.
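For illustration, here is a minimal model_input dictionary mirroring the CLI example further below; the input names are specific to that model and must match whatever your own model's ONNX file declares:

python
import numpy as np

# Keys must match the input names declared in the model's ONNX file.
# Values can be numpy arrays or native Python lists of numbers or strings.
model_input = {
    "num_input1": np.array([1.0, 2.0, 3.0]),  # numeric tensor
    "num_input2": 10,                         # scalar number
    "str_input1": ["hello", "ONNX"],          # native list of strings
    "str_input2": " world",                   # scalar string
}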

Returns

  • A tuple of the transaction hash of the on-chain inference and the output of the ONNX model. The output is a dictionary, where the keys are the names of the output tensors and the values are numpy arrays of either strings or numbers.

Example

python
import opengradient as og
import numpy as np
import os

# retrieve private key from the environment
private_key = os.environ.get('private_key')

# initialize SDK
og.init(private_key=private_key, email="<email>", password="<password>")

# run inference
model_cid = "QmbUqS93oc4JTLMHwpVxsE39mhNxy6hpf6Py3r9oANr8aZ"
inference_mode = og.InferenceMode.VANILLA
tx_hash, model_output = og.infer(model_cid, inference_mode, {
    "params": np.array([1, 2, 3, 4, 5])
})

# print output
predictions = model_output["values"]
print(predictions)
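
Since model_output is a plain dictionary of numpy arrays, you can also inspect the transaction hash and every output tensor generically; a minimal sketch continuing from the example above:

python
# print the hash of the on-chain inference transaction
print(f"Transaction hash: {tx_hash}")

# iterate over all output tensors by name
for name, tensor in model_output.items():
    print(name, tensor.shape, tensor.dtype)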

Inference via the CLI

In addition to the Python library, we provide an out-of-the-box CLI for running inferences from your terminal.

The infer command takes the following arguments: the model CID, the inference mode (VANILLA, TEE, or ZKML), and the model input, supplied either inline as a JSON string or as a path to a JSON file:

Using inline input:

bash
opengradient infer -m QmbUqS93oc4JTLMHwpVxsE39mhNxy6hpf6Py3r9oANr8aZ \
    --mode VANILLA \
    --input '{"num_input1":[1.0, 2.0, 3.0], "num_input2":10, "str_input1":["hello", "ONNX"], "str_input2":" world"}'

Using file input:

bash
opengradient infer -m QmbUqS93oc4JTLMHwpVxsE39mhNxy6hpf6Py3r9oANr8aZ --mode VANILLA --input-file input.json
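
The JSON file uses the same structure as the inline input. For example, an input.json matching the inline example above could be created like this:

bash
cat > input.json <<'EOF'
{
  "num_input1": [1.0, 2.0, 3.0],
  "num_input2": 10,
  "str_input1": ["hello", "ONNX"],
  "str_input2": " world"
}
EOF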

TIP

Remember to initialize your configuration using opengradient config init if you haven't already done so.

To get more information on how to make inferences using the CLI, you can run:

bash
opengradient infer --help
