
Python CLI Tutorial

Welcome to the OpenGradient Python CLI Tutorial! This guide will walk you through the basic steps to install, configure, and use the OpenGradient CLI tool.

The OpenGradient CLI enables developers to interact with OpenGradient’s services directly from the command line, making it easy to upload models, run inference on them, and query LLMs.

Prerequisites

Before you begin, make sure you have the following:

  • Python 3.10, 3.11, or 3.12 installed on your system.
  • pip (Python package installer) is available.

Make sure Python and pip are installed in the same environment.
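To confirm that both are installed and resolve to the same environment, you can check their versions, for example:

bash
python3 --version
python3 -m pip --version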

NOTE

Windows users: for now, install and run opengradient under WSL; a native Windows fix is in progress.

Installation

To install the OpenGradient CLI, use pip to install the package. Run the following command in your terminal:

bash
pip install --upgrade opengradient

If you previously installed opengradient, the --upgrade option ensures you have the latest version. Once the installation is complete, you can verify it by running:
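If you prefer to keep the CLI isolated from other Python packages, one option is to install it inside a virtual environment first (a minimal sketch using the standard venv module; the environment name is arbitrary):

bash
python3 -m venv og-env
source og-env/bin/activate
pip install --upgrade opengradient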

bash
opengradient --version

This command should display the current version of the OpenGradient CLI, confirming the installation.

Set Up Accounts

To use OpenGradient's Python SDK, you'll need to create two accounts:

  • Model Hub account: You can create this with your email and password on the Hub Sign Up page.
  • OpenGradient account: A blockchain account on the OpenGradient devnet that receives your verifiable inference transactions. You can use any existing Ethereum-compatible wallet account (e.g., MetaMask) or create a new one using our SDK, as described below.

We provide an account creation wizard in our SDK that guides you through this process. You can access it by running:

bash
opengradient config init

After you complete the account setup, you can check your details by running the following command:

bash
opengradient config show

Basic Commands

Here are some basic commands you can use with the OpenGradient CLI:

  • Create a New Model Repository: To create a new model repository, run:
bash
opengradient create-model-repo --name "<model-repo-name>" --description "<description>"

This command creates an empty model repository, initialized at version 0.01.
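For instance, a concrete invocation might look like this (the repository name and description are hypothetical placeholders):

bash
opengradient create-model-repo --name "dynamic-fee-model" --description "Predicts dynamic fees from batched market features"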

  • Upload Model: Upload your model file to OpenGradient for inference with the following command:
bash
opengradient --upload-file <path/to/your/model.onnx>

Note: currently, we only support ONNX models. Read our Model Formats guide to learn how to convert your models to ONNX format.
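As a concrete example, uploading a local ONNX file might look like this (the path is a hypothetical placeholder):

bash
opengradient --upload-file ./models/fee_model.onnx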

Run Inference

To run inference on a specific model, use the following command:

bash
opengradient infer \
--model <model_cid> \
--mode <mode> \
--input '<input_data>'

Arguments

  • --model: the CID of the model file you want to execute. Upload your model via the --upload-file command or via the Model Hub.
  • --mode: the security method used to verify and secure the inference. The supported options are VANILLA, ZKML, and TEE; the default is VANILLA. For more details, see Inference Verification.
  • --input: the input data for the model. You can use either --input <your-input-data> or --file <your-file-name.json> to pass input data to the CLI.

Example

This example uses the OpenGradient dynamic fee model, which takes batched input of shape (batch_size, 15). Here, we pass a fee_input.json file as input:

bash
opengradient infer --model "QmSACzMPxteoN1Qpg47E7ZZ6uPphjB5sZu91Lo4RFuoS4X" \
--mode VANILLA --file fee_input.json

where the fee_input.json file contains:

json
{"X": [[-7.236132340828155, -6.405568474034984, -6.12083601071824, -6.997675007949456, -6.0150836816794335, -5.945294881589101, -6.957896161280256, -4.992644569262474, -8.123823126888762, -4.766905004824663, -8.721882840626717, -8.815475900002543, -8.569360787913975, -8.84391999196547, -6.405568474034984]]}

The output of running this inference will be:

bash
Inference result:
{
  "variable": [
    [
      -7.423771858215332
    ]
  ]
}
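If you prefer not to use a file, the same request should be expressible with --input by passing the JSON inline (same model CID and input data as above):

bash
opengradient infer --model "QmSACzMPxteoN1Qpg47E7ZZ6uPphjB5sZu91Lo4RFuoS4X" \
--mode VANILLA \
--input '{"X": [[-7.236132340828155, -6.405568474034984, -6.12083601071824, -6.997675007949456, -6.0150836816794335, -5.945294881589101, -6.957896161280256, -4.992644569262474, -8.123823126888762, -4.766905004824663, -8.721882840626717, -8.815475900002543, -8.569360787913975, -8.84391999196547, -6.405568474034984]]}'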

Run LLM

You can run an LLM with the opengradient llm command by providing model and prompt parameters:

bash
opengradient llm --model "meta-llama/Meta-Llama-3-8B-Instruct" \
--prompt "hello who are you?" --max-tokens 50

The output of this inference run is:

bash
 - Hello! I'm a bot, nice to meet you! \
 I'm here to help answer any questions \
 you might have, provide information, \
 or just chat with you. \
 What's on your mind? - I'm a bot, nice to meet you

Currently, we only support LLMs from the Model Hub.

Getting Help

To see a list of all available commands and options, you can always run:

bash
opengradient --help

To get help on a specific command, add the command before --help. For example, to get help on the infer command, run opengradient infer --help; for the llm command, run opengradient llm --help.
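For instance:

bash
opengradient infer --help
opengradient llm --help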

Congratulations! You have successfully set up and used the OpenGradient Python CLI. With these basic commands, you can start exploring the full capabilities of OpenGradient.
