Package opengradient
OpenGradient Python SDK for interacting with AI models and infrastructure.
Submodules
- alphasense: OpenGradient AlphaSense Tools
- llm: OpenGradient LLM Adapters
- workflow_models: OpenGradient Hardcoded Models
Functions
Create model
def create_model(model_name: str, model_desc: str, model_path: Optional[str] = None) ‑> opengradient.types.ModelRepository

Create a new model repository.
Arguments
- model_name: Name for the new model repository
- model_desc: Description of the model
- model_path: Optional path to model file to upload immediately
Returns
ModelRepository: Creation response with model metadata and optional upload results
Raises
RuntimeError: If SDK is not initialized
Create version
def create_version(model_name, notes=None, is_major=False)

Create a new version for an existing model.
Arguments
- model_name: Name of the model repository
- notes: Optional release notes for this version
- is_major: If True, creates a major version bump instead of minor
Returns
dict: Version creation response with version metadata
Raises
RuntimeError: If SDK is not initialized
Infer
def infer(model_cid, inference_mode, model_input, max_retries: Optional[int] = None) ‑> opengradient.types.InferenceResult

Run inference on a model.
Arguments
- model_cid: Blob ID of the model to use
- inference_mode: Mode of inference (e.g. VANILLA)
- model_input: Input data for the model
- max_retries: Maximum number of retries for failed transactions
Returns
InferenceResult: A dataclass object containing the transaction hash and model output.
- transaction_hash (str): Blockchain hash for the transaction
- model_output (Dict[str, np.ndarray]): Output of the ONNX model
Raises
RuntimeError: If SDK is not initialized
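A hedged sketch of an inference call. The tensor name "input" and the placeholder blob ID are illustrative assumptions, not part of the SDK; model_input maps input tensor names to values. The SDK calls are shown in comments because they require an initialized SDK and a deployed model.

```python
# Illustrative input: the tensor name "input" is an assumption about the model.
model_input = {"input": [[1.0, 2.0, 3.0]]}

# Requires a prior og.init(...) call; "Qm..." stands in for a real model blob ID.
# import opengradient as og
# result = og.infer(
#     model_cid="Qm...",
#     inference_mode=og.InferenceMode.VANILLA,
#     model_input=model_input,
#     max_retries=3,
# )
# print(result.transaction_hash)
# print(result.model_output)  # Dict[str, np.ndarray] keyed by output tensor name
```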
Init
def init(email: str, password: str, private_key: str, rpc_url='https://eth-devnet.opengradient.ai', api_url='https://sdk-devnet.opengradient.ai', contract_address='0x8383C9bD7462F12Eb996DD02F78234C0421A6FaE')

Initialize the OpenGradient SDK with authentication and network settings.
Arguments
- email: User's email address for authentication
- password: User's password for authentication
- private_key: Ethereum private key for blockchain transactions
- rpc_url: Optional RPC URL for the blockchain network; defaults to the OpenGradient devnet
- api_url: Optional API URL for the OpenGradient API; defaults to the OpenGradient devnet
- contract_address: Optional inference contract address
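A minimal initialization sketch. The environment variable names are illustrative, not part of the SDK; credentials should not be hard-coded in real use. The init call itself is commented because it requires the SDK and live credentials.

```python
import os

# Illustrative credential loading; OG_EMAIL / OG_PASSWORD / OG_PRIVATE_KEY
# are assumed names, and the fallbacks are placeholders.
credentials = {
    "email": os.environ.get("OG_EMAIL", "user@example.com"),
    "password": os.environ.get("OG_PASSWORD", "change-me"),
    "private_key": os.environ.get("OG_PRIVATE_KEY", "0x" + "0" * 64),
}

# import opengradient as og
# og.init(**credentials)  # uses the default devnet rpc_url/api_url shown above
```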
List files
def list_files(model_name: str, version: str) ‑> List[Dict]

List files in a model repository version.
Arguments
- model_name: Name of the model repository
- version: Version string to list files from
Returns
List[Dict]: List of file metadata dictionaries
Raises
RuntimeError: If SDK is not initialized
Llm chat
def llm_chat(model_cid: opengradient.types.LLM, messages: List[Dict], inference_mode: opengradient.types.LlmInferenceMode = LlmInferenceMode.VANILLA, max_tokens: int = 100, stop_sequence: Optional[List[str]] = None, temperature: float = 0.0, tools: Optional[List[Dict]] = None, tool_choice: Optional[str] = None, max_retries: Optional[int] = None) ‑> opengradient.types.TextGenerationOutput

Have a chat conversation with an LLM.
Arguments
- model_cid: Blob ID of the LLM model to use
- messages: List of chat messages, each with 'role' and 'content'
- inference_mode: Mode of inference, defaults to VANILLA
- max_tokens: Maximum tokens to generate
- stop_sequence: Optional list of sequences where generation should stop
- temperature: Sampling temperature (0.0 = deterministic, 1.0 = creative)
- tools: Optional list of tools the model can use
- tool_choice: Optional specific tool to use
- max_retries: Maximum number of retries for failed transactions
Returns
TextGenerationOutput: Transaction hash and the generated chat response
Raises
RuntimeError: If SDK is not initialized
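A hedged chat sketch. The message dicts follow the role/content shape the arguments describe; the model enum member is taken from the LLM enum listed later in this reference, and the call is commented because it needs an initialized SDK.

```python
# Illustrative conversation; content strings are placeholders.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize what an ONNX model is in one sentence."},
]

# Requires og.init(...) first.
# import opengradient as og
# output = og.llm_chat(
#     model_cid=og.LLM.META_LLAMA_3_8B_INSTRUCT,
#     messages=messages,
#     inference_mode=og.LlmInferenceMode.VANILLA,
#     max_tokens=128,
#     temperature=0.0,
# )
```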
Llm completion
def llm_completion(model_cid: opengradient.types.LLM, prompt: str, inference_mode: opengradient.types.LlmInferenceMode = LlmInferenceMode.VANILLA, max_tokens: int = 100, stop_sequence: Optional[List[str]] = None, temperature: float = 0.0, max_retries: Optional[int] = None) ‑> opengradient.types.TextGenerationOutput

Generate text completion using an LLM.
Arguments
- model_cid: Blob ID of the LLM model to use
- prompt: Text prompt for completion
- inference_mode: Mode of inference, defaults to VANILLA
- max_tokens: Maximum tokens to generate
- stop_sequence: Optional list of sequences where generation should stop
- temperature: Sampling temperature (0.0 = deterministic, 1.0 = creative)
- max_retries: Maximum number of retries for failed transactions
Returns
TextGenerationOutput: Transaction hash and generated text
Raises
RuntimeError: If SDK is not initialized
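A minimal completion sketch. The prompt text and stop sequence are illustrative; the call is commented since it needs an initialized SDK.

```python
# Illustrative prompt; the Q:/A: framing is just one common completion pattern.
prompt = "Q: What is a zero-knowledge proof?\nA:"

# import opengradient as og
# out = og.llm_completion(
#     model_cid=og.LLM.QWEN_2_5_72B_INSTRUCT,
#     prompt=prompt,
#     max_tokens=64,
#     stop_sequence=["\n"],  # stop at the end of the answer line
# )
# print(out)  # TextGenerationOutput: transaction hash and generated text
```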
Login
def login(model_name: str, version: str) ‑> List[Dict]

List files in a model repository version.
Arguments
- model_name: Name of the model repository
- version: Version string to list files from
Returns
List[Dict]: List of file metadata dictionaries
Raises
RuntimeError: If SDK is not initialized
New workflow
def new_workflow(model_cid: str, input_query: opengradient.types.HistoricalInputQuery, input_tensor_name: str, scheduler_params: Optional[opengradient.types.SchedulerParams] = None) ‑> str

Deploy a new workflow contract with the specified parameters.
This function deploys a new workflow contract and optionally registers it with the scheduler for automated execution. If scheduler_params is not provided, the workflow will be deployed without automated execution scheduling.
Arguments
- model_cid: Blob ID of the model
- input_query: HistoricalInputQuery containing query parameters
- input_tensor_name: Name of the input tensor
- scheduler_params: Optional scheduler configuration as a SchedulerParams instance. If not provided, the workflow is deployed without scheduling.
Returns
str: Deployed contract address. If scheduler_params was provided, the workflow will be automatically executed according to the specified schedule.
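A hedged sketch of the inputs to a workflow deployment. All field values (ETH/USD, ten one-hour candles, hourly schedule for 24 hours) are illustrative, and the frequency unit is an assumption (seconds); construction of the SDK types is shown in comments because it requires the SDK installed.

```python
# Illustrative query fields matching the HistoricalInputQuery constructor
# documented later in this reference; the enum-valued fields are in comments.
query_fields = {
    "base": "ETH",
    "quote": "USD",
    "total_candles": 10,
    "candle_duration_in_mins": 60,
    # "order": CandleOrder.DESCENDING,
    # "candle_types": [CandleType.CLOSE],
}
# frequency unit assumed to be seconds -- verify against the SDK.
scheduler_fields = {"frequency": 3600, "duration_hours": 24}

# from opengradient.types import (
#     HistoricalInputQuery, SchedulerParams, CandleOrder, CandleType,
# )
# query = HistoricalInputQuery(
#     order=CandleOrder.DESCENDING,
#     candle_types=[CandleType.CLOSE],
#     **query_fields,
# )
# address = og.new_workflow(
#     model_cid="Qm...",                 # placeholder blob ID
#     input_query=query,
#     input_tensor_name="candles",       # assumed tensor name
#     scheduler_params=SchedulerParams.from_dict(scheduler_fields),
# )
```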
Read workflow history
def read_workflow_history(contract_address: str, num_results: int) ‑> List[opengradient.types.ModelOutput]

Gets historical inference results from a workflow contract.
Returns
List[opengradient.types.ModelOutput]: List of historical inference results
Read workflow result
def read_workflow_result(contract_address: str) ‑> opengradient.types.ModelOutput

Reads the latest inference result from a deployed workflow contract.
This function retrieves the most recent output from a deployed model executor contract. It includes built-in retry logic to handle blockchain state delays.
Returns
Dict[str, Union[str, Dict]]: A dictionary containing:
- status: "success" or "error"
- result: The model output data if successful
- error: Error message if status is "error"
Raises
RuntimeError: If OpenGradient client is not initialized
Run workflow
def run_workflow(contract_address: str) ‑> opengradient.types.ModelOutput

Executes the workflow by calling run() on the contract to pull the latest data and perform inference.
Returns
Dict[str, Union[str, Dict]]: Status of the run operation
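A hedged run-then-read sketch against a deployed workflow. The address is a zero-filled placeholder, and the calls are commented because they need a live deployment and an initialized SDK.

```python
# Placeholder contract address (20 bytes, hex-encoded).
contract_address = "0x0000000000000000000000000000000000000000"

# import opengradient as og
# og.run_workflow(contract_address)      # trigger a fresh on-chain inference
# latest = og.read_workflow_result(contract_address)   # retried on state delays
# history = og.read_workflow_history(contract_address, num_results=5)
```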
Upload
def upload(model_path, model_name, version) ‑> opengradient.types.FileUploadResult

Upload a model file to OpenGradient.
Arguments
- model_path: Path to the model file on local filesystem
- model_name: Name of the model repository
- version: Version string for this model upload
Returns
FileUploadResult: Upload response containing file metadata
Raises
RuntimeError: If SDK is not initialized
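A hedged sketch of the full publish flow (create_model, create_version, upload). The repository name and file path are illustrative; the calls are commented since they need an initialized SDK, and the exact version string is left to whatever create_version returns.

```python
from pathlib import Path

# Illustrative names; "model.onnx" and "my-price-model" are placeholders.
model_path = Path("model.onnx")
model_name = "my-price-model"

# import opengradient as og
# repo = og.create_model(model_name, "Example ONNX model")
# version_info = og.create_version(model_name, notes="initial release")
# # Pass the version string reported by create_version:
# result = og.upload(str(model_path), model_name, version=...)
```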
Classes
CandleOrder
class CandleOrder(*args, **kwds)
Integer enum (IntEnum) specifying the order of returned candles.
Variables
- ASCENDING
- DESCENDING
CandleType
class CandleType(*args, **kwds)
Integer enum (IntEnum) specifying the candle field to query.
Variables
- CLOSE
- HIGH
- LOW
- OPEN
- VOLUME
HistoricalInputQuery
HistoricalInputQuery(base: str, quote: str, total_candles: int, candle_duration_in_mins: int, order: opengradient.types.CandleOrder, candle_types: List[opengradient.types.CandleType])
To abi format
def to_abi_format(self) ‑> tuple

Convert to the format expected by the contract ABI.
Variables
- base : str
- candle_duration_in_mins : int
- candle_types : List[opengradient.types.CandleType]
- order : opengradient.types.CandleOrder
- quote : str
- total_candles : int
InferenceMode
class InferenceMode(*args, **kwds)
Enum for the different inference modes available for inference (VANILLA, ZKML, TEE)
Variables
- TEE
- VANILLA
- ZKML
LLM
class LLM(*args, **kwds)
Enum for available LLM models
Variables
- DOBBY_LEASHED_3_1_8B
- DOBBY_UNHINGED_3_1_8B
- LLAMA_3_2_3B_INSTRUCT
- META_LLAMA_3_1_70B_INSTRUCT
- META_LLAMA_3_8B_INSTRUCT
- QWEN_2_5_72B_INSTRUCT
LlmInferenceMode
class LlmInferenceMode(*args, **kwds)
Enum for the different inference modes available for LLM inference (VANILLA, TEE)
Variables
- TEE
- VANILLA
SchedulerParams
class SchedulerParams(frequency: int, duration_hours: int)
Scheduler configuration for automated workflow execution.
From dict
def from_dict(data: Optional[Dict[str, int]]) ‑> Optional[opengradient.types.SchedulerParams]

Create a SchedulerParams instance from a dictionary.
Variables
- duration_hours : int
- frequency : int
- end_time : int
TEE_LLM
class TEE_LLM(*args, **kwds)
Enum for LLM models available for TEE execution
Variables
- META_LLAMA_3_1_70B_INSTRUCT
