OpenGradient Glossary
- DA - Data Availability layer.
- Enclave nodes - fully isolated virtual machines that are highly protected by a single security perimeter. They have no persistent storage, no external networking, no interactive access.
- Facilitator - an optional service that handles payment verification and settlement complexity for x402 LLM inference. Facilitators provide payment verification, settlement management, payment method abstraction, rate limiting, and receipt generation.
- HACA - Hybrid AI Compute Architecture
- IModelExecutor - a Solidity interface that contracts must implement to use the Model Scheduler. It requires two functions: `run()` for executing model inference logic and `getInferenceResult()` for returning the latest model output.
- Inference - the process of running a specific AI/ML model on a given input.
- Inference Network - OpenGradient's open network of inference nodes.
- Inference Node - a node with specialized hardware, such as GPUs, responsible for model execution on OpenGradient.
- LLM - Large Language Model - a type of AI model designed to understand and generate human-like text based on natural language prompts.
- ML - Machine Learning - a subset of artificial intelligence that enables systems to learn and improve from experience without being explicitly programmed.
- Model Output - the result returned from a model inference execution, containing the model's predictions or generated outputs.
- Model Scheduler (AlphaSense) - enables scheduling and automating recurring model inference tasks on the OpenGradient blockchain. Contracts implementing `IModelExecutor` can be registered to execute automatically on a defined schedule.
- ONNX - Open Neural Network Exchange is an open standard format for ML models.
- PIPE - Parallelized Inference Pre-Execution Engine - an on-chain inference execution method that allows models to be natively used from smart contracts.
- Settlement Modes - three modes for x402 LLM inference settlement that determine what data is stored on-chain: `SETTLE_INDIVIDUAL` (input/output hashes only), `SETTLE_BATCH` (batch hashes for multiple inferences), and `SETTLE_INDIVIDUAL_WITH_METADATA` (full model information, complete input/output data, and all inference metadata).
- TEE - Trusted Execution Environment - hardware or software security architecture that protects sensitive data/code from unauthorized access.
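The three settlement modes defined above can be sketched as a Python enum. This is a hypothetical client-side representation: the mode names come from this glossary, but the enum values and helper function are illustrative only.

```python
from enum import Enum

class SettlementMode(Enum):
    # Mode names taken from the glossary; values are illustrative.
    SETTLE_INDIVIDUAL = "individual"
    SETTLE_BATCH = "batch"
    SETTLE_INDIVIDUAL_WITH_METADATA = "individual_with_metadata"

def on_chain_footprint(mode: SettlementMode) -> str:
    """Summarize what each mode stores on-chain, per the glossary."""
    return {
        SettlementMode.SETTLE_INDIVIDUAL:
            "input/output hashes only",
        SettlementMode.SETTLE_BATCH:
            "batch hashes covering multiple inferences",
        SettlementMode.SETTLE_INDIVIDUAL_WITH_METADATA:
            "full model info, complete input/output data, and all inference metadata",
    }[mode]
```

The trade-off the modes encode: hash-only settlement minimizes on-chain storage cost, while the metadata mode maximizes auditability.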
- Tensor - a data structure that stores N-dimensional data.
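To illustrate the Tensor definition above, a minimal sketch in plain Python (no ML framework) that infers the dimensions of nested-list data:

```python
def shape(tensor):
    """Return the dimensions of a nested-list tensor, e.g. a 2x3 matrix -> (2, 3).

    Assumes rectangular (non-ragged) data, as tensors are by definition.
    """
    dims = []
    while isinstance(tensor, list):
        dims.append(len(tensor))
        tensor = tensor[0]  # descend into the first element of each dimension
    return tuple(dims)

scalar = 5.0                      # 0-dimensional tensor, shape ()
vector = [1.0, 2.0, 3.0]          # 1-dimensional tensor, shape (3,)
matrix = [[1, 2, 3], [4, 5, 6]]   # 2-dimensional tensor, shape (2, 3)
```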
- TPU - Tensor Processing Unit - a circuit specifically designed to accelerate ML workloads.
- VCS - Validation-Computation Separation architecture, in which validator nodes are separated from inference nodes, so inference executed on inference nodes does not need to be replicated across all validator nodes.
- x402 - an open, neutral standard for internet-native payments, built around the HTTP 402 Payment Required status code. It lets clients pay servers directly for resources such as LLM inference, enabling agentic payments at scale.
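A hedged sketch of the x402 request flow described above: a server answers an unpaid request with HTTP 402 plus its payment requirements, and the client retries with a payment payload attached. The header name, field names, and values here are illustrative, not the normative x402 wire format.

```python
PRICE = "0.001"  # illustrative price in some settlement asset

def handle_request(headers: dict) -> tuple:
    """Toy x402-style server handler: demand payment, then serve the resource."""
    payment = headers.get("X-PAYMENT")  # illustrative header name
    if payment is None:
        # 402 Payment Required, with the terms the client must satisfy
        return 402, {"amount": PRICE, "asset": "USDC", "payTo": "0xIllustrativeAddress"}
    # A real server would have a facilitator verify and settle the payment here.
    return 200, {"result": "inference output"}

# First request carries no payment, so the server responds 402 with its terms;
# the retry attaches a payment payload and receives the resource.
status_unpaid, terms = handle_request({})
status_paid, body = handle_request({"X-PAYMENT": "signed-payment-payload"})
```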
- ZKML - Zero-Knowledge Machine Learning - a combination of zero-knowledge (ZK) proof and ML techniques, in which model inference and training generate zk-proofs that validate the computation without revealing the input data.
