
Glossary and Technologies

  • DA - Data Availability layer.
  • Enclave nodes - fully isolated virtual machines protected by a single security perimeter. They have no persistent storage, no external networking, and no interactive access.
  • HACA - Hybrid AI Compute Architecture.
  • Inference - the process of running a specific AI/ML model on a given input.
  • Inference Network - OpenGradient's open network of inference nodes.
  • Inference Node - Nodes with specialized hardware, such as GPUs, responsible for model execution on OpenGradient.
  • ONNX - Open Neural Network Exchange is an open standard format for ML models.
  • PIPE - Parallelized Inference Pre-Execution Engine - an on-chain inference execution method that allows models to be natively used from smart contracts.
  • TEE - Trusted Execution Environment - hardware or software security architecture that protects sensitive data/code from unauthorized access.
  • Tensor - a data structure that stores N-dimensional data.
  • TPU - Tensor Processing Unit, circuits specifically designed to accelerate ML workloads.
  • VCS - Validation-Computation Separation architecture, in which validator nodes are separated from inference nodes, so inference run on inference nodes does not have to be replicated across all validator nodes.
  • ZKML - Zero-Knowledge Machine Learning - the combination of zero-knowledge cryptography and ML, where inference and model training generate zk-proofs that validate the computation without revealing the input data.
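
The Tensor entry above can be made concrete with a short sketch. This is an illustrative example using NumPy (not part of the OpenGradient stack itself); ONNX models exchange inputs and outputs as exactly this kind of N-dimensional array:

```python
import numpy as np

# A tensor stores N-dimensional data; its rank is the number of dimensions.
scalar = np.array(3.0)                # rank-0 tensor (a single value)
vector = np.array([1.0, 2.0, 3.0])    # rank-1 tensor, shape (3,)
matrix = np.zeros((2, 3))             # rank-2 tensor, shape (2, 3)
batch = np.zeros((8, 3, 224, 224))    # rank-4 tensor: a batch of 8 RGB images

print(scalar.ndim, vector.ndim, matrix.ndim, batch.ndim)  # 0 1 2 4
print(batch.shape)                                        # (8, 3, 224, 224)
```

A model input fed to an inference node is serialized as tensors like these, with the shape and data type declared by the model's ONNX signature.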

OpenGradient 2024