NeuroML Library

NeuroML is a set of Solidity interfaces and precompiles that allow smart contract developers on the OpenGradient network to use native inference capabilities directly from smart contracts. Developers can run inference on AI and ML models through a simple function call that executes atomically within the same transaction.

The following simple example demonstrates this:

solidity
function testNeuroML(ModelInput calldata modelInput) external {
    // Run a ZK-verified inference on a Model Hub model using NeuroML
    ModelOutput memory output = NeuroML.runModel(
        ModelInferenceMode.ZK,
        ModelInferenceRequest(
            "QmbbzDwqSxZSgkz1EbsNHp2mb67rYeUYHYWJ4wECE24S7A",
            modelInput
        )
    );
}

NOTE

The OpenGradient Network is an EVM chain compatible with most existing EVM frameworks and tools. In addition to the standard EVM capabilities, we support native AI inference directly from smart contracts. To learn more about how on-chain inference works, go to Onchain Inference.

Any model uploaded to the Model Hub can be used through NeuroML.

Benefits

The main benefits of running inference through NeuroML include:

  • Atomic execution: inferences are executed atomically as part of the EVM transaction that triggers them; this makes it easier to ensure state consistency
  • Simple interface: inferences can be run through a simple function call without the need for callback functions and handlers
  • Composability: through the use of smart contract transactions, multiple models can be chained together using arbitrarily complex logic - supporting advanced real-world use cases (see the sketch after this list)
  • Native verification: inference validity proofs (e.g., ZKML and TEE) are natively validated by the underlying OpenGradient network protocol. This means that smart contracts can trust the results without explicit verification.
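
As an illustration of the composability point above, the sketch below chains two inferences inside a single transaction by feeding the first model's output into a second call. It reuses the NeuroML.runModel interface from the earlier example; the second model CID and the toModelInput conversion helper are hypothetical placeholders, not part of the documented API.

solidity
// Hedged sketch: chains two models within one transaction. The second model
// CID and the toModelInput helper are hypothetical placeholders.
function chainedInference(ModelInput calldata firstInput) external {
    ModelOutput memory intermediate = NeuroML.runModel(
        ModelInferenceMode.ZK,
        ModelInferenceRequest(
            "QmbbzDwqSxZSgkz1EbsNHp2mb67rYeUYHYWJ4wECE24S7A",
            firstInput
        )
    );

    // Application-specific conversion of the first result into the next
    // model's input format (hypothetical helper).
    ModelInput memory secondInput = toModelInput(intermediate);

    ModelOutput memory finalResult = NeuroML.runModel(
        ModelInferenceMode.ZK,
        ModelInferenceRequest("<second-model-CID>", secondInput)
    );
    // finalResult can drive further on-chain logic in the same transaction.
}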

Installation

Our Solidity framework can be installed by running:

bash
npm i opengradient-neuroml
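
Once installed, the package's Solidity interfaces can be imported into your contracts. The import path below is an assumption for illustration only; check the installed package for the actual file layout.

solidity
// Hypothetical import path; verify against the contents of the
// opengradient-neuroml npm package.
import "opengradient-neuroml/src/NeuroML.sol";

contract InferenceConsumer {
    // The contract can now call NeuroML.runModel(...) as in the example above.
}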

Framework Components

  • Solidity ML Inference: Interfaces and precompiles that allow any smart contract to natively run inference on any ML/AI model from our Model Hub
  • Solidity LLM Inference: Interfaces and precompiles that are specifically designed for running LLMs from smart contracts (see the sketch after this list)
  • Data Preprocessing (in progress): Helps convert on-chain data into a format suitable for inference, e.g., aggregation.
  • Statistical Analysis (in progress): Performs statistical or analytical tests on data to uncover trends and discover predictive or causal relationships between features, e.g., cross-sectional regressions.
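
The LLM component's exact interface is defined by the library itself; as a rough, non-authoritative sketch of what an LLM call from a contract could look like, the function NeuroML.runLLM, its parameters, and the model identifier below are hypothetical assumptions, not the documented API.

solidity
// Hedged sketch only: NeuroML.runLLM, its signature, and the model identifier
// are hypothetical placeholders rather than the actual NeuroML LLM interface.
function askLLM(string calldata prompt) external returns (string memory) {
    string memory completion = NeuroML.runLLM("meta-llama/Meta-Llama-3-8B-Instruct", prompt);
    return completion;
}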

NOTE

To see end-to-end tutorials using NeuroML, check out our Tutorials.
