x402 Gateway

The x402 Gateway provides direct HTTP access to OpenGradient's TEE-verified LLM inference without requiring the Python SDK. Because x402 is a standard HTTP protocol, you can integrate from any language or platform: JavaScript, Go, Rust, Python, curl, or any other HTTP client.

What is x402?

x402 is an open standard that extends HTTP with payment requirements using the 402 Payment Required status code. When you make a request to the x402 Gateway:

  1. The server responds with payment requirements
  2. You sign a payment authorization
  3. You resubmit the request with the signed payment
  4. The inference executes, payment settles on Base Sepolia, and proofs settle on the OpenGradient network
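The four steps above can be sketched as a simple retry loop. This is an illustrative sketch, not the gateway's official client: `post` stands in for any HTTP POST callable, `sign_payment` for your wallet's signing logic, and the `X-PAYMENT-REQUIRED` header name is a placeholder for however the server advertises its payment requirements.

```python
import base64
import json

def call_with_x402(post, url, body, sign_payment):
    """Generic x402 retry loop: request, read payment requirements,
    sign an authorization, resubmit with the X-PAYMENT header.

    `post(url, headers, body)` is any HTTP POST callable returning
    (status, headers, payload); `sign_payment` wraps your wallet.
    """
    # 1. Initial request - expect 402 Payment Required
    status, headers, payload = post(url, {}, body)
    if status != 402:
        return status, headers, payload  # no payment was required

    # 2. Parse the payment requirements from the response headers
    #    (header name here is illustrative)
    requirements = json.loads(headers["X-PAYMENT-REQUIRED"])

    # 3. Sign a payment authorization with the wallet
    authorization = sign_payment(requirements)

    # 4. Resubmit with the signed payment encoded as base64 JSON;
    #    the inference then executes and payment settles on Base Sepolia
    x_payment = base64.b64encode(json.dumps(authorization).encode()).decode()
    return post(url, {"X-PAYMENT": x_payment}, body)
```

The dependency-injected `post` makes the loop easy to exercise against a stub server before wiring in a real HTTP client.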

All LLM inferences are verified using Trusted Execution Environments (TEEs), providing cryptographic proof of execution.

TIP

For a deep dive into how x402 works on OpenGradient, including TEE verification and settlement modes, see LLM Execution.

Quick Start

Prerequisites

  • A wallet with a private key (such as MetaMask)
  • $OPG testnet tokens on Base Sepolia (get tokens from the faucet)

Basic Flow

```bash
# 1. Make initial request - get payment requirements
curl -X POST https://llm.opengradient.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 100
  }'

# Response: 402 Payment Required with payment details in headers
```

The server returns a 402 status with payment requirements. You then:

  1. Parse the payment requirements from the response headers
  2. Create and sign a payment payload with your wallet
  3. Resubmit the request with the X-PAYMENT header containing your signed payment
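For step 3, x402 clients conventionally send the signed payload as base64-encoded JSON in the X-PAYMENT header. A minimal sketch of that encoding, with an inverse helper for debugging; the field names in the example payload are illustrative, not a normative schema:

```python
import base64
import json

def encode_x_payment(signed_payload: dict) -> str:
    """Serialize a signed payment payload into an X-PAYMENT header value
    (base64-encoded JSON, per common x402 client convention)."""
    return base64.b64encode(json.dumps(signed_payload).encode("utf-8")).decode("ascii")

def decode_x_payment(header_value: str) -> dict:
    """Inverse helper: inspect what a client actually sent."""
    return json.loads(base64.b64decode(header_value))

# Example payload shape (hypothetical field names)
payload = {
    "x402Version": 1,
    "network": "base-sepolia",
    "payload": {"signature": "0x...", "authorization": {"value": "1000000"}},
}
header_value = encode_x_payment(payload)  # goes into the X-PAYMENT header
```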

TIP

You can find x402 client libraries to automate these steps in Coinbase's x402 repo.

See Examples for complete implementations in multiple languages.

Endpoint

| Environment | URL |
| --- | --- |
| Production | https://llm.opengradient.ai |

Supported Models

| Model ID | Provider |
| --- | --- |
| openai/gpt-4.1-2025-04-14 | OpenAI |
| openai/gpt-4o | OpenAI |
| openai/o4-mini | OpenAI |
| anthropic/claude-4.0-sonnet | Anthropic |
| anthropic/claude-3.7-sonnet | Anthropic |
| anthropic/claude-3.5-haiku | Anthropic |
| google/gemini-2.5-flash | Google |
| google/gemini-2.5-pro | Google |
| google/gemini-2.5-flash-lite | Google |
| google/gemini-2.0-flash | Google |
| x-ai/grok-3-beta | xAI |
| x-ai/grok-3-mini-beta | xAI |
| x-ai/grok-4.1-fast | xAI |
| x-ai/grok-4-1-fast-non-reasoning | xAI |
| x-ai/grok-2-1212 | xAI |
| x-ai/grok-2-vision-latest | xAI |

Payment Details

x402 LLM inference is paid for using $OPG testnet tokens on Base Sepolia. All other operations — TEE node registration, inference execution, proof settlement, and verification — happen on the OpenGradient network.

| Property | Value |
| --- | --- |
| Payment Network | Base Sepolia |
| Token | $OPG (0x240b09731D96979f50B2C649C9CE10FcF9C7987F) |
| Chain ID | 84532 |
| Proof Settlement | OpenGradient Network |

Payments are settled on Base Sepolia after inference execution. Inference proofs are settled separately on the OpenGradient network. You can get $OPG testnet tokens from the faucet.
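Before signing a payment, it is prudent to check that the advertised requirement actually targets the network and token from the table above. A minimal sketch, assuming the requirement object carries chain-ID and asset fields (the `chainId` and `asset` field names are illustrative):

```python
# Constants from the payment details table
EXPECTED_CHAIN_ID = 84532                                 # Base Sepolia
OPG_TOKEN = "0x240b09731D96979f50B2C649C9CE10FcF9C7987F"  # $OPG testnet token

def is_expected_payment_target(requirement: dict) -> bool:
    """Return True only if the requirement pays $OPG on Base Sepolia.
    Field names ('chainId', 'asset') are illustrative placeholders for
    whatever the gateway's payment requirements actually contain."""
    return (
        requirement.get("chainId") == EXPECTED_CHAIN_ID
        and requirement.get("asset", "").lower() == OPG_TOKEN.lower()
    )
```

Refusing to sign anything that fails this check keeps a misconfigured or malicious endpoint from redirecting your payment to another chain or token.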

Settlement Modes

Control how inference data is recorded on the OpenGradient blockchain by setting the X-SETTLEMENT-TYPE header:

| Mode | Header Value | Description |
| --- | --- | --- |
| Private | private | No on-chain data; most private |
| Individual | individual | Input/output hashes on-chain |
| Batch | batch | Aggregated hashes for multiple inferences (default) |

NOTE

The SDK uses og.x402SettlementMode.SETTLE, og.x402SettlementMode.SETTLE_METADATA, and og.x402SettlementMode.SETTLE_BATCH as aliases for these values.

Use individual (or og.x402SettlementMode.SETTLE_METADATA in the SDK) when you need to prove which prompts were used for agent actions or decision verification.
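A small helper keeps the SDK alias names and the raw header values straight. The mapping below assumes the aliases correspond to the header values in table order (SETTLE_METADATA to individual is stated above; the other two pairings are inferred):

```python
# Header values accepted by X-SETTLEMENT-TYPE, keyed by SDK alias name.
# SETTLE -> private and SETTLE_BATCH -> batch are assumed from table order.
SDK_ALIAS_TO_HEADER = {
    "SETTLE": "private",
    "SETTLE_METADATA": "individual",
    "SETTLE_BATCH": "batch",
}

def settlement_headers(mode: str) -> dict:
    """Build the request header for a given settlement mode string."""
    if mode not in SDK_ALIAS_TO_HEADER.values():
        raise ValueError(f"unknown settlement mode: {mode}")
    return {"X-SETTLEMENT-TYPE": mode}
```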

When to Use x402 vs SDK

| Use x402 directly when... | Use Python SDK when... |
| --- | --- |
| Building in JavaScript, Go, Rust, etc. | Building in Python |
| You want fine-grained control over payments | You want automatic payment handling |
| Integrating into existing HTTP infrastructure | Starting a new Python project |
| Building a custom SDK for another language | Prototyping quickly |

Next Steps