Developers Overview
OpenGradient provides several developer tools for building AI-enabled applications on top of OpenGradient infrastructure: the OpenGradient Python SDK, the x402 Gateway, and MemSync.
Python SDK
The Python SDK lets you integrate OpenGradient's decentralized, verified AI infrastructure into any application or agent. It abstracts away network complexity while maintaining the network's full security and decentralization properties.
For LLM inference, the Python SDK provides a convenient wrapper around the x402 Gateway, so you can access verified LLM inference through a simple Python API instead of managing HTTP requests and payment flows yourself (see the sketch after the feature list below). Beyond x402 convenience, the SDK also provides:
- ML Inference & Workflows: Run ML model inference and deploy automated workflows with scheduled execution entirely onchain (alpha testnet only)
- LLM Inference: Use secure and verifiable large language models for completions and chat with support for tools and TEE execution
- Model Management: Upload, manage, and organize models on the Model Hub
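As a quick illustration, here is a minimal sketch of verified LLM inference through the SDK. The function and parameter names below (og.init, og.llm_chat, the credential fields, and the model identifier) are assumptions used for illustration; check the SDK reference for the actual API surface.

```python
# Minimal sketch only -- function names and parameters are assumptions,
# not the confirmed SDK surface; see the SDK reference for the real API.
import opengradient as og

# Authenticate against the OpenGradient network (placeholder credentials).
og.init(private_key="<YOUR_PRIVATE_KEY>", email="<EMAIL>", password="<PASSWORD>")

# Run a TEE-verified chat completion via the SDK's x402 Gateway wrapper.
response = og.llm_chat(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "Explain what TEE attestation proves."}],
)
print(response)
```

The wrapper handles the HTTP requests and payment flow internally, so application code stays free of payment logic.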
TIP
Use the Python SDK if you're building traditional applications (web apps, APIs, scripts) that need decentralized AI capabilities.
x402 Gateway
x402 is the open standard for payment-gated HTTP APIs that OpenGradient uses for LLM inference. Because x402 works over standard HTTP/REST, any client in any language can access OpenGradient's verified AI infrastructure - no SDK required. Key properties include:
- Universal Access: Standard HTTP APIs mean any language or platform can integrate - JavaScript, Go, Rust, curl, etc.
- Payment-Gated Inference: Cryptographically verified payments using $OPG testnet tokens on Base Sepolia
- Provable Prompts: Cryptographic proof of which prompts were used, enabling transparent verification of agent actions and AI decision-making
- TEE Verification: All LLM inferences are verified using Trusted Execution Environments with hardware attestation
Learn more in the x402 Gateway documentation.
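Because the gateway is plain HTTP, the flow can be driven from any stack. The sketch below uses Python's requests library; the gateway URL, request body, and payment header are illustrative assumptions, so consult the x402 Gateway documentation for the actual endpoint and payment payload format.

```python
# Illustrative only -- the gateway URL, route, body fields, and header name are
# assumptions; the x402 Gateway docs define the real endpoint and payment format.
import requests

GATEWAY_URL = "https://x402.gateway.example/v1/chat/completions"  # placeholder URL

body = {
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model name
    "messages": [{"role": "user", "content": "Hello, verified world."}],
}

# 1. An unpaid request is rejected with HTTP 402 along with the payment
#    requirements (amount, asset, pay-to address) needed to settle on Base Sepolia.
probe = requests.post(GATEWAY_URL, json=body)
print(probe.status_code, probe.json())  # expect 402 plus payment requirements

# 2. After signing a payment in $OPG testnet tokens, retry the same request
#    carrying the payment payload (x402 clients typically send it in an
#    X-PAYMENT header).
paid = requests.post(
    GATEWAY_URL,
    headers={"X-PAYMENT": "<base64-encoded signed payment payload>"},
    json=body,
)
print(paid.json())  # completion response from the verified model
```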
TIP
Use x402 directly if you're building in a language other than Python or want fine-grained control over the payment flow.
MemSync
MemSync is a long-term memory layer for AI built on top of OpenGradient's verifiable inference and embeddings infrastructure. It provides a REST API for long-term context management and AI personalization. Key features include:
- Fact Extraction: Extract and store semantic (long-term) and episodic (temporary) facts from conversations
- Semantic Search: Search across personal memories and context using semantic similarity
- User Profiles: Manage user profiles and preferences for personalized AI experiences
- Context Enrichment: Integrate with external services for enhanced context
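For a sense of how the REST API fits into an application, here is a hedged sketch in Python using requests. The base URL, routes, authentication header, and payload fields are assumptions for illustration; the MemSync API reference defines the real schema.

```python
# Illustrative only -- base URL, routes, auth scheme, and field names are
# assumptions; refer to the MemSync API reference for the actual schema.
import requests

BASE_URL = "https://memsync.example.com"                 # placeholder
HEADERS = {"Authorization": "Bearer <MEMSYNC_API_KEY>"}  # placeholder auth

# Submit a conversation so facts (semantic and episodic) can be extracted
# and stored for this user.
requests.post(
    f"{BASE_URL}/v1/memories",
    headers=HEADERS,
    json={
        "user_id": "user-123",
        "messages": [
            {"role": "user", "content": "I'm vegetarian and allergic to peanuts."}
        ],
    },
)

# Later, run a semantic search over stored memories to enrich a prompt.
results = requests.post(
    f"{BASE_URL}/v1/memories/search",
    headers=HEADERS,
    json={"user_id": "user-123", "query": "dietary restrictions", "limit": 5},
)
print(results.json())
```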
TIP
Use MemSync if you're building AI applications that need persistent memory, user personalization, or long-term context management.
Security & Decentralization
The Python SDK and MemSync are powered by the OpenGradient network and inherit its security guarantees, including:
- Decentralization: No single point of failure
- Censorship resistance: Open access to AI models
- Verifiability: Inferences are cryptographically verified
- Transparency: Full auditability of model execution
