Core Features

On-Chain Verification

The first data availability (DA) layer based on Ethereum on-chain data verification, offering stronger security than data availability sampling (DAS).

High Performance

Supports concurrent reads and writes, with fast encoding and verification for large datasets.

Infinitely Scalable

Supports exabyte-plus capacity and millions of storage nodes.

Programmable Ownership

Supports custom data ownership and access control.

Core Components

Unibase DA++

The first AI-native DA layer built on Ethereum. Using erasure coding and zero-knowledge proof (ZKP) technology, it provides high-performance, infinitely scalable storage and verification for on-chain AI applications.
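The section does not specify Unibase's coding parameters, but the general idea behind erasure coding is to add redundant shards so that lost data can be reconstructed. Below is a minimal, illustrative sketch in Python using a single XOR parity shard (RAID-5 style); the function names and the k+1 shard layout are assumptions for illustration, not Unibase's actual scheme, which would typically use Reed-Solomon codes with multiple parity shards:

```python
def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal data shards plus one XOR parity shard (k+1 total)."""
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)  # pad to a multiple of k
    size = len(data) // k
    shards = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = shards[0]
    for s in shards[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return shards + [parity]

def recover(shards: list) -> bytes:
    """Rebuild the single missing shard (marked None) by XOR-ing the others."""
    present = [s for s in shards if s is not None]
    out = present[0]
    for s in present[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

# Encode into 3 data shards + 1 parity shard, lose one shard, recover it.
shards = encode(b"hello world!", 3)
lost = shards[1]
shards[1] = None
assert recover(shards) == lost
```

A single parity shard tolerates exactly one lost shard; production systems pick k data shards and m parity shards so that any k of the k+m survive a failure, trading storage overhead for durability.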

Unibase Storage

A highly available decentralized storage layer that supports exabyte-level capacity and scales to millions of nodes, enabling anyone to securely store models, datasets, and training data.

Unibase Chain

An Ethereum L2 built on the OP Stack that offers secure, low-cost data publishing and payment services for users.

Use Cases

Unibase App

The first permissionless decentralized AI platform, enabling anyone to securely and cost-effectively deploy and monetize AI applications.

Smart DePIN 2.0

An AI-driven smart DePIN network, with Unibase providing highly available storage and verifiable inference services for DePIN devices.

Self-Evolving AI

The first super AI agent application: with its own ID and wallet, it can earn money and upgrade itself.

How to Participate

Developers
Securely and cost-effectively upload, deploy, and monetize your AI models, allowing the world to experience your creativity.
Users
Follow us on Twitter, join the Telegram community, and participate in the latest activities to earn points and airdrop opportunities.
Miners
Share your storage and computing resources to earn generous rewards.
Investors
Discover and invest in the best models and developers to share in the decentralized AI future.

Road Map

2024 Q3: Testnet
Release Unibase DA, enhancing AI data transparency and interoperability while reducing storage and validation costs.

2024 Q4: Mainnet
Release a decentralized AI collaboration platform, making it easier for developers to build, deploy, and monetize AI apps.

2025 Q2
Release a verifiable computation framework based on opML, enhancing the verifiability of inference.

2025 Q4
Release AI acceleration hardware based on FPGA, reducing AI computation costs and enhancing inference efficiency.