Trustless GPU solution by Apus Network for achieving Trustless AI in AO


Summary

Apus Network, through its Trustless GPU solution, provides a robust framework for achieving Trustless AI within the AO ecosystem. Leveraging AO’s hyper-parallel computation, Apus Network’s Trustless GPU network is designed as a secure, efficient, and scalable AI inference solution, providing the necessary computing power for AI within Arweave while meeting the three pillars of Trustless AI: GPU integrity, AI model trustworthiness, and verifiable inference results.

Author: Apus Network

Reviewer: Jomosis

Source: Content Guild News

Introduction

In the rapidly evolving field of artificial intelligence (AI), ensuring the trustworthiness and verifiability of AI inference results is paramount. Traditional centralized AI systems often lack transparency and can be susceptible to tampering, which undermines trust.

The Apus Network addresses these challenges by leveraging a Trustless GPU solution designed to achieve Trustless AI within the AO ecosystem. This approach combines Arweave's decentralized storage and AO’s hyper-parallel computation to provide verifiable and trustworthy AI inference results, paving the way for more reliable and transparent AI applications.

Inspired by aos-llama and opML, we designed the Apus Network. The following sections introduce aos-llama, the idea of Trustless AI, and the resulting Apus Network design.

1. aos-llama Introduction


1. aos-llama Features

The main features of aos-llama are as follows:

1. Builds an AOS Image:

  • aos-llama uses the llama2.c inference engine to build an AOS image, enabling full on-chain execution in AO processes.

2. Provides a Lua Interface:

  • It provides a Lua interface to load Llama models from Arweave and generate text (a usage sketch follows below).

3. Includes Conversion Tools:

  • It includes tools for converting Llama model weights into a format that can be published to Arweave and loaded by an AO process.

4. Offers a Comprehensive Toolset:

  • It offers tools for building AOS images, converting models, and publishing to Arweave.

These features allow aos-llama to efficiently execute AI inference on AO.
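As a rough illustration of the Lua interface, the sketch below shows how an AO process might load a Llama model stored on Arweave and generate text from a prompt. This is a minimal sketch under assumptions: the module name Llama and the functions load, setPrompt, and run are illustrative names inferred from the description above rather than a verbatim copy of the aos-llama API, and the model transaction ID is a placeholder.

    -- Minimal sketch of driving a Llama model from an AO process.
    -- "Llama", load, setPrompt, and run are assumed names, not the confirmed aos-llama API.
    local Llama = require("llama")

    -- The model weights are referenced by their Arweave transaction ID (placeholder below)
    -- and exposed to the process as a local data file.
    local MODEL_TX_ID = "YOUR_MODEL_TX_ID"
    Llama.load("/data/" .. MODEL_TX_ID)

    -- Set the prompt and generate a bounded number of tokens.
    Llama.setPrompt("Explain what Trustless AI means in one sentence.")
    local output = Llama.run(64)  -- generate up to 64 tokens

    print(output)

In practice, the exact function names and the way model data is mounted into the process should be taken from the aos-llama repository itself.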

2. Technical Architecture Diagram

[Figure: aos-llama technical architecture diagram]

The technical architecture of aos-llama can be divided into the following main parts:

1. Llama AO Process:

  • In the AO environment, the Llama AO Process is responsible for running the Llama models.

2. AOS.wasm:

  • This is a WebAssembly module that is published to Arweave. It is built using the emcc-lua tool.

3. Build Docker:

  • The build process takes place in a Docker container, using emcc-lua to generate AOS.wasm.
  • The container includes the necessary source code and build scripts, such as llama-run.cpp, llama.lua, emcc_lua_lib, and main.lua.

4. Model Conversion and Publishing:

  • Specific tools are used to convert models to the appropriate format and publish them to Arweave.
  • After conversion, the model files are stored on Arweave, ready to be loaded by the AO Process.

5. Arweave:

  • Arweave acts as a decentralized storage platform, storing AOS.wasm and the model files.
  • The AO Process loads AOS.wasm and the model files from Arweave to perform AI inference tasks.

Detailed Process:

Pre-build:

  • In the Docker build container, the pre-build steps include compiling llama-run.cpp to generate llama-run.o and obtaining libllama.a from the GitHub repository.

Build AOS.wasm:

  • Using the emcc-lua tool, the pre-built llama-run.o and libllama.a are combined to generate the final AOS.wasm file.

Publish to Arweave:

  • The generated AOS.wasm and model files are published to Arweave for use by AO processes.

Load and Execute:

  • The AO process loads AOS.wasm and the model files from Arweave and executes the AI inference tasks. This enables on-chain AI model invocation and verifiable AI inference results.
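To make the load-and-execute step concrete, here is a minimal sketch of how a Llama AO Process might expose inference through ordinary aos message handlers. Handlers.add, Handlers.utils.hasMatchingTag, and Send are standard parts of the aos process environment; the Llama module and its functions are the same illustrative assumptions as in the earlier sketch, and the tag names are invented for the example.

    -- Sketch of a Llama AO Process that answers "Inference" messages (assumed tag names).
    local Llama = require("llama")  -- illustrative module name, as above

    Handlers.add(
      "inference",
      Handlers.utils.hasMatchingTag("Action", "Inference"),
      function(msg)
        -- The caller supplies the prompt in the message body.
        Llama.setPrompt(msg.Data or "")
        local result = Llama.run(128)  -- generate up to 128 tokens

        -- Return the generated text to the requesting process.
        Send({
          Target = msg.From,
          Action = "Inference-Result",
          Data = result
        })
      end
    )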

2. What is Trustless AI?

Trustless AI refers to a framework or system in which the results of AI inference are trustworthy and verifiable without relying on a central authority or intermediary. This concept ensures that an AI model's predictions and outputs are transparent, tamper-proof, and independently verifiable by any party.


opML (Optimistic Machine Learning on Blockchain) is an innovative approach that enables blockchain systems to perform AI model inference using an interactive fraud proof protocol, similar to optimistic rollup systems. This can be readily implemented based on the architecture of AO and Arweave.
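The optimistic idea behind opML can be summarized in a few lines: an inference result is accepted by default, and anyone who disagrees can re-run the computation and dispute it within a challenge window. The Lua sketch below is only a conceptual illustration with assumed names and a single re-execution check; the actual opML protocol resolves disputes through an interactive, multi-round fraud proof.

    -- Conceptual sketch of optimistic verification (not the actual opML protocol).
    -- A submitted result is assumed valid unless it is successfully challenged in time.
    local CHALLENGE_WINDOW = 100  -- measured in blocks; illustrative value

    local submissions = {}

    -- A node posts an inference result together with the height at which it was submitted.
    local function submitResult(id, result, blockHeight)
      submissions[id] = { result = result, submittedAt = blockHeight, finalized = false }
    end

    -- A challenger re-executes the inference; if the recomputed result differs
    -- within the window, the optimistic result is discarded.
    local function challenge(id, recomputedResult, blockHeight)
      local s = submissions[id]
      if s and not s.finalized
         and blockHeight - s.submittedAt <= CHALLENGE_WINDOW
         and recomputedResult ~= s.result then
        submissions[id] = nil  -- dispute succeeds
        return true
      end
      return false
    end

    -- Once the window passes without a successful challenge, the result becomes final.
    local function finalize(id, blockHeight)
      local s = submissions[id]
      if s and blockHeight - s.submittedAt > CHALLENGE_WINDOW then
        s.finalized = true
        return true
      end
      return false
    end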

3. Trustless GPU solution by Apus Network for achieving Trustless AI in AO

AO's primary focus is running processes (smart contracts) using WASM (WebAssembly), which is executed on CPUs to provide deterministic computation results. This determinism is crucial for smart contract verifiability.

However, GPU-based computations are non-deterministic and therefore pose challenges for traditional blockchain platforms. Since Apus Network is not designed specifically for smart contracts, its AI inference does not need to be limited by the WebAssembly (Wasm) execution model. Based on the design of the Storage-based Consensus Paradigm (SCP), it can still achieve Trustless AI on AO.

[Figure: Apus Network Trustless GPU architecture within the AO ecosystem]

Leveraging AO’s Hyper Parallel Computer, Apus Network’s Trustless GPU network is designed for secure, efficient, and scalable AI inference, providing the necessary computing power for AI within Arweave while satisfying the three pillars of Trustless AI: GPU integrity, AI model trustworthiness, and verifiable inference results. The diagram above provides a visual overview of the key components and their interactions.

Components and Workflow:

1. Apus AO Process:

  • Developers of AO Processes (smart contracts) or AI DApps that use AI models write their desired prompts for inference and select a model from the catalog of supported models in the Apus Network official documentation. Submitting a prompt triggers an Apus AO Process, and AO manages the data transfer and the connection between the AO Process and the AI model.

2. Arweave Storage:

  • AI models are stored on Arweave, ensuring the immutability and availability of these models.
  • Each AI model on Arweave has a unique ID for efficient querying. The models are published from the GPU network to Arweave through a gateway selected via the Ar.io gateway explorer.

3. GPU Network:

  • Apus Network utilizes a network of GPUs with established integrity to perform AI inference tasks.
  • The GPUs are identified and managed using DePHY DID, ensuring that the computations are performed by trusted hardware.
  • Apus Network is designed based on SCP, ensuring that all programs, messages, AI models, and logs are stored on Arweave, maintaining the trustworthiness and verifiability of the results.

4. Inference Request and Result Verification:

  • The process begins with an inference request from an AI DApp or AO Process, specifying the input and model ID.
  • The Apus AO Process manages the data transfer, querying the specified model from Arweave.
  • The AI inference is executed on the GPU network, and the results are returned to the Apus AO Process (a request-side sketch follows this list).

5. Security and Trustworthiness:

  • Based on the design of SCP and the verification principles of opML, verifying AI inference results becomes straightforward.
  • As noted above, the SCP-based design guarantees that all programs, messages, AI models, and logs are stored on Arweave, maintaining the trustworthiness and verifiability of the AI processes.

6. Cost Management:

  • Compute costs are managed by the users, who can either perform the calculations themselves or select specific network nodes for execution.
  • This flexibility allows users to optimize their cost and performance needs.
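From the point of view of an AI DApp or AO Process, the workflow above reduces to sending one message and handling one reply. The sketch below assumes a hypothetical Apus process ID and message schema (an Action of "Inference", a Model-ID tag, and an "Inference-Result" reply), since the article does not specify the exact interface.

    -- Sketch of an AO process requesting inference from an Apus AO Process.
    -- The process ID, tag names, and model ID are hypothetical placeholders.
    local APUS_PROCESS_ID = "YOUR_APUS_PROCESS_ID"

    -- Send the prompt together with the Arweave ID of the model to use.
    Send({
      Target = APUS_PROCESS_ID,
      Action = "Inference",
      ["Model-ID"] = "YOUR_MODEL_TX_ID",
      Data = "Summarize the three pillars of Trustless AI."
    })

    -- Handle the result once the GPU network has produced it and the
    -- Apus AO Process has relayed it back.
    Handlers.add(
      "inference-result",
      Handlers.utils.hasMatchingTag("Action", "Inference-Result"),
      function(msg)
        print("Inference result: " .. (msg.Data or ""))
      end
    )

Because every message, model, and result is also written to Arweave under SCP, a third party can later replay such an exchange and check the returned result.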

Conclusion

The Apus Network, through its Trustless GPU solution, provides a robust framework for achieving Trustless AI within the AO ecosystem. By combining the power of Arweave's decentralized storage with the integrity of a GPU network, Apus Network ensures that AI inference results are verifiable, trustworthy, and transparent. This innovative approach not only enhances the reliability of AI applications but also paves the way for more secure and efficient AI integration into various decentralized platforms.


🏆 Spot typos, grammatical errors, or inaccuracies in this article? Report and Earn!

Disclaimer: The content of this article is for reference only and does not constitute investment advice.

🔗 More about PermaDAO: Website | Twitter | Telegram | Discord | Medium | Youtube
