zkPoT: zero-knowledge proofs of machine learning model training

1 Introduction

In the 2023 paper Experimenting with Zero-Knowledge Proofs of Training by Sanjam Garg et al., the zkPoT (zero-knowledge proof of training) protocol is designed to be:

  • Streaming-friendly.
  • RAM requirement that is not proportional to the size of the training circuit.
  • A combination of MPC-in-the-head and zk-SNARKs.
  • Total proof size below 10% of the training dataset size, with the protocol split into 3 stages. [Example: training a logistic regression model with mini-batch gradient descent on a 4 GB dataset of 262,144 records, each with 1024 features]
    • Offline stage: independent of the data.
    • Data-dependent, model-independent stage. In this stage:
      • The prover takes about 1 hour.
      • The verifier takes a few seconds.
    • Online stage: depends on both the data and the model. In this stage:
      • The prover takes less than 10 minutes.
      • The verifier takes less than half a minute.
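To make the training computation being proven concrete, here is a minimal pure-Python sketch of logistic regression trained with mini-batch gradient descent, the workload used in the example above. This is a toy illustration of the training algorithm only, not of the zkPoT protocol itself; the data, learning rate, and batch size are arbitrary choices, and a real run would use 262,144 records with 1024 features rather than this tiny synthetic set.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg_minibatch(X, y, lr=0.5, batch_size=2, epochs=300, seed=0):
    """Logistic regression via mini-batch gradient descent (toy sketch)."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [0.0] * d  # weight vector
    b = 0.0        # bias
    idx = list(range(n))
    for _ in range(epochs):
        rng.shuffle(idx)  # new random mini-batches each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            grad_w = [0.0] * d
            grad_b = 0.0
            for i in batch:
                pred = sigmoid(sum(w[j] * X[i][j] for j in range(d)) + b)
                err = pred - y[i]  # gradient of log-loss w.r.t. the logit
                for j in range(d):
                    grad_w[j] += err * X[i][j]
                grad_b += err
            m = len(batch)
            for j in range(d):
                w[j] -= lr * grad_w[j] / m
            b -= lr * grad_b / m
    return w, b

# Toy separable data: label 1 iff x0 + x1 > 1
X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9], [0.3, 0.3], [0.7, 0.7]]
y = [0, 1, 0, 1, 0, 1]
w, b = train_logreg_minibatch(X, y)
preds = [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) > 0.5 else 0
         for x in X]
```

In zkPoT, the prover would commit to the dataset and run this kind of iterative update inside the proof system, producing a proof that the final model is the honest result of the committed training run.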

For an open-source implementation, see:


Originally published at blog.csdn.net/mutourend/article/details/133558165