RPC Functionality of the Polygon zkEVM Prover

1. Introduction

https://github.com/0xPolygonHermez/zkevm-prover generates proofs for the Polygon zkEVM. It provides three main kinds of RPC functionality:

  • 1) As an Aggregator RPC client (the Prover module)
  • 2) As an Executor RPC server (the Executor module)
  • 3) As a StateDB RPC server (the StateDB module)

2. As an Aggregator RPC client

When the zkEVM Prover connects to an Aggregator server, it acts as an Aggregator RPC client:

  • Multiple zkEVM Provers can connect to the same Aggregator at the same time, providing more proof-generation power.

Channel implements the bidirectional communication channel between the Prover client and the Aggregator server: through Channel the prover receives aggregator messages and returns prover messages carrying the same id. The detailed RPC interface specification is in aggregator.proto:

/**
 * Define all methods implemented by the gRPC
 * Channel: prover receives aggregator messages and returns prover messages with the same id
 */
service AggregatorService {
    rpc Channel(stream ProverMessage) returns (stream AggregatorMessage) {}
}

/*type AggregatorService_ChannelServer interface {
	Send(*AggregatorMessage) error // send a request to the prover
	Recv() (*ProverMessage, error) // receive the prover's response
	grpc.ServerStream
}*/

message AggregatorMessage
{
    string id = 1;
    oneof request
    {
        GetStatusRequest get_status_request = 2;
        GenBatchProofRequest gen_batch_proof_request = 3;
        GenAggregatedProofRequest gen_aggregated_proof_request = 4;
        GenFinalProofRequest gen_final_proof_request = 5;
        CancelRequest cancel_request = 6;
        GetProofRequest get_proof_request = 7;
    }
}

message ProverMessage
{
    string id = 1;
    oneof response
    {
        GetStatusResponse get_status_response = 2;
        GenBatchProofResponse gen_batch_proof_response = 3;
        GenAggregatedProofResponse gen_aggregated_proof_response = 4;
        GenFinalProofResponse gen_final_proof_response = 5;
        CancelResponse cancel_response = 6;
        GetProofResponse get_proof_response = 7;
    }
}

When Channel is called against the Aggregator server, the paired AggregatorMessage and ProverMessage payloads fall into the following six categories:

  • 1) GetStatusRequest vs GetStatusResponse: the Aggregator queries the status of the Prover client:

    /**
     * @dev GetStatusRequest
     */
    message GetStatusRequest {}
    /**
     * @dev Response GetStatus
     * @param {status} - server status
     * - BOOTING: being ready to compute proofs
     * - COMPUTING: busy computing a proof
     * - IDLE: waiting for a proof to compute
     * - HALT: stop
     * @param {last_computed_request_id} - last proof identifier that has been computed
     * @param {last_computed_end_time} - last proof timestamp when it was finished
     * @param {current_computing_request_id} - id of the proof that is being computed
     * @param {current_computing_start_time} - timestamp when the proof that is being computed started
     * @param {version_proto} - .proto version
     * @param {version_server} - server version
     * @param {pending_request_queue_ids} - list of identifiers of proof requests that are in the pending queue
     * @param {prover_name} - id of this prover server, normally specified via config.json, or UNSPECIFIED otherwise; it does not change if prover reboots
     * @param {prover_id} - id of this prover instance or reboot; it changes if prover reboots; it is a UUID, automatically generated during the initialization
     * @param {number_of_cores} - number of cores in the system where the prover is running
     * @param {total_memory} - total memory in the system where the prover is running
     * @param {free_memory} - free memory in the system where the prover is running
     */
    message GetStatusResponse {
        enum Status {
            STATUS_UNSPECIFIED = 0;
            STATUS_BOOTING = 1;
            STATUS_COMPUTING = 2;
            STATUS_IDLE = 3;
            STATUS_HALT = 4;
        }
        Status status = 1;
        string last_computed_request_id = 2;
        uint64 last_computed_end_time = 3;
        string current_computing_request_id = 4;
        uint64 current_computing_start_time = 5;
        string version_proto = 6;
        string version_server = 7;
        repeated string pending_request_queue_ids = 8;
        string prover_name = 9;
        string prover_id = 10;
        uint64 number_of_cores = 11;
        uint64 total_memory = 12;
        uint64 free_memory = 13;
        uint64 fork_id = 14;
    }
    
  • 2) CancelRequest vs CancelResponse: the Aggregator asks the Prover client to cancel the specified proof request.

    /**
     * @dev CancelRequest
     * @param {id} - identifier of the proof request to cancel
     */
    message CancelRequest {
        string id = 1;
    }
    /**
     * @dev CancelResponse
     * @param {result} - request result
     */
    message CancelResponse {
        Result result = 1;
    }
    /**
     * @dev Result
     *  - OK: successfully completed
     *  - ERROR: request is not correct, i.e. input data is wrong
     *  - INTERNAL_ERROR: internal server error when delivering the response
     */
    enum Result {
        RESULT_UNSPECIFIED = 0;
        RESULT_OK = 1;
        RESULT_ERROR = 2;
        RESULT_INTERNAL_ERROR = 3;
    }
    
  • 3) GetProofRequest vs GetProofResponse: the Aggregator asks the Prover client for a specified recursive proof or final proof.

    /**
     * @dev Request GetProof
     * @param {id} - proof identifier of the proof request
     * @param {timeout} - time to wait until the service responds
     */
    message GetProofRequest {
        string id = 1;
        uint64 timeout = 2;
    }
    /**
     * @dev GetProofResponse
     * @param {id} - proof identifier
     * @param {final_proof} - groth16 proof + public circuit inputs
     * @param {recursive_proof} - recursive proof json
     * @param {result} - proof result
     *  - COMPLETED_OK: proof has been computed successfully and it is valid
     *  - ERROR: request error
     *  - COMPLETED_ERROR: proof has been computed successfully and it is not valid
     *  - PENDING: proof is being computed
     *  - INTERNAL_ERROR: server error during proof computation
     *  - CANCEL: proof has been cancelled
     * @param {result_string} - extends result information
     */
    message GetProofResponse {
        enum Result {
            RESULT_UNSPECIFIED = 0;
            RESULT_COMPLETED_OK = 1;
            RESULT_ERROR = 2;
            RESULT_COMPLETED_ERROR = 3;
            RESULT_PENDING = 4;
            RESULT_INTERNAL_ERROR = 5;
            RESULT_CANCEL = 6;
        }
        string id = 1;
        oneof proof {
            FinalProof final_proof = 2;
            string recursive_proof = 3;
        }
        Result result = 4;
        string result_string = 5;
    }
    
  • 4) GenBatchProofRequest vs GenBatchProofResponse: the Aggregator asks the Prover client to generate a batch proof.

    /**
     * @dev GenBatchProofRequest
     * @param {input} - input prover
     */
    message GenBatchProofRequest {
        InputProver input = 1;
    }
    /**
     * @dev InputProver
     * @param {public_inputs} - public inputs
     * @param {db} - database containing all key-values in smt matching the old state root
     * @param {contracts_bytecode} - key is the hash(contractBytecode), value is the bytecode itself
     */
    message InputProver {
        PublicInputs public_inputs = 1;
        map<string, string> db = 4; // For debug/testing purposes only. Don't fill this on production
        map<string, string> contracts_bytecode = 5; // For debug/testing purposes only. Don't fill this on production
    }
    /*
     * @dev PublicInputs
     * @param {old_state_root}
     * @param {old_acc_input_hash}
     * @param {old_batch_num}
     * @param {chain_id}
     * @param {batch_l2_data}
     * @param {global_exit_root}
     * @param {sequencer_addr}
     * @param {aggregator_addr}
     */
    message PublicInputs {
        bytes old_state_root = 1;
        bytes old_acc_input_hash = 2;
        uint64 old_batch_num = 3;
        uint64 chain_id = 4;
        uint64 fork_id = 5;
        bytes batch_l2_data = 6; // input: a batch of EVM transactions
        bytes global_exit_root = 7;
        uint64 eth_timestamp = 8;
        string sequencer_addr = 9;
        string aggregator_addr = 10;
    }
    
    /**
     * @dev GenBatchProofResponse
     * @param {id} - proof identifier, to be used in GetProofRequest()
     * @param {result} - request result
     */
    message GenBatchProofResponse {
        string id = 1;
        Result result = 2;
    }
    /**
     * @dev Result
     *  - OK: successfully completed
     *  - ERROR: request is not correct, i.e. input data is wrong
     *  - INTERNAL_ERROR: internal server error when delivering the response
     */
    enum Result {
        RESULT_UNSPECIFIED = 0;
        RESULT_OK = 1;
        RESULT_ERROR = 2;
        RESULT_INTERNAL_ERROR = 3;
    }
    
  • 5) GenAggregatedProofRequest vs GenAggregatedProofResponse: the Aggregator asks the Prover client to generate an aggregated proof.

    /**
     * @dev GenAggregatedProofRequest
     * @param {recursive_proof_1} - proof json of the first batch to aggregate
     * @param {recursive_proof_2} - proof json of the second batch to aggregate
     */
    message GenAggregatedProofRequest {
        string recursive_proof_1 = 1;
        string recursive_proof_2 = 2;
    }
    /**
     * @dev GenAggregatedProofResponse
     * @param {id} - proof identifier, to be used in GetProofRequest()
     * @param {result} - request result
     */
    message GenAggregatedProofResponse {
        string id = 1;
        Result result = 2;
    }
    /**
     * @dev Result
     *  - OK: successfully completed
     *  - ERROR: request is not correct, i.e. input data is wrong
     *  - INTERNAL_ERROR: internal server error when delivering the response
     */
    enum Result {
        RESULT_UNSPECIFIED = 0;
        RESULT_OK = 1;
        RESULT_ERROR = 2;
        RESULT_INTERNAL_ERROR = 3;
    }
    
  • 6) GenFinalProofRequest vs GenFinalProofResponse: the Aggregator asks the Prover client to generate the final proof.

    /**
     * @dev GenFinalProofRequest
     * @param {recursive_proof} - proof json of the batch or aggregated proof to finalise
     * @param {aggregator_addr} - address of the aggregator
     */
    message GenFinalProofRequest {
        string recursive_proof = 1;
        string aggregator_addr = 2;
    }
    /**
     * @dev Response GenFinalProof
     * @param {id} - proof identifier, to be used in GetProofRequest()
     * @param {result} - request result
     */
    message GenFinalProofResponse {
        string id = 1;
        Result result = 2;
    }
    /**
     * @dev Result
     *  - OK: successfully completed
     *  - ERROR: request is not correct, i.e. input data is wrong
     *  - INTERNAL_ERROR: internal server error when delivering the response
     */
    enum Result {
        RESULT_UNSPECIFIED = 0;
        RESULT_OK = 1;
        RESULT_ERROR = 2;
        RESULT_INTERNAL_ERROR = 3;
    }
    

The corresponding Aggregator server-side code is:

// Channel implements the bi-directional communication channel between the
// Prover client and the Aggregator server.
func (a *Aggregator) Channel(stream pb.AggregatorService_ChannelServer) error {
	metrics.ConnectedProver()
	defer metrics.DisconnectedProver()

	ctx := stream.Context()
	var proverAddr net.Addr
	p, ok := peer.FromContext(ctx)
	if ok {
		proverAddr = p.Addr
	}
	prover, err := prover.New(stream, proverAddr, a.cfg.ProofStatePollingInterval)
	if err != nil {
		return err
	}

	log := log.WithFields(
		"prover", prover.Name(),
		"proverId", prover.ID(),
		"proverAddr", prover.Addr(),
	)
	log.Info("Establishing stream connection with prover")

	// Check if prover supports the required Fork ID
	if !prover.SupportsForkID(a.cfg.ForkId) {
		err := errors.New("prover does not support required fork ID")
		log.Warn(FirstToUpper(err.Error()))
		return err
	}

	for {
		select {
		case <-a.ctx.Done():
			// server disconnected
			return a.ctx.Err()
		case <-ctx.Done():
			// client disconnected
			return ctx.Err()

		default:
			isIdle, err := prover.IsIdle() // checks whether GetStatusRequest returned GetStatusResponse_STATUS_IDLE
			if err != nil {
				log.Errorf("Failed to check if prover is idle: %v", err)
				time.Sleep(a.cfg.RetryTime.Duration)
				continue
			}
			if !isIdle {
				log.Debug("Prover is not idle")
				time.Sleep(a.cfg.RetryTime.Duration)
				continue
			}

			_, err = a.tryBuildFinalProof(ctx, prover, nil)
			if err != nil {
				log.Errorf("Error checking proofs to verify: %v", err)
			}

			proofGenerated, err := a.tryAggregateProofs(ctx, prover)
			if err != nil {
				log.Errorf("Error trying to aggregate proofs: %v", err)
			}
			if !proofGenerated {
				proofGenerated, err = a.tryGenerateBatchProof(ctx, prover)
				if err != nil {
					log.Errorf("Error trying to generate proof: %v", err)
				}
			}
			if !proofGenerated {
				// if no proof was generated (aggregated or batch) wait some time before retry
				time.Sleep(a.cfg.RetryTime.Duration)
			} // if proof was generated we retry immediately as probably we have more proofs to process
		}
	}
}

The Prover client-side code is:

void* aggregatorClientThread(void* arg)
{
    cout << "aggregatorClientThread() started" << endl;
    string uuid;
    AggregatorClient *pAggregatorClient = (AggregatorClient *)arg;

    while (true)
    {
        ::grpc::ClientContext context;
        std::unique_ptr<grpc::ClientReaderWriter<aggregator::v1::ProverMessage, aggregator::v1::AggregatorMessage>> readerWriter;
        readerWriter = pAggregatorClient->stub->Channel(&context);
        bool bResult;
        while (true)
        {
            ::aggregator::v1::AggregatorMessage aggregatorMessage;
            ::aggregator::v1::ProverMessage proverMessage;

            // Read a new aggregator message
            bResult = readerWriter->Read(&aggregatorMessage);
            if (!bResult)
            {
                cerr << "Error: aggregatorClientThread() failed calling readerWriter->Read(&aggregatorMessage)" << endl;
                break;
            }
            
            switch (aggregatorMessage.request_case())
            {
                case aggregator::v1::AggregatorMessage::RequestCase::kGetProofRequest:
                    break;
                case aggregator::v1::AggregatorMessage::RequestCase::kGetStatusRequest:
                case aggregator::v1::AggregatorMessage::RequestCase::kGenBatchProofRequest:
                case aggregator::v1::AggregatorMessage::RequestCase::kCancelRequest:
                    cout << "aggregatorClientThread() got: " << aggregatorMessage.ShortDebugString() << endl;
                    break;
                case aggregator::v1::AggregatorMessage::RequestCase::kGenAggregatedProofRequest:
                    cout << "aggregatorClientThread() got genAggregatedProof() request" << endl;
                    break;
                case aggregator::v1::AggregatorMessage::RequestCase::kGenFinalProofRequest:
                    cout << "aggregatorClientThread() got genFinalProof() request" << endl;
                    break;
                default:
                    break;
            }

            // We return the same ID we got in the aggregator message
            proverMessage.set_id(aggregatorMessage.id());

            string filePrefix = pAggregatorClient->config.outputPath + "/" + getTimestamp() + "_" + aggregatorMessage.id() + ".";

            if (pAggregatorClient->config.saveRequestToFile)
            {
                string2file(aggregatorMessage.DebugString(), filePrefix + "aggregator_request.txt");
            }

            switch (aggregatorMessage.request_case())
            {
                case aggregator::v1::AggregatorMessage::RequestCase::kGetStatusRequest:
                {
                    // Allocate a new get status response
                    aggregator::v1::GetStatusResponse * pGetStatusResponse = new aggregator::v1::GetStatusResponse();
                    zkassert(pGetStatusResponse != NULL);

                    // Call GetStatus
                    pAggregatorClient->GetStatus(*pGetStatusResponse);

                    // Set the get status response
                    proverMessage.set_allocated_get_status_response(pGetStatusResponse);
                    break;
                }

                case aggregator::v1::AggregatorMessage::RequestCase::kGenBatchProofRequest:
                {
                    // Allocate a new gen batch proof response
                    aggregator::v1::GenBatchProofResponse * pGenBatchProofResponse = new aggregator::v1::GenBatchProofResponse();
                    zkassert(pGenBatchProofResponse != NULL);

                    // Call GenBatchProof
                    pAggregatorClient->GenBatchProof(aggregatorMessage.gen_batch_proof_request(), *pGenBatchProofResponse);

                    // Set the gen batch proof response
                    proverMessage.set_allocated_gen_batch_proof_response(pGenBatchProofResponse);
                    break;
                }

                case aggregator::v1::AggregatorMessage::RequestCase::kGenAggregatedProofRequest:
                {
                    // Allocate a new gen aggregated proof response
                    aggregator::v1::GenAggregatedProofResponse * pGenAggregatedProofResponse = new aggregator::v1::GenAggregatedProofResponse();
                    zkassert(pGenAggregatedProofResponse != NULL);

                    // Call GenAggregatedProof
                    pAggregatorClient->GenAggregatedProof(aggregatorMessage.gen_aggregated_proof_request(), *pGenAggregatedProofResponse);

                    // Set the gen aggregated proof response
                    proverMessage.set_allocated_gen_aggregated_proof_response(pGenAggregatedProofResponse);
                    break;
                }

                case aggregator::v1::AggregatorMessage::RequestCase::kGenFinalProofRequest:
                {
                    // Allocate a new gen final proof response
                    aggregator::v1::GenFinalProofResponse * pGenFinalProofResponse = new aggregator::v1::GenFinalProofResponse();
                    zkassert(pGenFinalProofResponse != NULL);

                    // Call GenFinalProof
                    pAggregatorClient->GenFinalProof(aggregatorMessage.gen_final_proof_request(), *pGenFinalProofResponse);

                    // Set the gen final proof response
                    proverMessage.set_allocated_gen_final_proof_response(pGenFinalProofResponse);
                    break;
                }

                case aggregator::v1::AggregatorMessage::RequestCase::kCancelRequest:
                {
                    // Allocate a new cancel response
                    aggregator::v1::CancelResponse * pCancelResponse = new aggregator::v1::CancelResponse();
                    zkassert(pCancelResponse != NULL);

                    // Call Cancel
                    pAggregatorClient->Cancel(aggregatorMessage.cancel_request(), *pCancelResponse);

                    // Set the cancel response
                    proverMessage.set_allocated_cancel_response(pCancelResponse);
                    break;
                }

                case aggregator::v1::AggregatorMessage::RequestCase::kGetProofRequest:
                {
                    // Allocate a new get proof response
                    aggregator::v1::GetProofResponse * pGetProofResponse = new aggregator::v1::GetProofResponse();
                    zkassert(pGetProofResponse != NULL);

                    // Call GetProof
                    pAggregatorClient->GetProof(aggregatorMessage.get_proof_request(), *pGetProofResponse);

                    // Set the get proof response
                    proverMessage.set_allocated_get_proof_response(pGetProofResponse);
                    break;
                }

                default:
                {
                    cerr << "Error: aggregatorClientThread() received an invalid type=" << aggregatorMessage.request_case() << endl;
                    break;
                }
            }

            // Write the prover message
            bResult = readerWriter->Write(proverMessage);
            if (!bResult)
            {
                cerr << "Error: aggregatorClientThread() failed calling readerWriter->Write(proverMessage)" << endl;
                break;
            }
            
            switch (aggregatorMessage.request_case())
            {
                case aggregator::v1::AggregatorMessage::RequestCase::kGetStatusRequest:
                case aggregator::v1::AggregatorMessage::RequestCase::kGenBatchProofRequest:
                case aggregator::v1::AggregatorMessage::RequestCase::kGenAggregatedProofRequest:
                case aggregator::v1::AggregatorMessage::RequestCase::kGenFinalProofRequest:
                case aggregator::v1::AggregatorMessage::RequestCase::kCancelRequest:
                    cout << "aggregatorClientThread() sent: " << proverMessage.ShortDebugString() << endl;
                    break;
                case aggregator::v1::AggregatorMessage::RequestCase::kGetProofRequest:
                    if (proverMessage.get_proof_response().result() != aggregator::v1::GetProofResponse_Result_RESULT_PENDING)
                        cout << "aggregatorClientThread() getProof() response sent; result=" << proverMessage.get_proof_response().result_string() << endl;
                    break;
                default:
                    break;
            }
            
            if (pAggregatorClient->config.saveResponseToFile)
            {
                string2file(proverMessage.DebugString(), filePrefix + "aggregator_response.txt");
            }
        }
        cout << "aggregatorClientThread() channel broken; will retry in 5 seconds" << endl;
        sleep(5);
    }
    return NULL;
}

2.1 Generating a batch proof

When the Aggregator asks the Prover to generate a batch proof, the Prover will:

  • 1) Execute the input data (a batch of EVM transactions)
  • 2) Calculate the resulting state
  • 3) Generate a proof of that calculation based on the PIL polynomial definitions and PIL polynomial constraints.
    • The Executor module (i.e., the Executor RPC server) combines 14 state machines to process the input data and produce the evaluations of the committed polynomials required for proof generation. Each state machine generates its own computation evidence data and delegates the more complex proof computations to the next state machine.
    • The Prover module (i.e., the Aggregator RPC client) then calls the Stark module to generate a proof for the committed polynomials of the Executor state machines.
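As a rough illustration of how the Aggregator drives this request over the Channel stream, here is a minimal Go sketch. It is not the actual zkevm-node implementation: the pb alias for the stubs generated from aggregator.proto and the github.com/google/uuid helper are assumptions, and a real implementation would match response ids instead of assuming the next received message is the reply.

import (
	"fmt"

	"github.com/google/uuid"
	// pb is assumed to be the Go package generated from aggregator.proto
)

// requestBatchProof sends a GenBatchProofRequest over the bidirectional Channel
// stream and returns the proof id reported by the prover, to be used later in
// GetProofRequest.
func requestBatchProof(stream pb.AggregatorService_ChannelServer, input *pb.InputProver) (string, error) {
	msg := &pb.AggregatorMessage{
		Id: uuid.NewString(),
		Request: &pb.AggregatorMessage_GenBatchProofRequest{
			GenBatchProofRequest: &pb.GenBatchProofRequest{Input: input},
		},
	}
	if err := stream.Send(msg); err != nil {
		return "", err
	}
	// The prover replies with a ProverMessage carrying the same id.
	res, err := stream.Recv()
	if err != nil {
		return "", err
	}
	resp := res.GetGenBatchProofResponse()
	if resp == nil || resp.Result != pb.Result_RESULT_OK {
		return "", fmt.Errorf("gen batch proof request rejected: %v", resp.GetResult())
	}
	return resp.Id, nil
}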

2.2 Generating an aggregated proof

When the Aggregator asks the Prover to generate an aggregated proof, the Prover will:

  • Combine the two previously calculated batch proofs or aggregated proofs supplied by the Aggregator into a single aggregated proof.

2.3 Generating the final proof

When the Aggregator asks the Prover to generate the final proof, the Prover will:

  • Take the previously calculated aggregated proof supplied by the Aggregator and produce a verifiable final proof.
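All three generation requests (batch, aggregated and final) are asynchronous: the immediate response only carries a proof id, and the Aggregator then polls with GetProofRequest until the result is no longer RESULT_PENDING. Below is a sketch of that polling loop, under the same assumed pb stubs and uuid helper as the previous sketch (plus fmt and time); again, this is an illustration rather than the zkevm-node code.

// waitProof polls GetProofRequest over the Channel stream until the prover
// reports a final status (anything other than RESULT_PENDING).
func waitProof(stream pb.AggregatorService_ChannelServer, proofID string, interval time.Duration) (*pb.GetProofResponse, error) {
	for {
		req := &pb.AggregatorMessage{
			Id: uuid.NewString(),
			Request: &pb.AggregatorMessage_GetProofRequest{
				GetProofRequest: &pb.GetProofRequest{Id: proofID},
			},
		}
		if err := stream.Send(req); err != nil {
			return nil, err
		}
		msg, err := stream.Recv()
		if err != nil {
			return nil, err
		}
		resp := msg.GetGetProofResponse()
		if resp == nil {
			return nil, fmt.Errorf("unexpected prover message for proof %s", proofID)
		}
		if resp.Result != pb.GetProofResponse_RESULT_PENDING {
			// On RESULT_COMPLETED_OK the oneof proof field carries either a
			// FinalProof or a recursive proof JSON string.
			return resp, nil
		}
		time.Sleep(interval)
	}
}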

3. As an Executor RPC server

As an Executor RPC server it does not generate proofs (proof generation is handled by the Aggregator RPC client module). The Executor:

  • Executes the input data (a batch of EVM transactions) and calculates the resulting state.
  • Provides a fast way to check:
    • whether a proposed batch of transactions is correctly built, and whether its workload fits within what can be proven in a single batch.
  • When called through the Executor service, the Executor module only uses the Main state machine; since no proof needs to be generated, no committed polynomials are required.

Other node services (including but not limited to the Aggregator) call the ProcessBatch function. The detailed interface specification is in executor.proto; a minimal client sketch follows the message definitions below.

service ExecutorService {
    /// Processes a batch
    rpc ProcessBatch(ProcessBatchRequest) returns (ProcessBatchResponse) {}
}

message ProcessBatchRequest {
    bytes old_state_root = 1;
    bytes old_acc_input_hash = 2;
    uint64 old_batch_num = 3;
    uint64 chain_id = 4;
    uint64 fork_id = 5;
    bytes batch_l2_data = 6;
    bytes global_exit_root = 7;
    uint64 eth_timestamp = 8;
    string coinbase = 9;
    uint32 update_merkle_tree = 10;
    // flag to indicate that counters should not be taken into account
    uint64 no_counters = 11;
    // from is used for unsigned transactions with sender
    string from = 12;
    // For testing purposes only
    map<string, string> db = 13;
    map<string, string> contracts_bytecode = 14; // For debug/testing purposes only. Don't fill this on production
    TraceConfig trace_config = 15;
}

message ProcessBatchResponse {
    bytes new_state_root = 1;
    bytes new_acc_input_hash = 2;
    bytes new_local_exit_root = 3;
    uint64 new_batch_num = 4;
    uint32 cnt_keccak_hashes = 5;
    uint32 cnt_poseidon_hashes = 6;
    uint32 cnt_poseidon_paddings = 7;
    uint32 cnt_mem_aligns = 8;
    uint32 cnt_arithmetics = 9;
    uint32 cnt_binaries = 10;
    uint32 cnt_steps = 11;
    uint64 cumulative_gas_used = 12;
    repeated ProcessTransactionResponse responses = 13;
    ExecutorError error = 14;
    map<string, InfoReadWrite> read_write_addresses = 15;
}
message ProcessTransactionResponse {
    // Hash of the transaction
    bytes tx_hash = 1;
    // RLP encoded transaction
    // [nonce, gasPrice, gasLimit, to, value, data, v, r, s]
    bytes rlp_tx = 2;
    // Type indicates legacy transaction
    // It will be always 0 (legacy) in the executor
    uint32 type = 3;
    // Returned data from the runtime (function result or data supplied with revert opcode)
    bytes return_value = 4;
    // Total gas left as result of execution
    uint64 gas_left = 5;
    // Total gas used as result of execution or gas estimation
    uint64 gas_used = 6;
    // Total gas refunded as result of execution
    uint64 gas_refunded = 7;
    // Any error encountered during the execution
    RomError error = 8;
    // New SC Address in case of SC creation
    string create_address = 9;
    // State Root
    bytes state_root = 10;
    // Logs emitted by LOG opcode
    repeated Log logs = 11;
    // Trace
    repeated ExecutionTraceStep execution_trace = 13;
    CallTrace call_trace = 14;
}
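For illustration, a minimal Go caller of this service could look like the sketch below. The executor URI, the pb alias for the stubs generated from executor.proto, and the helper name are assumptions, not the zkevm-node code.

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	// pb is assumed to be the Go package generated from executor.proto
)

// processBatch dials the Executor service and executes a batch of L2 transactions,
// returning the resulting state root and counters without generating any proof.
func processBatch(ctx context.Context, executorURI string, req *pb.ProcessBatchRequest) (*pb.ProcessBatchResponse, error) {
	conn, err := grpc.Dial(executorURI, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	resp, err := pb.NewExecutorServiceClient(conn).ProcessBatch(ctx, req)
	if err != nil {
		return nil, err
	}
	// resp.NewStateRoot is the state root after executing batch_l2_data, and the
	// cnt_* counters indicate whether the batch fits within a single provable batch.
	return resp, nil
}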

4. As a StateDB RPC server

The StateDB service:

  • Provides an interface to access the system state (a Merkle tree) and the database where that state is stored.
  • Is used by the Executor and Prover modules as the single source of state. It can be used to retrieve state details such as account balances.

The detailed interface specification is in statedb.proto:

/**
 * Define all methods implemented by the gRPC
 * Get: get the value for a specific key
 * Set: set the value for a specific key
 * SetProgram: set the byte data for a specific key
 * GetProgram: get the byte data for a specific key
 * Flush: wait until all pending writes to the DB are done
 */
service StateDBService {
    rpc Set(SetRequest) returns (SetResponse) {}
    rpc Get(GetRequest) returns (GetResponse) {}
    rpc SetProgram(SetProgramRequest) returns (SetProgramResponse) {}
    rpc GetProgram(GetProgramRequest) returns (GetProgramResponse) {}
    rpc LoadDB(LoadDBRequest) returns (google.protobuf.Empty) {}
    rpc LoadProgramDB(LoadProgramDBRequest) returns (google.protobuf.Empty) {}
    rpc Flush (google.protobuf.Empty) returns (FlushResponse) {}
}

It provides the following seven RPC interfaces:

  • 1)rpc Set(SetRequest) returns (SetResponse) {}:

    /**
     * @dev SetRequest
     * @param {old_root} - merkle-tree root
     * @param {key} - key to set
     * @param {value} - scalar value to set (HEX string format)
     * @param {persistent} - indicates if it should be stored in the SQL database (true) or only in the memory cache (false)
     * @param {details} - indicates if it should return all response parameters (true) or just the new root (false)
     * @param {get_db_read_log} - indicates if it should return the DB reads generated during the execution of the request
     */
    message SetRequest {
        Fea old_root = 1;
        Fea key = 2;
        string value = 3;
        bool persistent = 4;
        bool details = 5;
        bool get_db_read_log = 6;
    }
    /**
     * @dev SetResponse
     * @param {old_root} - merkle-tree root
     * @param {new_root} - merkle-tree new root
     * @param {key} - key to look for
     * @param {siblings} - array of siblings
     * @param {ins_key} - key found
     * @param {ins_value} - value found (HEX string format)
     * @param {is_old0} - is new insert or delete
     * @param {old_value} - old value (HEX string format)
     * @param {new_value} - new value (HEX string format)
     * @param {mode}
     * @param {proof_hash_counter}
     * @param {db_read_log} - list of db records read during the execution of the request
     * @param {result} - result code
     */
    message SetResponse {
        Fea old_root = 1;
        Fea new_root = 2;
        Fea key = 3;
        map<uint64, SiblingList> siblings = 4;
        Fea ins_key = 5;
        string ins_value = 6;
        bool is_old0 = 7;
        string old_value = 8;
        string new_value = 9;
        string mode = 10;
        uint64 proof_hash_counter = 11;
        map<string, FeList> db_read_log = 12;
        ResultCode result = 13;
    }
    
  • 2)rpc Get(GetRequest) returns (GetResponse) {}:

    /**
     * @dev GetRequest
     * @param {root} - merkle-tree root
     * @param {key} - key to look for
     * @param {details} - indicates if it should return all response parameters (true) or just the new root (false)
     * @param {get_db_read_log} - indicates if it should return the DB reads generated during the execution of the request
     */
    message GetRequest {
        Fea root = 1;
        Fea key = 2;
        bool details = 3;
        bool get_db_read_log = 4;
    }
    /**
     * @dev GetResponse
     * @param {root} - merkle-tree root
     * @param {key} - key to look for
     * @param {siblings} - array of siblings
     * @param {ins_key} - key found
     * @param {ins_value} - value found (HEX string format)
     * @param {is_old0} - is new insert or delete
     * @param {value} - value retrieved (HEX string format)
     * @param {proof_hash_counter}
     * @param {db_read_log} - list of db records read during the execution of the request
     * @param {result} - result code
     */
    message GetResponse {
        Fea root = 1;
        Fea key = 2;
        map<uint64, SiblingList> siblings = 3;
        Fea ins_key = 4;
        string ins_value = 5;
        bool is_old0 = 6;
        string value = 7;
        uint64 proof_hash_counter = 8;
        map<string, FeList> db_read_log = 9;
        ResultCode result = 10;
    }
    
  • 3)rpc SetProgram(SetProgramRequest) returns (SetProgramResponse) {}:

    /**
     * @dev SetProgramRequest
     * @param {key} - key to set
     * @param {data} - Program data to store
     * @param {persistent} - indicates if it should be stored in the SQL database (true) or only in the memory cache (false)
     */
    message SetProgramRequest {
        Fea key = 1;
        bytes data = 2;
        bool persistent = 3;
    }
    /**
     * @dev SetProgramResponse
     * @param {result} - result code
     */
    message SetProgramResponse {
        ResultCode result = 1;
    }
    
  • 4)rpc GetProgram(GetProgramRequest) returns (GetProgramResponse) {}:

    /**
     * @dev GetProgramRequest
     * @param {key} - key to get program data
     */
    message GetProgramRequest {
        Fea key = 1;
    }
    /**
     * @dev GetProgramResponse
     * @param {data} - program data retrieved
     * @param {result} - result code
     */
    message GetProgramResponse {
        bytes data = 1;
        ResultCode result = 2;
    }

  • 5)rpc LoadDB(LoadDBRequest) returns (google.protobuf.Empty) {}:

    /**
     * @dev LoadDBRequest
     * @param {input_db} - list of db records (MT) to load in the database
     * @param {persistent} - indicates if it should be stored in the SQL database (true) or only in the memory cache (false)
     */
    message LoadDBRequest {
        map<string, FeList> input_db = 1;
        bool persistent = 2;
    }
    
  • 6)rpc LoadProgramDB(LoadProgramDBRequest) returns (google.protobuf.Empty) {}:

    /**
     * @dev LoadProgramDBRequest
     * @param {input_program_db} - list of db records (program) to load in the database
     * @param {persistent} - indicates if it should be stored in the SQL database (true) or only in the memory cache (false)
     */
    message LoadProgramDBRequest {
        map<string, bytes> input_program_db = 1;
        bool persistent = 2;
    }
    
  • 7)rpc Flush (google.protobuf.Empty) returns (FlushResponse) {}:

    /**
     * @dev FlushResponse
     * @param {result} - result code
     */
    message FlushResponse {
        ResultCode result = 1;
    }
    

The common message types used above are defined as follows:


/**
 * @dev Array of 4 FE
 * @param {fe0} - Field Element value for pos 0
 * @param {fe1} - Field Element value for pos 1
 * @param {fe2} - Field Element value for pos 2
 * @param {fe3} - Field Element value for pos 3
*/
message Fea {
    uint64 fe0 = 1;
    uint64 fe1 = 2;
    uint64 fe2 = 3;
    uint64 fe3 = 4;
}

/**
 * @dev FE (Field Element) List
 * @param {fe} - list of Fe
*/
message FeList {
    repeated uint64 fe = 1;
}

/**
 * @dev Siblings List
 * @param {sibling} - list of siblings
*/
message SiblingList {
    repeated uint64 sibling = 1;
}

/**
 * @dev Result code
 * @param {code} - result code
*/
message ResultCode {
    enum Code {
        CODE_UNSPECIFIED = 0;
        CODE_SUCCESS = 1;
        CODE_DB_KEY_NOT_FOUND = 2; // Requested key was not found in database
        CODE_DB_ERROR = 3; // Error connecting to database, or processing request
        CODE_INTERNAL_ERROR = 4;
        CODE_SMT_INVALID_DATA_SIZE = 14; // Invalid size for the data of MT node
    }
    Code code = 1;
}
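As a small usage illustration (assuming a pb alias for the stubs generated from statedb.proto, plus context, fmt and a connected gRPC client; this is not the production code path), reading a leaf under a given Merkle-tree root and checking the result code might look like:

// stateDBGet reads the value stored under key in the tree rooted at root.
// Both root and key are 4-limb field-element arrays (Fea); the value is returned
// as a HEX string, as described in GetResponse.
func stateDBGet(ctx context.Context, client pb.StateDBServiceClient, root, key *pb.Fea) (string, error) {
	resp, err := client.Get(ctx, &pb.GetRequest{Root: root, Key: key})
	if err != nil {
		return "", err
	}
	if resp.GetResult().GetCode() != pb.ResultCode_CODE_SUCCESS {
		return "", fmt.Errorf("statedb Get failed with code %v", resp.GetResult().GetCode())
	}
	return resp.Value, nil
}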

