Lattice Cryptography LLL Algorithm: How to Solve the Shortest Vector Problem (SVP) (3) (End)

Table of contents

1. The computation of each orthogonal basis vector is polynomial time

2. The operations on the original lattice basis are polynomial time

3. The total running time of the algorithm

4. Open questions


1. The computation of each orthogonal basis vector is polynomial time

When computing the coefficients of the orthogonal basis, Cramer's rule gives the following formula for the coefficients:

a_j=\frac{\det G_j}{\det G},\quad \det G=(\det\Lambda_{i-1})^2

where G=(\langle b_k,b_l\rangle)_{1\leq k,l\leq i-1} is the Gram matrix of b_1,\cdots,b_{i-1} and G_j is G with its j-th column replaced by (-\langle b_k,b_i\rangle)_k.

This shows that each coefficient a_j is a rational number whose denominator is (\det\Lambda_{i-1})^2. These coefficients are used to form the orthogonal basis as follows:

\tilde b_i=b_i+\sum_{j=1}^{i-1}a_jb_j

This implies that the following two vectors must have integer entries:

D_{B,i}^2\tilde b_i,\quad D_B^2\tilde b_i

Since a nonzero integer vector has length at least 1, a lower bound on the orthogonal basis vectors follows immediately:

||\tilde b_j||\geq \frac{1}{D_{B,j}^2}

From this discussion, an upper bound on the orthogonal basis vectors also follows:

||\tilde b_i||\leq D_B^2
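As a concrete check of the integrality argument, the following sketch computes the Gram-Schmidt basis over exact rationals and verifies that scaling \tilde b_2 by D_{B,1}^2 (for a single vector, simply ||b_1||^2) clears the denominator. The toy basis and all function names here are illustrative, not from the original.

```python
from fractions import Fraction

def dot(u, v):
    """Exact inner product of two rational vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Exact Gram-Schmidt: returns the orthogonal vectors b~_i
    and the coefficients mu_{i,j} = <b_i, b~_j> / ||b~_j||^2."""
    n = len(B)
    Bt, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(B[i])
        for j in range(i):
            mu[i][j] = dot(B[i], Bt[j]) / dot(Bt[j], Bt[j])
            v = [vk - mu[i][j] * wk for vk, wk in zip(v, Bt[j])]
        Bt.append(v)
    return Bt, mu

# Toy basis: rows are the lattice basis vectors b_1, b_2.
B = [[Fraction(3), Fraction(1)], [Fraction(1), Fraction(2)]]
Bt, mu = gram_schmidt(B)

# D_{B,1}^2 = det(Lambda_1)^2 = ||b_1||^2 when Lambda_1 is spanned by b_1 alone.
D2 = dot(B[0], B[0])
# Scaling b~_2 by D_{B,1}^2 yields an integer vector, as claimed.
print([D2 * x for x in Bt[1]])  # every entry has denominator 1
```

Because the arithmetic is exact, the denominators that appear are precisely those predicted by the Cramer's-rule argument.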

2. The operations on the original lattice basis are polynomial time

This part proves that the lengths of the original lattice basis vectors do not grow too much during the reduction iterations.

The basis vectors b_i appearing at each step can be represented with a number of bits polynomial in M. The reason can be seen from the following inequality:

||b_i||^2=||\tilde b_i||^2+\sum_{j=1}^{i-1}\mu_{i,j}^2||\tilde b_j||^2\leq ||\tilde b_i||^2+\frac{1}{4}\sum_{j=1}^{i-1}||\tilde b_j||^2\leq nD_B^4

The first equality follows from the fact that \tilde b_1,\cdots,\tilde b_n are pairwise orthogonal; the first inequality follows from the LLL size-reduction condition |\mu_{i,j}|\leq \frac{1}{2}; and the last inequality follows from the upper bound on the orthogonal basis discussed above.

This result shows that the length of each original basis vector is bounded above by \sqrt n D_B^2, which takes \log(\sqrt n D_B^2) bits, a quantity polynomial in the input size. This proves that each basis vector b_i can be represented in a number of bits polynomial in M.

Next, we need to show that the length of b_i does not change too much during a reduction iteration. Considering the coefficient c_{i,j} used in the inner loop of the reduction operation, we get:

|c_{i,j}|=|\lceil\mu_{i,j}\rfloor|\leq\frac{2||b_i||}{||\tilde b_j||}\leq 2||b_i||D_{B,j}^2

The first inequality follows from the definition of rounding together with the Cauchy-Schwarz inequality, and the second follows from the lower bound on the orthogonal basis.

Then the following inequality is obtained for the updated vector:

||b_i-\sum_{j=1}^{i-1}c_{i,j}b_j||\leq ||b_i||+\sum_{j=1}^{i-1}|c_{i,j}|\,||b_j||\leq ||b_i||+2||b_i||\sum_{j=1}^{i-1}D_{B,j}^2||b_j||\leq ||b_i||\left(1+2n\sqrt n D_B^4\right)

The first inequality follows from the triangle inequality; the second substitutes the bound satisfied by |c_{i,j}|; the third substitutes the upper bound on the vector lengths ||b_j||.

The triangle inequality is the following relation between vectors:

|\overrightarrow m-\overrightarrow n|\leq |\overrightarrow m|+|\overrightarrow n|

It can be understood by drawing a triangle; equality holds when the two vectors point in opposite directions.

Finally, the inequality shows that after n iterations the length of the vector b_i grows by a factor of at most (4nD_B)^{4n}. After taking logarithms, this is clearly still representable in poly(M) bits.
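The reduction step analyzed above replaces b_i with b_i-\lceil\mu_{i,j}\rfloor b_j for j=i-1,\cdots,1. A minimal sketch over exact rationals (function names are illustrative, not from the original):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Exact Gram-Schmidt orthogonalization (returns only the b~_i)."""
    Bt = []
    for b in B:
        v = list(b)
        for w in Bt:
            m = dot(b, w) / dot(w, w)
            v = [vk - m * wk for vk, wk in zip(v, w)]
        Bt.append(v)
    return Bt

def size_reduce(B):
    """Size reduction: b_i <- b_i - round(mu_{i,j}) b_j for j = i-1, ..., 1.
    Subtracting integer multiples of earlier basis vectors leaves the
    Gram-Schmidt vectors unchanged, and afterwards every coefficient
    satisfies |mu_{i,j}| <= 1/2."""
    Bt = gram_schmidt(B)
    for i in range(1, len(B)):
        for j in range(i - 1, -1, -1):
            # c_{i,j} = round(mu_{i,j}), computed from the current b_i
            c = round(dot(B[i], Bt[j]) / dot(Bt[j], Bt[j]))
            B[i] = [bi - c * bj for bi, bj in zip(B[i], B[j])]
    return B

# Here mu_{2,1} = 3/2, so c = 2 and b_2 becomes (3,1) - 2*(2,0) = (-1,1).
print(size_reduce([[Fraction(2), Fraction(0)], [Fraction(3), Fraction(1)]]))
```

The inner loop runs over j in decreasing order so that each subtraction only disturbs coefficients that have not yet been reduced.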

3. The total running time of the algorithm

In the proofs above, there is only one place where the full size-reduction condition is used:

|\mu_{i,j}|\leq\frac{1}{2},\quad j<i

Everywhere else, the weaker condition on adjacent vectors suffices to prove the relevant theorems:

|\mu_{i+1,i}|\leq \frac{1}{2}

This suggests that the time complexity can be optimized by enforcing the condition only between adjacent pairs of vectors. In an algorithm optimized along these lines, the number of iterations is still polynomial, but it is not immediately clear whether the reduction step itself remains polynomial time.

So far, we can conclude that the running time of the LLL algorithm is polynomial in the input size, so the LLL algorithm is indeed an efficient algorithm.
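Putting the pieces together, here is a compact sketch of the full algorithm, with the swap condition checked only on the adjacent pair, as discussed above. It recomputes Gram-Schmidt on every pass for clarity rather than efficiency; all names and the default \delta=3/4 are illustrative.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Exact Gram-Schmidt: orthogonal vectors b~_i and coefficients mu."""
    n = len(B)
    Bt, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(B[i])
        for j in range(i):
            mu[i][j] = dot(B[i], Bt[j]) / dot(Bt[j], Bt[j])
            v = [vk - mu[i][j] * wk for vk, wk in zip(v, Bt[j])]
        Bt.append(v)
    return Bt, mu

def lll(B, delta=Fraction(3, 4)):
    """LLL reduction over exact rationals (clarity over speed)."""
    n, k = len(B), 1
    while k < n:
        Bt, _ = gram_schmidt(B)
        # Size-reduce b_k against b_{k-1}, ..., b_1.
        for j in range(k - 1, -1, -1):
            c = round(dot(B[k], Bt[j]) / dot(Bt[j], Bt[j]))
            if c:
                B[k] = [x - c * y for x, y in zip(B[k], B[j])]
        Bt, mu = gram_schmidt(B)  # refresh after the reduction
        # Lovasz condition, checked on the adjacent pair (k-1, k) only.
        if dot(Bt[k], Bt[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bt[k - 1], Bt[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)
    return B

# Example: the first vector of the reduced basis is shorter than the input's.
print(lll([[Fraction(3), Fraction(1)], [Fraction(1), Fraction(2)]]))
```

Exact rational arithmetic is exactly what the bit-size analysis in the previous sections is about: it guarantees the intermediate numbers stay polynomially bounded.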

4. Open questions

Historically, Gama and Nguyen carried out many experiments on typical lattices (published in "N. Gama and P. Q. Nguyen. Predicting lattice reduction. In EUROCRYPT, pages 31–51, 2008") and found that the results produced by the LLL algorithm are better than the worst-case predictions. The output length still grows exponentially with the dimension, but the base of the exponent observed in practice is much smaller than (\delta-1/4)^{-1/2}, the value guaranteed by the LLL analysis. Why this happens is worth further investigation.

How to apply the LLL algorithm to special lattices (such as rotations of the integer lattice Z^n, or ideal lattices) is also a question worth exploring.


Origin blog.csdn.net/forest_LL/article/details/125108376