The LWE problem in lattice-based cryptography

LWE

The LWE (Learning With Errors) problem was first proposed, together with a security proof, by Oded Regev in 2005; Kawachi et al. gave efficiency improvements, and some further important improvements in efficiency were later proposed by Peikert et al.

Lattice theory background

Lattice-based cryptography is a relatively popular branch of cryptography, and it is believed to resist quantum computing (the post-quantum ciphers surveyed in recent years generally include hash-based ciphers, ciphers based on error-correcting codes, lattice-based ciphers, multivariate quadratic ciphers, and symmetric-key ciphers).

Lattice: Let $v_1,\cdots,v_n \in \mathbb{R}^m$ be a set of linearly independent vectors. The lattice $L$ generated by $v_1,\cdots,v_n$ is the set of all linear combinations of $v_1,\cdots,v_n$ with integer coefficients, that is, $L = \{a_1v_1+a_2v_2+\cdots+a_nv_n : a_1,a_2,\cdots,a_n\in\mathbb{Z}\}$.

The dimension of the lattice $L$ is equal to the number of vectors in a basis of $L$.

Suppose $v_1,v_2,\cdots,v_n$ is a basis of the lattice $L$ and $w_1,w_2,\cdots,w_n \in L$. Then there must exist integer coefficients $a_{ij}$ such that:

$$\begin{cases} w_1=a_{11}v_1+a_{12}v_2+\cdots+a_{1n}v_n \\ w_2=a_{21}v_1+a_{22}v_2+\cdots+a_{2n}v_n \\ \vdots \\ w_n=a_{n1}v_1+a_{n2}v_2+\cdots+a_{nn}v_n \end{cases}$$

Thus, operations on lattice bases can be expressed as matrix operations.

Related Mathematical Definitions

Theorem 3.1 Any two bases of a lattice $L$ can be transformed into each other by left-multiplication by a matrix with integer entries and determinant $\pm 1$.

Definition 3.2 A lattice in which all vectors have integer coordinates is called an integer lattice. Equivalently, for $m\ge 1$, an integer lattice is an additive subgroup of $\mathbb{Z}^m$.

Definition 3.3 A subset $L$ of $\mathbb{R}^m$ is an additive subgroup if it is closed under addition and subtraction. It is a discrete additive subgroup if, in addition, the following condition holds:
there exists a constant $\varepsilon > 0$ such that for every $v \in L$, $L \cap \{\omega \in \mathbb{R}^m : \|v-\omega\| < \varepsilon\} = \{v\}$.
In other words, if you take any vector $v$ in $L$ and draw a solid ball of radius $\varepsilon$ centered at $v$, then no other point of $L$ falls inside that ball.

Theorem 3.2 A subset of $\mathbb{R}^m$ is a lattice if and only if it is a discrete additive subgroup.

Definition 3.4 Let $L$ be a lattice of dimension $n$ and let $v_1,v_2,\ldots,v_n$ be a basis of $L$. The fundamental domain corresponding to this basis is the following set of vectors: $F(v_1,v_2,\ldots,v_n)=\{t_1v_1+t_2v_2+\cdots+t_nv_n : 0\le t_i < 1\}$.

Theorem 3.3 Let $L\subset\mathbb{R}^n$ be a lattice of dimension $n$ and let $F$ be a fundamental domain for it. Then for any vector $w\in\mathbb{R}^n$ there exist a unique $t\in F$ and a unique $v \in L$ satisfying $w=t+v$; equivalently, $w$ lies in exactly one translate $F+v = \{t+v : t\in F\}$ of $F$.

Definition 3.5 Let $L$ be a lattice of dimension $n$ and $F$ a fundamental domain of $L$. The $n$-dimensional volume of $F$ is called the determinant of $L$ (sometimes also called the covolume), written $\det L$.

Hard problems

The Shortest Vector Problem (SVP):

Find the shortest nonzero vector in the lattice $L$; that is, find a nonzero $v \in L$ whose Euclidean norm $\|v\|$ is minimal.

The Closest Vector Problem (CVP):

Given a vector $w$ that is not in the lattice $L$, find a vector $v$ in the lattice such that $\|w-v\|$ is minimal.

Both CVP and SVP are NP-hard (SVP under randomized reductions), so solving them is very difficult, and these two problems can therefore serve as the basis of a cryptosystem.

LWE-related mathematical foundation

LWE is essentially an instance of the CVP problem.


LWE comes in two flavors: search LWE (SLWE) and decision LWE (DLWE). SLWE asks us to find $s$, while DLWE only asks us to distinguish whether a given pair is an output of the LWE problem or a randomly generated vector.
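A minimal SageMath sketch of the two flavors may make this concrete (the parameters and the Gaussian error sampler below are assumptions chosen purely for illustration):

from random import gauss

n, m, q = 16, 32, 769
A = random_matrix(Zmod(q), m, n)
s = random_vector(Zmod(q), n)
e = vector(Zmod(q), [round(gauss(0, 2)) for _ in range(m)])   # small noise

search_instance   = (A, A*s + e)                    # SLWE: recover s from this pair
decision_instance = (A, random_vector(Zmod(q), m))  # DLWE: LWE sample or uniform vector?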

Public Key Cryptosystem Based on LWE

Parameters: $n, m, l, t, r, q$, and a real number $\alpha > 0$.

Private key: choose $S \in \mathbb{Z}^{n\times l}_q$ uniformly at random.

Public key: choose $A \in \mathbb{Z}^{m\times n}_q$ uniformly at random and sample an error matrix $E \in \mathbb{Z}^{m\times l}_q$ whose entries follow the error distribution determined by $\alpha$. The public key is $(A, P=AS+E)\in \mathbb{Z}^{m\times n}_q \times \mathbb{Z}^{m\times l}_q$.

Encryption: given a message $v \in \mathbb{Z}^l_t$ and a public key $(A,P)$, choose a vector $a \in \{-r,-r+1,\ldots,r\}^m$ uniformly at random, then output the ciphertext $(u = A^T a,\ c = P^T a + f(v)) \in \mathbb{Z}^n_q \times \mathbb{Z}^l_q$, where $f$ encodes $\mathbb{Z}^l_t$ into $\mathbb{Z}^l_q$ (roughly, each coordinate is scaled by $q/t$).

Decryption: given a ciphertext $(u,c)\in \mathbb{Z}^n_q \times \mathbb{Z}^l_q$ and a private key $S \in \mathbb{Z}^{n\times l}_q$, output $f^{-1}(c-S^T u)$.

This cryptosystem has a certain positive probability of decryption error, but this probability can be made very small by selecting appropriate parameters. Furthermore, if an error-correcting code is used to encode the message before encryption, the error probability can be reduced to a practically negligible level. (For details on estimating the decryption error probability, see "Quantum Computing Resistant Cryptography".)
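A toy SageMath sketch of this cryptosystem may help. The parameters, the Gaussian error sampler, and the concrete encoding $f$ (scale each coordinate by $q/t$ and round) are assumptions filled in for illustration; they are not fixed by the description above.

from random import gauss

# Insecure demo parameters (assumption)
n, m, l, t, r, q = 32, 128, 4, 2, 1, 3329
alpha = 0.002
sigma = alpha * q

D = lambda: round(gauss(0, sigma))                                       # toy error sampler
f    = lambda v: vector(Zmod(q), [round(q/t * int(x)) for x in v])       # encode Z_t^l -> Z_q^l
finv = lambda w: vector(Zmod(t), [round(t/q * int(x)) % t for x in w])   # decode Z_q^l -> Z_t^l

# Key generation
S = random_matrix(Zmod(q), n, l)                          # private key
A = random_matrix(Zmod(q), m, n)
E = matrix(Zmod(q), m, l, [D() for _ in range(m * l)])    # small error matrix
P = A * S + E                                             # public key (A, P)

# Encryption of a message v in Z_t^l
v = vector(Zmod(t), [randrange(t) for _ in range(l)])
a = vector(Zmod(q), [randrange(-r, r + 1) for _ in range(m)])
u = A.transpose() * a
c = P.transpose() * a + f(v)

# Decryption: f^{-1}(c - S^T u) recovers v with high probability
print(v)
print(finv(c - S.transpose() * u))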

appearing in ctf

Randomly select a matrix $A \in \mathbb{Z}^{m\times n}_q$, a random vector $s \in \mathbb{Z}^n_q$, and a random noise vector $e \in \varepsilon^m$ (each entry drawn from the error distribution $\varepsilon$).

The output of the LWE system is $g_A(s,e) = As+e \bmod q$.

The LWE problem is: given the matrix $A$ and the output of the LWE system $g_A(s,e)$, recover $s$.

(The error vector in LWE is a short vector whose entries are drawn from a normal distribution.)

Because errors are added, Gaussian elimination accumulates and amplifies these errors, so the computed solution ends up far from the actual value.
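A small SageMath sketch of such an instance (toy parameters, an assumption for illustration): solving a square subsystem by Gaussian elimination ignores $e$ and almost always returns a wrong $s$.

from random import gauss

m, n, q = 60, 20, 1009
A = random_matrix(Zmod(q), m, n)
s = random_vector(Zmod(q), n)
e = vector(Zmod(q), [round(gauss(0, 2)) for _ in range(m)])
b = A*s + e                               # g_A(s, e) = A*s + e mod q

A_top = A.matrix_from_rows(range(n))      # take n of the m equations
b_top = vector(Zmod(q), list(b)[:n])
s_guess = A_top.solve_right(b_top)        # exact solve, assuming A_top is invertible
print(s_guess == s)                       # almost always False because of e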

solve

Construct the matrix:
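The original figure here is missing; a construction consistent with the Example 1 solution code later in this post (reconstructed from that code as an assumption, not the lost figure) uses the basis whose rows are the vectors $q e_1,\dots,q e_m$ together with the rows of $A^T$:

$$
B = \begin{pmatrix} qI_m \\ A^{T} \end{pmatrix} \in \mathbb{Z}^{(m+n)\times m}
$$

Every vector of the form $As \bmod q$ is an integer combination of the rows of $B$, so the target $b = As + e \bmod q$ lies within $\|e\|$ of a lattice point; recovering that lattice point gives $As \bmod q$, and then $s$ follows by linear algebra modulo $q$.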

Using the LLL algorithm and the Babai nearest plane algorithm, an approximate solution to this closest vector problem can be found in polynomial time.

Using the embedding technique to construct an embedding lattice turns the problem into an SVP instance, which can also be solved this way.

LLL algorithm

Gauss proposed an algorithm for finding a high-quality basis of a two-dimensional lattice. Its basic idea is to alternately subtract multiples of one basis vector from the other until no further improvement is possible (that is, until a specific condition in the algorithm is satisfied):

Gaussian lattice reduction algorithm

Input: a basis $v_1$, $v_2$ of the lattice $L$

Output: a set of basis vectors with good orthogonality

  1. Loop:

  2. If $\|v_2\| < \|v_1\|$, swap $v_1$ and $v_2$.

  3. Compute $m = \left\lceil\frac{v_1\cdot v_2}{\|v_1\|^2}\right\rfloor$ (the rounded Gram-Schmidt coefficient).

  4. If $m = 0$, return the basis vectors $v_1$ and $v_2$.

  5. Replace $v_2$ with $v_2 - mv_1$.

  6. Continue the loop.

More precisely, when the algorithm terminates, the vector $v_1$ is the shortest nonzero vector in the lattice $L$, so in dimension two this algorithm solves SVP exactly.

sagemath code

def Gauss(x,y):
    # step 1
    v1 = x; v2 = y
    finished = False
    # step 2
    while not finished:
        # (a)
        m = round(( v2.dot_product(v1) / v1.dot_product(v1) ))
        # (b)
        v2 = v2 - m*v1
        # (c)
        if v1.norm() <= v2.norm():
            finished = True
        else:
            v1, v2 = v2, v1
    
    return v1, v2

LLL algorithm

The LLL algorithm, published in 1982, can be regarded as a generalization of Gauss's algorithm to higher-dimensional lattices.

This algorithm can solve SVP and CVP in some low-dimensional lattices. However, as the dimension of the lattice increases, the algorithm becomes less effective, so that for high-dimensional lattices even the LLL algorithm cannot solve SVP and CVP well. Therefore, the security of most lattice-based cryptosystems depends on whether the LLL algorithm and other lattice reduction algorithms can efficiently solve apprSVP (the approximate shortest vector problem) and apprCVP (the approximate closest vector problem).

A basis produced by LLL reduction must satisfy two conditions (a small checker for them is sketched after the list):

1. Size reduction: for all $1 \le j < i \le n$, $|\mu_{i,j}| \le \frac{1}{2}$, where $\mu_{i,j}=\frac{v_i\cdot v_j^*}{\|v_j^*\|^2}$ are the Gram-Schmidt coefficients.

2. Lovász condition: for all $1 < i \le n$, $\|v_i^*\|^2 \ge \left(\frac{3}{4}-\mu^2_{i,i-1}\right)\|v_{i-1}^*\|^2$ holds.
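As a sanity check, here is a small SageMath helper (a hypothetical illustration, not a library function) that tests these two conditions for a basis given as the rows of a matrix B:

def is_LLL_reduced(B, delta=0.75):
    Bstar, mu = B.gram_schmidt()
    n = B.nrows()
    # size-reduction condition
    size_reduced = all(abs(mu[i][j]) <= 1/2 for i in range(n) for j in range(i))
    # Lovász condition with the usual delta = 3/4
    lovasz = all(Bstar[i].dot_product(Bstar[i]) >=
                 (delta - mu[i][i-1]^2) * Bstar[i-1].dot_product(Bstar[i-1])
                 for i in range(1, n))
    return size_reduced and lovasz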


The sagemath code is simple to implement:

def max(a, b):
    return a if a > b else b

def LLL_v0(M, delta=0.75):
    B = deepcopy(M)
    Q, mu = B.gram_schmidt()
    n, k = B.nrows(), 1
    
    while k < n:
        
        # size reduction step
        for j in reversed(range(k)):
            if abs( mu[k][j] ) > 0.5:
                B[k] = B[k] - round( mu[k][j] ) * B[j]
                Q, mu = B.gram_schmidt()
        
        # swap step 
        if Q[k].dot_product(Q[k]) >= (delta - mu[k][k-1]^2) * Q[k-1].dot_product(Q[k-1]):
            k = k + 1
        else:
            B[k], B[k-1] = B[k-1], B[k]
            Q, mu = B.gram_schmidt()
            k = max(k-1,1)
    
    return B 
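Quick usage on a toy basis (the matrix below is just an arbitrary nonsingular example, chosen here for illustration), compared against Sage's built-in reduction; the two results may differ in row order or sign, but both are LLL-reduced bases of the same lattice:

M = matrix(ZZ, [[15, 23, 11],
                [46, 15,  3],
                [32,  1,  1]])
print(LLL_v0(M))
print(M.LLL())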

conventional implementation

After an exchange step or a reduction step, it is actually only necessary to update a few entries of $\mu$ (the Gram-Schmidt coefficients) and of $Q$. In the simple implementation above, the entire Gram-Schmidt orthogonalization is recomputed each time, which is inefficient.

def LLL_v1(M, delta=0.75):

    if not 0.25 < delta < 1:
        print("delta should be in (0.25, 1); using delta = 0.75 instead.")
    alpha = delta if 0.25 < delta < 1 else 0.75
    
    x = M
    n = M.nrows()
    
    def reduce(k, l):
        do_reduce = False
                   
        if abs(mu[k,l]) > 0.5:
            do_reduce = True
            
            y[k] = y[k] - mu[k,l].round() * y[l]
            for j in range(l):
                mu[k,j] -=  mu[k,l].round() * mu[l,j]
            mu[k,l] = mu[k,l] - mu[k,l].round()       

        return
    
    def exchange(k):
        
        y[k-1], y[k] = y[k], y[k-1]
        NU = mu[k,k-1]
        delta = gamma[k] + NU ^ 2 * gamma[k-1]
        mu[k,k-1] = NU * gamma[k-1] / delta    # update mu for the swapped pair
        gamma[k] = gamma[k] * gamma[k-1] / delta
        gamma[k-1] = delta

        for j in range(k-1):
            mu[k-1,j], mu[k,j] = mu[k,j], mu[k-1,j]
        for i in range(k+1, n):
            xi = mu[i,k]
            mu[i,k] = mu[i,k-1] - NU * mu[i,k]
            mu[i,k-1] = mu[k,k-1] * mu[i,k] + xi      
            
        return
    
    # step (1) 
    y = deepcopy(x)
    # step (2) 
    y_star, mu = y.gram_schmidt()
    gamma = [y_star[i].norm() ^ 2 for i in range(n)]
    
    # step (3)
    k = 1
    
    # step (4)
    while k < n:      
        # step (4)(a)    
        reduce(k, k-1)

        # step (4)(b)
        if gamma[k] >= (alpha - mu[k,k-1]^2) * gamma[k-1]:
            # (i)
            for l in reversed(range(k-1)):
                reduce(k, l)
            # (ii)
            k = k + 1
        else:
            # (iii)
            exchange(k)
            # (iv)
            if k > 1:
                k = k-1

    return y

Babai Nearest Plane Algorithm

The algorithm mainly consists of two steps. First, an LLL-reduced basis is computed for the input lattice; then, using this reduced basis, an integer linear combination of the basis vectors is formed that is guaranteed to be close enough to the given target vector $t$. This second step is very similar to the inner reduction loop of the LLL algorithm.


def BabaisClosestPlaneAlgorithm(L, w):
    '''
    Yet another method to solve apprCVP, using a given good basis.
    INPUT:
    * "L" -- a matrix representing the LLL-reduced basis (v1, ..., vn) of a lattice.
    * "w" -- a target vector to approach to.
    OUTPUT:
    * "v" -- a approximate closest vector.
    Quoted from "An Introduction to Mathematical Cryptography":
    In both theory and practice, Babai's closest plane algorithm
    seems to yield better results than Babai's closest vertex algorithm.
    '''
    G, _ = L.gram_schmidt()
    t = w
    i = L.nrows() - 1
    while i >= 0:
        w -= round( (w*G[i]) / G[i].norm()^2 ) * L[i]
        i -= 1
    return t - w
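A toy usage (the basis and target below are arbitrary assumptions): LLL-reduce the basis first, then hand it to the function together with the target vector.

B = matrix(ZZ, [[15, 23, 11],
                [46, 15,  3],
                [32,  1,  1]])
w = vector(QQ, [10, 17, 4])
v = BabaisClosestPlaneAlgorithm(B.LLL(), w)
print(v, (w - v).norm().n())    # v is a lattice vector close to w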

Embedding Technique

(I don't know how to draw the figures with software; I worked everything out on paper, but I can't find the blog post again now that I'm sorting this out, so let's just make do with the text.)

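Since the hand-drawn figures are lost, here is the standard construction, stated so as to match the Example 2 code below. Given the CVP-style instance $b = xA + e$ with $A \in \mathbb{Z}^{m\times n}$, build the $(m+1)\times(n+1)$ embedding lattice basis

$$
T = \begin{pmatrix} A & 0 \\ b & 1 \end{pmatrix}
$$

The integer combination $(-x, 1)\,T = (b - xA,\ 1) = (e,\ 1)$ is an unusually short lattice vector, so the CVP instance becomes an SVP instance: running LLL on $T$ typically returns $\pm(e, 1)$ as its first row, from which $e$, and then $x$, can be read off.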

problem solving code

Example 1: 2020 Xiangyun Cup Easy Matrix

import numpy as np
from secret import *

def random_offset(size):
    x = np.random.normal(0, 4.7873, size)
    return np.rint(x)

secret = np.array(list(flag))

column = len(list(secret))
row = 128
prime = 2129

matrix = np.random.randint(512, size=(row, column))
product = matrix.dot(secret) % prime
offset = random_offset(size=row).astype(np.int64)
result = (product + offset) % prime

np.save("matrix.npy", matrix)
np.save("result.npy", result)

Solution using the LLL algorithm and the Babai nearest plane algorithm

import numpy as np
from sage.modules.free_module_integer import IntegerLattice

def BabaisClosestPlaneAlgorithm(L, w):
    G, _ = L.gram_schmidt()
    t = w
    i = L.nrows() - 1
    while i >= 0:
        w -= round( (w*G[i]) / G[i].norm()^2 ) * L[i]
        i -= 1
    return t - w

row = 128
col = 42
p = 2129

M = Matrix(list(np.load('matrix.npy')))
R = vector(list(np.load('result.npy')))

A = p * identity_matrix(ZZ, row)    # the p*I_row block of the lattice basis
L = Matrix(A.stack(M.transpose()))
lattice = IntegerLattice(L, lll_reduce=True)
closest_vector = BabaisClosestPlaneAlgorithm(lattice.reduced_basis, R)

FLAG = Matrix(Zmod(p), M)
flag = FLAG.solve_right(closest_vector)
print(''.join( chr(i) for i in flag))

Example 2: using the embedding technique to construct an embedding lattice

# Sage
DEBUG = False
m = 44
n = 55
p = 2^5
q = 2^10

def errorV():
  return vector(ZZ, [1 - randrange(3) for _ in range(n)])

def vecM():
  return vector(ZZ, [p//2 - randrange(p) for _ in range(m)])

def vecN():
  return vector(ZZ, [p//2 - randrange(p) for _ in range(n)])

def matrixMn():
  mt = matrix(ZZ, [[q//2 - randrange(q) for _ in range(n)] for _ in range(m)])
  return mt

A = matrixMn()
e = errorV()
x = vecM()
b = x*A+e

if DEBUG:
  print('A = \n%s' % A)
  print('x = %s' % x)
  print('b = %s' % b)
print('e = %s' % e)

z = matrix(ZZ, [0 for _ in range(m)]).transpose()
beta = matrix(ZZ, [1])
T = block_matrix([[A, z], [matrix(b), beta]])
if DEBUG:
  print('T = \n%s' % T)

print('-----')
L = T.LLL()
print(L[0])
print(L[0][:n] == e)

application

The construction of the GSW fully homomorphic encryption scheme is mainly based on the LWE hardness assumption from lattice cryptography.

Other lattice-related encryption

GGH, NTRU

reference

https://lazzzaro.github.io/2020/11/07/crypto-%E6%A0%BC%E5%AF%86%E7%A0%81/

https://zhuanlan.zhihu.com/p/150920501

https://blog.csdn.net/qq_42667481/article/details/118332181

http://blog.k1rit0.eu.org/2021/03/31/The-learning-of-LWE/

"Lattice Theory and Cryptography"

"Quantum Computing Resistant Cryptography"
