100 Basic Concepts of Cryptography

The cryptography column systematically introduces concepts from traditional cryptography through modern cryptography to quantum cryptography. It draws mainly on "Applied Cryptography" by Bruce Schneier and "Modern Cryptography Course" by Gu Lize and Yang Yixian.

As the basis of information security, cryptography is extremely important. This article reviews and summarizes 100 basic concepts in cryptography for your reference!

1. History of cryptography

1. Cryptography

Cryptography, derived from the Greek kryptós ("hidden") and gráphein ("writing"), is the discipline that studies information security and confidentiality; it comprises cryptographic coding (cipher design) and cryptanalysis.

The development of cryptography is generally divided into two stages — traditional cryptography (itself split into classical ciphers and early modern ciphers) and modern cryptography — with the classic paper "Communication Theory of Secrecy Systems", published by C. E. Shannon in 1949, as the dividing line.

2. Classical ciphers

Classical ciphers rely mainly on substitution and transposition, carried out by hand or with simple devices. They were in use for thousands of years, from antiquity until the end of the 19th century, and include the checkerboard (Polybius square) cipher and the Caesar cipher.

3. Caesar cipher

The Caesar cipher was devised by the Roman general and statesman Julius Caesar around 50 BC for secret communication in wartime. The cipher arranges the letters in alphabetical order and joins the last letter back to the first to form a ring; encryption replaces each plaintext letter with the letter k positions after it on the ring.
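As a minimal sketch, the ring construction above can be written in a few lines of Python (the shift k = 3 is the value traditionally attributed to Caesar):

```python
def caesar(text: str, k: int) -> str:
    """Shift each letter k positions around the 26-letter ring."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # wrap around the ring with mod 26
            out.append(chr((ord(ch) - base + k) % 26 + base))
        else:
            out.append(ch)  # leave non-letters unchanged
    return ''.join(out)

assert caesar("ATTACK AT DAWN", 3) == "DWWDFN DW GDZQ"
assert caesar("DWWDFN DW GDZQ", -3) == "ATTACK AT DAWN"  # decryption is the negative shift
```

Decryption is simply a shift by −k, which completes the ring in the other direction.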

4. Early Modern Ciphers

Early modern ciphers generally refer to the encryption methods implemented by mechanical or electromechanical equipment from the beginning of the 20th century to the 1950s. Although the technology advanced greatly, encryption still relied on substitution and transposition, including monoalphabetic substitution ciphers (such as the affine cipher), polyalphabetic substitution ciphers (such as the Vigenère cipher), and polygraphic ciphers (such as the Hill cipher).
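A minimal sketch of the Vigenère polyalphabetic substitution in Python (restricted to uppercase letters for illustration):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift the i-th letter by the (i mod len(key))-th key letter."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) - ord('A')
        out.append(chr((ord(ch) - ord('A') + sign * shift) % 26 + ord('A')))
    return ''.join(out)

ct = vigenere("ATTACKATDAWN", "LEMON")
assert ct == "LXFOPVEFRNHR"
assert vigenere(ct, "LEMON", decrypt=True) == "ATTACKATDAWN"
```

Because the shift varies with position, the same plaintext letter encrypts to different ciphertext letters, which defeats simple single-letter frequency analysis.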

5. Modern Cryptography

In 1949, Shannon published "Communication Theory of Secrecy Systems", marking the beginning of modern cryptography. Shannon introduced information theory into the study of cryptography, using probability, statistics, and the concept of entropy to describe mathematically and analyze quantitatively the security of message sources, key sources, transmitted ciphertexts, and cryptosystems, and he proposed the model of a symmetric cryptosystem. This laid the mathematical foundation of modern cryptography and was the first leap in the field.

In 1976, Diffie and Hellman published the classic paper "New Directions in Cryptography", which proposed the idea of public-key cryptography, opened a new direction for the development of modern cryptography, and brought about its second leap.

2. Basics of Cryptography

6. Interruption

Interruption, also known as denial of service, refers to preventing or prohibiting the normal use or management of communication facilities; it is an attack on availability. The attack generally takes two forms: either the attacker deletes all protocol data units passing through a certain connection, thereby suppressing all messages directed at a specific destination, or the attacker paralyzes the entire network — for example by flooding it with messages until it is overloaded and can no longer work properly.

7. Interception

Interception is the unauthorized eavesdropping on or monitoring of transmitted messages to gain access to a resource; it is an attack on confidentiality. Attackers typically eavesdrop on the network by wiretapping to obtain the content of the communication.

8. Tampering

Tampering (Modification) alters the data stream: parts of a legitimate message are changed, or messages are delayed or reordered, to produce an unauthorized, special-purpose message. It is an attack on the authenticity, integrity, and ordering of a connection's protocol data units.

9. Forgery

Fabrication refers to an illegitimate entity masquerading as a legitimate one; it is an attack on the authenticity of identity. It is usually combined with other forms of active attack to take effect — for example, an attacker replays the recorded initialization sequence of a previous legitimate connection to gain privileges it does not itself possess.

10. Replay

Replay intercepts a data unit and retransmits it, generating an unauthorized message. In this attack, an attacker records a communication session and then replays the entire session or a portion of it at a later point.

11. Confidentiality

Confidentiality refers to ensuring that information is not leaked to unauthorized users or entities, ensuring that stored information and transmitted information can only be obtained by authorized parties, and that non-authorized users cannot know the content of the information even if they obtain the information. Usually access control is used to prevent unauthorized users from obtaining confidential information, and encryption is used to prevent unauthorized users from obtaining information content.

12. Integrity

Integrity refers to the property that information cannot be tampered with without authorization, ensuring its consistency — that is, no unauthorized tampering (insertion, modification, deletion, reordering, etc.), whether human or not, should occur. Generally, access control is used to prevent tampering, and a message digest algorithm is used to detect it.

13. Authentication

Authentication, or authenticity, refers to ensuring that the source of a message, or the message itself, is correctly identified, while also ensuring that the identification has not been forged. It is achieved by **digital signatures, message authentication codes (MACs)**, etc. Authentication is divided into message authentication and entity authentication.

  • Message authentication refers to the ability to assure the receiver that the message is indeed from the source it claims.
  • Entity authentication refers to ensuring that both entities are trustworthy when a connection is initiated — that is, each entity really is the entity it claims to be, and no third party can impersonate either of the two legitimate parties.

14. Non-repudiation

Non-Repudiation refers to ensuring that users cannot deny, after the fact, having generated, sent, or received information. It is a security requirement on the authenticity and consistency of information among the communicating parties: to prevent the sender or receiver from repudiating a transmitted message, neither side may deny the actions it performed. Non-repudiation services are provided through digital signatures.

  • The receiver can verify that a message was indeed sent by the claimed sender; this is called source non-repudiation.
  • The sender can verify that a message has indeed been delivered to the designated receiver; this is called destination (sink) non-repudiation.

15. Availability

Availability refers to the ability to ensure that information resources can provide services at any time, that is, authorized users can access the required information in a timely manner as needed, and ensure that legitimate users are not illegally denied the use of information resources.

16. Password system

A cryptographic system (cryptosystem) consists of at least five parts: the plaintext, the ciphertext, the encryption algorithm, the decryption algorithm, and the key.
17. Plaintext

The original form of information is known as plaintext (Plaintext).

18. Ciphertext

The plaintext encrypted by transformation is called ciphertext (Ciphertext).

19. Encryption

The process of encoding plaintext to generate ciphertext is called encryption (Encryption), and the encoding rules are called encryption algorithms.

20. Decryption

The process of restoring the ciphertext to the plaintext is called decryption, and the decryption rule is called the decryption algorithm.

21. Key

The key (Key) is the parameter that controls the conversion between plaintext and ciphertext; it is divided into the encryption key and the decryption key.

22. Kerckhoffs' Principle

Kerckhoffs' assumption (Kerckhoffs' Assumption), also known as Kerckhoffs' principle (Kerckhoffs' Principle) or Kerckhoffs' axiom (Kerckhoffs' Axiom), is a basic assumption about cryptanalysis stated by the Dutch cryptographer Auguste Kerckhoffs in his 1883 masterpiece "Military Cryptography": the security of a cryptosystem should not depend on an algorithm that cannot easily be changed, but on a key that can be changed at any time. This principle must be followed in the design and use of cryptosystems. In other words, the security of encryption and decryption rests on the secrecy of the key; the encryption/decryption algorithms themselves are public. As long as the key is secure, an attacker cannot deduce the plaintext.

23. Ciphertext-only attack

A ciphertext-only attack (Ciphertext Only Attack) means the cryptanalyst has no information available other than the intercepted ciphertext (the cryptographic algorithm being public). The analyst's task is to recover as much of the plaintext as possible or, better still, to deduce the key used to encrypt the messages. Generally, an exhaustive key search is performed until meaningful plaintext is obtained. Exhaustive search can succeed in theory, but any cipher that meets its security requirements imposes a workload no real attacker can bear. This is the hardest scenario for the cryptanalyst, and a cryptosystem that cannot withstand even this attack is considered completely insecure.

24. Known plaintext attack

A known-plaintext attack (Known Plaintext Attack) means the cryptanalyst has not only a considerable amount of ciphertext but also some known plaintext–ciphertext pairs. The analyst's task is to derive from these the key — or an algorithm that will decrypt any new message encrypted with the same key. Modern cryptosystems are required to withstand not only ciphertext-only attacks but known-plaintext attacks as well.

25. Chosen plaintext attack

A chosen-plaintext attack (Chosen Plaintext Attack) means the cryptanalyst can not only obtain a number of plaintext–ciphertext pairs but can also choose arbitrary plaintexts and obtain the corresponding ciphertexts under the same unknown key. If the attacker can feed specially chosen plaintexts into the encryption system, the corresponding ciphertexts may reveal the structure of the key or yield further information about it. A chosen-plaintext attack is more powerful than a known-plaintext attack; it typically arises when the cryptanalyst temporarily gains control of the encryption machine. This type of attack is particularly relevant to public-key algorithms, which must therefore withstand it.

26. Chosen ciphertext attack

A chosen-ciphertext attack (Chosen Ciphertext Attack) means the cryptanalyst can choose different ciphertexts and obtain the corresponding plaintexts. If the attacker can select specific ciphertexts, the corresponding plaintexts may reveal the structure of the key or yield further information about it. This typically arises when the cryptanalyst temporarily gains control of the decryption machine.

27. Chosen text attack

Chosen Text Attack is a combination of chosen plaintext attack and chosen ciphertext attack, that is, the cryptanalyst can not only choose the plaintext and get the corresponding ciphertext under the premise of mastering the cryptographic algorithm, but also choose the ciphertext to get the corresponding plaintext. This situation is often that the cryptanalyst temporarily controls the encryption machine and the decryption machine by some means.

28. Unconditional Security

If in a cryptographic system, no matter how much ciphertext the cipher breaker knows and what method he uses, he will not be able to obtain the information of the plaintext or the key, which is called unconditional security. Unconditional security has nothing to do with the attacker's computing power and time.

29. Conditional Security

Conditional security evaluates a cryptosystem according to the amount of computation required to break it, and is divided into computational security, practical security, and provable security.

30. Computational Security

A cryptosystem is said to be computationally secure if breaking it is feasible in principle, but the required computation cannot be completed with known algorithms and existing computing tools — that is, the effort required to break it by the best known method exceeds the attacker's resources of time, space, and money.

31. Practical security

Practical security means that a cryptographic system satisfies one of the following two criteria: the cost of cracking the cryptosystem exceeds the value of the encrypted information itself; the time to decipher the cryptosystem exceeds the effective life cycle of the encrypted information.

32. Provable Security

Provable security reduces the security of a cryptosystem to a mathematical problem that has been thoroughly studied but not yet solved — that is, it equates breaking the cryptosystem with solving that problem. The weakness of this approach is that it only shows the system's security is related to the mathematical problem; it does not prove the hardness of the problem itself.

33. Symmetric cryptosystem

The symmetric cryptosystem (Symmetric Key Cryptosystem) is also called the single-key cryptosystem (One Key Cryptosystem) or secret-key cryptosystem (Secret Key Cryptosystem). In a symmetric cryptosystem the key must be kept completely secret; the encryption and decryption keys are the same, or each can easily be deduced from the other. Typical algorithms include DES, 3DES, AES, IDEA, RC4, A5, and SEAL.

The keys of a symmetric cryptosystem are relatively short, the ciphertext is often the same length as the plaintext, encryption and decryption are fast, and hardware implementation is easy. However, how the sender delivers the key to the receiver safely and efficiently is the weak point of symmetric cryptosystems, often requiring a "secure channel"; in addition, such systems need a large number of keys, which are difficult to manage.

34. Asymmetric cryptosystem

In 1976, Whitfield Diffie and Martin Hellman pioneered the public-key cryptosystem in "New Directions in Cryptography". It is also known as the asymmetric key cryptosystem (Asymmetric Key Cryptosystem), the double-key cryptosystem (Double Key Cryptosystem), or the public-key cryptosystem (Public Key Cryptosystem).

In this system, user A has a pair of keys: an encryption key (public key) $P_k$ and a decryption key (private key) $S_k$. The two are different, and the private key cannot be deduced from the public key. If B wants to send encrypted information to A, B uses A's public key $P_k$ (which can be found in a public directory) to encrypt the message; after receiving the ciphertext, A uses the private key $S_k$ to decrypt it.

Asymmetric cryptosystems mainly solve the key distribution and management problems of symmetric cryptosystems. Compared with symmetric systems, encryption and decryption are slower, keys are longer, and the ciphertext is often longer than the plaintext. Common public-key cryptosystems include the RSA cryptosystem, based on the integer factorization problem; the ElGamal cryptosystem, based on the discrete logarithm problem in the multiplicative group of a finite field; and elliptic curve cryptosystems, based on the discrete logarithm problem on elliptic curves.
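The trapdoor idea behind RSA can be sketched with deliberately tiny primes (real deployments use moduli of 2048 bits or more; the numbers here are purely illustrative):

```python
# Toy RSA key generation -- tiny primes for illustration only
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi), Python 3.8+

m = 65                       # plaintext, must satisfy m < n
c = pow(m, e, n)             # encrypt with the public key (n, e)
assert pow(c, d, n) == m     # decrypt with the private key (n, d)
assert c != m
```

The public pair (n, e) can be published; recovering d requires knowing phi, which in turn requires factoring n — easy here, infeasible for moduli of realistic size.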

3. Block cipher

35. Block Cipher

A block cipher is essentially a one-to-one mapping from the plaintext space $P$ (the set of bit strings of length $m$) to the ciphertext space $C$ (the set of bit strings of length $n$); generally $m = n$. The binary sequence $p_0, p_1, \ldots, p_i, \ldots$ is divided into fixed-length groups (blocks) $p = (p_0, p_1, \ldots, p_{m-1})$, and each group is transformed, under the control of a key $k = (k_0, k_1, \ldots, k_{t-1})$, into a ciphertext block of length $n$: $c = (c_0, c_1, \ldots, c_{n-1})$.

Block ciphers mainly provide data confidentiality, and can also be used to construct pseudo-random number generators, stream ciphers, authentication codes, and hash functions. Block ciphers divide into symmetric and asymmetric (public-key) block ciphers; in most contexts the term refers to symmetric block ciphers.

36. Diffusion

Diffusion means designing the algorithm so that a change in any single plaintext bit affects as many bits of the output ciphertext as possible, thereby concealing the statistical characteristics of the plaintext; likewise, the influence of each key bit is spread over as many ciphertext bits as possible. The aim is that every ciphertext bit be associated with as much of the plaintext and key as possible — any change to a plaintext or key bit should perturb the ciphertext (the avalanche effect) — so that the key cannot be broken into small isolated parts and attacked piecemeal.

37. Confusion

Confusion means making the relationship between the plaintext, the key, and the ciphertext as complicated as possible during the encryption transformation, to prevent cryptanalysts from deciphering it through statistical analysis. Confusion can be pictured as "stirring": a plaintext group and a key are fed into the algorithm and thoroughly mixed until they become ciphertext. At the same time, every step of this "mixing" must be reversible: confusing the plaintext yields the ciphertext, and reversing the confusion of the ciphertext recovers the plaintext.

38. Whitening

Whitening is the technique of XORing the input of a block cipher with one part of the key and its output with another part; it is used to prevent a cryptanalyst who knows the basic cipher algorithm from obtaining plaintext/ciphertext pairs for the core cipher.

39. Avalanche effect

The avalanche effect (Avalanche Effect) means that even a tiny change in the input (plaintext or key) causes a large change in the output (ciphertext). The strict avalanche criterion (Strict Avalanche Criterion) requires that when a single input bit changes, each output bit changes with probability one half — on average, half of the output bits flip.
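The avalanche effect is easy to observe with any strong cryptographic primitive; in this hedged sketch SHA-256 stands in for a block cipher, since it exhibits the same property:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count('1') for x, y in zip(a, b))

msg = b"avalanche"
flipped = msg[:-1] + bytes([msg[-1] ^ 1])   # flip a single input bit
h1 = hashlib.sha256(msg).digest()
h2 = hashlib.sha256(flipped).digest()

# roughly half of the 256 output bits change for a one-bit input change
assert 80 < bit_diff(h1, h2) < 176
```

The assertion brackets the expected value (128 bits, half of 256) generously; a one-bit input change flipping far fewer bits would indicate a weak primitive.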

40. Substitution-Permutation Networks

A substitution–permutation network (Substitution Permutation Network), or SP network for short, is a transformation network composed of multiple **S transformations (S-boxes, confusion) and P transformations (P-boxes, diffusion)**; it is a common form of product cipher. The S-box is the only nonlinear part of many cryptographic algorithms, and its cryptographic strength determines the strength of the whole algorithm.
41. Feistel network

The Feistel network was proposed by Horst Feistel. An $n$-bit plaintext block ($n$ even) is split into left and right halves $L$ and $R$, each of length $n/2$, and the iteration is defined as: $L_i = R_{i-1}$, $R_i = L_{i-1} \oplus f(R_{i-1}, K_i)$, where $K_i$ is the subkey used in round $i$ and $f$ is an arbitrary round function.

The Feistel network guarantees the reversibility of the algorithm, that is, encryption and decryption can be implemented with the same algorithm.
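A minimal sketch of this reversibility in Python — the round function f below is an arbitrary toy function (it need not be invertible), and decryption is the same routine run with the subkeys reversed:

```python
def feistel(block: int, subkeys, f, half: int = 32) -> int:
    """Feistel network over a 2*half-bit block; a final swap makes
    decryption identical to encryption with reversed subkeys."""
    mask = (1 << half) - 1
    left, right = (block >> half) & mask, block & mask
    for k in subkeys:
        # L_i = R_{i-1};  R_i = L_{i-1} XOR f(R_{i-1}, K_i)
        left, right = right, left ^ (f(right, k) & mask)
    return (right << half) | left   # swap halves on output

def f(r: int, k: int) -> int:
    # toy, non-invertible round function (hypothetical constants)
    return (r * 2654435761 + k) & 0xFFFFFFFF

keys = [0x1234, 0xBEEF, 0xCAFE, 0x5678]
pt = 0x0123456789ABCDEF
ct = feistel(pt, keys, f)
assert ct != pt
assert feistel(ct, list(reversed(keys)), f) == pt   # same code decrypts
```

Each round only XORs one half with f of the other half, so running the rounds backwards cancels every XOR — which is exactly why f is free to be non-invertible.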

42.DES

DES (Data Encryption Standard) was the first cryptographic algorithm widely used for commercial data security; it is a symmetric encryption algorithm. The US National Bureau of Standards (NBS) began soliciting a federal data encryption standard in 1973. Many companies submitted algorithms, and IBM's Lucifer encryption system won. After more than two years of public discussion, on January 15, 1977, the NBS adopted the algorithm and renamed it the Data Encryption Standard (DES).

43.AES

In 1997, the US National Institute of Standards and Technology (NIST) publicly solicited an Advanced Encryption Standard (AES). The basic requirements were security no lower than triple DES, performance faster than triple DES, and, in particular, that the standard be a symmetric block cipher with a block length of 128 bits supporting key lengths of 128, 192, and 256 bits. Furthermore, the selected algorithm had to be freely available worldwide.

On October 2, 2000, NIST announced the final selection. Judged on security (a sound mathematical basis, no algorithmic weaknesses, resistance to cryptanalysis), performance (speed), size (small memory and storage footprint), and ease of implementation (good adaptability to software and hardware), the "Rijndael data encryption algorithm", proposed by the Belgian cryptographers Joan Daemen and Vincent Rijmen, won. The modified Rijndael algorithm became the Advanced Encryption Standard, AES. On November 26, 2001, NIST officially announced AES, and it took effect on May 26, 2002.

44.IDEA

The International Data Encryption Algorithm (IDEA) was announced by Xuejia Lai and James Massey in Switzerland in 1990, originally under the name Proposed Encryption Standard (PES). In 1991, to resist differential cryptanalysis, they improved the algorithm, calling it the Improved Proposed Encryption Standard (IPES), and in 1992 renamed it the International Data Encryption Algorithm, IDEA.

IDEA is protected by patents and can only be used in commercial applications after obtaining a license. The famous email privacy technology PGP is based on IDEA.

45. Block Cipher Modes of Operation

The actual message is generally longer than the block length of the block cipher, so the message is divided into fixed-length data blocks and processed block by block. The many block-handling methods people have designed are called the modes of operation of a block cipher; each is usually a combination of the basic cipher, some form of feedback, and simple operations. These modes also give the ciphertext blocks additional properties, such as hiding the statistical characteristics of the plaintext, controlling error propagation, and generating keystreams for stream ciphers.

46. Electronic Codebook

Electronic Codebook (ECB, Electronic Code Book) mode processes one plaintext group at a time, each independently encrypted into its corresponding ciphertext group. It is mainly used for the encrypted transmission of **short, random messages (such as keys)**.

  • The same plaintext (under the same key) yields the same ciphertext, vulnerable to statistical analysis attacks, packet replay attacks, and substitution attacks
  • Link dependency: Encryption of each group is independent of other groups, enabling parallel processing
  • Error Propagation: One or more bit errors in a single ciphertext block only affect the decryption result for that block
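The structural weakness in the first bullet can be made visible with a toy 8-byte "cipher" (XOR-plus-rotate — completely insecure, used here only to show the repeated-block leak, not as a real block cipher):

```python
BS = 8  # toy block size in bytes

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    """Insecure stand-in for a real block cipher."""
    x = bytes(b ^ k for b, k in zip(block, key))
    return x[1:] + x[:1]   # rotate bytes left by one

def ecb_encrypt(data: bytes, key: bytes) -> bytes:
    # each block is encrypted independently -- the defining property of ECB
    return b''.join(toy_encrypt(data[i:i + BS], key)
                    for i in range(0, len(data), BS))

key = b"8bytekey"
ct = ecb_encrypt(b"SAMEDATA" + b"SAMEDATA" + b"OTHERDAT", key)
# identical plaintext blocks produce identical ciphertext blocks
assert ct[0:8] == ct[8:16] and ct[0:8] != ct[16:24]
```

The repeated ciphertext block is exactly what a statistical-analysis or block-replay attacker exploits; the same experiment with a real cipher (e.g. AES-ECB) shows the same pattern.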

47. Cipher block chaining

Cipher Block Chaining (CBC, Cipher Block Chaining) mode applies a feedback mechanism: each plaintext block is XORed with the previous ciphertext block before encryption, so each ciphertext block depends not only on the plaintext block that produced it but on all preceding blocks. CBC is suitable for file encryption and is the best choice for software encryption.

  • The same plaintext, even under the same key, will get different ciphertext groups, which hides the statistical characteristics of the plaintext
  • Link dependency: The correct decryption of a correct ciphertext block requires that the previous ciphertext block is also correct, and parallel processing cannot be achieved
  • Error propagation: a single-bit error in a ciphertext block garbles the decryption of that block and flips the corresponding bit of the next block, so the error propagates across two blocks
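A sketch of the chaining itself, using the same kind of toy XOR-and-rotate stand-in for a real block cipher (only the CBC wiring is the point here):

```python
BS = 8  # toy block size in bytes

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    x = bytes(b ^ k for b, k in zip(block, key))
    return x[1:] + x[:1]                       # rotate left one byte

def toy_decrypt(block: bytes, key: bytes) -> bytes:
    x = block[-1:] + block[:-1]                # rotate right one byte
    return bytes(b ^ k for b, k in zip(x, key))

def cbc_encrypt(data: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, []
    for i in range(0, len(data), BS):
        # XOR with the previous ciphertext block before encrypting
        prev = toy_encrypt(bytes(b ^ p for b, p in zip(data[i:i+BS], prev)), key)
        out.append(prev)
    return b''.join(out)

def cbc_decrypt(ct: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, []
    for i in range(0, len(ct), BS):
        c = ct[i:i+BS]
        out.append(bytes(b ^ p for b, p in zip(toy_decrypt(c, key), prev)))
        prev = c
    return b''.join(out)

key, iv = b"8bytekey", b"initvect"
pt = b"SAMEDATA" * 2
ct = cbc_encrypt(pt, key, iv)
assert ct[0:8] != ct[8:16]            # chaining hides the repeated plaintext
assert cbc_decrypt(ct, key, iv) == pt
```

Compare with the ECB sketch: the same repeated plaintext block now produces different ciphertext blocks, because each block's input is perturbed by the previous ciphertext.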

48. Password Feedback

Cipher Feedback (CFB, Cipher Feedback) mode treats the message as a bit (or character) stream and can encrypt units smaller than a block without waiting for a complete block of data to arrive. It is a typical example of a self-synchronizing stream cipher and is usually used to encrypt character streams.

  • Can serve as a self-synchronizing stream cipher, while retaining the advantages of CBC mode
  • Sensitive to channel errors, which cause error propagation
  • Encryption and decryption throughput is reduced, so the achievable data rate is limited

49. Output Feedback

Output Feedback (OFB, Output Feedback) mode is an example of a synchronous stream cipher built from a block cipher.

  • An improvement on CFB mode that eliminates error propagation, although tampering with the ciphertext becomes harder to detect
  • OFB mode has no self-synchronization capability and requires the system to stay strictly synchronized, otherwise decryption fails

4. Stream Ciphers

50. Stream Cipher

The stream cipher, also known as a sequence cipher, belongs to the symmetric cryptosystem family. It encrypts and decrypts one character of the plaintext message (usually a single binary bit) at a time, and is simple to implement, fast, and limits error propagation.

In a stream cipher, the plaintext message is encrypted bit by bit with a keystream of related but differing key bits, producing the corresponding ciphertext. The same plaintext group yields different ciphertext depending on its position in the plaintext sequence. The receiver generates the same key sequence and decrypts the ciphertext bit by bit to recover the plaintext.

Let the plaintext sequence be $p = p_{n-1} \ldots p_1 p_0$ and the key sequence $k = k_{n-1} \ldots k_1 k_0$. The ciphertext sequence is $c = c_{n-1} \ldots c_1 c_0 = E_{k_{n-1}}(p_{n-1}) \ldots E_{k_1}(p_1) E_{k_0}(p_0)$. When $c_i = E_{k_i}(p_i) = p_i \oplus k_i$, the cipher is called an additive stream cipher.
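A hedged sketch of the additive construction — here the keystream $k_i$ is produced by hashing a key with a counter, an illustrative stand-in rather than a real keystream generator such as the LFSR- or RC4-based designs described below:

```python
import hashlib

def keystream(key: bytes, nbytes: int) -> bytes:
    """Illustrative keystream: SHA-256 over key || counter (not a real design)."""
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """c_i = p_i XOR k_i; XOR is self-inverse, so the same call decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

ct = xor_crypt(b"attack at dawn", b"session-key")
assert xor_crypt(ct, b"session-key") == b"attack at dawn"
```

Because $c_i \oplus k_i = p_i \oplus k_i \oplus k_i = p_i$, encryption and decryption are the same operation — which is exactly why the key sequence must never be reused.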

51. Feedback Shift Register

A feedback shift register (FSR, Feedback Shift Register) generally consists of a shift register and a feedback function (Feedback Function). The shift register is a sequence of bits whose length is measured in bits. At each step, every bit shifts one position to the right; the value shifted out of the rightmost cell is the output bit, and the new leftmost bit is computed from some of the register's bits by the feedback function. The period of a shift register is the length of the output sequence from the start until it begins to repeat.

52. Linear Feedback Shift Register

The feedback function of a linear feedback shift register (LFSR, Linear Feedback Shift Register) is simply the XOR of certain bits of the register; these bits are called the tap sequence (Tap Sequence), and this arrangement is sometimes called a Fibonacci configuration (Fibonacci Configuration).

53. m-sequences

The properties of an LFSR's output sequence are completely determined by its feedback function. An $n$-bit LFSR can be in any of $2^n - 1$ internal states, so in theory an $n$-bit LFSR can generate a pseudo-random sequence of $2^n - 1$ bits before repeating (the all-zero state would make the LFSR output zeros forever, hence $2^n - 1$ rather than $2^n$).

If a suitable feedback function is chosen, the period of the sequence reaches the maximum value $2^n - 1$; that is, only an LFSR with a particular tap sequence cycles through all $2^n - 1$ non-zero internal states, and its output is called an **m-sequence**. For an LFSR to have maximum period, the polynomial formed by the tap sequence plus the constant 1 must be a primitive polynomial whose degree equals the length of the shift register.
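A sketch of a 4-bit Fibonacci LFSR. The tap positions (bits 0 and 1) are one choice that corresponds to a primitive polynomial, so the state sequence runs through all $2^4 - 1 = 15$ non-zero states:

```python
def lfsr_step(state: int, taps: tuple, n: int) -> int:
    """One right shift: feedback bit = XOR of the tap bits, inserted at the MSB."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return (state >> 1) | (fb << (n - 1))

def period(state: int, taps: tuple, n: int) -> int:
    """Step the register until the starting state recurs."""
    start, steps = state, 0
    while True:
        state = lfsr_step(state, taps, n)
        steps += 1
        if state == start:
            return steps

# taps (0, 1) give a maximum-length sequence for this 4-bit register
assert period(0b0001, (0, 1), 4) == 2**4 - 1   # an m-sequence: period 15
# a poor tap choice falls far short of the maximum period
assert period(0b0001, (0,), 4) < 15
```

The second assertion shows why the tap sequence matters: with a single tap at bit 0, the same register cycles through only a small subset of its non-zero states.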

54.RC4

RC4 (Rivest Cipher 4) is based on random permutation. It is a stream cipher with variable key length and byte-oriented operations. Because it is fast (roughly 10 times faster than DES) and easy to implement in software, the algorithm has been widely used to protect information in software such as Microsoft Windows, Lotus Notes, and the Secure Sockets Layer (SSL).

Unlike stream ciphers based on shift registers, RC4 is a typical stream cipher based on nonlinear array transformations: it maintains a sufficiently large array and applies nonlinear transformations to it to generate a nonlinear key sequence. This large array is generally called the S-box. The size of RC4's S-box varies with a parameter $n$ (usually $n = 8$), the S-box holding $N = 2^n$ elements.

55. A5

The A5 algorithm is one of the stream-cipher encryption algorithms used in the GSM system, encrypting the voice and data transmitted between the mobile handset and the base station. It has since been broken.

The A5 algorithm is a typical stream cipher based on linear feedback shift registers. From a 22-bit parameter (the frame number, Fn) and a 64-bit parameter (the session key, Kc), it generates two 114-bit keystream sequences, which are then XORed with each frame (228 bits per frame) of the GSM session.

5. Hash function

56. Hash function

The Hash function, also called a hash or digest function, is an irreversible mapping from the message space to the digest space: it transforms an input of "arbitrary" length into a fixed-length output. It is a one-way cryptographic primitive — there is only an encryption (digest) process and no decryption process.

57. Message Digest

The one-wayness and fixed output length of the Hash function make it possible to generate a "digital fingerprint" (Digital Fingerprint) of a message, also known as a message digest (MD, Message Digest) or hash value (Hash Value). It is mainly used in message authentication, digital signatures, the secure transmission and storage of passwords, file integrity verification, and so on.
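A sketch using Python's standard hashlib — the digest length is fixed regardless of input size, which is what makes it usable as a fingerprint (SHA-256 is used here as the example algorithm):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Fixed-length digital fingerprint (hex) of an arbitrary-length message."""
    return hashlib.sha256(data).hexdigest()

d1 = fingerprint(b"contract: pay 100")
d2 = fingerprint(b"contract: pay 900")
d3 = fingerprint(b"x" * 1_000_000)

assert len(d1) == len(d3) == 64    # 256-bit digest for any input length
assert d1 != d2                    # any tampering changes the fingerprint
```

For file integrity verification, the publisher distributes the fingerprint alongside the file; the recipient recomputes it and compares.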

58. MD5

The MD5 algorithm was designed by Rivest, the famous cryptographer at the Massachusetts Institute of Technology, who described it in detail in RFC 1321, submitted to the IETF in 1992. MD5 was developed on the basis of MD2, MD3, and MD4. Since "safety belts" were added to MD4, MD5 is also called "MD4 with safety belts".

The algorithm accepts an input message of length less than $2^{64}$ bits, processes it in 512-bit blocks, and outputs a 128-bit message digest.

59. SHA1

In 1993, the US National Institute of Standards and Technology (NIST) announced the Secure Hash Algorithm standard SHA-0. On April 17, 1995, the revised version, called SHA-1, was published; it is the algorithm required by the Digital Signature Standard.

The SHA-1 algorithm accepts an input message of length less than $2^{64}$ bits, processes it in 512-bit blocks, and outputs a 160-bit message digest, which makes it more resistant to exhaustive search.
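
Both digests are available in Python's standard `hashlib`, which makes the fixed output lengths easy to check (sample message assumed for illustration):

```python
import hashlib

msg = b"hello"
md5_digest = hashlib.md5(msg).hexdigest()    # 128-bit digest -> 32 hex characters
sha1_digest = hashlib.sha1(msg).hexdigest()  # 160-bit digest -> 40 hex characters

assert len(md5_digest) == 32
assert len(sha1_digest) == 40
```

Note that both MD5 and SHA-1 are considered cryptographically broken today and survive mainly in legacy protocols and non-security checksums.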

60. Message authentication

Message authentication refers to verifying the authenticity of a message. It includes verifying the authenticity of the message's source, generally called source authentication, and verifying the integrity of the message, that is, verifying that the message has not been tampered with or forged during transmission and storage.

61. Message authentication code

A Message Authentication Code (MAC, Message Authentication Code) is used to check whether a message has been maliciously modified. An authentication function takes the message and a key shared by both parties and produces a fixed-length short data block, which is appended to the message.

Cipher block chaining (CBC) mode with a symmetric block cipher such as DES or AES has long been the most common way to construct a MAC, for example the CBC-MAC defined in FIPS PUB 113. Because hash functions such as MD5 and SHA-1 run faster in software than symmetric block ciphers such as DES, many hash-based message authentication algorithms have been proposed. Among them, HMAC (RFC 2104) has been published as the FIPS 198 standard and is used for message authentication in SSL.
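
Python's standard `hmac` module implements HMAC directly; a small sketch with assumed toy values:

```python
import hashlib
import hmac

key = b"shared-secret"                 # key shared by both parties (toy value)
message = b"transfer 100 to Bob"
tag = hmac.new(key, message, hashlib.sha1).hexdigest()

# the receiver recomputes the tag and compares in constant time
expected = hmac.new(key, message, hashlib.sha1).hexdigest()
assert hmac.compare_digest(tag, expected)

# any modification of the message yields a different tag
tampered = hmac.new(key, b"transfer 999 to Bob", hashlib.sha1).hexdigest()
assert tag != tampered
```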

6. Public key cryptography

62. RSA public key cryptography

In 1978, Rivest, Shamir, and Adleman of the Massachusetts Institute of Technology jointly proposed the RSA public key cryptosystem, the first secure and practical public key cryptosystem. Its security depends on the difficulty of factoring large integers. RSA can be used for both encryption and digital signatures, and it is secure and easy to implement.

(1) Public-private key pair generation

Choose two large primes $p$ and $q$ (which must be kept secret), and compute $n = pq$ and the Euler function of $n$, $\varphi(n) = (p-1)(q-1)$

Randomly select an integer $e$ ($1 < e < \varphi(n)$) as the public exponent, satisfying $\gcd(e, \varphi(n)) = 1$, i.e. $e$ and $\varphi(n)$ are coprime

Use the extended Euclidean algorithm to compute the private exponent $d \equiv e^{-1} \bmod \varphi(n)$, i.e. the inverse of $e$

(2) Encryption and decryption algorithm

Public key: $(e, n)$

Private key: $d$

Encryption: $c \equiv m^e \bmod n$

Decryption: $m \equiv c^d \bmod n$

63. ElGamal public key cryptography

The ElGamal public key cryptosystem was proposed by T. ElGamal in 1985. It is based on the discrete logarithm problem over a finite field and can be used for both encryption and digital signatures. Because of its good security, and because the same plaintext produces different ciphertexts at different times, it has been widely used in practice, especially for digital signatures. The famous **Digital Signature Standard (DSS, Digital Signature Standard)** is in fact a variant of the ElGamal signature scheme.

(1) Public-private key pair generation

Randomly choose a large prime $p$ such that $p-1$ has a large prime factor, and let $g \in \boldsymbol{Z}^{*}_p$ be a primitive element ($\boldsymbol{Z}_p$ is the finite field of $p$ elements, and $\boldsymbol{Z}^{*}_p$ is the multiplicative group formed by its nonzero elements)

Pick a random number $x$ ($1 < x < p-1$) as the private key and compute $y \equiv g^x \bmod p$; the public key is $(y, g, p)$

(2) Encryption and decryption algorithm

Public key: $(y, g, p)$

Private key: $x$

Encryption: $C = (c, c')$, where $c \equiv g^{r} \bmod p$, $c' \equiv m y^{r} \bmod p$, and $r$ ($1 < r < p-1$) is a random number

Decryption: $m \equiv (c'/c^{x}) \bmod p$

The ElGamal system selects a fresh random number for each encryption operation, and the ciphertext depends on both the plaintext and that random number; therefore the same plaintext produces different ciphertexts at different times. In addition, ElGamal encryption doubles the message length: the ciphertext is twice as long as the corresponding plaintext.
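
A toy run of the scheme (p = 467 and g = 2 are assumed illustrative values; real deployments use primes of hundreds of digits) shows both the randomized encryption and the correctness of decryption:

```python
import random

p, g = 467, 2                  # toy public parameters
x = 127                        # private key
y = pow(g, x, p)               # public key component; public key is (y, g, p)

def encrypt(m: int) -> tuple:
    r = random.randrange(2, p - 1)                # fresh random number per encryption
    return pow(g, r, p), (m * pow(y, r, p)) % p   # ciphertext pair (c, c')

def decrypt(c: int, c2: int) -> int:
    return (c2 * pow(pow(c, x, p), -1, p)) % p    # m = c' / c^x mod p

m = 100
ct1, ct2 = encrypt(m), encrypt(m)
assert decrypt(*ct1) == decrypt(*ct2) == m   # both ciphertexts decrypt to m
# ct1 and ct2 almost certainly differ, since each encryption draws a fresh r
```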

7. Digital signature

64. Digital signature

A digital signature (Digital Signature), also known as an electronic signature, is a set of specific symbols or codes attached to an electronic document, formed by using cryptographic techniques to extract and authenticate information from that document. It identifies the issuer and the issuer's approval of the document, and allows the receiver to verify whether the document was tampered with or forged during transmission.

  • The sender A uses the Hash algorithm to generate a message digest (Message Digest)
  • The sender A encrypts the message digest with its own private key, and the encrypted message digest is the digital signature
  • Sender A sends the message and signature to receiver B
  • After receiving the message and its signature, the receiver B decrypts the signature with the public key of the sender A to obtain the message digest generated by the sender A
  • Receiver B uses the same Hash algorithm as sender A to recompute the digest of the received message and compares the two digests; if they match, the signature is a valid signature of sender A on this message, otherwise the signature is invalid
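
The hash-then-sign flow above can be sketched with a toy RSA key pair (hypothetical parameters; real signatures use padded schemes such as RSA-PSS with keys of at least 2048 bits):

```python
import hashlib

# toy RSA pair for signer A (illustration only; requires Python 3.8+ for pow inverse)
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def sign(message: bytes) -> int:
    # 1) hash the message, 2) "encrypt" the (reduced) digest with the private key
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # recover the digest with the public key and compare against a fresh hash
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"contract text")
assert verify(b"contract text", sig)
```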

65. Proxy signature

Proxy signature means that the original signer authorizes his signature right to the agent, and the agent exercises his signature right on behalf of the original signer. When the verifier verifies the proxy signature, the verifier can not only verify the validity of the signature, but also be sure that the signature is approved by the original signer.

Proxy signature can be divided into fully entrusted proxy signature, partially authorized proxy signature (proxy signature of non-protected proxy, proxy signature of protected proxy) and proxy signature with power of attorney according to the authorization form given by the original signer to the proxy signer.

66. Blind signature

A blind signature is a digital signature with special properties, first proposed by D. Chaum in 1982. It requires that the signer can sign a message without knowing the content of the file being signed.
Even if the signer later sees the signed message and its signature, he cannot tell when or for whom the signature was generated. Intuitively, the signature is produced as if the signer signed the message with his eyes closed, hence the vivid name "blind" digital signature.

67. Multiple digital signatures

In digital signature applications, sometimes multiple users are required to sign and authenticate the same file. A digital signature scheme that enables multiple users to sign the same file is called a digital multi-signature (Digital Multi-signature) scheme .

68. Group signature

In 1991, Chaum and Heyst proposed the Group Signature scheme for the first time. A group signature scheme allows legitimate members of a group to sign in the name of the group; it provides anonymity for the signer, and only an authority can identify the signer. Generally speaking, the participants in a group signature are the group members (signers), the **group administrator (GC, Group Center)**, and the **signature acceptors (signature verifiers)**.

69. Non-repudiation signature

The essence of an undeniable signature is that the validity of the signature cannot be verified without the signer's cooperation, which prevents unauthorized copying or distribution of the signed document. This property lets the signer control the distribution of a product, and such signatures are used in electronic publishing systems and intellectual property protection.

8. Password protocol

70. Cryptography protocol

A protocol is a series of steps performed by two or more parties to complete a task; the steps must be executed in sequence, and a later step cannot begin until the previous one has finished.

A cryptographic protocol is a protocol that completes a certain task and meets security requirements based on cryptographic algorithms (including symmetric cryptographic algorithms, public key cryptographic algorithms, hash functions, etc.), also known as security protocols. Common cryptographic protocols include: key establishment protocol, authentication protocol, zero-knowledge proof protocol, bit commitment, secure multi-party computing protocol, etc.

71. Participants

Participants refer to the parties involved in the protocol, and each participant is abstracted as a Turing machine with a probabilistic polynomial time algorithm. Participants may be friends or people who trust each other completely, or they may be enemies or people who do not trust each other at all. According to the behavior of the participants in the protocol, they are divided into: honest participants, semi-honest participants and malicious participants.

72. Honest Participants

Honest participants complete each step of the protocol execution exactly as required, while keeping all of their inputs, outputs, and intermediate results confidential. An honest participant may still infer information about other participants from its own input, output, and intermediate results, but it cannot be corrupted by an attacker.

73. Semi-Honest Participants

During the execution of the protocol, the semi-honest participants complete each step of the protocol completely in accordance with the requirements of the protocol, but at the same time may leak their own input, output and intermediate results to the attacker.

74. Malicious actors

During the execution of the protocol, a malicious participant executes each step entirely according to the attacker's will. It not only leaks all of its input, output, and intermediate results to the attacker, but may also alter its input information, forge intermediate and output information, or even terminate the protocol.

75. Attacker

During the execution of the protocol, an attacker tries to destroy the security or correctness of the protocol, and attacks it by corrupting and controlling a subset of the participants. Attackers can be classified by different criteria, such as the degree of control over malicious participants, computing power, network synchrony or asynchrony, and adaptability. By the degree of control over malicious participants, attackers fall into the following three categories:

76. Passive attacker

Passive attackers, also known as semi-honest attackers, only monitor the input, output, and intermediate calculation results of malicious participants, and do not control the behavior of malicious participants (such as modifying the input and output of malicious participants).

77. Active attacker

In addition to monitoring the input, output and intermediate results of any participant, the active attacker also controls the behavior of the malicious participant (such as maliciously tampering with the participant's input, controlling the malicious participant's output according to his own intention, etc.).

78. Covert attacker

A covert adversary is an attacker type between the passive attacker and the active attacker. The covert attacker attacks the protocol while weighing the probability of being caught (called the deterrence factor), and its corrupted participants may engage in active corruption.

79. Semi-honest model

If all participants are semi-honest or honest, the model is called the semi-honest model. A semi-honest member fully abides by the protocol, but it collects all records produced during protocol execution and tries to infer the inputs of other members. The attacker in the semi-honest model is passive.

80. Malicious models

A model with malicious participants is called the malicious model. A malicious participant executes each step of the protocol entirely according to the attacker's wishes: it not only leaks all of its input, output, and intermediate results to the attacker, but may also alter its input information, forge intermediate and output information, or even terminate the protocol. The attacker in the malicious model is active.

81. Covert attacker model

The model in which covert attackers participate is called the covert attacker model. Corrupted participants may carry out active corruption, but only as long as the probability of the corruption being detected stays below a certain bound.

82. Zero-Knowledge Proofs

Zero Knowledge Proof was first proposed by S. Goldwasser, S. Micali, and C. Rackoff in the 1985 paper "The Knowledge Complexity of Interactive Proof Systems". It is a cryptographic protocol by which the prover demonstrates the correctness of an assertion without revealing any other information.

One party to the protocol is called the prover (Prover), denoted $P$; the other party is called the verifier (Verifier), denoted $V$. A zero-knowledge proof means that $P$ tries to convince $V$ that a certain assertion is correct without divulging any useful information to $V$; that is, $V$ gains no useful information from the proof process. Apart from establishing the correctness of the prover's assertion, the proof discloses no other information or knowledge.
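
A classic concrete instance is a Schnorr-style proof of knowledge of a discrete logarithm: $P$ convinces $V$ that he knows $x$ with $y = g^x \bmod p$ without revealing $x$. A toy one-round sketch (illustrative parameters only, far too small to be secure):

```python
import random

p, g = 467, 2               # toy public parameters
x = 157                     # prover P's secret
y = pow(g, x, p)            # public value; P claims to know its discrete log

# one round: commit, challenge, respond
k = random.randrange(1, p - 1)
t = pow(g, k, p)                 # P sends commitment t = g^k
c = random.randrange(0, 100)     # V sends a random challenge c
s = (k + c * x) % (p - 1)        # P sends response s; k masks x

# V accepts iff g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```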

83. Secure Multi-Party Computation

The problem of secure multi-party computation (SMC, Secure Multi-party Computation) was posed by Andrew Chi-Chih Yao (Yao Qizhi), the Chinese computer scientist and 2000 Turing Award winner, in the 1982 paper "Protocols for secure computations", through the millionaire problem (two millionaires, Alice and Bob, want to know which of them is richer without revealing any information about their wealth to each other or to any third party), opening a new field of cryptographic research.

Secure multi-party computation means that in a multi-user network whose users do not trust one another, $n$ participants $P_1, P_2, \ldots, P_n$, each holding secret data $x_i$, wish to jointly compute a function $f(x_1, x_2, \ldots, x_n) = (y_1, y_2, \ldots, y_n)$ such that each $P_i$ obtains only the result $y_i$ and does not disclose $x_i$ to the other participants.
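
One classic building block is additive secret sharing: each party splits its input into random shares that sum to it, so the parties can jointly compute the sum of all inputs while no single share reveals anything. A minimal sketch with three hypothetical parties:

```python
import random

MOD = 2**32   # all arithmetic is done modulo a fixed value

def share(secret: int, n: int) -> list:
    """Split secret into n random additive shares summing to it mod MOD."""
    parts = [random.randrange(MOD) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % MOD)
    return parts

inputs = [15, 27, 8]                        # private inputs x_1, x_2, x_3
all_shares = [share(x, 3) for x in inputs]  # each P_i sends one share to each party
# party j locally adds the shares it received (column j); partial sums are combined
partials = [sum(col) % MOD for col in zip(*all_shares)]
total = sum(partials) % MOD
assert total == sum(inputs)                 # joint sum computed, inputs never pooled
```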

9. Key management

84. Key management

Key management is the set of technologies and procedures for establishing and maintaining key relationships between authorized parties. It covers the entire process from key generation to final destruction, including key generation, storage, distribution, negotiation, use, backup and recovery, update, revocation, and destruction.

85. Session key

The session key (Session Key) is mainly used to encrypt the exchange data of two communication terminal users , also known as the data encryption key (Data Encrypting Key) . The life cycle of the session key is very short. It is usually generated when the session is established and destroyed after the session ends. It is mainly used to protect the transmitted data. Most of the session keys are temporary and dynamically generated, which can be negotiated by the communication parties or distributed by the key distribution center .

86. Key encryption key

The key encryption key (Key Encrypting Key) is mainly used to encrypt the session keys being transmitted, and is also called the secondary key (Secondary Key) or auxiliary key. Its lifetime is relatively long; because it is used to negotiate or transmit session keys, its leakage would expose all session keys used during its lifetime.

87. Master key

The master key (Master Key) is mainly used to protect key encryption keys or session keys so that those keys can be distributed online. The master key sits at the highest level of the hierarchical key structure; it is a secret key selected by the user or assigned by the system, which can belong exclusively to the user for a long time and, to some extent, also serves to identify the user. It has the longest life cycle and is strictly protected.

88. Key Lifecycle

The lifecycle of a key is the entire process from key generation to final destruction. During this lifecycle, the key passes through 4 states: the pre-use state (the key cannot yet be used for normal cryptographic operations), the use state (the key is available and in normal use), the post-use state (the key is no longer in normal use, but may still be accessed offline for certain purposes), and the expired state (the key is no longer used and all key records have been deleted).

89. Key distribution

Through a key distribution mechanism, one of the communicating parties, or a key distribution center, selects a secret key and transmits it to the other party without letting anyone else (other than the key distribution center) see it. To prevent an attacker from obtaining the key, keys must be updated frequently; the strength of a cryptographic system depends on its key distribution technique.

90. Key agreement

The purpose of key agreement is for the communicating parties to exchange information over the network and thereby generate a session key shared by both. A typical key agreement is the Diffie-Hellman key exchange protocol, a two-party key agreement scheme without identity authentication. The improved Station-to-Station Protocol based on it is a more secure key agreement protocol.
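
A toy run of the Diffie-Hellman exchange (illustrative small parameters; real deployments use groups of at least 2048 bits, and the basic protocol is unauthenticated, which is exactly why the Station-to-Station variant adds authentication):

```python
# Toy Diffie-Hellman key exchange over a small prime field
p, g = 467, 2        # public parameters agreed on in advance

a = 123              # Alice's secret exponent
b = 321              # Bob's secret exponent
A = pow(g, a, p)     # Alice -> Bob
B = pow(g, b, p)     # Bob -> Alice

k_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
k_bob = pow(A, b, p)     # Bob computes (g^a)^b mod p
assert k_alice == k_bob  # both now hold the same session key
```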

91. Key Escrow

Key escrow, also known as escrow encryption, provides secure communication for the public and ordinary users while allowing authorized parties (including government security departments, enterprise technical personnel, and special users) to monitor certain communications and decrypt the related ciphertext. Key escrow is also called "key recovery", and is variously understood as "trusted third party", "data recovery", or "special access"; under it, individuals have no absolute privacy and no absolutely untraceable anonymity. It is realized by linking the encrypted data to a data recovery key; the data recovery key need not be the direct decryption key, but the decryption key can be obtained from it.

92. Managed Encryption Standards

To control the use of encryption technology effectively, the US government proposed the Clipper plan and key escrow encryption technology in April 1993. The essence of key escrow technology was to propose that the federal government and industry adopt a new federal encryption standard with a key escrow function, the Escrowed Encryption Standard (EES, Escrowed Encryption Standard), also known as the Clipper proposal. The EES standard was officially announced and adopted by the US government in February 1994. Its core is a cryptographic component implemented in hardware and software, developed under the auspices of the National Security Agency (NSA): the tamper-resistant Clipper chip.

93. Public Key Infrastructure

Public Key Infrastructure (PKI, Public Key Infrastructure) is based on public key cryptography and provides infrastructure for security services. The core is to solve the trust problem in the information network space and determine a reliable digital identity .

PKI technology is based on public key techniques: it uses digital certificates as the medium, combines symmetric and asymmetric encryption, and binds the identity information of individuals, organizations, and devices to their respective public keys. Keys and certificates establish a safe and trusted network operating environment, enabling users to easily apply encryption and digital signature technology in various applications to ensure the confidentiality, integrity, authenticity, and non-repudiation of transmitted information. PKI is widely used in e-commerce, e-government, and other fields.

94.CA

A CA (Certificate Authority), also known as a certification authority, is the core component and executive body of a PKI. It issues a digital certificate for each user of a public key, certifying that the user named in the certificate legitimately owns the public key listed in it. A certification center also includes a registration authority, RA (Registration Authority), which registers and approves digital certificate applications.

95. Digital certificates

A digital certificate, also known as a public key certificate, is a document signed by an authoritative certification center (CA) that contains a public key and the identity information of its holder; it helps users securely obtain each other's public keys.

The content of a digital certificate includes: the version number (distinguishing different versions of X.509), the serial number (the CA assigns a unique numeric identifier to each certificate), the issuer identifier (the X.500 name of the CA that issued the certificate), the subject identifier (the name of the certificate holder), the subject public key information (the public key corresponding to the subject's private key), the certificate validity period, and other information.

96. Certificate chain

In PKI applications, a user's trust comes from verifying certificates, and that trust in turn rests on trusting the third-party CA that issued them. X.509 stipulates that CAs are organized into a directory information tree (DIT); the highest-level CA is called the root CA (Root-CA). Verifying another user's certificate requires the chain of certificates (the certificate chain) leading to it: starting from the root certificate, trust is passed down layer by layer (A trusts B, B trusts C, and so on), so that the holder of the end-entity certificate inherits the transferred trust needed to prove its identity.

97. Timestamp Authority

Since a user's desktop clock is easy to change, the timestamps it generates are unreliable, so a trusted third party is needed to provide a reliable, non-repudiable timestamp service. The Time Stamp Authority (TSA, Time Stamp Authority), an important part of PKI, acts as a trusted third-party time authority. Its main function is to provide reliable time information proving that a document (or piece of information) existed at (or before) a certain time, preventing users from forging data before or after that time for fraudulent purposes.

98. Digital Envelope

The digital envelope is a technique for secure information transmission that combines the advantages of symmetric and asymmetric encryption: it exploits the speed of symmetric algorithms and the convenient key management of asymmetric algorithms.

The digital envelope encrypts the message to be transmitted with a session key, and encrypts the session key with the recipient's public key.
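
A minimal sketch of these two steps, using a hash-based XOR keystream as a stand-in for the symmetric cipher and a toy RSA pair for the recipient (all parameters illustrative; real envelopes use, e.g., AES plus 2048-bit RSA):

```python
import hashlib
import secrets

def stream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream."""
    ks = b""
    ctr = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ k for d, k in zip(data, ks))

# recipient's toy RSA pair (real envelopes use much larger keys)
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

session_key = secrets.token_bytes(1)                  # one-byte toy key so it fits below n
body = stream_cipher(session_key, b"secret message")  # 1) encrypt message with session key
wrapped = pow(session_key[0], e, n)                   # 2) wrap session key with public key

# recipient: unwrap the session key with private d, then decrypt the body
recovered_key = bytes([pow(wrapped, d, n)])
assert stream_cipher(recovered_key, body) == b"secret message"
```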

10. Quantum cryptography

99. Quantum cryptography

Quantum cryptography is an emerging science combining quantum physics and cryptography. Quantum cryptographic communication is not used to transmit ciphertext or plaintext, but to establish and transmit a key, and this key is absolutely secure. Quantum cryptographic communication is currently the only communication method recognized by science as achieving absolute security: it ensures that the legitimate communicating parties can detect a potential eavesdropper and take countermeasures, and that the quantum cipher cannot be cracked no matter how powerful the attacker is.

Quantum cryptography uses quantum uncertainty to construct a secure communication channel, so that both communicating parties can detect whether the information has been eavesdropped on; this property secures key negotiation and key exchange. Quantum cryptography does not rely on the computational difficulty of a problem; it provides provable, unconditional security from the fundamental laws of physics. Moreover, anyone who eavesdrops on a quantum key exchange and tries to copy the key cannot go undetected.

The security of quantum cryptography is based on the Heisenberg uncertainty principle of quantum mechanics; breaking a quantum cryptographic protocol would mean negating the laws of quantum mechanics, so quantum cryptography is a theoretically secure cryptographic technique.

100. Quantum Cryptosystem

A quantum cryptography system is a cryptographic system that enables the sender to share secret information with the receiver using a quantum channel, and unauthorized third parties cannot steal the information. A quantum cryptography system consists of quantum information source, quantum channel, classical channel, sender, receiver and other parts.

Origin blog.csdn.net/apr15/article/details/128770449