Learn QUIC series from Brother Jian: Address Validation

QUIC is a new generation of Internet transport protocol. In designing the protocol standard, the IETF QUIC working group focused not only on optimizing performance but also on security. This article introduces QUIC's address validation.

Address validation is mainly used to ensure that endpoints cannot be exploited for traffic amplification attacks. If an attacker forges the source address of its packets to be the victim's address and sends a large number of packets to the server, and the server responds to that source address (the victim) with a large volume of data without performing address validation, the server can be exploited by the attacker to carry out a traffic amplification attack.

QUIC's main defense against amplification attacks is to verify that an endpoint can receive packets at its declared transport address. Address validation occurs during connection establishment and during connection migration.

  • When a connection is established, in order to verify that the client's address has not been forged by an attacker, the server generates a token and responds to the client with a Retry packet. The client must include this token in its subsequent Initial packets so that the server can validate the address.
  • The server can issue a token in advance through a NEW_TOKEN frame on the current connection, so that the client can use it on subsequent new connections; this is an important building block for QUIC's 0-RTT.
  • When the network path changes (for example, switching from cellular to Wi-Fi), QUIC provides connection migration to avoid interrupting the connection. QUIC verifies the reachability of the new address through Path Validation, preventing the addresses used in connection migration from being forged by attackers.

Below we will learn more about how QUIC performs Address Validation:

Address validation at connection establishment

The establishment of the connection implicitly provides address validation for both endpoints. In particular, receiving a packet protected with a Handshake key means that the peer successfully processed the Initial packet. Once the server has successfully processed a Handshake packet sent by the client, it can consider that the client's address has been verified as valid.

If the connection ID used by the peer is one chosen by the endpoint, and the connection ID contains at least 64 bits of entropy, then the endpoint can consider the peer's address to be validated. For the client, the Destination Connection ID field in its first Initial packet can be used to validate the server's address in this way.

The server may wish to validate the client's address before starting the cryptographic handshake. QUIC uses a token carried in the Initial packet for this purpose. The token can be delivered to the client via a Retry packet during connection establishment, or via a NEW_TOKEN frame on a previous connection. The handshake proceeds only after the token passes validation.

Precautions

  • Before the client's address is validated, the server must not send more than three times the number of bytes it has received; this prevents an attacker from using the server for an amplification attack before address validation completes (see the sketch after this list).
  • The client MUST ensure that the UDP datagram payload containing an Initial packet is at least 1200 bytes, adding PADDING frames if it would otherwise be smaller.
  • If the client loses the Initial or Handshake packets sent by the server, the connection may deadlock. To prevent such a deadlock, the client MUST send a packet on a probe timeout (PTO). If the client has no Handshake keys, it SHOULD send a UDP datagram of at least 1200 bytes containing an Initial packet; if the client has Handshake keys, it SHOULD send a Handshake packet.
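To make the 3x anti-amplification limit concrete, the following Go sketch shows how a server implementation might track bytes received and sent on a not-yet-validated path before allowing more data out. This is a simplified model under stated assumptions, not code from any particular QUIC library; the name amplificationLimiter is hypothetical.

```go
package main

import "fmt"

// amplificationLimiter is a hypothetical per-path byte counter that a server
// could keep until the client's address has been validated.
type amplificationLimiter struct {
	received  int  // bytes received from the (unvalidated) address
	sent      int  // bytes already sent back to that address
	validated bool // set once address validation succeeds
}

// onDatagramReceived records bytes arriving from the peer address.
func (a *amplificationLimiter) onDatagramReceived(n int) { a.received += n }

// canSend reports whether sending n more bytes would stay within the
// "at most three times the bytes received" limit for an unvalidated address.
func (a *amplificationLimiter) canSend(n int) bool {
	if a.validated {
		return true
	}
	return a.sent+n <= 3*a.received
}

// onDatagramSent records bytes the server actually transmitted.
func (a *amplificationLimiter) onDatagramSent(n int) { a.sent += n }

func main() {
	var lim amplificationLimiter
	lim.onDatagramReceived(1200)   // client's padded Initial datagram
	fmt.Println(lim.canSend(3600)) // true: within 3 * 1200
	fmt.Println(lim.canSend(3601)) // false: would exceed the limit
}
```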

Validate addresses using Retry Packets

After receiving an Initial packet from the client, the server can request address validation by sending a Retry packet containing a token. After the client receives the token from the Retry packet, it MUST include the token in all subsequent Initial packets it sends on this connection.

The following diagram shows the use of the Retry packet:

Client                                                  Server

Initial[0]: CRYPTO[CH] ->

                                                <- Retry+Token

Initial+Token[1]: CRYPTO[CH] ->

                                 Initial[0]: CRYPTO[SH] ACK[1]
                       Handshake[0]: CRYPTO[EE, CERT, CV, FIN]
                                 <- 1-RTT[0]: STREAM[1, "..."]

When the server receives an Initial packet carrying a token, it cannot send another Retry packet; it can only abort the connection attempt or allow the packet to continue to be processed.

The token is unpredictable, so an attacker cannot generate a valid token for an address it has forged. By echoing the token, the client proves to the server that it received the Retry packet at the claimed address, and therefore that the address is valid.
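The decision the server makes on each incoming Initial packet can be sketched as follows: no token means respond with a Retry carrying a fresh token, a matching token means the address is validated, and anything else is rejected. The types and the in-memory issuedTokens map below are hypothetical placeholders for illustration only, not the API of any real QUIC implementation.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// initialPacket is a hypothetical, heavily simplified view of a client Initial.
type initialPacket struct {
	token      []byte
	clientAddr string
}

// issuedTokens stands in for server-side state (or a stateless crypto scheme)
// used to check tokens; here it is just an in-memory set keyed by address.
var issuedTokens = map[string]string{}

// newRetryToken creates an unpredictable token bound to the client address.
func newRetryToken(addr string) []byte {
	b := make([]byte, 16)
	rand.Read(b)
	issuedTokens[addr] = hex.EncodeToString(b)
	return b
}

// handleInitial sketches the server's three possible outcomes.
func handleInitial(p initialPacket) string {
	if len(p.token) == 0 {
		tok := newRetryToken(p.clientAddr)
		return fmt.Sprintf("send Retry with token %x", tok)
	}
	if issuedTokens[p.clientAddr] == hex.EncodeToString(p.token) {
		return "token valid: address validated, continue handshake"
	}
	// A second Retry is not allowed here: drop or close with INVALID_TOKEN.
	return "token invalid: drop packet or close with INVALID_TOKEN"
}

func main() {
	first := initialPacket{clientAddr: "192.0.2.7:4433"}
	fmt.Println(handleInitial(first))

	tok, _ := hex.DecodeString(issuedTokens["192.0.2.7:4433"])
	second := initialPacket{clientAddr: "192.0.2.7:4433", token: tok}
	fmt.Println(handleInitial(second))
}
```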

Precautions

  • A server can also use Retry packets to defer the state and processing cost of connection establishment. Requiring the server to provide a different connection ID, together with the original_destination_connection_id transport parameter, forces the server to prove that it, or an entity it cooperates with, received the client's original Initial packet. Providing a different connection ID also lets the server control how subsequent packets are routed, which can be used to direct connections to a different server instance.
  • If the server receives a client Initial packet containing an invalid token, it knows the client has failed address validation. The server MAY drop the packet, or it MAY close the connection with an INVALID_TOKEN error.

Using NEW_TOKEN tokens on subsequent connections

The server can provide the client with an address validation token during one connection that can be used on subsequent connections. This is especially important for 0-RTT: a subsequent new connection can use the token directly for address validation without spending an extra round trip on a Retry.

The server uses the NEW_TOKEN frame to provide the client with an address verification token, which can be used to verify subsequent connections. On subsequent connections, the client includes this token in the Initial packets to provide address verification.

A token provided in a Retry packet can only be used immediately and cannot be used for address validation on later connections. A token delivered in a NEW_TOKEN frame can be used over a period of time; it should therefore have an expiration, either an explicit expiration time or an issuance timestamp from which the expiration can be computed. The server can store the expiration time, or include it in encrypted form in the token itself.

A token received in a NEW_TOKEN frame is valid for any server for which the connection is considered authoritative (e.g., the server's certificate contains that server name). When connecting to a server for which it retains an applicable and unused token, the client SHOULD include that token in the Token field of its Initial packets. Including the token allows the server to validate the client's address without an additional round trip. A client MUST NOT send a token that is not applicable to the server it is connecting to, unless the client knows that the server that issued the token and the server it is connecting to jointly manage tokens.
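The Go sketch below illustrates one way a client might keep NEW_TOKEN tokens keyed by the server name it considers authoritative, and pick an applicable, unused token when opening a new connection. This is an assumption about how a client could organize such state, not an API of any specific library.

```go
package main

import "fmt"

// tokenStore is a hypothetical client-side store of NEW_TOKEN tokens,
// keyed by the server name for which they are considered applicable.
type tokenStore struct {
	tokens map[string][][]byte
}

func newTokenStore() *tokenStore {
	return &tokenStore{tokens: make(map[string][][]byte)}
}

// save records a token received in a NEW_TOKEN frame from serverName.
func (s *tokenStore) save(serverName string, token []byte) {
	s.tokens[serverName] = append(s.tokens[serverName], token)
}

// take returns the most recently received unused token for serverName,
// removing it so it is not reused on another connection attempt.
func (s *tokenStore) take(serverName string) ([]byte, bool) {
	list := s.tokens[serverName]
	if len(list) == 0 {
		return nil, false // no applicable token: send an empty Token field
	}
	tok := list[len(list)-1]
	s.tokens[serverName] = list[:len(list)-1]
	return tok, true
}

func main() {
	store := newTokenStore()
	store.save("example.com", []byte("token-from-previous-connection"))

	if tok, ok := store.take("example.com"); ok {
		fmt.Printf("include token %q in the Initial packet\n", tok)
	}
	if _, ok := store.take("other.example"); !ok {
		fmt.Println("no applicable token: Initial carries an empty Token field")
	}
}
```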

In a stateless design, the server can use encrypted and authenticated tokens to pass information to the client, which the server can later recover and use to validate the client's address. Tokens are not integrated into the cryptographic handshake, so they are not authenticated; for example, a client might be able to reuse a token. To avoid attacks that exploit this property, the server can limit the token to contain only the information needed to validate the client's address.
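As a sketch of such a stateless design, the Go code below seals the client IP and an issue timestamp into an authenticated, encrypted token using AES-GCM. The token contents, layout, and key handling here are illustrative assumptions; a real server would choose its own format and key management.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/binary"
	"fmt"
	"net"
	"time"
)

// sealToken encrypts and authenticates the client IP and an issue timestamp
// with AES-GCM, producing nonce || ciphertext. The key belongs to the server.
func sealToken(key []byte, clientIP net.IP, issuedAt time.Time) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	// Plaintext: 16-byte IP (IPv4 mapped into IPv6 form) + 8-byte unix time.
	plain := make([]byte, 0, 24)
	plain = append(plain, clientIP.To16()...)
	plain = binary.BigEndian.AppendUint64(plain, uint64(issuedAt.Unix()))

	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return append(nonce, aead.Seal(nil, nonce, plain, nil)...), nil
}

func main() {
	key := make([]byte, 16) // in practice a persistent, secret server key
	rand.Read(key)

	token, err := sealToken(key, net.ParseIP("192.0.2.7"), time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Printf("token to send in a NEW_TOKEN frame: %x\n", token)
}
```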

Precautions

  • The client MUST include this token in all Initial packets it sends, unless a Retry packet replaces the token with a new one.
  • The client cannot use a token provided in a Retry packet on subsequent new connections, because a Retry token can only be used immediately on the current connection. The server MAY discard any Initial packet that does not carry the expected token.
  • Tokens issued in NEW_TOKEN frames MUST NOT leak information that could correlate connections. For example, they cannot include a previous connection ID or address information unless those values are encrypted.
  • The server MUST ensure that each NEW_TOKEN frame it sends is unique among all clients, unless it is resent to repair a previously lost NEW_TOKEN frame.
  • Tokens allow the server to correlate the connection on which a token was issued with the connection on which it is used. A client that does not want this linkage can discard tokens obtained from NEW_TOKEN frames. A token obtained from a Retry packet MUST be used immediately on the current connection attempt and cannot be used on subsequent connection attempts.
  • Clients SHOULD NOT reuse the NEW_TOKEN token across different connection attempts.
  • A client may receive multiple tokens on a connection. The server MAY send additional tokens to enable address validation across multiple connection attempts, or to replace old or expired tokens. For the client, this ambiguity means that sending the most recently received unused token is most likely to be effective. While saving and using older tokens has no negative consequences, the client may consider older tokens less likely to be useful to the server for address validation.
  • When the server receives an Initial packet with an address validation token, it MUST attempt to validate the token unless it has already completed address validation. If the token is invalid, the server SHOULD proceed as if the client does not have a validated address, which may include sending a Retry packet. If the token is valid, the server SHOULD allow the handshake to proceed.
  • Servers SHOULD use different encodings for tokens sent in NEW_TOKEN frames and in Retry packets, and SHOULD validate the latter more strictly.

Token integrity

A token must be difficult to guess; including a sufficiently large random value (at least 128 bits of entropy) in the token is enough, but the server then needs to remember the value it sent to the client in order to validate it later.

Tokens must be covered by integrity protection to prevent modification or forgery by clients. Without integrity protection, malicious clients can generate or guess tokens acceptable to the server.

The token does not need to be in a well-defined format, as the server that generates the token will also use it. Tokens sent in Retry packets SHOULD contain information that allows the server to verify that the source IP address and port in the client's packets remain the same.

The token sent in the NEW_TOKEN frame must contain information that allows the server to verify that the client IP address has not changed since the token was issued. The server can use the token in NEW_TOKEN to decide not to send a Retry packet, even if the client's address has changed. If the client IP address has changed, the server MUST respect anti-amplification restrictions. Note that in the presence of NAT, this requirement may not be sufficient to protect other hosts sharing the NAT from amplification attacks.

Attackers can replay tokens to allow servers to act as amplifiers for DDoS attacks. To prevent such attacks, the server must ensure that token replay is prevented or restricted. Servers SHOULD ensure that tokens sent in Retry packets are only accepted for a short period of time. Tokens provided in NEW_TOKEN frames require a longer validity period, but should not be accepted multiple times in a short period of time, and the server is encouraged to allow the token to be used only once. If possible, the token can contain additional information about the client to further narrow the applicability or reuse.
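Continuing the stateless-token sketch from earlier, a server could validate such a token by decrypting it, checking that the client IP still matches, and enforcing a short lifetime for Retry tokens and a longer one for NEW_TOKEN tokens. The lifetimes and token layout below are illustrative assumptions only.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"encoding/binary"
	"errors"
	"fmt"
	"net"
	"time"
)

// Illustrative lifetimes: Retry tokens are accepted only briefly,
// NEW_TOKEN tokens for longer. Real servers choose their own policy.
const (
	retryTokenLifetime = 10 * time.Second
	newTokenLifetime   = 24 * time.Hour
)

// openToken reverses the sealToken sketch: nonce || AES-GCM ciphertext,
// where the plaintext is a 16-byte IP followed by an 8-byte unix timestamp.
func openToken(key, token []byte) (net.IP, time.Time, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, time.Time{}, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, time.Time{}, err
	}
	if len(token) < aead.NonceSize() {
		return nil, time.Time{}, errors.New("token too short")
	}
	plain, err := aead.Open(nil, token[:aead.NonceSize()], token[aead.NonceSize():], nil)
	if err != nil || len(plain) != 24 {
		return nil, time.Time{}, errors.New("token failed integrity check")
	}
	ip := net.IP(plain[:16])
	issuedAt := time.Unix(int64(binary.BigEndian.Uint64(plain[16:])), 0)
	return ip, issuedAt, nil
}

// validateToken checks address and age; fromRetry selects the stricter lifetime.
func validateToken(key, token []byte, clientIP net.IP, fromRetry bool) error {
	ip, issuedAt, err := openToken(key, token)
	if err != nil {
		return err
	}
	if !ip.Equal(clientIP) {
		return errors.New("client address changed since the token was issued")
	}
	lifetime := newTokenLifetime
	if fromRetry {
		lifetime = retryTokenLifetime
	}
	if time.Since(issuedAt) > lifetime {
		return errors.New("token expired")
	}
	return nil
}

func main() {
	key := make([]byte, 16)
	// A forged token fails the integrity check and is rejected.
	err := validateToken(key, []byte("not a real token"), net.ParseIP("192.0.2.7"), true)
	fmt.Println(err)
}
```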

Path Validation

Path validation is used by both peers during connection migration to verify reachability after a change of address. In path validation, an endpoint tests reachability between a local address and a peer address, where an address is the two-tuple of IP address and port.

Path verification is used to ensure that packets received from the migrator do not carry forged source addresses. Any endpoint can use path validation at any time. For example, an endpoint might check to see if the peer is still using the same address after a quiet period.

Path validation is not designed as a NAT traversal mechanism. Although it may be effective for creating NAT bindings that support traversal, the expectation is that an endpoint can receive packets on a path without first having sent packets on it. Effective NAT traversal requires additional synchronization mechanisms that are not provided here.

An endpoint may send PATH_CHALLENGE and PATH_RESPONSE frames in combination with other frames during path validation. In particular, an endpoint may pad a packet carrying a PATH_CHALLENGE frame with PADDING frames up to at least 1200 bytes, or send a PATH_RESPONSE frame together with its own PATH_CHALLENGE frame.

An endpoint uses a new connection ID for probes sent from a new local address. When probing a new path, the endpoint wants to ensure that its peer has an unused connection ID available for its responses. If the peer's active_connection_id_limit allows it, the endpoint can send NEW_CONNECTION_ID and PATH_CHALLENGE frames in the same packet, which ensures that the peer has an unused connection ID to use when sending its response.

Initiating Path Validation

To initiate path verification, the endpoint will send a PATH_CHALLENGE frame, which must contain an unpredictable payload so that it can correlate the peer's response with the corresponding PATH_CHALLENGE.

An endpoint MAY send multiple PATH_CHALLENGE frames to guard against packet loss. However, it should not send multiple PATH_CHALLENGE frames in the same packet; they should be sent in separate packets.

An endpoint SHOULD NOT send packets containing PATH_CHALLENGE frames more frequently than it would send Initial packets; this ensures that connection migration does not place more load on a new path than establishing a new connection would.

Endpoints MUST extend datagrams containing PATH_CHALLENGE frames to at least 1200 bytes.
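A minimal Go sketch of initiating path validation might look like the following: generate an 8-byte unpredictable payload, remember it so the later response can be matched, and expand the probing datagram to at least 1200 bytes. The framing here is a stand-in for illustration; real implementations encode frames according to the QUIC wire format.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

const minProbeDatagramSize = 1200 // datagrams carrying PATH_CHALLENGE are expanded

// pathChallenge holds the 8-byte unpredictable payload of a PATH_CHALLENGE frame.
type pathChallenge [8]byte

// pendingChallenges tracks challenges we sent and are still waiting on.
var pendingChallenges = map[pathChallenge]bool{}

// buildPathChallengeDatagram creates a probe: a PATH_CHALLENGE payload plus
// padding so the whole datagram is at least 1200 bytes.
func buildPathChallengeDatagram() ([]byte, pathChallenge, error) {
	var c pathChallenge
	if _, err := rand.Read(c[:]); err != nil {
		return nil, c, err
	}
	pendingChallenges[c] = true

	datagram := make([]byte, 0, minProbeDatagramSize)
	datagram = append(datagram, c[:]...) // stand-in for the PATH_CHALLENGE frame
	datagram = append(datagram, make([]byte, minProbeDatagramSize-len(datagram))...) // PADDING
	return datagram, c, nil
}

func main() {
	dgram, c, err := buildPathChallengeDatagram()
	if err != nil {
		panic(err)
	}
	fmt.Printf("probe datagram of %d bytes, challenge payload %x\n", len(dgram), c)
}
```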

Path Validation Responses

Upon receiving a PATH_CHALLENGE frame, the endpoint MUST echo the data in the PATH_CHALLENGE frame via a PATH_RESPONSE frame. Unless restricted by congestion control, endpoints MUST NOT delay transmission of packets containing PATH_RESPONSE frames.

A PATH_RESPONSE frame MUST be sent on the network path where the PATH_CHALLENGE frame was received. This ensures that path validation only succeeds if the path works in both directions. However, the endpoint that initiates path validation MUST NOT enforce this requirement, as doing so would enable an attack on connection migration.

Endpoints MUST extend datagrams containing PATH_RESPONSE frames to at least 1200 bytes.

An endpoint MUST NOT send multiple PATH_RESPONSE frames in response to one PATH_CHALLENGE frame. The peer can send more PATH_CHALLENGE frames as needed to elicit additional PATH_RESPONSE frames.
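On the receiving side, answering a PATH_CHALLENGE amounts to echoing its payload in a PATH_RESPONSE on the same path, again in a datagram expanded to at least 1200 bytes. The sketch below models this with plain byte slices and a hypothetical sendOnPath callback.

```go
package main

import "fmt"

const minResponseDatagramSize = 1200

// respondToPathChallenge echoes the 8-byte challenge payload in a
// PATH_RESPONSE and pads the datagram; it must be sent on the path the
// challenge arrived on, represented here by the sendOnPath callback.
func respondToPathChallenge(challenge [8]byte, sendOnPath func([]byte)) {
	datagram := make([]byte, 0, minResponseDatagramSize)
	datagram = append(datagram, challenge[:]...) // stand-in for the PATH_RESPONSE frame
	datagram = append(datagram, make([]byte, minResponseDatagramSize-len(datagram))...)
	sendOnPath(datagram)
}

func main() {
	challenge := [8]byte{1, 2, 3, 4, 5, 6, 7, 8}
	respondToPathChallenge(challenge, func(d []byte) {
		fmt.Printf("sent %d-byte datagram echoing %x on the arrival path\n", len(d), d[:8])
	})
}
```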

Successful Path Validation

Path verification is successful when a PATH_RESPONSE frame is received and it contains data sent in a previous PATH_CHALLENGE frame.

An acknowledgment (ACK) received for a packet containing a PATH_CHALLENGE frame does not prove valid path verification, as the ACK may be spoofed by a malicious peer.
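Matching a PATH_RESPONSE against outstanding challenges could look like the Go sketch below; only an exact echo of previously sent PATH_CHALLENGE data marks the path as validated, while acknowledgments of the challenge packet are deliberately ignored for this purpose. The pathValidator type is a hypothetical model, not a real library API.

```go
package main

import "fmt"

// pathValidator tracks challenges sent on one path and whether it is validated.
type pathValidator struct {
	outstanding map[[8]byte]bool
	validated   bool
}

// onChallengeSent remembers the payload of a PATH_CHALLENGE we transmitted.
func (v *pathValidator) onChallengeSent(data [8]byte) {
	if v.outstanding == nil {
		v.outstanding = make(map[[8]byte]bool)
	}
	v.outstanding[data] = true
}

// onPathResponse succeeds only if the echoed data matches a challenge we sent.
func (v *pathValidator) onPathResponse(data [8]byte) bool {
	if v.outstanding[data] {
		v.validated = true
	}
	return v.validated
}

func main() {
	var v pathValidator
	v.onChallengeSent([8]byte{0xde, 0xad, 0xbe, 0xef, 1, 2, 3, 4})

	fmt.Println(v.onPathResponse([8]byte{9, 9, 9, 9, 9, 9, 9, 9}))             // false: unknown data
	fmt.Println(v.onPathResponse([8]byte{0xde, 0xad, 0xbe, 0xef, 1, 2, 3, 4})) // true: path validated
}
```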

Failed Path Validation

Path validation fails only when the endpoint attempting to validate the path abandons the attempt.

Endpoints SHOULD abandon path validation based on a timer. When setting this timer, be aware that the new path may have a longer RTT than the original one. The recommended value is three times the larger of the current probe timeout (PTO) and the PTO computed for the new path (using the initial RTT).

This timeout allows multiple PTOs to expire before path validation fails, so the loss of a single PATH_CHALLENGE or PATH_RESPONSE frame will not cause path validation to fail.
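The recommended timeout can be expressed directly: three times the larger of the current path's PTO and a PTO computed for the new path from the initial RTT. The sketch below assumes a simplified PTO formula (smoothed RTT plus four times the RTT variation plus max_ack_delay); in a real implementation these values come from the loss-recovery machinery.

```go
package main

import (
	"fmt"
	"time"
)

const initialRTT = 333 * time.Millisecond // default initial RTT from QUIC loss recovery

// pto is a simplified probe timeout: smoothed RTT + 4 * RTT variation + max_ack_delay.
func pto(smoothedRTT, rttVar, maxAckDelay time.Duration) time.Duration {
	return smoothedRTT + 4*rttVar + maxAckDelay
}

// pathValidationTimeout returns a deadline for abandoning path validation:
// three times the larger of the current PTO and a PTO computed for the new
// path from the initial RTT (since the new path's RTT is still unknown).
func pathValidationTimeout(currentPTO, maxAckDelay time.Duration) time.Duration {
	newPathPTO := pto(initialRTT, initialRTT/2, maxAckDelay)
	if currentPTO > newPathPTO {
		return 3 * currentPTO
	}
	return 3 * newPathPTO
}

func main() {
	current := pto(40*time.Millisecond, 10*time.Millisecond, 25*time.Millisecond)
	fmt.Println("abandon path validation after", pathValidationTimeout(current, 25*time.Millisecond))
}
```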

Note that the endpoint MAY receive packets containing other frames on the new path, but a PATH_RESPONSE frame containing the correct data is required in order to successfully verify that the path is valid.

When an endpoint abandons path validation, it deems the path unusable. This does not necessarily mean the connection fails; the endpoint can continue to send packets over other paths as needed. If no path is available, the endpoint can wait until a new path becomes available or close the connection.

Path validation can also be abandoned for other reasons. For example, if a connection migration to a new path is initiated while validation of the old path is still in progress, validation of the old path may be abandoned.


QUIC stands for Quick UDP Internet Connections, a protocol originally designed and proposed by Google and now being standardized by an IETF working group. Its design goal is to replace TCP as the transport-layer protocol underlying HTTP/3. For Internet of Things (IoT) and edge computing scenarios, Xile Technology has been building YoMo, an edge computing microservices framework that uses QUIC as its underlying transport, and has long followed the development of the QUIC protocol.

Online community: discord/quic

Maintainer: YoMo
