Why does HTTP/3 use UDP under QUIC instead of TCP?

HTTP/3's use of UDP provides several advantages over TCP, including faster connection setup, better performance on high-latency networks, and improved security.

Let's begin with a quick overview of HTTP/3.

HTTP/3:

HTTP/3 is the latest version of HTTP. Browsers and major CDNs began deploying it around 2020, and it was standardized by the IETF in 2022 as RFC 9114. It is based on the QUIC transport protocol, which originated at Google and was later standardized by the IETF. Like HTTP/2, HTTP/3 uses a binary format and supports multiplexing and server push. However, it also includes several other features that aim to improve performance and security, such as improved congestion control and built-in encryption.

What is QUIC?

QUIC (originally an acronym for Quick UDP Internet Connections) is a transport-layer protocol that was initially developed by Google and later standardized by the IETF, with the aim of improving web performance and security. QUIC is designed to run over UDP (User Datagram Protocol), a lightweight, connectionless protocol, unlike TCP (Transmission Control Protocol), which is connection-oriented.

QUIC combines responsibilities that traditionally sit in separate layers, handling encryption, loss recovery, congestion control, and stream multiplexing in a single protocol. It is optimized for the modern web, with support for features such as stream prioritization, connection migration, and (via HTTP/3) server push.

One of the key benefits of QUIC is that it reduces latency and improves reliability by allowing multiple requests and responses to be sent over a single connection. This is achieved through stream multiplexing: multiple independent streams of data can be sent and received simultaneously over a single QUIC connection, and a lost packet stalls only the stream it belongs to rather than the entire connection.
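
To make stream multiplexing concrete, here is a minimal, self-contained Python sketch of the idea: frames from two independent streams are interleaved over one "connection" and reassembled per stream on the other side. The (stream id, offset, payload) frame layout is invented for illustration and is not QUIC's actual wire format.

```python
# Toy illustration of stream multiplexing: frames from independent streams
# are interleaved over one "connection". The frame layout here is invented
# for illustration only; it is NOT QUIC's real wire format.

from collections import defaultdict

def make_frames(stream_id: int, data: bytes, chunk: int = 5):
    """Split one stream's data into (stream_id, offset, payload) frames."""
    return [(stream_id, off, data[off:off + chunk])
            for off in range(0, len(data), chunk)]

# Two independent streams share the same connection.
frames = make_frames(1, b"GET /index.html") + make_frames(2, b"GET /style.css")

# Interleave the frames, as a sender might do on one connection.
frames.sort(key=lambda f: f[1])  # crude round-robin interleaving by offset

# Receiver reassembles each stream independently; a gap in stream 1
# would not block delivery of stream 2's data.
streams = defaultdict(dict)
for stream_id, offset, payload in frames:
    streams[stream_id][offset] = payload

for stream_id, chunks in streams.items():
    data = b"".join(chunks[off] for off in sorted(chunks))
    print(stream_id, data)
```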

Another advantage of QUIC is its built-in, mandatory encryption (based on TLS 1.3), which provides stronger security and privacy than running HTTP over cleartext TCP. Because QUIC runs over UDP and recovers losses per stream, it also avoids the transport-level head-of-line blocking that can slow down TCP-based protocols like HTTP/2.

QUIC is a promising protocol that has the potential to significantly improve web performance and security. It is already being adopted by major web companies like Google, Microsoft, and Cloudflare, and is expected to become more widely used in the coming years.

Why does HTTP/3 use UDP instead of TCP?

HTTP/3 uses UDP (User Datagram Protocol) instead of TCP (Transmission Control Protocol) for several reasons:

  1. Faster connection setup: UDP is a connectionless protocol, so no handshake is needed before data can be sent. QUIC still performs a handshake on top of UDP, but it combines the transport and TLS handshakes, so a new connection is usually ready after a single round trip, and a resumed connection can send data with zero round trips (0-RTT). This is faster than the separate TCP and TLS handshakes required before HTTP/2 data can flow. A socket-level sketch of the underlying difference appears below.

  2. Better performance on high-latency networks: every round trip is expensive when latency is high. TCP's connection setup, slow start, and strictly in-order delivery all add round trips or stall data behind a single lost segment, which hurts performance on high-latency paths. UDP imposes none of these behaviors, so QUIC can implement its own loss recovery and congestion control on top of it and tune them for modern, high-latency networks.

  3. Improved security: HTTP/3 is based on QUIC, which includes built-in TLS 1.3 encryption and other security features. Because there is no TCP-style connection state to exhaust, QUIC is not vulnerable to TCP-specific attacks such as SYN flooding, and it adds its own address validation to limit comparable UDP-based flooding and amplification attacks.

HTTP/3's use of UDP provides several advantages over TCP, including faster connections, better performance on high-latency networks, and improved security. However, it also requires different handling by network equipment, and network providers and firewalls need to support it in order to benefit from its advantages.
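
As a minimal illustration of the handshake difference mentioned in the first point, the Python sketch below uses only the standard library: TCP's connect() must complete a three-way handshake before any application data can be sent, while a UDP datagram can be sent immediately. The host and port are placeholder values, and real QUIC still performs its own combined transport-and-TLS handshake on top of UDP, so this only shows the raw UDP property that QUIC builds on.

```python
# Minimal sketch contrasting connection setup: TCP must finish a handshake
# before any data can flow, while a UDP datagram can be sent immediately.
# HOST/PORT are placeholders; real QUIC still runs its own combined
# transport + TLS handshake on top of UDP.

import socket

HOST, PORT = "example.com", 443  # placeholder endpoint

# TCP: connect() blocks until the three-way handshake (one full RTT) completes.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(3)
try:
    tcp.connect((HOST, PORT))  # SYN -> SYN/ACK -> ACK before any data
    print("TCP handshake complete; the application can now send data")
except OSError as exc:
    print("TCP connect failed:", exc)
finally:
    tcp.close()

# UDP: no handshake; the very first packet can already carry application data.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"first datagram, sent without any prior round trip", (HOST, PORT))
udp.close()
```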

How does UDP handle high-latency networks?

UDP (User Datagram Protocol) is a simple, lightweight, and connectionless protocol that does not provide many of the features that TCP (Transmission Control Protocol) offers, such as reliability, flow control, and congestion control. However, UDP can be better suited for high-latency networks in several ways:

  1. No handshake: Unlike TCP, which requires a three-way handshake to establish a connection, UDP does not require any connection setup. This means that UDP packets can be sent immediately, without waiting for the handshake to complete, which can reduce latency and improve responsiveness.

  2. No retransmission: TCP retransmits lost segments until they are delivered, and it delivers data strictly in order, so everything behind a lost segment stalls until the retransmission arrives; on a high-latency path each such repair costs at least one extra round trip. UDP never retransmits, so lost datagrams are simply gone and later datagrams are delivered without waiting, which keeps latency low (the sketch at the end of this section shows this fire-and-forget behavior).

  3. No congestion control: TCP uses a variety of congestion control algorithms to prevent network congestion and ensure fair sharing of bandwidth. On high-latency networks these algorithms can be overly conservative, because a large bandwidth-delay product makes slow start and loss recovery take many round trips. UDP performs no congestion control of its own, so a protocol built on top of it, such as QUIC, is free to use newer congestion-control algorithms tuned for such networks.

UDP is better suited for certain types of applications, such as real-time streaming, online gaming, and other applications that require low latency and do not require reliable delivery. However, it is important to note that UDP by itself is less reliable than TCP: apart from a basic checksum it provides no retransmission, ordering, or delivery guarantees, and packets can be lost or delivered out of order. Therefore, it is important to carefully consider the specific requirements of your application before choosing between UDP and TCP.
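
As a minimal illustration of the fire-and-forget behavior described above, the sketch below sends UDP datagrams to a local receiver without any handshake, acknowledgments, or retransmission; any datagram that is dropped simply never arrives. The loopback address and port are arbitrary values chosen for the demo.

```python
# Minimal local sketch of UDP's "fire and forget" behavior: the sender never
# waits for acknowledgments and the protocol itself never retransmits.
# Any reliability (ACKs, retransmission, congestion control) must be added
# by the layer above -- which is exactly what QUIC does.

import socket

ADDR = ("127.0.0.1", 9999)  # arbitrary local port for the demo

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(ADDR)
receiver.settimeout(0.5)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(5):
    # Each datagram goes out immediately: no handshake, no ACK wait,
    # no sender-side congestion window.
    sender.sendto(f"datagram {i}".encode(), ADDR)

received = []
try:
    while True:
        data, _ = receiver.recvfrom(1500)
        received.append(data.decode())
except socket.timeout:
    pass  # anything dropped in transit is simply gone; UDP will not resend it

print("received:", received)
sender.close()
receiver.close()
```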

Without TCP-style retransmission, how does QUIC ensure reliable delivery?

QUIC does retransmit lost data, but unlike TCP (Transmission Control Protocol) it never resends the same packet: lost frames are carried again in new packets, and recovery is handled per stream. It combines several mechanisms to provide reliable delivery:

  1. Packet numbering: QUIC uses packet numbers to identify and order packets. Each packet is assigned a unique, monotonically increasing packet number, and packet numbers are never reused, even when data is retransmitted. The receiver uses these packet numbers to detect missing or out-of-order packets, and the sender avoids the retransmission ambiguity that TCP has with reused sequence numbers.

  2. Acknowledgments: QUIC uses selective acknowledgments (ACKs) to notify the sender which packets have been received successfully. Unlike TCP, which sends cumulative ACKs that acknowledge all packets up to a certain point, QUIC can send selective ACKs that acknowledge specific packets, allowing the receiver to indicate which packets have been received successfully and which packets are missing.

  3. Forward error correction (FEC): early experimental versions of QUIC (Google's gQUIC) included a forward error correction mode that added redundancy to the data stream, allowing the receiver to reconstruct a lost packet from surrounding packets. This feature was dropped before the IETF standardized QUIC, which relies on retransmission instead.

  4. Retransmission: QUIC retransmits at the frame level rather than the packet level. When acknowledgment feedback or a timeout tells the sender that a packet was lost, the frames it carried are re-sent in a new packet with a new packet number; the lost packet itself is never resent, and a loss on one stream does not block delivery on other streams.

QUIC's combination of packet numbering, selective acknowledgments, and frame-level retransmission provides reliable delivery without the head-of-line blocking and retransmission ambiguity of TCP. These loss-recovery mechanisms are designed for the modern web, allowing QUIC to deliver data reliably while offering better performance and security than TCP-based protocols.
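
The toy model below sketches how packet numbering, selective acknowledgments, and frame-level retransmission fit together. All names and data structures are invented for illustration; real QUIC implementations use ACK ranges, loss-detection timers, and per-stream flow control that this sketch omits.

```python
# Toy model of QUIC-style loss recovery: every packet carries a fresh,
# monotonically increasing packet number, the receiver acknowledges the
# packet numbers it actually saw, and lost *frames* are re-sent inside
# packets with NEW packet numbers. Illustrative only, not the real
# QUIC state machine.

next_pn = 0        # next packet number to use (never reused)
in_flight = {}     # packet number -> frame it carried
delivered = set()  # packet numbers that reached the receiver

def send_packet(frame: str, lost: bool = False) -> int:
    """'Send' one frame in one packet and track it until acknowledged."""
    global next_pn
    pn, next_pn = next_pn, next_pn + 1
    in_flight[pn] = frame
    if not lost:
        delivered.add(pn)  # simulate the packet arriving at the peer
    return pn

# Send four packets; the one carrying "stream frame 2" is lost in transit.
for i in range(4):
    send_packet(f"stream frame {i}", lost=(i == 2))

# Receiver side: report exactly which packet numbers arrived (selective ACK).
ack_ranges = sorted(delivered)
print("ACKed packet numbers:", ack_ranges)

# Sender side: drop acknowledged packets, declare the rest lost, and re-send
# their frames under brand-new packet numbers (no retransmission ambiguity).
for pn in ack_ranges:
    in_flight.pop(pn, None)
for pn, frame in list(in_flight.items()):
    del in_flight[pn]
    new_pn = send_packet(frame)
    print(f"frame from lost packet {pn} re-sent in packet {new_pn}")
```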
