QUIC isn't fast enough even with fast internet

@Toryn
Go confront all the organizations operating middleboxes that ignore DNS or won’t forward anything other than TCP and UDP

TCP has enjoyed decades of routing optimizations, so any new protocol that improves on it starts out without those system-wide boosts. QUIC is a genuine improvement over what you can do with TCP and UDP: it’s faster, has lower latency, and has encryption built in. The problem is that annoying, non-compliant boxes are slowing it down, and the answer is to get them fixed, not to abandon a better protocol.

@Toryn
Unless you’re Microsoft. They release half-finished software, including the OS itself, then gradually add back the features present in earlier versions.

I think Google created QUIC mainly to save server resources, not necessarily to improve the client experience.

Noah said:
I think Google created QUIC mainly to save server resources, not necessarily to improve the client experience.

Where did you hear that? I thought it was to enable interleaving content chunks, making packet loss less of a problem.

Noah said:
I think Google created QUIC mainly to save server resources, not necessarily to improve the client experience.

You’re completely mistaken. QUIC performs better on slow, congested links, which is what most people have. The fast fiber connections where QUIC doesn’t perform as well were incredibly rare over ten years ago when its design started, and they still make up a small share of connections (except in a few wealthy countries).

@Hollis
I don’t feel it’s a small share anymore. Do you have any proof?

Rowan said:
@Hollis
I don’t feel it’s a small share anymore. Do you have any proof?

The paper notes that in Chrome, QUIC starts performing worse once bandwidth exceeds roughly 500 Mbps. Now look at current figures for average user bandwidth: Speedtest, worldpopulationreview.com, Statista. The median speed doesn’t come anywhere near 500 Mbps.

In summary: 1) this issue only occurs with very large file downloads (not common for general web browsing experiences), and 2) it mainly affects users with high-speed low-latency internet connections like fiber or 5G networks. Yes, it does affect me because I work from home, have a fiber connection, and have to download big Docker images often, but for the vast majority of the world, it isn’t a problem.

And as for my point that this only affects a few wealthy areas (mostly large cities): check out this ISP in Switzerland offering symmetrical 25 Gbps fiber for just 65 CHF (75 USD) per month: Fiber7 – Unrivalled peak performance.

@Hollis
So far, QUIC seems designed around user experience. It offers a noticeable speed-up where that matters, and it performs poorly mostly in situations where most people wouldn’t benefit anyway, and those who would likely won’t notice. I think human factors are underrated in this discussion.

That said, the paper seems valuable for showing where QUIC has room to improve. But I worry the headline invites the wrong conclusion; in the real world, QUIC still stands out as the better choice.

@Zacky
I suspect this will lead to 1) the QUIC working group developing improvements that will take time to roll out, and 2) anyone who needs fast downloads of large files disabling HTTP/3 on those endpoints for now (rough server-side sketch below).
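For what it’s worth, “disabling HTTP/3 at those points” usually just means not advertising it: clients only switch an origin to HTTP/3 when the server offers it via an Alt-Svc header (or an HTTPS DNS record). Here’s a minimal sketch in Go, assuming a dedicated downloads vhost that should stay on TCP; the hostnames and paths are placeholders, not anything from the thread:

```go
// Minimal sketch: a downloads vhost that never advertises HTTP/3.
// Go's net/http speaks HTTP/1.1 and HTTP/2 over TCP only, and since we
// never emit an Alt-Svc header pointing at "h3", clients have no reason
// to move this origin onto QUIC.
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Serve large artifacts (archives, images, etc.) from local disk.
	mux.Handle("/downloads/", http.StripPrefix("/downloads/",
		http.FileServer(http.Dir("/srv/downloads"))))

	srv := &http.Server{
		Addr:    ":443",
		Handler: mux,
	}
	// TLS certificate and key for the downloads vhost (placeholder paths).
	log.Fatal(srv.ListenAndServeTLS("/etc/tls/fullchain.pem", "/etc/tls/privkey.pem"))
}
```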

Noah said:
I think Google created QUIC mainly to save server resources, not necessarily to improve the client experience.

How does it save server resources if it allows dozens or even hundreds of requests at once?

How does QUIC’s performance compare with simply increasing the number of concurrent requests per domain to eight?

@Brenner

  • 0-RTT handshakes

  • back pressure per stream

  • connection migration is handled by the protocol, so clients can switch networks without dropping the connection

  • a lost packet only blocks the stream it belongs to, not the whole connection (no head-of-line blocking)

  • not all clients are web browsers, so not all are limited by a per-domain cap (rough client sketch below)
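To make the multiplexing points concrete, here’s a minimal client sketch using the quic-go library (github.com/quic-go/quic-go); the address, ALPN string, and payloads are placeholders, the server must speak a matching protocol, and the exact DialAddr signature has shifted between quic-go versions:

```go
// Sketch: several independent streams over a single QUIC connection.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"sync"

	"github.com/quic-go/quic-go"
)

func main() {
	ctx := context.Background()
	tlsConf := &tls.Config{
		NextProtos: []string{"example-proto"}, // placeholder ALPN
	}

	// One connection, one handshake (placeholder address).
	conn, err := quic.DialAddr(ctx, "example.com:4242", tlsConf, nil)
	if err != nil {
		log.Fatal(err)
	}

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			// Each stream is independent: a packet lost on one stream
			// doesn't hold up data arriving on the others.
			stream, err := conn.OpenStreamSync(ctx)
			if err != nil {
				log.Println(err)
				return
			}
			fmt.Fprintf(stream, "request %d\n", n)
			stream.Close() // close the send side; reads still work
			resp, _ := io.ReadAll(stream)
			log.Printf("stream %d got %d bytes", n, len(resp))
		}(i)
	}
	wg.Wait()
}
```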

QUIC is a mess, and so many intermediaries block UDP packets that services like Cloudflare Images (where you can’t disable the protocol) break whenever the client can’t downgrade (for example, with React Native).

I had to use Wireshark to discover that my mobile packets were going over UDP while the desktop emulator was using TCP, and I noticed Cloudflare would sometimes stall for as long as 60 seconds, gradually delaying the upload until the session ID eventually expired. I ended up resolving it with S3 direct uploads followed by a server-side upload to Cloudflare Images. It’s complete insanity, and nobody believes what I’m describing.
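For anyone hitting the same wall, a rough sketch of that workaround: the client PUTs the file to a pre-signed S3 URL over plain HTTPS/TCP, and the backend then asks Cloudflare Images to ingest it by URL. The bucket URL, account ID, and token below are placeholders, and the Cloudflare endpoint and form field reflect my reading of their upload-via-URL API, so check the current docs before copying any of this:

```go
// Rough sketch of the S3-then-Cloudflare-Images workaround described above.
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"net/url"
	"strings"
)

// Step 1 (client): PUT the file to a pre-signed S3 URL over plain
// HTTPS/TCP, sidestepping the flaky QUIC path entirely.
func uploadToS3(presignedURL string, data []byte) error {
	req, err := http.NewRequest(http.MethodPut, presignedURL, bytes.NewReader(data))
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("s3 upload failed: %s", resp.Status)
	}
	return nil
}

// Step 2 (backend): ask Cloudflare Images to fetch the object by URL
// (endpoint and field name are assumptions; verify against the docs).
func ingestIntoCloudflareImages(accountID, apiToken, objectURL string) error {
	endpoint := fmt.Sprintf(
		"https://api.cloudflare.com/client/v4/accounts/%s/images/v1", accountID)
	form := url.Values{"url": {objectURL}}
	req, err := http.NewRequest(http.MethodPost, endpoint, strings.NewReader(form.Encode()))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+apiToken)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("cloudflare ingest failed: %s", resp.Status)
	}
	return nil
}

func main() {
	data := []byte("...image bytes...")
	objectURL := "https://example-bucket.s3.amazonaws.com/u/123.jpg"
	presigned := objectURL + "?X-Amz-Signature=placeholder"
	if err := uploadToS3(presigned, data); err != nil {
		log.Fatal(err)
	}
	if err := ingestIntoCloudflareImages("ACCOUNT_ID", "API_TOKEN", objectURL); err != nil {
		log.Fatal(err)
	}
}
```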

@Sai
I’ve never seen UDP blocked outright. If UDP were blocked, DNS wouldn’t work, and neither would VoIP or VPNs. Some dumb middleboxes wrongly block UDP port 443 simply because their vendors were too slow or lazy to implement TLS inspection for QUIC. That’s largely sorted now, but some deployments still block it, usually because of these subpar third-party vendors.

@Zyler
I meant UDP 443 specifically. It seems to be blocked at random depending on the route the packet takes. I can’t control or predict it, and I can’t put my mobile app users through it, since uploads can take a long time to fail and aren’t easily recoverable.

Still no HTTP/3 support in Node