The design of QUIC provides for multiplexing a large number of streams between the server and the client over a single connection. Google’s QUIC developer, Jim Roskind, explained that multiplexing unifies the traffic and the reporting of, and responses to, channel characteristics such as packet loss. QUIC thus builds on SPDY (pronounced SPeeDY), the multiplexing protocol previously designed by Google, which became the foundation of HTTP/2 (approved by the IETF in 2015).
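The multiplexing idea can be illustrated with a minimal sketch: frames from several logical streams are interleaved onto one connection and reassembled by stream identifier. This is a toy illustration only, with invented function names; real QUIC frames additionally carry byte offsets, flow-control limits, and encryption.

```python
# Toy sketch of stream multiplexing over a single connection.
# Illustrative only -- not the QUIC wire format; names are invented.

from collections import defaultdict

def interleave(streams):
    """Yield (stream_id, chunk) frames round-robin across streams."""
    iters = {sid: iter(chunks) for sid, chunks in streams.items()}
    while iters:
        for sid in list(iters):
            try:
                yield sid, next(iters[sid])
            except StopIteration:
                del iters[sid]  # this stream is exhausted

def demultiplex(frames):
    """Reassemble each stream's payload from the interleaved frames."""
    out = defaultdict(list)
    for sid, chunk in frames:
        out[sid].append(chunk)
    return {sid: b"".join(chunks) for sid, chunks in out.items()}

streams = {1: [b"GET /", b"index"], 2: [b"GET /", b"style"]}
wire = list(interleave(streams))  # one ordered frame sequence on the wire
assert demultiplex(wire) == {1: b"GET /index", 2: b"GET /style"}
```

The key point for network reporting is visible in the sketch: because all streams share one frame sequence, a single observer of `wire` sees the loss and timing behaviour of every stream at once, rather than per-connection.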
In 2015, Google provided an update on its testing of the QUIC protocol with Google users. It declared that ‘[r]esults so far are positive, with the data showing that QUIC provides a real performance improvement over TCP thanks to QUIC's lower-latency connection establishment, improved congestion control, and better loss recovery’. The greatest gains were said to come from zero-round-trip connection establishment between endpoints. For video streaming services like YouTube, Google announced that users on QUIC experienced 30% fewer rebuffers. Recently, however, there have been reports suggesting that some earlier versions of QUIC in Chromium allowed ads to be shielded from ad-blocking applications.
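The scale of the zero-round-trip gain can be made concrete with back-of-the-envelope arithmetic. A TCP handshake costs one round trip before data can flow, and a full TLS 1.2 handshake on top of it adds two more; a repeat QUIC connection with cached credentials can send application data in the very first packet. The 100 ms round-trip time below is an illustrative assumption, not a figure from the source.

```python
# Back-of-the-envelope latency saving from 0-RTT connection establishment.
# Assumed round-trip counts before application data can flow:
#   TCP handshake: 1 RTT; full TLS 1.2 handshake: +2 RTTs;
#   repeat QUIC connection: 0 RTTs (keys cached from a prior visit).

def setup_delay_ms(rtt_ms, round_trips):
    """Connection-setup delay: round trips spent before the first request."""
    return rtt_ms * round_trips

rtt = 100                            # ms; illustrative mobile-network value
tcp_tls = setup_delay_ms(rtt, 3)     # TCP (1 RTT) + TLS 1.2 (2 RTTs)
quic_0rtt = setup_delay_ms(rtt, 0)   # data rides in the first packet
print(tcp_tls - quic_0rtt)           # ms saved before the first byte
```

On this assumed link the saving is 300 ms per fresh connection, which is why Google reported the largest gains precisely from connection establishment.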
According to Google’s update, already in 2015 ‘roughly half of all requests from Chrome to Google servers [were] served over QUIC’. The company declared its intention to formally propose QUIC to the IETF, where a large technical community would be involved in its formal standardisation. A key issue in the standardisation process appears to be the trade-off between the level of encryption and network management possibilities. One side has argued that ‘QUIC poses a problem for mobile network operators (MNOs)’. The problem seen here is that the ‘modern security measures that are integrated with QUIC are encryption based. And because it is encrypted, MNOs can’t see the traffic that is flowing on their networks.’ MNOs thus fear that this could undermine their monitoring capacity for network management and performance optimisation, including congestion control and troubleshooting.

While some QUIC working group members in the IETF have not found this necessarily problematic and have favoured metadata protection through encryption (see Heise Online), others have raised concerns about the effect of pervasive encryption on operators. Kathleen Moriarty (Dell) and Al Morton (AT&T Labs) have edited an IETF Internet-Draft which tackles the issue of ‘increased use of encryption’ and aims ‘to help guide protocol development in support of manageable, secure networks’. The document shows that, following the pervasive surveillance revelations and the IETF declaration in RFC7258 that ‘pervasive monitoring is an attack’, internet traffic encryption became a key focus. The editors have noted, however, that while RFC7258 agreed on the need for increased privacy protection for users, it also acknowledged that ‘making networks unmanageable to mitigate PM [pervasive monitoring] is not an acceptable outcome’. If the right balance is not struck, the authors have argued, unfavourable security and network management practices might result.