International Professional Fora:

A study of civil society organisation participation in internet governance

In 2013, Google announced that it was working on a way of reducing latency on the web by developing a new transport protocol called QUIC (Quick UDP Internet Connections). Network performance improves as the round-trip time (RTT) required to establish a connection between the client and the server decreases. The QUIC protocol is intended to outperform the Transmission Control Protocol (TCP) by running over the User Datagram Protocol (UDP), which has long been relied upon for faster transport of Internet Protocol (IP) traffic, and by using a new version of the Transport Layer Security protocol (TLS 1.3) for encryption. More clearly, '[t]he standard way to do secure web browsing involves communicating over TCP + TLS, which requires 2 to 3 round trips with a server to establish a secure connection before the browser can request the actual web page. QUIC is designed so that if a client has talked to a given server before, it can start sending data without any round trips, which makes web pages load faster' (Chromium Blog, 17 April 2015).
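The latency gain described in the quotation above can be sketched as simple arithmetic. The following is an illustrative sketch only, not Google's implementation; the RTT figure and the function name are assumptions chosen for illustration.

```python
# Illustrative sketch (assumed numbers, not real measurements) of why
# fewer handshake round trips mean a faster secure connection.

def handshake_latency_ms(rtt_ms: float, round_trips: int) -> float:
    """Time spent establishing a connection before the first request can be sent."""
    return rtt_ms * round_trips

rtt = 50.0  # assumed client-server round-trip time in milliseconds

# TCP + TLS: per the Chromium Blog, 2 to 3 round trips before the request;
# we take the worst case of 3 here.
tcp_tls = handshake_latency_ms(rtt, 3)

# QUIC, repeat connection to a known server: zero round trips,
# data can be sent immediately.
quic_repeat = handshake_latency_ms(rtt, 0)

print(f"TCP+TLS: {tcp_tls:.0f} ms before the request")   # 150 ms at 50 ms RTT
print(f"QUIC (0-RTT): {quic_repeat:.0f} ms")             # 0 ms at 50 ms RTT
```

On a 50 ms path, the assumed worst-case TCP+TLS setup costs 150 ms before any page data flows, while a QUIC repeat connection costs nothing, which is the 'zero-round-trip' gain Google reported.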

The design of QUIC provides for multiplexing a large number of streams between the server and the client. Google's QUIC developer, Jim Roskind, explained that the multiplexing function unifies traffic communication as well as the reporting of, and responses to, channel characteristics such as packet loss. QUIC thus advances Google's previously designed multiplexing protocol SPDY (pronounced SPeeDY), which became the foundation of HTTP/2 (approved by the IETF in 2015).
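The multiplexing idea can be sketched as interleaving frames from several logical streams over a single connection. This is a minimal illustration of the concept only; the function, stream names, and frame structure are assumptions and do not reflect QUIC's actual wire format.

```python
# Minimal sketch of stream multiplexing: chunks from several logical
# streams are tagged with a stream identifier and interleaved into one
# sequence of frames carried over a single connection.

from itertools import zip_longest

def multiplex(streams: dict) -> list:
    """Interleave chunks round-robin, yielding (stream_id, frame) pairs."""
    frames = []
    for chunks in zip_longest(*streams.values()):
        for stream_id, chunk in zip(streams.keys(), chunks):
            if chunk is not None:  # a stream may run out of chunks early
                frames.append((stream_id, chunk))
    return frames

# Hypothetical page resources sent as separate streams over one connection.
streams = {
    "html": [b"<html>", b"</html>"],
    "css":  [b"body{}"],
    "js":   [b"f()", b"g()"],
}
for stream_id, frame in multiplex(streams):
    print(stream_id, frame)
```

Because every frame carries its stream identifier, the receiver can reassemble each stream independently; handling loss reporting at the connection level, as Roskind described, is what lets QUIC treat channel characteristics such as packet loss uniformly across all streams.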

In 2015, Google provided an update on the process of testing the QUIC protocol on Google users. It declared that '[r]esults so far are positive, with the data showing that QUIC provides a real performance improvement over TCP thanks to QUIC's lower-latency connection establishment, improved congestion control, and better loss recovery'. The greatest gains were announced to come from zero-round-trip connection establishment between endpoints. For video streaming services such as YouTube, Google announced that users on QUIC experienced 30% less rebuffering. More recently, however, reports have suggested that some previous versions of QUIC in Chromium allowed ads to be shielded from ad-blocking applications.

According to Google’s update, already in 2015 ‘roughly half of all requests from Chrome to Google servers [were] served over QUIC’. The company declared its interest in formally proposing QUIC to the IETF, where a large technical community would be involved in its formal standardisation. A key issue in the standardisation process appears to be the negotiation of the level of encryption versus network management possibilities. One side has argued that ‘QUIC poses a problem for mobile network operators (MNOs)’. The problem seen here is that the ‘modern security measures that are integrated with QUIC are encryption based. And because it is encrypted, MNOs can’t see the traffic that is flowing on their networks.’ MNOs thus fear that this can negatively impact their monitoring capacities for network management and performance optimisation, including congestion control and troubleshooting.

While some QUIC working group members in the IETF have not found this necessarily problematic and have favoured metadata protection through encryption (see Heise Online), others have raised an issue with the effect of pervasive encryption on operators. Kathleen Moriarty (Dell) and Al Morton (AT&T Labs) have edited an IETF Internet-Draft which tackles the issue of ‘increased use of encryption’ and aims ‘to help guide protocol development in support of manageable, secure networks’. The document shows that, following the pervasive surveillance revelations and the IETF declaration in RFC7258 that ‘pervasive monitoring is an attack’, internet traffic encryption became a key focus. The editors have reminded readers, however, that while RFC7258 agreed on the need for increased privacy protection for users, it also acknowledged that ‘making networks unmanageable to mitigate PM [pervasive monitoring] is not an acceptable outcome’. If the right balance is not reached, the authors have argued, unfavourable security and network management practices might result.