International Professional Fora:

A study of civil society organisation participation in internet governance

In 2013, Google announced that it was working on a way of reducing latency on the web by developing a new transport protocol called QUIC (Quick UDP Internet Connections). Network performance improves as the round trip time (RTT) for establishing a connection between the client and the server decreases. QUIC is intended to outperform the combination of the Transmission Control Protocol (TCP) and Transport Layer Security (TLS): it runs over UDP (User Datagram Protocol), long relied upon for faster transport of Internet Protocol (IP) traffic, and uses a new version of TLS (TLS 1.3) for encryption. More clearly, '[t]he standard way to do secure web browsing involves communicating over TCP + TLS, which requires 2 to 3 round trips with a server to establish a secure connection before the browser can request the actual web page. QUIC is designed so that if a client has talked to a given server before, it can start sending data without any round trips, which makes web pages load faster' (Chromium Blog, 17 April 2015). 
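The round-trip arithmetic behind this claim can be made concrete. The sketch below is purely illustrative: the RTT counts follow the Chromium Blog description quoted above, while the 50 ms round-trip time and the function name are assumptions chosen for the example, not measurements or part of any specification.

```typescript
// Illustrative comparison of connection-setup latency for TCP+TLS vs QUIC.
// RTT counts follow the Chromium Blog description quoted above; the 50 ms
// round-trip time is an assumed example value, not a measurement.

function setupLatencyMs(handshakeRtts: number, rttMs: number): number {
  // Total time spent on handshakes before the first request can be sent.
  return handshakeRtts * rttMs;
}

const rttMs = 50; // assumed client-server round-trip time

// TCP handshake (1 RTT) + TLS handshake (1-2 RTTs) = 2-3 RTTs in total.
console.log(`TCP+TLS: ${setupLatencyMs(3, rttMs)} ms`);            // 150 ms
// QUIC first contact: a single combined transport + crypto handshake.
console.log(`QUIC (first visit): ${setupLatencyMs(1, rttMs)} ms`); // 50 ms
// QUIC repeat visit: 0-RTT, data can go in the very first packet.
console.log(`QUIC (repeat visit): ${setupLatencyMs(0, rttMs)} ms`); // 0 ms
```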

The design of QUIC provides for multiplexing a large number of streams between the server and the client over a single connection. Google’s QUIC developer, Jim Roskind, explained that the multiplexing function unifies the traffic communication and the reporting of, and responses to, channel characteristics such as packet losses. QUIC thus advances SPDY (pronounced SPeeDY), the multiplexing protocol previously designed by Google, which became the foundation of HTTP/2 (approved by the IETF in 2015).
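A minimal sketch of what stream multiplexing means at the frame level is given below. The types and names (`Frame`, `streamId`, `Connection`) are hypothetical simplifications for illustration only and do not reproduce QUIC's actual wire format; the point is that a loss on one stream does not block delivery on the others.

```typescript
// Toy model of stream multiplexing: many independent streams share one
// connection, and each frame carries a stream identifier. The types and
// names here are illustrative simplifications, not QUIC's real wire format.

interface Frame {
  streamId: number; // which logical stream this frame belongs to
  offset: number;   // position of the payload within that stream
  payload: string;
}

class Connection {
  private streams = new Map<number, (string | undefined)[]>();

  // Deliver a frame to its stream; other streams are unaffected.
  receive(frame: Frame): void {
    const chunks = this.streams.get(frame.streamId) ?? [];
    chunks[frame.offset] = frame.payload;
    this.streams.set(frame.streamId, chunks);
  }

  // A stream is readable as soon as all of *its own* frames have arrived,
  // regardless of losses on other streams.
  read(streamId: number): string | undefined {
    const chunks = this.streams.get(streamId);
    if (!chunks) return undefined;
    for (let i = 0; i < chunks.length; i++) {
      if (chunks[i] === undefined) return undefined; // a frame is still missing
    }
    return chunks.join("");
  }
}

// Frames from streams 1 and 2 interleave on the same connection; a lost
// frame on stream 2 delays only stream 2.
const conn = new Connection();
conn.receive({ streamId: 1, offset: 0, payload: "GET /a" });
conn.receive({ streamId: 2, offset: 1, payload: "/b" }); // offset 0 was lost
conn.receive({ streamId: 1, offset: 1, payload: " HTTP/1.1" });
console.log(conn.read(1)); // "GET /a HTTP/1.1"
console.log(conn.read(2)); // undefined until the lost frame is retransmitted
```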

In 2015, Google provided an update on the process of testing the QUIC protocol on Google users. It declared that ‘[r]esults so far are positive, with the data showing that QUIC provides a real performance improvement over TCP thanks to QUIC's lower-latency connection establishment, improved congestion control, and better loss recovery’. The greatest gains were reported to come from zero-round-trip connection establishment between endpoints. For video streaming services like YouTube, Google announced that users on QUIC experienced 30% fewer rebuffers. Recently, however, there have been reports suggesting that some of the earlier versions of QUIC in Chromium allowed ads to be shielded from ad-blocking applications.

According to Google’s update, already in 2015 ‘roughly half of all requests from Chrome to Google servers [were] served over QUIC’. The company declared its interest in formally proposing QUIC to the IETF, where a large technical community would be involved in its formal standardisation. A key issue in the standardisation process appears to be the negotiation between the level of encryption and network management possibilities. One side has argued that ‘QUIC poses a problem for mobile network operators (MNOs)’. The problem seen here is that the ‘modern security measures that are integrated with QUIC are encryption based. And because it is encrypted, MNOs can’t see the traffic that is flowing on their networks.’ MNOs thus fear that this can negatively affect their monitoring capacities for network management and performance optimisation, including congestion control and troubleshooting. While some QUIC working group members in the IETF have not found this necessarily problematic and have favoured metadata protection through encryption (see Heise Online), others have raised concerns about the effect of pervasive encryption on operators. Kathleen Moriarty (Dell) and Al Morton (AT&T Labs) have edited an IETF Internet-Draft which tackles the issue of ‘increased use of encryption’ and aims ‘to help guide protocol development in support of manageable, secure networks’. The document shows that, following the pervasive surveillance revelations and the IETF declaration in RFC 7258 that ‘pervasive monitoring is an attack’, internet traffic encryption became a key focus. The editors have reminded readers, however, that while RFC 7258 agreed on the need for increased privacy protection for users, it also acknowledged that ‘making networks unmanageable to mitigate PM [pervasive monitoring] is not an acceptable outcome’. If the right balance is not reached, the authors have argued, some unfavourable security and network management practices might result. 

The Secure Sockets Layer (SSL) standard and its successor, Transport Layer Security (TLS), have provided security for internet traffic.
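As a brief illustration of TLS in practice, the sketch below opens a TLS-protected socket using Node.js's built-in `tls` module. The host `example.com` and port 443 are placeholder values chosen for the example.

```typescript
// Minimal TLS client using Node.js's built-in `tls` module.
// The handshake negotiates protocol version, cipher suite and keys before
// any application data is exchanged. Host and port are example values.
import * as tls from "node:tls";

const socket = tls.connect(
  { host: "example.com", port: 443, servername: "example.com" },
  () => {
    // At this point the handshake has completed and the channel is encrypted.
    console.log(`negotiated ${socket.getProtocol()}`); // e.g. "TLSv1.3"
    console.log(`certificate verified: ${socket.authorized}`);
    socket.end();
  }
);
socket.on("error", (err) => console.error(err));
```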

The MIKEY-SAKKE (Multimedia Internet KEYing – Sakai-Kasahara Key Encryption) protocol was designed by CESG, then the information security arm of the UK’s GCHQ (now the National Cyber Security Centre, NCSC), and published in IETF RFCs 6507, 6508 and 6509.

On 7th July 2016, the W3C’s Device and Sensors Working Group returned the specification of the Battery Status API, previously published as a Proposed Recommendation in March 2016, to the status of Candidate Recommendation. The document referred to concerns that had been raised about ‘possible privacy-invasive usage of the Battery Status API’.
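For context, the API in question exposes battery readouts to web pages along the following lines. This is a minimal sketch: the fields shown are the standard BatteryManager attributes, while the interface declaration is added here only because TypeScript's standard DOM typings omit this API; the concern raised was that such script-readable state could contribute to fingerprinting or tracking of users.

```typescript
// Minimal sketch of the Battery Status API as exposed to web pages.
// TypeScript's standard DOM typings omit this API, so a small interface
// is declared here purely for the example.
interface BatteryManager extends EventTarget {
  charging: boolean;
  level: number;           // 0.0 (empty) to 1.0 (full)
  chargingTime: number;    // seconds until fully charged
  dischargingTime: number; // seconds until empty
}

async function logBatteryStatus(): Promise<void> {
  // navigator.getBattery() resolves to a BatteryManager object.
  const battery = await (
    navigator as Navigator & { getBattery(): Promise<BatteryManager> }
  ).getBattery();

  console.log(`charging: ${battery.charging}, level: ${battery.level}`);

  // Scripts can also observe changes over time; it is this kind of
  // readable, script-observable state that raised fingerprinting concerns.
  battery.addEventListener("levelchange", () => {
    console.log(`level changed: ${battery.level}`);
  });
}

logBatteryStatus();
```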

On 30 August 2016, the Body of European Regulators for Electronic Communications (BEREC) published its guidelines for the implementation of the EU net neutrality Regulation by the national regulatory authorities (NRAs). The number of responses received to the public consultation on the draft guidelines (481,547 in total) was unprecedented for BEREC, albeit modest compared with the reported 3.7 million replies to the US FCC’s net neutrality proposals two years earlier and the 800,000 emails sent to the Indian telecoms regulator in less than a week on the same policy issue.

On 29-30 June 2016, the W3C held a workshop on Blockchains and the Web, hosted by the MIT Media Lab. The sponsors of the event included NTT and Blockstream. It was designed to focus on issues around the integration of blockchains into the web and their utilisation.

In relation to broadcasting spectrum in Europe, the decision-making process in the lead-up to WRC-15 demonstrated the formation of three broad groups of actors: the broadcasters, the mobile cellular players and the wireless broadband industry representatives.

This debate dates back to 2012, when a group of technical experts from Google, Microsoft and Netflix put forward a proposal within the World Wide Web Consortium (W3C) to introduce specifications for Encrypted Media Extensions (EME) in HTML5.
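In outline, EME lets a page ask the browser for a content decryption capability before playing protected media. The sketch below shows the standard `navigator.requestMediaKeySystemAccess` entry point; the Clear Key system identifier and the capability values are example choices, not the only possible configuration.

```typescript
// Sketch of the Encrypted Media Extensions (EME) entry point: a page asks
// the browser whether a key system is available before playing protected
// media. 'org.w3.clearkey' and the capabilities below are example values.
const config: MediaKeySystemConfiguration[] = [{
  initDataTypes: ["cenc"],
  videoCapabilities: [
    { contentType: 'video/mp4; codecs="avc1.42E01E"' },
  ],
}];

navigator
  .requestMediaKeySystemAccess("org.w3.clearkey", config)
  .then(async (keySystemAccess) => {
    // Create the MediaKeys object and attach it to a <video> element.
    const mediaKeys = await keySystemAccess.createMediaKeys();
    const video = document.querySelector("video");
    if (video) await video.setMediaKeys(mediaKeys);
  })
  .catch((err) => console.error("key system unavailable:", err));
```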

On 29th February 2016, the Internet Engineering Task Force (IETF) published a memo titled IETF Trends and Observations, written by Jari Arkko (Ericsson), Avri Doria (APC), Tobias Gondrom (Huawei), Olaf Kolkman (Internet Society), Steve Olshansky (Internet Society), Benson Schliesser (Brocade Communications), Robert Sparks (Oracle) and Russ White (LinkedIn).

On 5th February 2016, the European Commission (EC) published the results of the public consultation on Standards in the Digital Single Market: setting priorities and ensuring delivery. A total of 156 replies were received from individuals, SMEs, large enterprises and industrial associations, global and regional standardisation organisations and technical standard-setting fora, public authorities, and research centres. 

Following the decisions made at the ITU’s WRC in November 2015, the European Commission (EC) released its Proposal for a decision on the use of the 470-790 MHz frequency band in the EU.

Preliminary trends can be observed in the public consultation on Standards in the Digital Single Market. This consultation relates to developing standards and interoperability in the ICT domain. Stakeholders' views are important for identifying problems and informing possible future priority-setting policy on ICT standardisation.

Cybercrime affects over 1 million people worldwide every day, and cyber attacks on public institutions and businesses are increasing. George Christou's new book, Cybersecurity in the European Union: Resilience and Adaptability in Governance Policy (published by Palgrave), interrogates the European Union's evolving cybersecurity policies and strategy and argues that, while progress is being made, much remains to be done to ensure a secure and resilient cyberspace in the future.
