Design, Implementation and Analysis of Networking Architectures

Internet Citizen Rights Observatory 

Internet users are highly interested in knowing the expected and/or actual quality of experience and in detecting potential privacy leakages. These are two essential Internet citizen rights we plan to address in the Diana team. However, the Internet is based on the best-effort model and therefore provides no quality-of-service support. The perceived quality depends on many factors, such as network and service provisioning, the behavior of other users, peering agreements between operators, and the diverse practices of network administrators in terms of security and traffic engineering, done manually today and probably automatically on programmable infrastructure tomorrow. The proliferation of wireless and mobile access has further complicated this unpredictability of the Internet by adding other factors such as the mobility of end users, the type of wireless technology used, the coverage, and the level of interference. In addition, the Internet does not have a standard measurement and control plane. Apart from basic information in routing tables, everything else (delays, available bandwidth, loss rate, anomalies and their root cause, network topology, ISP commercial relationships, etc.) has to be discovered. Several monitoring tools have been developed by projects such as CAIDA or Google's M-Lab to understand the performance of the Internet and provide end users with information on the quality of their access. However, existing tools and techniques are mostly host-oriented and provide network-level measurements that can hardly be interpreted by end users in terms of Quality of Experience (QoE). In fact, as the usage model shifts toward information-centric networking, there is a need to define solutions to monitor and even predict application-level performance at the access based on objective measurements from the network. In the future Internet, there should be some minimum level of transparency allowing end users to evaluate their Internet access with respect to the different services and applications they are interested in and, in case of trouble, to identify its origin. This migration of measurements toward contents and services, which can be qualified as a "Future Internet Observatory", requires understanding the traffic generated by applications, inferring the practices of content providers and operators, defining relevant QoE metrics, finding low-cost techniques to avoid measurement traffic explosion and redundancy (based, for example, on crowdsourcing), and leveraging spatiotemporal correlations for better localization of network anomalies.
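To make the idea of predicting application-level quality from objective network measurements concrete, the following minimal sketch maps two edge measurements (loss rate and round-trip time) to an estimated Mean Opinion Score, following the general exponential shape often assumed between QoE and QoS impairments. All coefficients and the weighting of the two metrics are illustrative placeholders, not calibrated values; a real model would be fitted per application from measured traffic and user feedback.

```python
# Minimal sketch: mapping network-level measurements to an estimated QoE
# score. The exponential decay of QoE with a QoS impairment follows a
# commonly assumed shape; every coefficient below is a hypothetical
# placeholder, to be calibrated per application.
import math

def estimate_mos(loss_rate: float, rtt_ms: float) -> float:
    """Return a Mean Opinion Score estimate in [1, 5] from edge measurements."""
    # Combine impairments into a single disturbance term (assumed weighting).
    impairment = 10.0 * loss_rate + rtt_ms / 200.0
    # Exponential QoE/QoS relation: 1 (worst) to 5 (best).
    return 1.0 + 4.0 * math.exp(-impairment)

print(estimate_mos(loss_rate=0.01, rtt_ms=80))   # good conditions
print(estimate_mos(loss_rate=0.10, rtt_ms=400))  # degraded conditions
```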

Unfortunately, the quality of Internet applications as perceived by end users depends on numerous factors influenced directly by the home network, the access link (either wireless or wired), the core network, or even the content provider infrastructure. The perceived quality also depends on the application requirements in terms of network characteristics and path performance. This multiplicity of factors makes it difficult for the end user to understand the reasons for any quality degradation. Understanding the reasons for the degradation is becoming even more difficult with the mobility of end users and the complexity of applications and services themselves. Nevertheless, it is essential for end users to understand the quality they obtain from the Internet and, in case of dissatisfaction, to identify the root cause of the problem and pinpoint responsibilities. This process implies two major challenges. On the one hand, there is a need for a mapping between the quality obtained and the network performance, and for an understanding of the exact behavior of modern applications and protocols. This phase involves the measurement and analysis of application traffic and user feedback, and the calibration of models that map the perceived level of quality to network-level performance metrics. On the other hand, there is a need for inference techniques to identify the network part hidden behind an observed problem, e.g., knowing which part of the network causes a bandwidth decrease or a high-loss-rate event. In the literature, this inference problem is often called network tomography; it consists of inferring internal network behavior from edge measurements. Network tomography can be done in two complementary ways. One approach is to run several tests from the end-user access, excluding different network parts each time, and, by intersecting the observations, find the part most likely causing the problem. The advantage of this approach is that the user controls every point of the inference. Unfortunately, this technique requires extensive measurements from each user, which can be difficult to realize when resources are scarce, as on mobile wireless networks. Another approach is to distribute the measurements among different end points and share their observations. The advantage is clearly to reduce the load on everyone, but it comes at the expense of higher complexity in successfully performing the inference. A first difficulty lies in the distribution of the measurement work among users and devices. Another issue lies in the combination of observations (i.e., which weight to give to each end user according to its location, type of access, etc.), particularly as network conditions can vary from one user to another.
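The intersection-based approach can be sketched in a few lines. Assuming each probe covers a known set of network parts and a single segment is at fault, the suspects are the intersection of the parts seen by failing probes, minus any part traversed by a healthy probe. The topology labels and probe results below are hypothetical.

```python
# Minimal sketch of boolean network tomography under a single-fault
# assumption: intersect the coverage of failing probes, then exonerate
# every part that a healthy probe traversed.

def localize_fault(probes: dict[str, set[str]], failed: set[str]) -> set[str]:
    """probes: probe name -> set of network parts it traverses.
    failed: names of probes that observed a degradation."""
    suspects = None
    for name, parts in probes.items():
        if name in failed:
            suspects = set(parts) if suspects is None else suspects & parts
    if suspects is None:
        return set()  # nothing failed, nothing to localize
    for name, parts in probes.items():
        if name not in failed:
            suspects -= parts  # parts on a healthy path are exonerated
    return suspects

probes = {
    "p1": {"home", "access", "core"},
    "p2": {"home", "access", "peering"},
    "p3": {"home", "wifi"},
}
print(localize_fault(probes, failed={"p1", "p2"}))  # -> {'access'}
```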

The shift of measurements toward mobile devices and modern applications and services will require a completely new methodology. Up to now, we have dealt with network-level measurements to infer the performance of the current Internet architecture. This past measurement effort has mostly targeted well-known protocols and architectures that are largely standardized. It has targeted laptops and desktops that are often easily programmable and do not suffer from bandwidth and computing resource constraints. For this new project, we will deal with a large number of proprietary services and applications, each of which requires a considerable measurement effort to understand its behavior and to implement the appropriate network-level measurements to predict its quality. Given the large number of these applications and services, we will face a problem of measurement overhead explosion that we will have to solve by either measurement reuse or a crowdsourcing approach. The consideration of mobile devices, with their closed operating systems and limited resources, will increase the complexity of this measurement effort even further.

QoE and user privacy are, in our vision, the most critical issues for end users. There are daily headlines on issues linked to citizen rights degradation (such as Google data retention, PRISM, mobile application privacy leakages, targeted and differentiated advertisements, etc.). The common belief is that it is not possible to improve the situation, as all technological choices are in the hands of big Internet companies and states. The long-term objective of our research is to study the validity of this statement and to propose to end users (and possibly service providers) architectural solutions that improve transparency by exposing potential citizen rights violations. One way to improve this transparency is to leverage the end user's set-top-box to implement an indirection infrastructure auditing and filtering all traffic from each end user.
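As a toy illustration of the kind of auditing such an indirection point could apply, the sketch below flags outgoing connections whose destination matches a blocklist of tracker domains. The blocklist and the decision rule are placeholders; a real deployment would inspect actual flows (e.g., DNS queries or TLS SNI fields) rather than a list of hostnames.

```python
# Illustrative sketch of a set-top-box auditing filter for outgoing
# traffic. The tracker list is a hypothetical example, not real data.

TRACKER_DOMAINS = {"tracker.example", "ads.example"}  # hypothetical blocklist

def audit(destination_host: str) -> tuple[bool, str]:
    """Return (allow, reason) for a single outgoing connection."""
    for domain in TRACKER_DOMAINS:
        if destination_host == domain or destination_host.endswith("." + domain):
            return False, f"blocked: {destination_host} matches {domain}"
    return True, "allowed"

for host in ("cdn.example.org", "pixel.ads.example"):
    print(host, "->", audit(host))
```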

Open Network Architecture 

As discussed above, whereas the Internet can successfully interconnect billions of devices, it fails to provide transparent and efficient sharing between information producers and consumers. Here, information producers and consumers must be considered in their broadest sense: for instance, a microphone, a speaker, a digital camera, a TV screen, a CPU, a hard drive, but also services such as email, storage in the cloud, a Facebook account, etc. In addition to classical contents, information can include a flow of content updated in real time, a description of a device, a Web service, etc. Enabling transparent, open access to and sharing of information among all these devices would likely revolutionize the way the Internet is used today.

This research direction aims at proposing global solutions for easy and open content access and, more generally, for information interoperability. This activity will leverage current efforts on information-centric networking (e.g., CCN, PSIRP, NetInf). In a first stage, the goal will be to offer users a personal overlay solution to publish and manage their own contents, at any time and over whatever network access technology is available (cable, Wi-Fi, 3G, 4G, etc.). The main challenge will be to design scalable mechanisms to seamlessly publish and access information in an efficient way, while preserving privacy. Another challenge will be to incrementally deploy these mechanisms and ensure their adoption by end users, content providers, and network operators. In the context of the evolution of the Internet architecture, and in particular through Software-Defined Networking (SDN), there is a risk that some network operators or other tenants use the increased flexibility of the network against the interests of the users. So, one of our concerns will be to design innovative solutions that prevent possible violations of network neutrality or the illegitimate collection of private data. In parallel, we envision using SDN as an enabling technology to adapt the network in order to maximize user QoE. Indeed, virtualized network appliances are an efficient way to dynamically insert in-network functionalities, such as caching proxies, load balancers, ciphers, or firewalls, at strategic places. To this end, we plan to build a dedicated open infrastructure relying on a mix of middleboxes and mobile device applications to capture, analyze, and optimize traffic between mobile devices and the Internet.
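The publish/access primitive of such a personal overlay can be sketched as a name-based API, in the spirit of CCN-like designs where consumers request named data and verify its integrity independently of where it is served. The class, hierarchical names, and in-memory store below are simplifications introduced for illustration, not the API of any existing ICN system.

```python
# Toy sketch of a name-based publish/access primitive for a personal
# information overlay. Names and the in-memory store are hypothetical
# simplifications of what CCN-like architectures provide.
import hashlib

class PersonalOverlay:
    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}  # hierarchical name -> content

    def publish(self, name: str, content: bytes) -> str:
        """Bind content to a user-chosen name; return its digest so
        consumers can verify integrity wherever the object is served from."""
        self._store[name] = content
        return hashlib.sha256(content).hexdigest()

    def retrieve(self, name: str) -> bytes | None:
        # A real overlay would forward the request toward whichever
        # replica (home box, cloud, peer) currently holds the object.
        return self._store.get(name)

overlay = PersonalOverlay()
digest = overlay.publish("/alice/photos/2024/cat.jpg", b"...bytes...")
assert overlay.retrieve("/alice/photos/2024/cat.jpg") is not None
```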

SDN will introduce a deep shift in the way communication mechanisms are designed and deployed. Traditionally, and mainly due to the ossification of the Internet, we used to enhance communication mechanisms by designing our solutions as overlays on top of the network infrastructure. Using SDN, we will have the opportunity to implement and use new functionalities within the network. If we make them available through well-defined APIs, those new network functions could be used to implement interoperable, transparent, and open services for the benefit of the user. Indeed, implementing these functionalities within the network is not only more efficient than overlay solutions, but can also facilitate the deployment of standard services. Important challenges will have to be solved to make this happen, particularly to ensure consistency, stability, scalability, reliability, and privacy.
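The sketch below illustrates the match-action programming model such a well-defined API could expose, here to steer a user's web traffic through a caching proxy. The field names, the controller object, and the action strings are hypothetical abstractions, not the API of any particular SDN controller; a real controller would push the rule to switches via a southbound protocol such as OpenFlow.

```python
# Hedged sketch of an SDN-style match-action API for deploying an
# in-network function. All names below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict       # header fields to match, e.g. {"tcp_dst": 80}
    actions: list     # ordered actions, e.g. redirect to a proxy port
    priority: int = 100

class Controller:
    def __init__(self) -> None:
        self.rules: list[FlowRule] = []

    def install(self, rule: FlowRule) -> None:
        # A real controller would push this rule to the switches;
        # here we only record it to illustrate the programming model.
        self.rules.append(rule)

ctl = Controller()
ctl.install(FlowRule(match={"tcp_dst": 80}, actions=["output:proxy_port"]))
print(ctl.rules)
```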

Our long-term objective in this research direction is to contribute to the design of a network architecture providing native support for easy, transparent, secure, and privacy-preserving access to information. For instance, one objective is to enable end users to leverage their home infrastructure (set-top-boxes, computers, smartphones, tablets) to sanitize traffic and host information.